DUJS
Dartmouth Undergraduate Journal of Science | FALL 2019 | VOL. XXI | NO. 1
INTERSECTION
THE CROSSROADS OF SCIENCE
The Illusion of (the White) Race (p. 4)
A Current Understanding of Visual Attention (p. 56)
BBB: Better Biofuel Business (p. 82)
A Letter from the Editors

Dear Readers,

Writing about science is a valiant undertaking, one that is both difficult and rewarding. It requires communicating complex scientific concepts to an audience that is largely uninformed, which can seem a thankless endeavor. Why should a scientist stop to explain their work to those who are not familiar with (and often, not appreciative of) their daily research efforts?

There are many answers to this question. First, the practical one: if scientific results are not communicated to the broader public – more specifically to the policymakers, business executives, lab directors, and other researchers equipped to act on them – then they are not relevant. Science may inform our current lifestyle and enhance our standard of living, but only if it is shared.

Beyond its practical importance, another objective of scientific writing is to offer inspiration. The past development of powerful technologies and life-saving medications has driven generations of scientists to don their lab coats and tackle new scientific problems. Complex scientific problems such as these are insurmountable without adequate communication that stimulates intellectual conversation on an international scale. And while this is a lofty task, those who write for DUJS do their part and contribute to the effort. We assume the task of bringing science to the broader community – sharing the findings of researchers from numerous academic institutions (Dartmouth chief among them) with the rest of the world.

Thirty-three writers completed print articles for this volume of the Dartmouth Undergraduate Journal of Science. This is, by far, a record for the Journal – a body of writers so large that two issues were required to showcase their work. The theme of this term’s journal was “Intersection,” and it is evident in its articles.
Our writers chose to explore the connections between biology and government, chemistry and the environment, physics and astronomy. The topics these writers addressed are as diverse as they are important. Several writers explored the allergic response, shedding light on the immense complexity of the human immune system. Another writer examined desalination – the process of removing salt from seawater to produce potable water for the developing world. Others looked at dieting trends (specifically intermittent fasting) – a subject of great interest to personal well-being and to public health. One writer even published her own original research, hoping that her work will inspire other scientists to investigate the topics they find compelling.

We encourage you to read this journal and hope that its articles inspire you to read and write about science, or even to conduct research of your own. We thank you for supporting our organization and for honoring our commitment to science. And we hope that you enjoy reading these articles as much as we have enjoyed editing them.

Warmest Regards,
Sam Neff ’21
Nishi Jain ’21

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs@dartmouth.edu

Copyright © 2017 The Trustees of Dartmouth College
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Sam Neff
Editors-in-Chief: Anders Limstrom, Nishi Jain
Chief Copy Editors: Anna Brinks, Liam Locke

STAFF WRITERS
Aditi Gupta, Alex Gavitt, Alexandra Limb, Allan Rubio, Amber Bhutta, Anahita Kodali, Aniketh Yalamanchili, Anna Brinks, Annika Morgan, Arjun Miklos, Brandon Feng, Catherine Zhao, Chengzi Guo, Daniel Cho, Dev Kapadia, Dina Rabadi, Eric Youth, Georgia Dawahare, Hubert Galan, Jennifer Chen, Jessica Campanile, John Ejiogu, Julia Robitaille, Klara Barbarossa, Kristal Wong, Liam Locke, Love Tsai, Maanasi Shyno, Madeleine Brown, Nishi Jain, Sam Neff, Zhanel Nugumanova
ADVISORY BOARD Alex Barnett – Mathematics Marcelo Gleiser – Physics/Astronomy David Glueck – Chemistry Carey Heckman – Philosophy David Kotz – Computer Science Richard Kremer – History William Lotko – Engineering Jane Quigley – Kresge Physical Sciences Library Roger Sloboda – Biological Sciences Leslie Sonder – Earth Sciences
SPECIAL THANKS Dean of Faculty Associate Dean of Sciences Thayer School of Engineering Office of the Provost Office of the President Undergraduate Admissions R.C. Brayshaw & Company
A Conversation with Professor Kevin Peterson The Dartmouth Undergraduate Journal of Science recently sat down with Dr. Kevin Peterson, a professor of biological sciences at Dartmouth College. He opened up about his decision to pursue his particular career path, his draw toward science, and advice for future students in the field of STEM at large.
What was your path to your current position? I was born in Butte, Montana in 1966 and moved to Helena, MT, where I attended the public schools. I then attended Carroll College (in Helena, MT), earning a B.A. in Biology in 1989 and graduating maxima cum laude. After taking two years off and working for the Montana Department of Highways, I attended UCLA, earning a Ph.D. in Geology in 1996, and then did my postdoctoral studies at the California Institute of Technology (Pasadena, CA). I came to Dartmouth as an Assistant Professor in 2000 and have been here ever since, becoming an Associate Professor in 2006 and a Full Professor in 2012.
Why did you choose to go into biology? I’m not sure I really even had a choice in this matter. Ever since I was four years old – probably after finding my very first fossil (which I still have! See the photo below) – I wanted to be a paleontologist. There was a significant detour to premedicine for a few years, but I eventually found my way back to my first love.
How is biology changing, and how will it change in the future? That’s the beautiful and inspiring thing about biology – it’s always changing! Everything that I work on today didn’t exist in 2000, and there was no way I could have ever imagined working on the problems that engage me today. Indeed, who would have ever thought that we could actually sequence a Neandertal genome and then discover that all Eurasian peoples carry a bit of Neandertal DNA in their genomes? Or that CRISPR-Cas technology would revolutionize the way we approach the possibility of DNA editing, especially to potentially treat disease? So, the short answer to a deep question is that there is literally no way to predict where the field might be in 10 or 20 years. But it will be something wonderfully engaging!
What do you see as the most significant areas of research in modern biology? Whatever fascinates the individual scientist is significant as there is no way to predict where a simple discovery made by an individual doing what they love might lead. Just let us follow our noses – and our hearts – and humanity will reap the rewards.
What is important about the intersection between biology and the humanities? Science is an art form predicated on simple rules of honesty, repeatability, and testability, but is ultimately done by humans with all their strengths and foibles, their passions and prejudices, their failures as well as their joys of discovery. There is no science without humans, and hence there can be no science without the humanities.
What do you like to do when you're not doing biology? I love to garden and hike in the summer, spend time with my family and our dogs, and listen and play music (I’m a fairly untalented bass player, but I do love it!).
What advice do you have for budding scientists?
(And when I found this as a four-year-old, the fossil filled my entire hand!)
Science is a wonderful and rewarding way to spend a life. Enjoy! But remember, many a time you will be wrong-headed, misguided, misdirected, and sometimes just plain deceived. But, as Fox Mulder always said, “The truth is out there.” And I like to think that through dogged perseverance coupled with copious amounts of curiosity and a passion for a problem that just won’t sate, that journey to truth – whatever or wherever it might be – is well worth it. Don’t do it for the prizes, the awards, the accolades. Do it because there is nothing else you’d rather do, or in my case, can do.
Table of Contents

The Illusion of (the White) Race | Professor Kevin J. Peterson | p. 4
The Merits of Technology for the Neurodiverse | Aditi Gupta | p. 12
Resolving the Hubble Tension | Alex Gavitt | p. 17
Gut Microbiome Brain Communication: The Unexpected Intersects of Body and Mind | Alexandra Limb | p. 22
The New Alternatives to Meat Production | Allan Rubio | p. 26
The Plastic Problem | Amber Bhutta | p. 31
Medical Tourism's Impact on Rural Indian Citizens | Anahita Kodali | p. 36
Rejected Alzheimer's Drugs Might Actually Work! | Aniketh Yalamanchili | p. 41
A New Frontier in Cancer Treatment: Immunotherapy and Adoptive T Cell Therapy | Anna Brinks | p. 45
Deuterium Oxide on Maintaining Viability in Coliphage Bacteriophages | Annika Morgan | p. 50
A Current Understanding of Visual Attention | Arjun Miklos | p. 56
An Overview of Garbage Collection | Brandon Feng | p. 61
The Role of Machine Learning in Allergy Medicine | Catherine Zhao | p. 66
Revisiting Smallpox Variolation, Vaccination, and Equination | Chengzi Guo | p. 70
Speeding up Neuro-Regeneration in the Human Body | Daniel Cho | p. 77
BBB: Better Biofuel Business | Dev Kapadia | p. 82
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
ILLUSION OF RACE
The Illusion of (the White) Race BY KEVIN J. PETERSON
Alejandro* sits near the middle of the third row in my introductory biology class at Dartmouth College. Born in San Juan to parents of Puerto Rican descent, he is thinking about a major in biology and eventually attending medical school. Although his skin, like that of all non-albinos, is pigmented, he himself is not a pigment. Alejandro is cognizant of his Hispanic history, but he does not self-identify as Hispanic. He will come to discover that he (like all of us) has deep African history, but he is not African; he has Native American history, but he is not Native American; he has African slave history, but he is not African-American. Indeed, he will soon see that his genome is rife with the genetic signatures of sexual and ethnic inequality, as his Native American and likely African-American roots are maternal in nature, in contrast to his paternally-derived Hispanic heritage. And he will soon discover that, like all people of Eurasian descent, he even has a little bit of Neandertal in him as well.
Sitting next to Alejandro is his friend Lyla. Born in Iowa to Native American parents, she too is thinking about majoring in both biology and Native American studies. She belongs to the Navajo and Ute nations, but has grandfathers of European descent. Although her skin tone appears to be darker than Alejandro’s, she is not “brown.” She is definitely not “red”—she is not a crayon, or a mascot; she is certainly not a logo. In front of Lyla sits Jenny. Born in Salem, Massachusetts, to parents of Chinese descent, she is involved in health-care outreach and plans on majoring in biology. To my eye, she and Lyla have very similar skin tones. However, we will come to find that they acquired their similar levels of skin pigmentation through different sets of genetic mutations acting within their pigmentation pathways.
Cover Source: Wikimedia Commons
"However, we will come to find that they acquired their similar levels of skin pigmentation through different sets of genetic mutations acting within their pigmentation pathways."
Sarah sits to the right of Jenny. An Ashkenazic woman from outside of Boston, Massachusetts, she is considering majoring in
* SOME OF THE NAMES OF THE PEOPLE DISCUSSED IN THIS ARTICLE HAVE BEEN CHANGED. THOSE NOT CHANGED ARE AT THE REQUEST OF THE INDIVIDUAL.
“But what they might not know is that their status as ‘white’ people is a story that needs to be revealed for what it is—a myth.”
Figure 1. Animal models and the genetic exploration of human pigmentation patterns. A. A wild-type zebrafish. In those with the golden mutation the dark stripes running horizontally down the fish’s body are not as pronounced due to a mutation in the slc24a5 gene. B. A white tiger. Here, the mutant tiger still has the black stripes, but the orange coloration is absent due to a mutation in a different gene—Slc45a2—that is involved in pheomelanin production. C. A chestnut horse with reddish coat due to a mutation in the Mc1r gene that, in contrast to the white tiger, results in the absence of eumelanin rather than pheomelanin.
biology and government. She has deep red hair, very fair skin, and green eyes. To the left of Jenny is Aubrey. Born in Illinois to parents of northern European descent, Aubrey, like Sarah, is an aspiring biology major and a varsity athlete that is learning to manage the perpetual juggling act that is the life of the student-athlete. With her blonde hair and blue eyes, she would be identified by most people as “white”, and although I do not know this for sure, I cannot imagine that both Aubrey and Sarah are not aware of this. But what they might not know is that their status as “white” people is a story that needs to be revealed for what it is—a myth: a myth that fails to recognize that their suite of traits that we associate with “whiteness” is actually an amalgamation of traits (and hence mutated genes) from three very different peoples that came together just a few thousand years ago to generate a phenotype unique in human history and only possible at northern latitudes. Further, most of these traits are not directly predictable from any one person’s genotype and are influenced in unclear ways by the presence of Neandertal DNA. It is time to end the illusion of (the white) race and instead revel in our shared history— our genealogy—that is written in our genes, in our languages, in our traits, in our shared humanity. This is that story. We begin with a discovery made in, of all things, the zebrafish. One of the commonalities between fish and humans—indeed most vertebrate animals—is that they usually color themselves with just two pigments: the brown/black eumelanin and the red/yellow pheomelanin, and the ratio between the two, in part, affects the final coloration of the
individual. Both pigments are synthesized by structures called melanosomes that are housed in specialized skin cells called melanocytes. Once synthesized, the pigments are then transferred to a second skin cell, the keratinocyte. There, the melanin protects us from ultra-violet (UV) radiation that can cause DNA damage and result in skin cancer. But like most things in life, this protection from UV radiation—at least in humans—comes at a cost: an increase in the amount of melanin decreases the ability to produce vitamin D whose synthesis requires UV light. A balancing act now ensues: less UV radiation means less vitamin D production, which necessitates less melanin production. More UV radiation, on the other hand, means more vitamin D production, but also more potential for cancer, which necessitates more melanin production. As people migrated north into latitudes with less UV radiation, it was evolutionarily advantageous to have a lightened skin tone to ensure sufficient vitamin D production, particularly in order to avoid diseases like rickets. On the other hand, as people moved from southern Africa into more equatorial regions of the planet, it was advantageous to have a darkened skin tone to protect the skin from UV damage and, ultimately, skin cancer. Like different kinds of apples or dogs, mutations in zebrafish arise that give them certain traits that people favor and can propagate through domestic selection. One of these fish mutations is called golden. Fish with the golden mutation do not make as much melanin and are lightly colored relative to wild-type (i.e., normal) fish (Fig. 1). In 2005, Professor Keith Cheng and his colleagues at
Source: Zebrafish from https://en.wikipedia.org; tiger figure is from https://pixabay.com. Horse figure courtesy of the Dartmouth equestrian team.
Figure 2. The genotypes of the people discussed in this article with their respective pigmentation phenotypes mapped to their actual or historical homeland. For each gene and for each person the respective genotype is given using the standard abbreviations for DNA nucleotides (A = adenine; G = guanine; C = cytosine; T = thymine). Mutations are bolded. The color of each box is the “average” of each person’s respective skin color phenotype (see Fig. 3) as determined by Photoshop. Source: World map from https://pixabay.com/vectors/world-map-globe-geographyearth-47959/.
Penn State discovered that the golden strain of fish owes its phenotype to a mutation in a gene called slc24a5 that codes for a protein that promotes melanin production in the melanosome. Thus, lighter pigmented fish arise from a mutation affecting melanin production; their paleness is due to a slightly broken pigmentation pathway. These researchers then asked if a mutation to the same gene in people—called SLC24A5—is responsible, at least in part, for evolutionary changes to human skin coloration. Remarkably, they found that this was indeed the case: people from northern latitudes, including northern Europeans and Native Americans, have a mutated version of this gene, just like golden fish. People from areas farther south, like Africa and Australia, generally do not. Thus, mutations in genes like SLC24A5 are not only important for generating phenotypic differences amongst people, they are also important markers for inferring the deep ancestry of human beings as well.

Dagmawi sat in the middle and towards the back of the lecture hall in the class one year ahead of Alejandro, Lyla, Sarah, Jenny and Aubrey. We acknowledged one another at the beginning of each lecture with a slight nod of our heads and then a smile. He helped me more accurately pronounce “Addis Ababa,” the near-literal nexus of humanity from which people have been coming into and out of Africa for hundreds of thousands of years, and which lies near his home town in Ethiopia. He considered majoring either in computer science or political science and eventually decided to major in both. During one lecture, I summarized the remarkable work of Professor Svante Pääbo and
his colleagues at the Max Planck Institute for Evolutionary Anthropology in Germany, who discovered that Neandertals and descendants of the last great (unforced) exodus from Africa mated with one another nearly 50,000 years ago. I mentioned to the class that I have about two percent Neandertal DNA, average for most Eurasians, and I pointed out that people from Africa like Dagmawi likely have far less Neandertal DNA. This is because, as far as we know, Neandertals never ventured into Africa. However, Dagmawi’s story might be a bit more complicated as his ancestors likely mated with humans who migrated back into Africa from the Middle East carrying with them a small percentage of Neandertal DNA. I did not know this at the time, but Dagmawi was fascinated with the possibility that his genome might house Neandertal DNA. Over winter break he sent a saliva sample off to the commercial genotyping company 23andMe®. Later that winter, we met to go over his data. As expected, we saw that he has some Neandertal DNA— about half as much as me, but likely quite a bit more than his friends on campus who hail from sub-Saharan Africa, like Gabrielle, a third-year Kenyan native majoring in Engineering. I joked that although he is twice the person that I am, he is only half the person he could be.
“Thus, mutations in genes like SLC24A5 are not only important for generating phenotypic differences amongst people, they are also important markers for inferring the deep ancestry of human beings as well.”
The next academic year I met Alejandro and his classmates. He and Lyla invited me to a student-faculty lunch, during which Lyla discussed some of the Native American issues that she is involved with on campus. She contrasted her native status with Alejandro’s and mine. I was a bit surprised as I had assumed that Alejandro, as a Puerto Rican, had Native
"The human body is a remarkable panoply of phenotypes built in real time by the interaction between the environment and our genotype and designed through deep time by natural and sexual selection."
American ancestry as well, in particular on his maternal side. But when Alejandro said nothing, I suggested that they both consider submitting saliva samples to a genotyping company like 23andMe®, just like Dagmawi had done, to learn more about their respective deep ancestries. They decided to do this and off went their samples. Aubrey and Sarah submitted samples on their own. I asked several other students that I knew including Gyan, an Indian-American from El Paso, Lali, a Mexican woman from Los Angeles, Chelsea, an African-American woman from Charlotte, as well as both Jenny and Gabrielle to consider submitting samples as well. My daughter and I submitted samples too. The human body is a remarkable panoply of phenotypes built in real time by the interaction between the environment and our genotype and designed through deep time by natural and sexual selection. Contrary to what we are taught in high school, most traits—whether height, weight, or skin pigmentation—arise not just from a “normal” or “mutated” version of a single gene, like the yellow versus green peas of Gregor Mendel, but instead are the result of interactions between many genes and the environment. Perusing the raw data provided to us by 23andMe®, we noted that Sarah, Aubrey, Lyla, Alejandro, Gyan and I have two copies of the mutated versions of SLC24A5, one from each of our parents; Gabrielle, Lali, Chelsea, Jenny and Dagmawi have two copies of the unmutated or wild-type gene (Fig. 2). To my eye, Jenny is about as fair skinned as Lyla and much lighter than both Dagmawi and Gyan, so it was surprising to us that she shares the wild-type versions with Dagmawi, whereas Gyan shares the mutant version with the more lightly pigmented people like Lali (Fig. 3). Clearly something more is going on here than just one mutated gene causing a lightening of skin pigmentation in people. We are not legumes. 
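As an aside, the genotype check described above is simple enough to sketch in a few lines of Python. A 23andMe® raw-data export is a tab-separated text file with one marker per row (rsid, chromosome, position, genotype); rs1426654 is the widely cited SLC24A5 variant, where the derived A allele is associated with lighter pigmentation. This is a minimal illustrative sketch, not the authors' actual workflow, and the sample rows below are invented for demonstration.

```python
# Minimal sketch: count derived-allele copies at a pigmentation-associated SNP
# in a 23andMe-style raw-data export (tab-separated: rsid, chrom, pos, genotype).
# rs1426654 is the SLC24A5 variant discussed in the text; the derived "A" allele
# is associated with lighter pigmentation. All sample data here are hypothetical.

SNP_OF_INTEREST = "rs1426654"
DERIVED_ALLELE = "A"  # the ancestral allele at this site is G

def classify_genotype(genotype: str, derived: str) -> str:
    """Label a two-letter genotype by how many derived-allele copies it carries."""
    copies = genotype.upper().count(derived.upper())
    return {
        0: "homozygous ancestral (0 derived copies)",
        1: "heterozygous (1 derived copy)",
        2: "homozygous derived (2 derived copies)",
    }[copies]

def find_snp(lines, rsid):
    """Scan raw-data lines for the requested rsid; return its genotype or None."""
    for line in lines:
        if line.startswith("#") or not line.strip():
            continue  # skip header comments and blank lines
        fields = line.rstrip("\n").split("\t")
        if fields[0] == rsid:
            return fields[3]
    return None

# Hypothetical excerpt of a raw-data file (real exports run to ~600,000 rows).
raw = [
    "# rsid\tchromosome\tposition\tgenotype",
    "rs4477212\t1\t82154\tAA",
    "rs1426654\t15\t48426484\tAG",
]

gt = find_snp(raw, SNP_OF_INTEREST)
if gt is not None:
    print(SNP_OF_INTEREST, gt, "->", classify_genotype(gt, DERIVED_ALLELE))
```

The same scan, repeated for markers in SLC45A2, MC1R, or MFSD12, is all it takes to reproduce the kind of comparison across individuals shown in Figure 2.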
Unlike the golden mutation, where the mutation affects both eumelanin and pheomelanin deposition, other mutations to
different genes can directly affect one without affecting the other. For example, Bengalese white tigers are characterized by the presence of brown/black eumelanin pigment in the hair of their stripes and in their eyes, and the absence of the red/yellow pheomelanin pigment in their fur (Fig. 1B), an absence caused by a mutation in the Slc45a2 gene. On the other hand, animals like chestnut horses (Fig. 1C) possess mutations in the Mc1r gene that prevent the deposition of eumelanin but do not affect the deposition of pheomelanin, resulting in a beautiful red coat. When we examined our 23andMe® data, we found that only Sarah, Aubrey and I have two copies of the mutated SLC45A2 gene, and only Sarah has one mutated copy of the MC1R gene, whereas all of the other students have two copies of the wild-type unmutated versions for both genes, including both Jenny and Dagmawi, who have very different pigmentation levels (Fig. 3). In order to tease apart Jenny’s and Dagmawi’s respective phenotypes, for example, we need to find other variants that affect skin pigmentation, in particular variants that arose in Africa. Professor Sara Tishkoff and her colleagues at the Perelman School of Medicine have recently shown that when people migrated from our ancestral homelands in southern Africa into more equatorial latitudes, mutations arose in genes like MFSD12 that allowed for higher melanin deposition, a deeper skin pigmentation, and ultimately less skin cancer. Many of our cells contain a structure called the lysosome that is responsible for the breakdown of cellular waste products. The product of the MFSD12 gene is found in the lysosomes of melanocytes, and a decrease in its activity results in higher melanin production because less melanin is degraded.
People from northern latitudes usually have the unmutated version of this gene, resulting in lighter skin tone due to more melanin degradation, correlating with less intense UV radiation but enhanced vitamin D production. People from equatorial Africa like Gabrielle and Chelsea, though, usually
Figure 3: The pigmentation phenotypes for each individual in this study. The inner part of the upper arm for each individual was photographed under identical illumination using the same camera equipment. Source: Photography by Bob Roberts (Dartmouth College).
have two copies of the mutated gene, and people like Dagmawi from Ethiopia and Gyan often have at least one mutated copy (Fig. 2). All four of these people are mutants as well, uniquely suited for the intense sunlight that characterizes equatorial Africa and India (Fig. 3). Race is a useless biological concept, in part because the genotypes underlying characteristics like pigmentation are not directly predictive of any one phenotype. And indeed, according to Professor Tishkoff and her colleagues, the genes discussed above account for less than 30% of the overall pigmentation phenotype—much of it remains unknown even today, primarily because numerous variants affect skin pigmentation, but each in a very small and largely unpredictive way. And some of these variants, at least in Eurasian populations, are actually derived from a wholly different species: Homo neanderthalensis. One of the great ironies of life is that the very traits most used by people to erect fallacious racial hierarchies—namely skin and hair features—are the very same characteristics most influenced by the presence of Neandertal DNA acquired from a mating between humans and Neandertals that happened sometime between 54,000 and 49,000 years ago. A gene particularly enriched for Neandertal DNA is POU2F3, which is involved in the proliferation
of the skin’s keratinocytes. Indeed, when I examined a single variant in the POU2F3 gene, I noted that I am heterozygous, meaning that I have two different copies of the gene: one, the human version, that I inherited from one of my parents, and the other, Neandertal, that I inherited from the other. Sarah and Jenny are the same; everyone else, though, is homozygous at this position, in all cases but one possessing two copies of the human variant. My daughter, Jane, is the exception: she inherited two copies of the Neandertal variant (Fig. 2), one from me and one from her mother. Out of the nearly 7 billion building blocks of DNA found in the cells of her body, in this one position she is not human—she is pure Neandertal. Why selection prefers Neandertal variants associated with hair and skin genes is not clear. Maybe these bits and pieces of archaic DNA help regulate the gene so that it is tweaked just a bit better for the northern Eurasian environment, an environment the Neandertals and their ancestors lived in and mastered for hundreds of thousands of years. What is clear, however, is that selection—whatever the reason—preferred these variants as they must have conferred a survival advantage to these ancestral Eurasians tens of thousands of years ago. But—and this is very important—the last common ancestral population of all humans
"Race is a useless biological concept, in part because the genotypes underlying characteristics like pigmentation are not directly predictive of any one phenotype."
Figure 4. A necessarily incomplete history of the people discussed in this article highlighting Jane’s maternal (red) versus paternal (blue) history (shared history is shown in magenta). Time runs from the top of the figure to the bottom, and is relative such that events above occurred before events below, but aside from where indicated with absolute dates (Ka = 1,000 years ago), the amount of time between any two events is largely unknown. The populational history of Africa is poorly known relative to Eurasia, so both Gabrielle’s and Dagmawi’s respective stories are greatly simplified, as is Sarah’s. See the text for further details, and Reich (2018) for a full treatment of these data. G. P. = ghost population: a group of people inferred on the basis of statistical analyses of the genomes of modern populations, but one that does not exist today in unmixed form. Modern populations are shown in bold.
living today was not in Eurasia, but in Africa, and the only people on the planet whose DNA is largely, if not entirely, Neandertal free are sub-Saharan Africans. Yet some people, with all their mutations and their Neandertal DNA, try to flip the story on its head, desperately trying to argue that they sit on the top rung of some sort of racial ladder, and from this wholly-imagined vantage point they then try to hierarchically arrange people based on the degree of lightness versus darkness of their skin pigmentation, a continuous phenotype (Fig. 3) that simply reflects a tradeoff between the amount of UV radiation and vitamin D production, and influenced in unclear ways by the presence of archaic, non-human DNA.
"The construct of race has long implied that the genes underlying the racial phenotype, and hence the 'race' itself, exist in isolation and in unmixed form relative to all other 'races.'"
Figure 5. Screen shots from the Genographic project showing Alejandro’s respective paternal (left) versus maternal (right) histories. These histories are plotted as heat maps where the colors represent the percentage frequency of the genotype in populations from different geographic regions with red indicating high concentrations and yellow lower concentrations. The arrows indicate the inferred migration routes. Notice that Alejandro’s paternal history can be traced to western Europe as his Y chromosome derives from Iberia (Spain and Portugal), whereas his maternal history can be traced from Asia into the Americas as his mitochondrial DNA is Native American.
The construct of race has long implied that the genes underlying the racial phenotype, and hence the “race” itself, exist in isolation and in unmixed form relative to all other “races.” It goes without saying then that many people are not of a single race. Take my daughter for instance: I am “white” and her mother is “Chinese.” She, then, is neither white nor Chinese, but instead is “bi-racial.” So even though she can use the hyphen when describing her ethnicity (“Chinese-Irish”, or, more simply, “American”), she cannot use the hyphen when describing her race. She would have to check more than one box. But as Professor David Reich and his colleagues have shown and compellingly described in his book Who We Are and How We Got Here, the idea that her parents — her mother and me — are unmixed “races” is as flawed as describing her as belonging to any one single race. Consider her respective maternal (Fig. 4, red) versus paternal histories (Fig. 4, blue). As a native Cantonese speaker, her mother traces her ancestry back to south-eastern China. But modern Chinese people are a fusion between what Reich calls “ghost populations”—the Yellow River and the Yangtze River peoples — that are ultimately derived from East Asian hunter-gatherers. These two ghost populations would, in their
own time, have constituted distinct races by those with the propensity to see people in this manner, but are now largely lost to time as they, at least in unmixed form, are both extinct. As a northern European, my deep ancestry also lies within a populational fusion, this time of three different peoples: the hunter-gatherers of western Europe, farmers from the Fertile Crescent, and another ghost population called the Ancient North Eurasians (Fig. 4, blue). Starting around 9,000 years ago, as farmers began moving into Europe from western and central Turkey, they encountered the hunter-gatherers that had been living in northern Europe for tens of thousands of years. These hunter-gatherers would make a small contribution to my genome, but from a certain perspective, a large contribution to my phenotype: it is from these relatively dark-haired and darkly pigmented people that I (and Aubrey) inherited our blue eyes. A larger fraction of my genome is from the farmers, and it is from these people that Aubrey and I likely inherited both the SLC24A5 and SLC45A2 variants for our lightly pigmented skin. But the major contribution to my genome is from the Ancient North Eurasians that invaded Europe from the Siberian steppe sometime around 5,000 years ago, bringing with them their blonde hair, their horses, their Indo-European language (that would one day evolve into modern English), and their likely violent, male-centric culture. The white “race” is thus a mixture of three different peoples, each possessing distinct genotypes and distinct phenotypes, and all three, if employing the criteria for modern races, would each constitute their own, now extinct, race. There is no such thing as a “pure” white person—there never was. Alejandro’s story is similar, albeit at a slightly deeper time scale than Jane’s.
A fascinating fact about the human cell is that we can trace our paternal history (if one is male) separately from our maternal history because there are two different markers, one passed from fathers
Source: https://genographic.nationalgeographic.com/
ILLUSION OF RACE
Figure 6. Modified screen shots showing the chromosomal paintings for Chelsea (left) and Lali (right), along with their respective mitochondrial haplotypes. Note the European contributions to their nuclear genomes, but the indigenous west-African and American mitochondrial lineages for Chelsea and Lali, respectively. Source: 23andMe®
to sons, and one for mothers and children, passed down through her daughters to her grandchildren. On the paternal side is the male-determining Y chromosome, and Alejandro’s Y chromosome can be traced back to modern-day Spain and Portugal (Iberia), but likely has been present in the Caribbean since the 16th century or so (Fig. 5, left). However, the DNA in Alejandro’s mitochondria, the small structures in each cell that generate most of his energy, tells a different story. Here, his mitochondria, which he inherited from his mother, are not Iberian but Native American, passed down uninterrupted from mother to child, and from daughter to grandchild (Fig. 5, right). Notice the pattern: his male history is Spanish, a language that Alejandro (ironically) considers to be his “mother” tongue, whereas his true maternal tongue and culture have long since vanished. This is an all-too-common pattern in which the genomics of the conquerors are
seen not only in the spoken language, but also in the Y chromosome, while the genomics of the vanquished, revealed in the mitochondrial DNA, are passed along from mothers to their children, children who at some point no longer speak her native language. One cannot look to Alejandro’s father, though, and label him “Iberian,” nor can one label his mother “Native American.” Alejandro’s father also carries a Native American mitochondrion that he inherited from his mother, and Alejandro’s mother’s genome, including both of her X chromosomes, contains significant amounts of Iberian DNA, about the same amount as Alejandro’s father. And somewhere along the line—likely via paternal grandmothers just a few generations back on both his father’s and mother’s side—Alejandro inherited DNA ultimately derived from West Africa. This is a story that transcends
"This is an all-too-common pattern in which the genomics of the conquerors are seen not only in the spoken language, but also in the Y chromosome, while the genomics of the vanquished, revealed in mitochondrial DNA, are passed along from mothers to their children, children who at some point no longer speak her native language." Figure 7. Chromosomal paintings of four of the individuals in this article derived from 23andMe®. Although Jenny and I are considered to belong to distinct “races” whereas Jane and Alejandro are not, all four of us represent amalgamations of different peoples, just at different times in our respective pasts. Blue represents DNA derived from European ancestry; red, East Asian; yellow, Native American; and various shades of pink and purple, DNA derived from more recent African ancestry.
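The uniparental logic behind these stories (a Y chromosome that passes only from father to son, and mitochondrial DNA that passes from a mother to all of her children) can be captured in a toy simulation. The marker labels and pedigree below are illustrative stand-ins, not real haplogroup data:

```python
# Toy model of uniparental inheritance: the Y chromosome passes only
# from father to son; mitochondrial DNA (mtDNA) passes from a mother
# to all of her children.

def child(father, mother, sex):
    """Return the uniparental markers a child carries.

    father/mother are dicts of marker labels; sex is 'M' or 'F'.
    """
    markers = {"mtDNA": mother["mtDNA"]}  # everyone inherits mom's mtDNA
    if sex == "M":
        markers["Y"] = father["Y"]        # only sons inherit dad's Y
    return markers

# An "Alejandro-like" pedigree: Iberian paternal line, indigenous
# American maternal lines.
grandfather = {"Y": "Iberian", "mtDNA": "Iberian"}
grandmother = {"Y": None, "mtDNA": "NativeAmerican"}
father = child(grandfather, grandmother, "M")
mother = {"Y": None, "mtDNA": "NativeAmerican"}
son = child(father, mother, "M")
print(son)  # {'mtDNA': 'NativeAmerican', 'Y': 'Iberian'}
```

However much the rest of the (recombining) genome mixes, the paternal marker in this toy pedigree stays Iberian while the maternal marker stays Native American, which is exactly the pattern 23andMe® reveals.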
"And like everyone else, none of us really belong to any one "race." We are all composites of at least two, if not three or even four, different groups of people."
three people to encompass an entire island, nay an entire country and continent, and is representative of an entire species across the entire world. Indeed, like Alejandro, both Lali (a Mexican woman) and Chelsea (an African-American woman) harbor not insignificant amounts of European DNA in their nuclear genomes – likely the result of genetic contributions from Europeans on their deep paternal sides – but nonetheless carry either a Native American or a west-African mitochondrion going back tens, if not hundreds, of generations to women indigenous to the Americas (Lali) and to Africa (Chelsea), respectively. Sadly, as with Alejandro, the languages and cultures of these people were largely, if not entirely, exterminated by people possessing languages, cultures, and DNA from a different part of the world, languages that Chelsea and Lali currently speak, whose cultures they have largely assimilated, and whose DNA imprints they still bear (Fig. 6). And so Alejandro’s, or Lali’s, or Chelsea’s stories are no different from Jane’s, or Jenny’s, or even my own—they are all just on different time scales (Fig. 7). And like everyone else, none of us really belong to any one “race.” We are all composites of at least two, if not three or even four, different groups of people. Indeed, the number is likely even higher, and we would be able to see this if we could trace our detailed ancestries all the way back into Africa, but too little is currently known about deep African genetic history (though this will change with further study and discovery). And, of course— lest we forget—all of us Eurasians are an amalgamation of at least two different species as well. It’s the start of a new academic year, and a room full of bright and eager faces looks to me with anticipation as we begin the first lecture. 
They come from all over the world in bodies of different shapes and sizes, with different hair types and colors, with different skin tones, different talents and interests, different hopes and dreams, but I know nothing specific about any of them, yet. Two things I do know for certain. First, they all have a story, a past, a collection of experiences that goes back thousands of generations, experiences that have made them, at least in part, who they are today. And recognizing race for the illusion that it is does not in any way repudiate these individual histories. Instead, we can treasure them as they teach us something fundamental about people, namely that we move, we migrate, we mate, we vanquish, we innovate, we assimilate, we adapt, we evolve. This is what we have been doing for literally hundreds
of thousands of years, and will continue to do for hundreds of thousands of years more. And no matter who they are, what they look like, or where they come from, there is a second thing I know as well, a truth about these people long held as self-evident: that all of them were born uniquely equal. I cannot wait to get to know each and every last one of them.
ACKNOWLEDGMENTS I would like to thank the students who participated in this project, and, in particular, Gyan Moorthy, who initially suggested the idea for the paper and gave critical feedback on the text itself. I would also like to thank my friends and family – in particular Anne Peterson, Roger Sloboda, Terryl Stacy, Ryan Calsbeek, and Pamela Andruszkiewicz – who read and critiqued previous drafts of the paper; Anne Peterson, Bastian Fromm, Mary Lee Sargent, Sam Westelman, and Alejandro and his family for their support and encouragement; and Bob Roberts (Dartmouth College) for his assistance in photography and figure construction. A final thank you goes to the proprietors of The Tree House (Bel Air, CA) – Hedda and Cedric – who provided the warmth and atmosphere for the initial blossoming of an idea. Further Reading Crawford, N. G., Kelly, D. E., Hansen, M. E. B., Beltrame, M. H., Fan, S., Bowman, S. L., … and Tishkoff, S. (2017). Loci associated with skin pigmentation identified in African populations. Science, 358(6365), eaan8433. http://doi.org/10.1126/science.aan8433 Green, R. E., Krause, J., Briggs, A. W., Maricic, T., Stenzel, U., Kircher, M., … and Pääbo, S. (2010). A draft sequence of the Neandertal genome. Science, 328(5979), 710–722. Lamason, R. L., Mohideen, M., Mest, J. R., Wong, A. C., Norton, H. L., Aros, M. C., … and Cheng, K. C. (2005). SLC24A5, a putative cation exchanger, affects pigmentation in zebrafish and humans. Science, 310(5755), 1782–1786. Pääbo, S. (2015). The diverse origins of the human gene pool. Nature Reviews Genetics, 16(6), 313–314. Pavan, W. J., and Sturm, R. A. (2019). The genetics of human skin and hair pigmentation. Annual Review of Genomics and Human Genetics, 20, 41–72. Reich, D. (2018). Who We Are and How We Got Here. Pantheon Books, New York. Xu, X., Dong, G.-X., Hu, X.-S., Miao, L., Zhang, X.-L., Zhang, D.-L., … and Luo, S.-J. (2013). The genetic basis of white tigers. Current Biology, 23(11), 1031–1035.
NEURODIVERSE
The Merits of Technology for the Neurodiverse
BY ADITI GUPTA
DEFINING AUTISM SPECTRUM DISORDER AND AFFECT As many scholars continue to unravel the biological mechanisms underlying autism spectrum disorder (ASD), a neurodevelopmental disorder1, a dynamic understanding of neurodiverse individuals is beginning to emerge. While the neurological causes of the disorder are still unknown, there are core features of ASD that are often present in children diagnosed with the condition1. As of 2014, it is estimated that as many as 1 in 59 children in the United States is diagnosed with ASD1. Given the prevalence of ASD, it is imperative to have a basic familiarity with the core features of ASD, which clinicians use to diagnose children. These features include communication difficulties, sensory deficits, restricted and repetitive behaviors, and poor social abilities1, each of which can persist for life2. In the relatively new field of affect regulation, scholars are paying particular attention to emotion dysregulation and maladaptive behavior among adolescents with ASD. Affect regulation is defined as the process by which an individual experiences and responds to affect, the broad term for stress, mood, and emotion. While emotion dysregulation is not considered a core feature of ASD, it has been linked to all core features of ASD, and it is most strongly associated with repetitive behaviors1. In a study at Stanford University, Dr. Samson and her colleagues compared clinical features between 56 children (ages
6 to 16) with ASD and 38 neurotypical participants. Indices included measurements of emotion dysregulation, cognitive functioning, restricted repetitive behaviors, and social responsiveness. The most important finding, consistent with similar studies, revealed that the Emotion Dysregulation Index (EDI) “was significantly correlated with the total scores of the core features as measured by the Social Responsiveness Scale, Repetitive Behavior Scale-Revised, and Short Sensory Profile”1. Rounding out their conclusions, Samson and her colleagues encourage the “development of effective treatment for emotion dysregulation that could also improve the core features of autism, especially restricted and repetitive behaviors”1. In a separate study, Samson et al. presented 21 high-functioning adolescents with ASD and 22 neurotypical adolescents with everyday scenarios designed to “elicit negative emotional reactions”3. They found that participants with ASD were less likely to use cognitive reappraisal strategies (strategies to reframe one’s outlook in a stressful situation) and more likely to use suppression than neurotypical participants. Both studies outline the increasing interest in new interventions that encourage positive emotion regulation strategies (such as cognitive reappraisal) and decrease maladaptive behavior (such as temper tantrums, self-injury, anxiety, and irritability) for healthier development of children with ASD3.
Figure 1: Diffusion Tensor Imaging depicting the side view of the brain. Source: Wikimedia Commons
“Given the prevalence of ASD, it is imperative to have a basic familiarity with the core features of ASD, which clinicians use to diagnose children."
TECHNOLOGY AND ITS MERITS TO SUPPORT HEALTH
"Currently, most technological interventions focus on physical health, such as measuring step count or heart rate. However, an emerging field of health-related technology is focusing on emotional health."
Figure 2: Apple Watch (top) and Fitbit (bottom). Both the Apple Watch and Fitbit are popular wearable devices that promote health. Scientists are increasingly interested in adapting and innovating health-focused wearables for neurodiverse populations. Source: Flickr
From Life Alert to the Apple Watch, portable technological devices are increasingly intertwined with individual health. Currently, most technological interventions focus on physical health, such as measuring step count or heart rate. However, an emerging field of health-related technology focuses on emotional health. One such field is haptic technology: devices that communicate with the user via touch. More recently, scholars have begun designing technological interventions for neurodiverse populations. The intersection of affect regulation, technological interventions, and neurodiverse populations shows great promise. The following sections discuss various haptic and non-haptic devices that have the potential to decrease social difficulties and/or increase emotion regulation for children with ASD.
POTENTIAL HAPTIC DEVICES TO REGULATE EMOTIONS Hearing the words “Your teacher asks to see you after class”3 would likely be stressful for anyone. In high-stress scenarios, it is difficult for even neurotypical individuals to use positive emotion regulation strategies, such as cognitive reappraisal, defined earlier as a technique to reframe one’s outlook in a high-stress scenario. Therefore, scholars are working on new interventions to promote emotion regulation strategies and decrease anxiety that can apply to a broad range of people, including those with neurological conditions. For example, Dr. Pardis Miri at Stanford University developed a haptic breathing pacer designed to deliver personalized vibrations so the user can synchronize his/her breathing with the device in an effort to reduce negative arousal4. In an interview with the Stanford Daily, Dr. Miri explains that the prototype is promising for individuals who are low-scoring in measurements of openness and cognitive reappraisal, two traits commonly associated with individuals with ASD4. Though this study was limited to neurotypical participants, Miri
plans to evaluate the efficacy of her breathing pacer among children with ASD. In designing the haptic breathing pacer, Miri and her team found that the pacer’s implementation should begin “ideally before a high-arousal negative emotion is even generated”5. Of course, this places the agency on the user, who would have to predict his/her emotions before deciding whether or not to cue the affect regulation technology5. Finally, Miri’s team challenges scientists to design technology that can learn from environmental cues and user interactions to “deliver timely interventions after sufficient learning”5. Such bold directions for affect regulation technology like machine learning and user agency support the advent of technology to assist the neurodiverse population. Researchers at the MIT Media Laboratory built four haptic devices designed specifically for therapeutic use by individuals with ASD. Vaucelle and the MIT Media Laboratory centered their device designs around sensory integration, or touch-based interventions that provide comfort to patients with developmental disorders6. Based on the established knowledge that tight pressure can reduce anxiety among individuals with ASD, Vaucelle et al. created various touch-based prototypes to simulate pressure in controlled environments. The first device is Touch Me, a large fabric vibrotactile grid connected to a keyboard whose various keys correspond to different kinds of touch that the vibrotactile motor simulates. Touch Me is designed to allow caregivers and the patient to map locations of touch that are “soothing” to the patient without the risk of direct touch6. By acclimatizing children with ASD to various touches, Touch Me has the potential to limit maladaptive behavioral outbursts like temper tantrums and aggression that children with ASD are prone to have in social situations. The second device is Squeeze Me, a wearable pressure vest simulating therapeutic holding, or a tight hug. 
The digitally controlled Squeeze Me uses air compression to inflate the vest inconspicuously from the inside such that it can be worn on an everyday basis to prevent panic attacks in high-stress scenarios6. The last two devices developed in this MIT study are Hurt Me and Cool Me Down; both wearable devices are meant to “ground the senses” by providing temporary painful stimuli without causing injury to the wearer6. Hurt Me, though a problematic name, is an inflatable bracelet with plastic studs to simulate pain6. Cool Me Down wraps around the arm to deliver a brief temperature shock without injuring the epidermis in order to self-administer simulated pain6. Because self-injury is a particularly dangerous form of maladaptive behavior seen in children with ASD, these interventions promote the health and safety of the user whilst mediating this population’s tendency for restricted and repetitive behavior1. While Vaucelle et al. consulted with five focus groups of clinicians to discuss the merits and drawbacks of each device’s efficacy for children in need of mental help, it is crucial to note that these prototypes were not evaluated clinically6. Although there are many challenges in evaluating these devices in clinical trials, these MIT prototypes encourage further innovation of haptic and non-haptic devices designed specifically for the neurodiverse population.
NON-HAPTIC DEVICES TO IMPROVE SOCIALIZATION One example of a non-haptic device designed to improve socialization for children with ASD is Superpower Glass. Catalin Voss and his colleagues at Stanford University adapted Google Glass to develop an intervention that detects facial expressions and encourages social education for the wearer7. Unlike the MIT haptic designs or Dr. Miri’s breathing pacer, Superpower Glass was studied in a clinical trial with 71 child participants diagnosed with ASD. Recognizing the social impediments and communication deficits commonly observed in individuals with ASD1, Voss et al. designed Superpower Glass specifically for this neurodiverse population — placing the Superpower Glass intervention at the forefront of the burgeoning technological field serving individuals with autism. Moreover, as a relatively lower-cost, portable intervention compared to therapy, Voss et al. recognize that “Learning aids based on novel ubiquitous technologies using machine learning can...create opportunities for therapy that are accessible outside of the clinician’s office”7.
Superpower Glass uses machine learning to recognize eight facial expressions in real time, “within 100 milliseconds”7. So, when the child wears Superpower Glass at home, the system classifies one of eight emotions expressed by the child’s “social partners”: “happy, sad, angry, scared, surprised, disgust, ‘meh’ [apathy], and neutral”7. When the system detects an expression, it cues the wearer via audio to classify the emotion, with corresponding emoticons for the eight emotions. In addition, Superpower Glass connects to an app, where caregivers can record and delete video footage, control the use of the device, and provide manual inputs. Three activities were programmed into Superpower Glass: “capture the smile,” in which audio prompts the child to elicit an emotion in a family member to promote contextual awareness; “guess the emotion,” in which the caregiver acts out expressions and asks the child to recognize the emotion before the caregiver manually inputs the response in the app; and “free play,” in which the child simply receives emotional cues in real time. The purpose of this study was to compare whether participants using Superpower Glass while engaging in behavioral therapy improved socialization compared to participants who only engaged in behavioral therapy. Therefore, in the subject pool of 71 children diagnosed with
Figure 3: Dr. Miri’s haptic breathing pacer (HBP). To be inconspicuous, the translucent pads attach directly to the skin and deliver personalized vibrations for the user to synchronize his/her breathing with. The device connects via Bluetooth to a smartphone to control vibration delivery. When tested with neurotypical subjects, the HBP shows promise in encouraging openness and cognitive reappraisal4. Source: Stanford Daily
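The published description of the haptic breathing pacer gives only its high-level behavior: personalized vibrations the user synchronizes breathing with. As a purely hypothetical sketch (not Dr. Miri's actual implementation; the function name, rate, and `inhale_fraction` parameter are all invented for illustration), a fixed-rate pacing schedule might be computed like this:

```python
def pacing_schedule(breaths_per_min, duration_s, inhale_fraction=0.5):
    """Return (time_s, cue) pairs alternating inhale/exhale vibration cues.

    In a real device, breaths_per_min and inhale_fraction would be
    personalized per user; both names are illustrative assumptions.
    """
    period = 60.0 / breaths_per_min  # seconds per full breath cycle
    schedule, t = [], 0.0
    while t < duration_s:
        schedule.append((round(t, 2), "inhale"))
        schedule.append((round(t + period * inhale_fraction, 2), "exhale"))
        t += period
    return schedule

# 6 breaths per minute (a common slow-breathing target) for 30 seconds:
print(pacing_schedule(6, 30))
```

A device driver would then fire a vibration motor at each scheduled timestamp rather than printing the schedule.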
"Though there are many challenges in evaluating these devices in clinical trials, these MIT prototypes encourage further innovation of haptic and non-haptic devices designed specifically for the neurodiverse population." Figure 4 & 5: The MIT Media Laboratory is world-famous for designing groundbreaking technologies from Boston, MA. Vaucelle and his colleagues at the MIT Media Laboratory designed a variety of touch-based interventions to acclimate patients with ASD to human touch. Source: Wikimedia Commons
CONCLUSION
Figure 5: Google Glass. Voss et al. adapted Google Glass to improve social outcomes for children with autism spectrum disorder. Voss et al.’s Superpower Glass pairs with a mobile application; together, the intervention encourages users to identify facial expressions and promotes social education. Source: Wikimedia Commons
ASD, 40 children were randomly assigned to the treatment group (Superpower Glass + therapy for six weeks) and 31 children were randomly assigned to the control group (therapy only for six weeks). After the first follow-up analysis, 25 participants from the control group received the Superpower Glass intervention for six weeks; of the 25, only 12 participants continued to follow up for the second and third measurements. The dwindling participation reflected here shows a major challenge with clinical trials, especially for trials that involve a population as diverse and difficult to recruit as minors with ASD. Moreover, the families using the Superpower Glass intervention achieved only 51% of the recommended use7, reflecting another difficulty with studies conducted outside of a laboratory setting. Since Superpower Glass is designed for everyday use, the settings for the study are unfortunately unavoidable. Despite engaging in fewer sessions than recommended, the participants who used Superpower Glass along with therapy showed significant improvements based on a socialization subscale7. Superpower Glass is one of the few devices designed for and clinically tested on children with ASD. Voss and his team at Stanford successfully developed a sophisticated AI system that can work in conjunction with therapy to improve socialization. By teaching “emotion recognition, facial engagement, and the salience of emotion,” this novel intervention improves social outcomes and decreases maladaptive behavior for children with ASD. Finally, the way Superpower Glass promotes this vital social education via its games and emoticons encourages future directions for technology designed to improve the core deficits observed in people with ASD.
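The eight emotion labels come directly from the Superpower Glass study; everything else below (the emoticon glyphs, the confidence threshold, and the function itself) is a hypothetical sketch of how a detected expression might be turned into a wearer-facing cue, not the actual Superpower Glass code:

```python
# The eight emotion labels are from the study; these ASCII emoticons
# are illustrative stand-ins for the app's actual emoticon graphics.
EMOTICONS = {
    "happy": ":)", "sad": ":(", "angry": ">:(", "scared": "D:",
    "surprised": ":O", "disgust": ":P", "meh": ":|", "neutral": ":-|",
}

def cue_for(emotion, confidence, threshold=0.5):
    """Return the emoticon cue for a detected emotion, or None when the
    classifier's confidence is too low to cue the wearer.

    The confidence threshold is a made-up parameter for illustration.
    """
    if confidence < threshold or emotion not in EMOTICONS:
        return None
    return EMOTICONS[emotion]

print(cue_for("happy", 0.9))  # :)
print(cue_for("angry", 0.3))  # None (below threshold, no cue)
```

In the real system, a classifier running on the glasses would supply the (emotion, confidence) pair, and the cue would be rendered in the heads-up display alongside an audio prompt.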
Autism spectrum disorder continues to be among the most exciting fields of study for neurologists and psychologists. Now, a burgeoning group of engineers and computer scientists is transforming this field. As academia becomes more interdisciplinary and technology becomes more readily accessible, it is crucial to include populations of study that have been historically marginalized, like neurodiverse populations. While the intersection of technology and the population with ASD is a relatively recent development, the creation of haptic and non-haptic wearable technologies by researchers at MIT and Stanford reflects a myriad of new avenues for technologies that can promote the holistic growth of adolescents with ASD.
ACKNOWLEDGEMENTS The author would like to thank Ms. Vijaya Verma, a special-education elementary teacher, for inspiring this article. Additionally, special thanks are extended to Dr. Pardis Miri and Dr. James J. Gross at the Stanford Psychophysiology Laboratory for providing ample resources to further the author’s research interests in affect regulation, neurodiversity, and haptic technology. References [1] Samson, A.C., Phillips, J.M., Parker, K.J., et al., (2014). Emotion dysregulation and the core features of autism spectrum disorder. J Autism Dev Disord, 44(7), 1766–1772. doi: 10.1007/s10803-013-2022-5 [2] Baio, J., Wiggins, L., Christensen, D.L., et al., (2018). Prevalence of autism spectrum disorder among children aged 8 years — Autism and developmental disabilities monitoring network, 11 sites, United States, 2014. MMWR Surveill Summ, 67(6), 1-23. [3] Samson, A.C., Hardan, A.Y., Podell, R.W., et al., (2014). Emotion regulation in children and adolescents with autism spectrum disorder. International Society for Autism Research, 8. doi: 10.1002/138 [4] Pak, C. (22 Oct 2019). Researchers work on device to help individuals with autism handle stress. The Stanford Daily. [5] Miri, P., Jusuf, E., Gross, J.J., et al., (2019). Affect regulation using technology: lessons learned by taking a multidisciplinary perspective. 8th International Conference on Affective Computing and Intelligent Interaction (ACII). [in press]
[6] Vaucelle, C., Bonanni, L., and Ishii, H. (2009). Design of haptic interfaces for therapy. Association for Computing Machinery, 467-470. doi: 10.1145/1518701.1518776 [7] Voss, C., Schwartz, J., Daniels, J., et al., (2019). Effect of wearable digital intervention for improving socialization in children with autism spectrum disorder: a randomized clinical trial. JAMA Pediatrics, 173(5), 446-454.
Resolving the Hubble Tension BY ALEXANDER GAVITT
HUBBLE'S LAW Figure A: An image showing how the opaque plasma of the cosmic microwave background prevents us from seeing the light of anything farther away. Source: Wikimedia Commons (Credit: NASA), public domain
"For every galaxy to be moving away from us, the universe must be expanding and carrying all other galaxies away from us as it does so."
Like all things, the universe had a beginning. In the past few decades, astronomers have made increasingly precise measurements of the Hubble constant (H0), which measures the speed at which the universe is expanding and plays a critical role in determining the age of the universe. Until recently, astronomers had two main methods of measuring the Hubble constant; originally, the margin of error was large enough that the two methods agreed. As the measurements became more precise, however, the two methods diverged to two separate values. Astronomers refer to this discrepancy as the “Hubble tension.” The Hubble constant arose from a series of observations in the early 20th century. In 1917, astronomer Vesto Slipher observed distant objects now known to be galaxies outside of the Milky Way and discovered that light coming from them was heavily redshifted. Red shift can occur in all waves: if the source of a wave is moving away from you, the frequency of waves passing your location will decrease. Notably, this means that the faster the source is moving away from you, the more the waves will be redshifted. Since red is the lowest-frequency light in the visible spectrum, we call this red shift (the opposite, where the frequency is increased because the source is moving towards you, is called blue shift). The classic everyday analogue of red shift is the Doppler shift of an ambulance’s siren as it travels: as it comes up from behind, the siren
is blue shifted and sounds higher-pitched, but as soon as it passes, it is now redshifted and the pitch drops dramatically. Slipher’s observations therefore reveal an important truth about the universe: almost all other galaxies are moving away from us. When Albert Einstein derived his general theory of relativity in 1915, he, like almost all scientists at the time, assumed a “static universe” that would neither grow nor shrink. In the 1920s, however, Alexander Friedmann and Georges Lemaître independently derived equations from general relativity that allowed for the expansion and contraction of the universe. By the end of the decade, Edwin Hubble and Milton Humason had measured the distance to nearby galaxies and, when they compared those distances to the redshift Slipher observed, found that the farther away the galaxy, the higher its redshift. That observation became known as Hubble’s law: the speed at which a galaxy is moving away from us depends on how far away it is and is calibrated by Hubble’s constant. For every galaxy to be moving away from us, the universe must be expanding and carrying all other galaxies away from us as it does so. These observations thus invalidated the static universe model and led to the development of the expanding universe model. Since light travels at a limited speed, when distant galaxies are observed, they are seen in the positions that they occupied years ago; the farther a galaxy,
the further back in time we see it since the initial light took longer to reach Earth. When astronomers first began calculating the Hubble constant, their data only included objects relatively close to Earth, and their observations indicated that the Hubble constant truly is constant across time and space. Subsequent observations and theoretical models, however, took issue with that.
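Hubble's law lends itself to a back-of-the-envelope calculation: for small redshifts, a galaxy's recession velocity is approximately v = cz, and its distance then follows from v = H0 * d. The redshift value below is invented for illustration; the H0 value is the Planck measurement discussed in this article:

```python
C_KM_S = 299_792.458  # speed of light, km/s
H0 = 67.4             # Hubble constant, km/s per megaparsec (Planck value)

def distance_mpc(z):
    """Approximate distance (in Mpc) to a galaxy of redshift z.

    Valid only for z << 1, where v = c*z is a good approximation.
    """
    v = C_KM_S * z    # recession velocity, km/s
    return v / H0     # Hubble's law: v = H0 * d, so d = v / H0

print(distance_mpc(0.023))  # roughly 102 Mpc
```

At larger redshifts the simple v = cz relation breaks down, and the full relativistic treatment (and the expansion history itself) must be used instead.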
still tiny fluctuations. By measuring those fluctuations, cosmologists can calculate a value for the Hubble constant1, 2. This system of measurement is based on the prevailing cosmological model, called ΛCDM. Recent measurements of the CMB by the Planck satellite put the Hubble constant at 67.4 ± 0.5 km/s/Mpc2.
Observationally, since farther galaxies are receding faster, space must have been expanding faster in the past as compared to the present, which should mean the Hubble constant was higher in the past. Theoretically, the equations developed by Friedmann indicate that the Hubble constant depends on, among other things, the density of matter and radiation in the universe. Since those things can neither be created nor destroyed, as the universe expands the densities should go down as they are spread across a larger volume, and Friedmann’s first equation indicates that as those densities go down, so too should the value of the Hubble constant. Ultimately, astronomers agreed that the “Hubble constant” is not constant in time. At any given time, however, its value is constant across space; formally, the term Hubble parameter, denoted by H, is the changing value of the Hubble constant across time and H0 refers to the value of the Hubble constant in the present. By measuring the Hubble constant across a variety of galaxies—and therefore across several times—we can calculate the age of the universe4.
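Dimensionally, the Hubble constant is an inverse time, so 1/H0 gives a rough age estimate known as the Hubble time. (The true ΛCDM age, about 13.8 billion years, requires integrating the Friedmann equations; 1/H0 is only a zeroth-order estimate.) The unit conversion looks like this:

```python
KM_PER_MPC = 3.0857e19  # kilometers in one megaparsec
S_PER_YEAR = 3.156e7    # seconds in one year
H0 = 67.4               # km/s/Mpc (the Planck value cited in this article)

# Convert H0 from km/s/Mpc to units of 1/s, then invert to get a time.
h0_per_s = H0 / KM_PER_MPC
age_years = 1.0 / h0_per_s / S_PER_YEAR
print(f"Hubble time: {age_years / 1e9:.1f} billion years")  # about 14.5
```

The gap between this 14.5-billion-year Hubble time and the 13.8-billion-year ΛCDM age is exactly the correction from the expansion rate changing over cosmic history.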
The Hubble constant can also be measured by following in the footsteps of Hubble and Humason’s original observations and measuring the distance to stars. The question of how to measure those distances, however, has plagued astronomers for centuries. Once astronomers knew the distance from the Earth to the Sun, they could use parallax, the apparent change in position of an object relative to its background when viewed from different angles, to measure the distance to nearby stars [Figure B]. Humans and other animals do this all the time; it’s what allows the brain to figure out how far away something is. This can be exploited to help build an understanding of parallax: close one eye and then the other while holding a finger in front of your face; the finger will appear to shift position relative to the background, and the closer it is to the eyes, the more it appears to shift. This simple experiment also helps reveal a limit of parallax: it depends on the distance between the two detectors, known as the baseline; if the baseline is too short, no shift will be observed for objects past a certain distance. By taking measurements six months apart, we can use the diameter of the Earth’s orbit as our baseline, but even that is too short to resolve the distance to many stars4, 9. Astronomers, however, have another trick in their toolkit. The apparent luminosity of a
CMB Measurements
If the universe is expanding, it must have been smaller in the past. Run the clock back far enough, and all the matter and energy in the universe would have been packed into a space so small that everything was in a high-energy plasma state that glowed with a roughly uniform light. As the universe expanded and cooled, it eventually reached a point where light could travel more freely. If we look back far enough, eventually we see the light from the moment the universe transitioned between these two states; we cannot see light from anything farther, as it is blocked by the glow of the early universe [Figure A]. Since the universe is expanding, the light from that moment is moving away from us, and so is redshifted. Today, this light has been redshifted to the frequency of microwaves, so it is known as the Cosmic Microwave Background, or CMB.
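The amount of stretching can be estimated from the fact that a blackbody's temperature scales as 1/(1 + z), where z is the redshift. The sketch below assumes a round emission temperature of roughly 3000 K (approximately when the plasma became transparent), which is an illustrative value rather than a figure from this article.

```python
# Back-of-the-envelope redshift of the CMB, from temperature scaling:
# T_today = T_emitted / (1 + z).

T_EMITTED_K = 3000.0  # approximate temperature when the universe became transparent
T_TODAY_K = 2.725     # measured CMB temperature today

def cmb_redshift(t_emitted: float, t_today: float) -> float:
    """Redshift implied by cooling from t_emitted to t_today."""
    return t_emitted / t_today - 1.0

z = cmb_redshift(T_EMITTED_K, T_TODAY_K)  # roughly 1100
```

A redshift of about 1100 means the wavelengths of that ancient light have been stretched by a factor of over a thousand, which is how visible-light glow became microwaves.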
The Cosmic Distance Ladder
Figure B: A depiction of how parallax causes objects to appear to shift relative to their background when viewed from different angles. Source: Wikimedia Commons (Credit: Chris-martin), CC-BY-SA-3.0
While the CMB has been stretched by the expansion of the universe and is of near-uniform temperature (about 2.7 K), there are
"Once the distances to various stars are known, they can be used to calibrate observations of type la supernovae and calculate absolute distance measurements."
Figure C: A depiction of how the cosmic distance ladder works. 1) Astronomers use parallax to measure the distance to nearby Cepheid variables (top left). 2) After noting the period of the change in the Cepheids’ luminosity, astronomers look for similar Cepheids in nearby galaxies with a recent type Ia supernova, using the Cepheids to calibrate the supernovae distances (bottom left). 3) Astronomers look to more distant galaxies with type Ia supernovae and calculate the distance to them based on the calibration (bottom right). Source: Wikimedia Commons (Credit: ESA/Hubble), CC-BY-4.0
star (how bright it appears at a given distance) decreases with the square of the distance to it. If two stars have the same absolute luminosity (put out the same amount of light) but one is four times as distant as the other, the farther one will appear one-sixteenth as bright. Knowing that two stars have the same absolute luminosity isn’t always easy, but fortunately there are some pulsating stars, known as Cepheid variables, whose absolute luminosity is linked to their pulsation period. Therefore, by measuring the pulsation period, stars with the same absolute luminosity can be found. And, if one of them is close enough to find the distance with parallax, the distance of the farther star can be discerned from the relationship between their apparent luminosities4, 9. Once the distances to various stars are known, they can be used to calibrate observations of type Ia supernovae and calculate absolute distance measurements. Type Ia supernovae occur in binary star systems where one star has already completed its life cycle and ended as a white dwarf. The white dwarf can siphon gas from its companion star, increasing its own mass. Eventually, its mass reaches a critical threshold that triggers a sudden burst of nuclear fusion; because the explosion always occurs at nearly the same mass, these events are incredibly consistent in brightness, which makes them good “standard candles” to serve as the basis for measurements. With absolute distances thus calculated from parallax, Cepheid variables, and type Ia supernovae, it’s theoretically simple to combine the distance to objects with their redshift and calculate the Hubble constant, which, after all, is a measure of how fast objects at a certain distance are receding4, 9 [Figure C]. In practice, refining estimates of the Hubble constant from this method often comes down to making more precise calibrations of the instruments that carry out the
observations, like the Hubble Space Telescope. Recent calculations based on seventy Cepheid variables in the Large Magellanic Cloud (a satellite galaxy of the Milky Way that is rich in Cepheid variables) calculated a Hubble constant of 74.22 ± 1.82 km*s−1*Mpc−1 9.
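The first two rungs of the distance ladder can be sketched in a few lines. The function names and numbers below are illustrative, not real survey data: one function applies the parallax rule (a star with parallax p arcseconds is 1/p parsecs away), and the other applies the inverse-square law to two sources of equal absolute luminosity, such as Cepheids with the same pulsation period.

```python
import math

def parallax_distance_pc(parallax_arcsec: float) -> float:
    """Rung 1: distance in parsecs is the reciprocal of the
    parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

def distance_from_flux_ratio(d_near_pc: float,
                             flux_near: float,
                             flux_far: float) -> float:
    """Rung 2: for two sources with the same absolute luminosity,
    apparent brightness falls as 1/d^2, so the flux ratio fixes
    the distance ratio."""
    return d_near_pc * math.sqrt(flux_near / flux_far)

d_near = parallax_distance_pc(0.01)                   # 100 parsecs
d_far = distance_from_flux_ratio(d_near, 16.0, 1.0)   # 16x fainter -> 4x farther
```

This mirrors the article's example: a star sixteen times fainter than an identical twin must be four times more distant, which is how a parallax-anchored Cepheid lets astronomers reach its distant siblings.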
NEW METHODS OF MEASUREMENT
It is evident that these two measurements (67.4 ± 0.5 km*s-1*Mpc-1 from the CMB and 74.22 ± 1.82 km*s−1*Mpc−1 from the cosmic distance ladder) cannot both be correct. As mentioned before, the margins of error used to be larger and overlapped such that the true value of the Hubble constant could have been somewhere between them. Over time, however, the discrepancy between the two methods has only grown, and that is no longer possible. This is not because the measurements are moving apart—they’ve stayed remarkably steady—but because the margin of error on those measurements has decreased. Scientists use the standard deviation, represented by the lowercase Greek letter sigma (σ), to measure confidence in their data. As data points get closer together, the standard deviation—and therefore the margin of error—decreases. With the two measurements in conflict, scientists now combine the data sets and express the gap between them in standard deviations, which serves as a proxy for how much the measurements disagree with each other [Figure D]. In 2013, the margins of error on the measurements were large enough that the tension between them was only 2.5σ. By the time the 2018 Planck data—which resulted in the measurement from the CMB quoted above—was analyzed, the tension had risen to 3.5σ, while the most recent Cepheid variable observations—which resulted in the cosmic distance ladder measurement quoted above—brought the Hubble tension up to 4.4σ 2. The discrepancy is now reaching the point where the possibility of some error in our cosmological models must be taken seriously4, 9. In the face of this tension, cosmologists are looking to different methods of observation to measure the Hubble constant. Some recent attempts, for example, have tried to calibrate type Ia supernovae based on baryon acoustic oscillations (BAO).
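A common rule of thumb for quantifying such tension is to divide the gap between two measurements by their combined uncertainty, assuming independent Gaussian errors. Applied naively to the two headline values quoted in this article, it gives roughly 3.6σ; the published 4.4σ figure comes from a fuller analysis incorporating more datasets, so this sketch is only a rough cross-check.

```python
import math

def tension_sigma(h1: float, err1: float, h2: float, err2: float) -> float:
    """Number of combined standard deviations separating two
    measurements, assuming independent Gaussian errors."""
    return abs(h1 - h2) / math.sqrt(err1**2 + err2**2)

# The two headline Hubble constant values (km/s/Mpc):
t = tension_sigma(67.4, 0.5, 74.22, 1.82)  # about 3.6 sigma by this simple metric
```

As Figure D illustrates, a gap of more than three combined standard deviations is already very unlikely to arise by chance, which is why shrinking error bars, not shifting central values, drove the tension.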
Baryons compose most of the “normal matter” of the universe, including protons and neutrons; baryon acoustic oscillations are, in simple terms, a measure of how spread out galaxies tend to be today, based on the early conditions of the universe3, 8. Using 329 type Ia supernovae, a recent study found that, when the supernovae were calibrated based on BAO instead of Cepheid variables, they resulted in a Hubble constant
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
value of 67.8 ± 1.3 km*s−1*Mpc−1—consistent with the CMB measurements, but in 2.5σ tension with the standard cosmic distance ladder measurements8. The detection of gravitational waves in 2016 opened a new door for measuring the Hubble constant. When neutron stars or black holes collide, detectors can pick up ripples in spacetime and use those waves to calculate the absolute distance to the collision. In the case of collisions between two neutron stars, optical telescopes can follow up and detect light bursts from the collision. When combined with measures of redshift, this approach can serve as an independent method—known as the “standard siren” approach—for calculating the Hubble constant. While there have been few measurements of the Hubble constant from gravitational wave observations so far, projections suggest that within five years gravitational wave observations can lead to a measure of the Hubble constant with only two percent error5. At this point, there is one measurement of the Hubble constant from gravitational wave data that resulted from GW170817, the collision of a binary neutron star system. That analysis found a value of 70.3 ± 5.3 km*s−1*Mpc−1, which can agree with either the CMB or cosmic distance ladder measurements. The authors hope that, with around 15 more mergers yielding data of similar quality, they could pin down a more exact value for the present value of the Hubble constant, though they note the conditions of that specific collision were especially favorable for data collection7.
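The arithmetic behind the standard siren approach is simple enough to sketch: the gravitational-wave signal gives an absolute distance, the optical counterpart gives a redshift, and at low redshift the recession velocity is approximately the speed of light times the redshift. The numbers below are illustrative placeholders, not values from the GW170817 analysis.

```python
# "Standard siren" idea in one line: H0 = v / d, where the
# gravitational waves supply the absolute distance d and the optical
# counterpart supplies the recession velocity v (from redshift).

C_KM_S = 299792.458  # speed of light, km/s

def standard_siren_h0(redshift: float, distance_mpc: float) -> float:
    """Low-redshift approximation: v = c * z, so H0 = c * z / d."""
    return C_KM_S * redshift / distance_mpc

h0_estimate = standard_siren_h0(0.01, 43.0)  # roughly 70 km/s/Mpc
```

The strength of the method is that the distance comes straight from the waveform physics, with no need for the parallax-to-Cepheid-to-supernova calibration chain.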
WHAT IF THE HUBBLE TENSION BREAKS COSMOLOGY?
The simplest explanation for the Hubble
tension is some sort of systematic error or bias that no one has realized yet4. The far more interesting possibility, however, is that there’s something wrong with the current cosmological models. While scientists are generally careful about upending broad swaths of work, they enjoy considering multiple possibilities. These possibilities, however, remain speculation. Proponents of the cosmic distance ladder have suggested that the Hubble tension points to physics “beyond ΛCDM,” including various hypotheses about dark matter (the mysterious source of gravitational attraction, unaccounted for by the observable matter, that holds galaxies together) such as exotic dark matter, dark matter–radiation interactions, or dark matter decay, as well as hypotheses about relativistic particles and inter-neutrino interactions9. On the more extreme end, some cosmologists have suggested that the curvature of the universe itself, which determines its basic shape, may need to be revisited; current models give the universe no curvature (making it flat), but some cosmologists have suggested both positive and negative curvature, resulting in a closed or open universe respectively6, 8.
"Ultimately the most recent observations make it clear that there is more to the story of the expansion of the universe than we currently know."
Ultimately, the most recent observations make clear that there is more to the story of the expansion of the universe than we currently know. Maybe there’s a systematic error in the CMB or cosmic distance ladder measurements, and new methods of measurement will agree with one and point to flaws in the other. There could also be errors in both that, when resolved, will bring the methods into harmony with each other. Or, perhaps, just as the work of Slipher, Friedmann, Lemaître, Hubble, and Humason discredited the static universe model, new observations will discredit the current cosmological model. It seems, however, that emerging methods of measurement may soon bring new data to this debate and finally
Figure D: A normal distribution, showing the probabilities associated with standard deviations (σ). For example, there is approximately a 2.2% chance of naturally finding variation greater than 2σ. Source: Wikimedia Commons (Credit: M. W. Toews), CC-BY-2.5
resolve the Hubble tension, one way or another.
References
[1] Abitbol, M., Hill, C. & Cheluba, J. Measuring the Hubble constant from the cooling of the CMB monopole. arXiv:1910.09881
[2] Aghanim, N. et al. Planck 2018 results. VI. Cosmological parameters. arXiv:1807.06209
[3] Bassett, B. & Hlozek, R. Baryon Acoustic Oscillations. arXiv:0910.5224
[4] Chen, H. The mystery of the cosmic ages. Nat Astron 3, 384–385 (2019)
[5] Chen, H., Fishbach, M. & Holz, D.E. A two per cent Hubble constant measurement from standard sirens within five years. Nature 562, 545–547 (2018)
[6] Di Valentino, E., Melchiorri, A. & Silk, J. Planck evidence for a closed Universe and a possible crisis for cosmology. Nat Astron (2019)
[7] Hotokezaka, K., Nakar, E., Gottlieb, O. et al. A Hubble constant measurement from superluminal motion of the jet in GW170817. Nat Astron 3, 940–944 (2019)
[8] Macaulay, E. et al. First cosmological results using Type Ia supernovae from the Dark Energy Survey: measurement of the Hubble constant. Monthly Notices of the Royal Astronomical Society 486(2), 2184–2196 (2019)
[9] Riess, A. et al. Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics beyond ΛCDM. Astrophysical Journal 876, 85 (2019)
GUT MICROBIOME
Gut Microbiome-Brain Communication: The Unexpected Intersects of Body and Mind BY ALEXANDRA LIMB
Society has reached new heights with its enthusiasm for the health and wellness industry, particularly with an emphasis on gut health. Grocery stores nationwide are inundated with an endless array of prebiotic and probiotic products, shelves lined with the promise of restoring the health of our gut microbiome. From probiotic yogurts to drinks to supplements to granola bars, the popularity of these microbe-based campaigns is evident in the numbers, with the probiotic market holding a value of $2.263B in 2017 and expected growth to $3.511B by 2022.1 Probiotic yogurt sales in the United States alone were $4.354B in 2017. Even multinational corporations such as PepsiCo have jumped on the trend; Tropicana Probiotics, the company’s first probiotic line of juices, is accompanied by the cheery slogan “Feel Like a Billion.”2 But such claims are more than just grandiose marketing schemes designed to sell more juices. An understanding of the critical biological role of the human gut microbiome for proper body functioning proves that the probiotic market is more than just a fleeting health trend. The human microbiome encompasses a vast, rich community of 100 trillion microbes made up of bacteria, viruses, fungi, and archaea that reside in various mucosal surfaces such as the skin, mouth, gut, and vagina3,4 [Figure 2]. The majority of these microbes exist in the human gut and are within the Bacteroidetes and Firmicutes phyla, where there are over 1000 different symbiotic and pathogenic bacterial
species.3 The gut microbiome is a complicated and advanced environment made of 3.3 million protein-encoding genes, compared to a meager 23,000 genes making up the entirety of the human genome.3,5 Thriving on shed human epithelial cells and nutrients, the microbiome develops from birth to early childhood and has a constantly changing composition dependent on mode of birth and breast-feeding, which deposits Bifidobacterium and Lactobacillus bacterial strains.3,7 Upon the establishment of an adult microbiome, environmental factors such as diet, disease, and the use of antibiotics cause further variability in microbe composition.3 These alterations in bacterial makeup are associated with the pathogenesis of certain chronic medical conditions such as inflammatory bowel disease, type 1 diabetes, and atopic diseases such as eczema and food allergies.3 Since the gut microbiome’s primary function is to maintain homeostasis within the body, changes in its composition result in a disruption of normal regulatory activities.5
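The probiotic market figures quoted at the start of this article ($2.263B in 2017, a projected $3.511B by 2022) correspond to roughly 9% compound annual growth. The quick sketch below is ordinary compound-annual-growth-rate arithmetic, not a figure taken from the cited market reports.

```python
# Implied compound annual growth rate (CAGR) between two market values.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Annualized growth rate that turns start_value into end_value
    over the given number of years."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

growth = cagr(2.263, 3.511, 2022 - 2017)  # roughly 0.09, i.e. ~9% per year
```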
Figure 1: Visualization of lactobacillus bacterial strain, an important group of gut microbes for the immune system that is commonly found in probiotic products. Source: Wikimedia Commons
Figure 2: Anatomical depiction of the human gut, indicating the stomach, small and large intestine, sigmoid colon, rectum and anus. The gut microbiome resides within mucosal surfaces within these larger scale regions of the human body. Source: Pixabay
Extensive research has established the benefits of gut microbes for one’s nutritional and inflammatory health, especially their array of protective bodily functions. Their capacities for immunomodulation and stimulation of the immune system were revealed by one study in which a bacterial metabolite, indole-3-carboxaldehyde, stimulated innate lymphoid cells (ILC) and intestinal macrophages, two types of immune cells.7 In addition to immune system assistance, gut microbes also facilitate the breakdown of toxins in the body. Additionally, the gut microbiota’s complex genome provides it with machinery to synthesize vitamin B, vitamin K, and amino acids.5 Certain Bacteroides species can synthesize linoleic acid to moderate the immune system and increase concentrations of pyruvic and citric acid, which provide increased energy for metabolism.7 The microbiota within the gut also strengthen the microvasculature of the gastrointestinal tract by expressing the sprr2a protein, which maintains cell junctions known as desmosomes within the epithelial villi. Microbes also metabolize nutrients by breaking down and fermenting undigested fibers of complex oligosaccharides via digestive enzymes called glycosyl transferases, producing short chain fatty acids (SCFA).5,7 These are a rich source of nutrients for the body and protect against Crohn's disease and ulcerative colitis.5 Using the bacteria’s additional metabolic capabilities, the microbiota enhance our ability to extract energy from various foods by depositing new genes to digest a diverse array of substrates. Given the gut microbiota’s multifunctional role in providing a crucial line of defense, the use of antibiotics to remove pathogenic bacterial species from within our bodies makes many scientists wary.
Antibiotics can negatively impact the richness and diversity of the gut microbiota, taking up to four years following drug treatment to replenish certain bacterial species.6 It is therefore understandable why consumers are drawn towards the prebiotic and probiotic industry over potentially harmful antibiotics. Probiotics alter the microbiota’s composition by preventing the growth of other microbes on the intestinal mucosal surface through antimicrobial agents that act as suppressants. Lactobacillus strains, which are commonly found in probiotic products, also have other functions such as maintaining the structure of the intestinal wall, diminishing gastrointestinal infection, and bolstering immune cell activity.8 While the immune and nutrition-related activities of gut microbes are more commonly understood, the most fascinating and relatively novel function of the microbiota is being
uncovered through ongoing research. The gut-brain axis, a mechanism of communication between the brain’s emotional and cognitive regions and the intestinal tract, allows researchers to view the gut microbiome from a neurobehavioral lens.9 A powerful intersection between two distant regions of the human body, this research highlights the fundamental linkage between body and mind, as well as the interdependency between the fields of biology, psychology, and psychiatry. Gut-brain communication has valuable implications for our understanding of the pathogenesis and treatment of behavioral disorders, specifically post-traumatic stress disorder (PTSD), schizophrenia, and depression. Experiments conducted on mice have allowed researchers to make significant strides in understanding the biological mechanisms behind the gut-brain influence. One study examined fear conditioning responses in groups of mice with and without gut microbiota to determine how changes in the microbiota lead to neuronal and behavioral alterations.
The main conclusion derived from these experiments was that microbiota produce powerful molecules called metabolites, which circulate in the bloodstream and affect microglia, the brain’s immune cells.10 The study discovered that in microbiota-deficient mice, there were four types of metabolites that were less abundant, indicating that metabolites are a major player in microbiome-brain communication.12 Metabolites alter the microglia’s ability to engulf and degrade dendritic spines, which are responsible for forming synaptic connections between neurons [Figure 3].10 The creation of these synapses mediates synaptic plasticity, which occurs when certain synaptic patterns lead to changes in synaptic strength.11 Synaptic plasticity is critical for learning and memory, and changes in synaptic plasticity are often associated with neuropsychiatric disorders such as PTSD, anxiety, and depression.10 Scientists found that changes in the microbiota lead to changes in microglial gene expression, impacting function and leaving the microglia in an immature state.12 By observing fear response in mice, which most directly correlates to the pathogenesis of post-traumatic stress disorder (PTSD) in humans, it was discovered that mice without a healthy microbiome had diminished fear extinction capacities.10 Fear extinction refers to the ability to stop a conditioned fear response after non-threatening exposure to a previously fear-conditioned stimulus.13 A greater understanding of the biological basis behind microbiome-brain communication is also
providing researchers with insight into the manifestation of other severe mental disorders such as schizophrenia. Schizophrenia is a chronic behavioral disorder that results in diminished mental clarity, emotional management, and decision-making abilities; it currently affects between 0.25% and 0.64% of adults in the U.S. and is one of the top 15 leading causes of disability worldwide.17 A severe form of mental illness, schizophrenia is associated with symptoms such as hallucinations, disorganized speech and behavior, and emotional flatness.14 While schizophrenia pathogenesis is not clearly understood, risk factors include genetics, environmental exposure to viruses, and disruptions in brain chemistry.16 Alterations in chemical reactions within the brain have led to the discovery that the gut microbiome plays a critical role. In a recent study, researchers gave microbiome-free mice fecal transplants from patients with schizophrenia.15 The transplant-receiving mice had disruptions in levels of glutamate, glutamine, and gamma-aminobutyric acid (GABA), three amino acids critical for brain and nervous system activity.15,18 Glutamine is a precursor for glutamate, an excitatory neurotransmitter, as well as for GABA, an inhibitory neurotransmitter.18 It is changes in the levels of these neurotransmitters that lead to changes in normal neuronal activity. The experimental mice receiving fecal transplants were found to have lower levels of glutamate and higher levels of glutamine and GABA compared with healthy controls.15 Gut microbes are responsible for the production of glutamine and GABA metabolites that travel in the bloodstream, consistent with the observed differences in metabolite expression in the experimental mice.
Increases in glutamine and GABA neurotransmitters in conjunction with lower glutamate in the hippocampus lead to alterations in brain activity related to schizophrenia [Figure 4].15 Decreased glutamatergic activity leads to diminished activity of the receptor glutamate normally binds to, known as N-methyl-D-aspartic acid (NMDA) [Figure 5].23 Abnormalities in the expression and function of NMDA receptors have been linked to the negative symptoms of schizophrenia, such as lack of motivation and disorganized speech, as well as hyperactive transmission of dopamine in the mesolimbic
system.23 Dopamine is a neurotransmitter highly related to schizophrenia, and current antipsychotic drugs aim to normalize dopamine levels.24 Scientists also observed that experimental mice had a less diverse microbiome and distinct differences in microbe composition, leading them to conclude that schizophrenia pathogenesis may be rooted in changes in the microbiome, which can ultimately lead to alterations in brain activity. Specifically, the experimental mice had decreases in the Lachnospiraceae and Ruminococcaceae bacterial families. Another study found schizophrenia comorbidity with gastrointestinal disorders, which are already understood to be governed by changes in the gut microbiota. Additional research shows that prenatal microbial infections result in a tremendous 10- to 20-fold increase in schizophrenia incidence.15 The gut microbiota’s production of molecular metabolites may also provide an explanation for the development of depression. In a study on a group of Belgian participants, depressed individuals were discovered to be missing the Coprococcus and Dialister bacterial strains from their gut microbiome. Another group of Dutch individuals were sampled, and those with severe clinical depression were also missing both strains. With these observations, researchers noted that the Coprococcus bacterial strain makes a type of short chain fatty acid known as butyrate, which prevents inflammation; inflammation is commonly associated with depression.19 Other research has shown that Firmicutes species decreased in individuals with depression, while Bacteroidetes and Proteobacteria strains increased, showing distinct patterns of microbiome composition in depressed patients. The relationship between the microbiome and mental illness is also found to be bidirectional: disruptions in microbial makeup result in susceptibility to illness, and the illness itself will result in changes to bacterial environments that are also harmful.20
Figure 3: The structure of a neuron, illustrating the neuronal dendrites involved in creating synaptic connections between neurons. Source: Wikimedia Commons
"A greater understanding of the biological basis behind the microbiome-brain communication is also providing researchers with insight into the manifestation of other severe mental disorders such as schizophrenia."
Figure 4: Illustration of the human brain’s prefrontal cortex, hippocampus, anterior cingulate cortex, and amygdala. The hippocampus is where the activities of GABA, glutamate, and glutamine are particularly concentrated, and these are the brain regions most affected by schizophrenia. Source: NIH Image Gallery
References
[1] Decker, K. J. (2018, October 30). What will it take to grow the U.S. probiotic market?
[2] Buss, D. (2017, March 10). Tropicana Wants You To 'Feel Like a Billion' (and Try Probiotics).
[3] Fast Facts About The Human Microbiome. (n.d.). Retrieved from https://depts.washington.edu/ceeh/downloads/FF_Microbiome.pdf.
[4] About the Microbiome. (n.d.). Retrieved from https://www.kavlifoundation.org/about-microbiome.
[5] The Microbiome. (2019, September 4). Retrieved from https://www.hsph.harvard.edu/nutritionsource/microbiome/#what-is-microbiome.
[6] Ursell, L. K., Metcalf, J. L., Parfrey, L. W., & Knight, R. (2012). Defining the human microbiome. Nutrition Reviews, 70. doi: 10.1111/j.1753-4887.2012.00493.x
[7] Jandhyala, S. M. (2015). Role of the normal gut microbiota. World Journal of Gastroenterology, 21(29), 8787. doi: 10.3748/wjg.v21.i29.8787
[8] Hemarajata, P., & Versalovic, J. (2012). Effects of probiotics on gut microbiota: mechanisms of intestinal immunomodulation and neuromodulation. Therapeutic Advances in Gastroenterology, 6(1), 39–51. doi: 10.1177/1756283x12459294
[9] Carabotti, M., Scirocco, A., Maselli, M. A., & Severi, C. (2015). The gut-brain axis: interactions between enteric microbiota, central and enteric nervous systems. Annals of Gastroenterology, 28(2), 203–209.
[10] Chu, C., Murdock, M. H., Jing, D. et al. (2019). The microbiota regulate neuronal function and fear extinction learning. Nature, 574, 543–548. doi: 10.1038/s41586-019-1644-y
[11] Synaptic plasticity. (n.d.). Retrieved from https://www.nature.com/subjects/synaptic-plasticity.
[12] Kiraly, D. D. (2019, October 23). Gut microbes regulate neurons to help mice forget their fear.
[13] Myers, K. M., Ressler, K. J., & Davis, M. (2006). Different mechanisms of fear extinction dependent on length of time since fear acquisition. Learning & Memory, 13(2), 216–223. doi: 10.1101/lm.119806
[14] NAMI. (n.d.). Retrieved from https://www.nami.org/learn-more/mental-health-conditions/schizophrenia.
[15] Zheng, P., Zeng, B., Liu, M., Chen, J., Pan, J., Han, Y., … Xie, P. (2019). The gut microbiome from patients with schizophrenia modulates the glutamate-glutamine-GABA cycle and schizophrenia-relevant behaviors in mice. Science Advances, 5(2). doi: 10.1126/sciadv.aau8317
[16] Schizophrenia. (n.d.). Retrieved from https://www.nimh.nih.gov/health/topics/schizophrenia/index.shtml.
[17] Schizophrenia. (n.d.). Retrieved from https://www.nimh.nih.gov/health/statistics/schizophrenia.shtml.
[18] Strużyńska, L., & Sulkowski, G. (2004). Relationships between glutamine, glutamate, and GABA in nerve endings under Pb-toxicity conditions. Journal of Inorganic Biochemistry, 98(6), 951–958. doi: 10.1016/j.jinorgbio.2004.02.010
[19] Pennisi, E. (2019, February 4). Evidence mounts that gut bacteria can influence mood, prevent depression.
[20] Taylor, V. H. (2019). The microbiome and mental health: Hope or hype? Journal of Psychiatry and Neuroscience, 44(4), 219–222. doi: 10.1503/jpn.190110
[21] Mental Health Medications. (n.d.). Retrieved from https://www.nimh.nih.gov/health/topics/mental-health-medications/index.shtml.
[22] Moncrieff, J., Cohen, D., & Porter, S. (2013). The Psychoactive Effects of Psychiatric Medication: The Elephant in the Room. Journal of Psychoactive Drugs, 45(5), 409–415. doi: 10.1080/02791072.2013.845328
[23] Goff, D. C., & Coyle, J. T. (2001). The Emerging Role of Glutamate in the Pathophysiology and Treatment of Schizophrenia. American Journal of Psychiatry, 158(9), 1367–1377. doi: 10.1176/appi.ajp.158.9.1367
[24] Brisch, R., Saniotis, A., Wolf, R., Bielau, H., Bernstein, H.-G., Steiner, J., … Gos, T. (2014). Corrigendum: The Role of Dopamine in Schizophrenia from a Neurobiological and Evolutionary Perspective: Old Fashioned, but Still in Vogue. Frontiers in Psychiatry, 5. doi: 10.3389/fpsyt.2014.00110
New Alternatives to Meat Production BY ALLAN RUBIO
INTRODUCTION
Chicken rice. Ham sandwiches. Beef chili. Fish and chips. Meat is incorporated into the cuisines of almost every culture and place around the world. It is a ubiquitous source of sustenance and is synonymous with the word food itself. Why is it, then, that there has been a movement to eliminate this staple food from our diets? The story begins in the earliest sources of recorded history with the advent of domestication practices 12,000 years ago1. Prior to this, the hunter-gatherer lifestyle of humans required them to hunt for animal meat—sacrificing time, effort, and even their lives to get just a morsel of the precious delicacy. Through time and experimentation, however, humans were able to bring the animals they once chased in the wild to live and breed right outside their doorstep. Beginning with the domestication of goats and sheep around 9000 BP and then cattle around 8000 BP1, animals were bred for the purpose of food and convenience, giving birth to the term livestock and maximizing food production. The hunter-gatherer lifestyle was practically no more, and the world was changed irrevocably by the vastly expanded availability of food. The human population has grown dramatically since then, from around a million during the emergence of domestication to more than 7 billion today1. Technology
skyrocketed with the Renaissance and the Industrial Revolution, bringing ways to domesticate animals more efficiently and to mold their species to consumers’ desires through selective breeding. These technologies did not only spur improvements in the nature of food production, but also allowed the meat industry to match the growing demands of an increasingly large population. Animals were produced in magnitudes never seen before in the history of humankind, and efficiency was championed over everything else. Meat demand has risen sharply, quadrupling in only the past 50 years and still increasing significantly today. This increased demand worldwide, coupled with a growing population, is contributing to a plethora of issues that are devastating the modern world2,3. Producing meat is quite resource-heavy, placing a toll on the planet few other industries can compete with. Raising livestock requires mass production of crops and accounts for 35% of the entire agricultural market1. As of now, agriculture has spread to more than 40% of all land surface available for farming and has destroyed around 7 to 11 million km2 of forest. In addition to deforestation, there are many secondary issues caused by agriculture that damage the environment, such as soil erosion, pesticide runoff, and biodiversity reduction. Perhaps the most concerning consequences of mass domestication, however, are the enormous greenhouse gas (GHG) emissions
Cover Image Source: Airman 1st Class Larissa Greatwood
Impossible Foods discovered that some plants contain a symbiotic plant hemoglobin known as leghemoglobin (LegH) in their roots5—a finding the company has since patented. The hemoprotein controls the oxygen concentration in the area surrounding symbiotic nitrogen-fixing bacteria and is a close structural ortholog of myoglobin, a protein abundant in animal muscle tissue. Folded inside both myoglobin and LegH is the heme cofactor, which induces the unmistakable flavor of meat that is so familiar5. Following this discovery, Impossible extracted the gene responsible for producing this plant-based heme and mass produced it using FDA-certified genetically engineered yeast that specialize in producing large amounts of heme after fermentation6.
Figure 2: Industrial raising of livestock. Source: Wikimedia Commons
Figure 3: Impossible Burger patty served at a restaurant. Source: Wikimedia Commons
Perhaps the most concerning consequence of mass domestication, however, is the enormous greenhouse gas (GHG) emissions that are actively contributing to global warming. From the moment land is cleared for the animals to the production of manure, raising livestock involves steps that release harmful emissions such as CO2, nitrous oxides, and methane in inordinate amounts. By some metrics, the meat production sector releases an astounding amount of GHGs, surpassing even the transportation sector, including aviation4. Additionally, simply keeping the animals alive causes stress on the environment, as livestock respiration is responsible for over a fifth of global CO2 emissions1. It is apparent that the continuation of this practice will eventually bring the earth to a tipping point that permanently damages the planet. As stated by domestication expert Fabrice Teletchea, the discovery of domestication was a “pivotal change in the history not only of humanity but also of the biosphere.”1
POSSIBLE ALTERNATIVES: IMPOSSIBLE'S FAKE MEAT BURGER To combat this harmful yet entrenched evil, a few options have arisen to mitigate the issue of the meat industry’s excessive resource use. Genetically engineering more efficient livestock and veganism are both valid ways of combating the problem, but a recent solution that could change this industry permanently is faux meat – a synthetic meat alternative that avoids animal slaughter altogether.
A sustainability report produced by the company revealed that the Impossible Burger uses only around a quarter of the water and releases only a tenth of the GHGs required to make a regular beef burger7. In addition to heme, the burger uses other plant-based products such as “protein from potatoes, wheat and soy, and fat from coconuts”6. While the discovery is a milestone in stepping away from meat, it is still inaccessible to many. Not only is the product limited to partnered restaurants, but the price, while not exorbitant, is still higher than that of meat. The texture of the meat substitute created by Impossible is also limited to burgers, which might not suit everyone’s tastes. If Impossible were to become a household name for all meat eaters, a significant amount of innovation and expansion of its products would still be needed.
POSSIBLE ALTERNATIVES: IN VITRO MEAT PRODUCTION Fortunately, Impossible is not the only one leading the meat alternative movement. In the last decade, growing meat in vitro (or cultured meat) has been receiving significant attention from consumers and researchers alike. A biotechnical innovation, in vitro meat production allows edible cells to be grown outside of an animal, eliminating the need for animal slaughter. And unlike Impossible, this
Many modern companies are using plant-based products as meat substitutes—one of the most popular being Impossible Foods. Impossible Foods’ way of tackling the issue is unique, revolving around a recently patented discovery.
technology doesn’t feign meat, but actually replicates it through laboratory procedures. There are two primary means for in vitro meat production: the self-organization technique and the scaffold-based technique. The basic premise of the self-organization technique involves explanting tissue from the muscle of a donor animal and having the tissue proliferate in a nutrient medium to a certain point where it is ready to harvest8. Cells sourced for this explantation are either embryonic stem cells, myosatellite cells, or adult stem cells, each with unique strengths and weaknesses8. Embryonic stem cells, while an obvious option due to their differentiating potential, are susceptible to genetic mutations and do not allow infinite identical replication. Myosatellite cells can produce muscle with high efficiency and accuracy but are rare and possess limited regenerative potential. Finally, adult stem cells also have the necessary differentiating capabilities, but are much less flexible than their embryonic counterparts8. With an understanding of these different cell types, a serum (growth medium) can be selected and other lab conditions can be manipulated to ensure maximum production of myocytes. The scaffold-based method, on the other hand, adds an extra step not included in self-organization. Stem cells are still obtained from cultured tissue, but are then attached to a scaffold where they grow in a selected culture medium and bioreactor8. The scaffold is an integral addition to the previous method and acts as a surface area for the cells to attach to, multiply, and differentiate on, giving the overall composition a different texture and appearance that researchers can manipulate8. As such, careful choice of a compatible and flexible scaffold that stem cells can thrive on is imperative in producing textures similar to meat. Through these methods, only a few initial cells are needed to produce edible muscle cells outside of the animal.
It also shares the same environmental benefits as Impossible, taking less toll on the environment and releasing far fewer emissions than traditional livestock farming.
SHORTCOMINGS AND INNOVATIONS IN THE CULTURED MEAT FIELD While these systems are innovative and exciting, they are not by any means flawless. The serum which the cells rely on is traditionally extracted from either a newborn animal or a fetus, namely fetal bovine serum (FBS)8. Having to extract this growth component for every cell culture is an expensive task that also goes against one of the major goals of this entire technology: limiting livestock as a resource. And with every explant and serum extraction, liquid biopsies would have to be performed to ensure quality, adding to the list of expenses and inefficiencies. The grown cells, while looking like meat, also lack much taste8. As described earlier, taste is formed by the heme cofactor along with a multitude of other water-soluble and fat-derived components that are still unaccounted for in self-organizing cell cultures. Evidently, there is much room for improvement and exploration in this field. Researchers Benjaminson and Lorenz at the NSR/Touro Applied BioScience Research Consortium propose that shiitake and maitake mushroom extracts may be used as substitutes for FBS. While the yield is not yet viable, further research in the field using different mediums and different samples could possibly lead to new breakthroughs in the edible cultured meat industry9. Another part of this technology being studied is the stem cells used for generating the edible meat. A fascinating method of stem-cell programming discovered by Cambridge and Stanford researchers, now being implemented by a Dutch company, could potentially transform the way stem cell differentiation can be controlled and used to our advantage. Currently, the systems discussed face issues of scalability and time—by further programming stem cell growth, however, in vitro meat production could be expedited and maximized to the point of viability.
This could be achieved through Optimized overexpression (Opti-Ox) technology, which operates by overexpressing transgenes from the original organism. It is built upon the Tet-ON system which uses a “constitutively expressed transcriptional activator protein … and an inducible promoter regulated by rtTA (reverse tetracycline transactivator) that drives expression of the transgene.” 11 Using the Tet-ON system and homozygous dual targeting of genomic safe harbor sites (sites on the genome that
Figure 4: Diagram of stem cell function. Source: Wikimedia Commons
Figure 5: Cell culturing in a petri dish. Source: Wikimedia Commons
allow for integration of new genetic material without causing harmful alterations to the host organism), researchers discovered that the expression of inducible transgenes in human pluripotent stem cells (hPSCs) could be controlled11. This experiment was then continued by programming hPSCs into cortical neurons, oligodendrocytes, and, most relevant to this topic, skeletal myocytes—all resulting in more rapid and robust development11. This same technology can be used to overexpress important proteins in cultured meat such as the heme cofactor and enzymes important for lipid and amino acid catabolism. Findings such as these are what keep the cultured meat discussion alive and full of promise. The scaffold method, where there is a push to create more realistic meat-like textures, also has its fair share of good news. Researchers at Harvard are using a new method inspired by cotton candy machines to make fibrous scaffolds that could mimic muscle fibers better than any method used before13. By using immersion rotary jet spinning, gelatin fibers were spun at high rates to generate thin strings of scaffold for culturing cells12. The end result resembled meat, not only in texture, but also in tightness and tenderness13. Crosslinking or co-spinning the gelatin with a microbial crosslinking enzyme was also discovered to have an inhibitory effect on fiber degradation. Though the technology is still far from reaching the diets of consumers, the future of greener meat is inching closer to the present with every new innovation.
DISCUSSION As the state of the planet becomes increasingly unstable, the need for change is greater now than ever before. The FAO (Food and Agriculture Organization of the United Nations) has computed the aggregate production values of various regions around the world to show meat demand and growth rate, and the future is clear: people are not giving up their meat-eating habits any time soon3.
Despite consumption levels slowly declining in some regions, developing countries and India are still showing significant growth in their meat consumption levels3. What is also worrisome is that projections 30 years into the future show only developed countries experiencing negative growth in meat production and consumption, whereas the rest of the world is still experiencing growth in both sectors3. Meat trade expansion is also likely to continue, with meat exports and imports constantly increasing. Economic drive will be what dictates the production of meat, as seen in the sudden growth of animal product exports in Brazil3. With what excessive meat production entails, the Earth may be damaged irreversibly as more land is dedicated toward raising livestock and more emissions heat up the globe. Meat alternatives offer a promising and effective path forward for the future of meat eaters. If the technologies discussed reach fruition and enter the global market one day, they “can be expected to provide a near solution to these environmental problems” devastating our planet8. References [1] Teletchea, F. (2019, June 7). Animal Domestication: A Brief Overview. Animal Domestication. doi:10.5772/intechopen.86783 [2] Ritchie, H., & Roser, M. (2017, August 25). Meat and Dairy Production. Retrieved from https://ourworldindata.org/meat-production. [3] Alexandratos, N., & Bruinsma, J. (2012). World agriculture towards 2030/2050: the 2012 revision. ESA Working Paper No. 12-03. Rome, FAO. [4] Nordgren, A. (2012). Meat and Global Warming: Impact Models, Mitigation Approaches and Ethical Aspects. Environmental Values, 21(4), 437–457. https://doi.org/10.3197/096327112X13466893628067 [5] Fraser, R. Z., Shitut, M., Agrawal, P., Mendes, O., & Klapholz, S. (2018). Safety Evaluation of Soy Leghemoglobin Protein Preparation Derived From Pichia pastoris, Intended for Use as a Flavor Catalyst in Plant-Based Meat. International Journal of Toxicology, 37(3), 241–262.
https://doi.org/10.1177/1091581818766318 [6] Brown, P. O. R. (2018, June 14). Heme & Health: The Essentials. Retrieved from https://medium.com/impossible-foods/heme-health-theessentials-95201e5afffa. [7] impossiblefoods.com. (2017). Impossible™ 2017 Sustainability Report. Retrieved from https://impossiblefoods.app.box.com/v/presskit/file/176187206081.
[8] Sharma, S., Thind, S., & Kaur, A. (2015). In vitro meat production system: why and how? Journal of Food Science and Technology, 52(12), 7599–7607. https://doi.org/10.1007/s13197-015-1972-3 [9] Benjaminson, M., Gilchriest, J., & Lorenz, M. (2002). In vitro edible muscle protein production system (mpps): stage 1, fish. Acta Astronautica, 51(12), 879–889. https://doi.org/10.1016/S0094-5765(02)00033-4 [10] Meatable: Dutch lab grown meat startup bags €9M to develop pork prototype. (2019, December 6). Retrieved from https://siliconcanals.com/crowdfunding/meatable-bagse9m-to-develop-pork-prototype/. [11] Pawlowski, M., Ortmann, D., Bertero, A., Tavares, J. M., Pedersen, R. A., Vallier, L., & Kotter, M. (2017). Inducible and Deterministic Forward Programming of Human Pluripotent Stem Cells into Neurons, Skeletal Myocytes, and Oligodendrocytes. Stem Cell Reports, 8(4), 803–812. doi:10.1016/j.stemcr.2017.02.016 [12] Macqueen, L., Alver, C., Chantre, C., Ahn, S., Cera, L., Gonzalez, G., … MacQueen, L. (2019). Muscle tissue engineering in fibrous gelatin: implications for meat analogs. NPJ Science of Food, 3(1), 20–20. https://doi.org/10.1038/s41538-019-0054-8 [13] Pero, J. (2019, October 22). New method of making lab-grown meat spins gelatin fibers like cotton candy for realistic texture. Retrieved from https://www.dailymail.co.uk/sciencetech/article-7601973/New-method-making-lab-grown-meat-spins-gelatinfibers-like-cotton-candy-realistic-texture.html.
The Plastic Problem
BY AMBER BHUTTA
Cover Image Source: Pexels (Open Source)
INTRODUCTION From packaging to water bottles to straws, plastic is a cornerstone of 21st century life. Despite its role as a short-term conduit for ease and convenience, plastic poses a severe threat to all tiers of marine ecosystems as well as human health. With increasing awareness of its environmental repercussions in recent years, elimination of single-use plastic has made its way to the forefront of the climate justice movement. This awareness and action have been driven by rising levels of single-use plastic in bodies of water. Some estimates posit that at the current rate of plastic production, the total mass of plastics will outweigh the biomass of fish by 2050.1 Though many people will forego plastic straws and water bottles, the environmental consequences of single-use plastic extend far beyond “saving the turtles.” Such consequences place an increased sense of urgency on single-use plastic reduction and the development of possible alternatives. Both the problem and potential solutions lie at the intersection of different disciplines, including environmental science, chemistry, education, and public policy.
WHAT IS PLASTIC? Plastic materials are polymers whose chemical structure allows them to be shaped at elevated temperatures and pressures.2 Two key characteristics that make plastics so useful—their light weight
and durability—also make waste plastics a significant environmental threat. Most plastics degrade very slowly, their chemical structure rendering them resistant to many natural processes of degradation; they break down through a combination of photodegradation, oxidation, and mechanical abrasion.3 Thick plastic items persist for decades, even when subject to direct sunlight, and survive even longer when shielded from UV radiation under water or in sediments. With the exception of expanded polystyrene, plastics take much longer to degrade in water than on land due to the reduced UV exposure and lower temperatures found in aquatic habitats.3 This resistance to degradation allows plastic waste to persist in marine ecosystems for extended periods of time, exacerbating environmental consequences. There are two primary categories of plastic: macroplastics and microplastics. Macroplastics include all plastics greater than 5mm and tend to enter marine environments through rivers, dumping, or poor waste management.4 Microplastics include particles less than 5mm in size, typically originating from macroplastics. Microplastics also include plastics manufactured at less than 5mm in size, notable examples including abrasive scrubbers from cosmetics (primarily microbeads) and fibers from clothing.5 Their small size renders microplastics more difficult to identify and remove from the environment, making them
an especially potent form of pollution.5 Though they have unique characteristics, both macroplastics and microplastics pose a threat to marine life and human health when they enter bodies of water. Plastics enter bodies of water in a number of ways: through water systems, rivers, winds, tides, sewage disposal, and most notably through massive floods.6 Though there are differing estimates of how much plastic waste has been produced in the last century, one estimate suggests that one billion tons of plastic waste have been discarded since the 1950s.7 Another estimate gives a cumulative human production of 8.3 billion tons of plastic, of which 6.3 billion tons is waste, with a recycling rate of only 9% across the globe [Figure 1].7 Much of this waste material may persist for centuries or longer.7
THREATS TO MARINE LIFE Plastic pollution threatens marine life through three primary mechanisms: ingestion, entanglement, and contamination. Ingestion primarily occurs because many marine organisms incorrectly identify plastic debris as food.8 The tiniest microplastics are small enough to be mistaken for food even by zooplankton, allowing plastic to enter the food chain at very low trophic levels and potentially implicating species at higher trophic levels through biomagnification. At higher levels, larger predators are thought to confuse larger macroplastics with fish eggs or other food sources.9 Ingestion of marine debris induces detrimental effects such as pathological alteration, starvation, and mechanical blockage of digestive processes.9 Plastic bags are frequently responsible; they are particularly hazardous for sea turtles, who often mistake them for jellyfish and ingest them, causing esophageal blockage.10 Entanglement tends to cause physical damage to marine organisms. Most often, animals are reported as entangled in fishing-related litter, especially fishing nets. Derelict fishing gear abandoned or lost to the environment kills and injures countless marine mammals, seabirds, fish, and invertebrates annually.11 In addition to fishing-related litter, other anthropogenic (human-generated) plastic debris, such as ropes, plastic bags, and drink holders, can cause entanglement. Besides the direct injuries or death such litter leaves in its wake, it also damages benthic habitats through abrasion, translocation of seabed features, and smothering.11 Contamination occurs due to the interaction of organic pollutants with plastic fragments, especially those at micrometric and nanometric scales. These interactions are especially significant to the detrimental biological effects on organisms in the water column as well as in the sedimentary environment.12 Hydrophobic pollutants co-occurring in the aquatic environment may adsorb onto microplastic debris. Depending on size, plastic fragments then have the potential to transport contaminants more potently through biological membranes and ultimately inside cells of aquatic organisms. As such, contamination can also be considered a secondary consequence of ingestion. The presence of organic pollutants on marine plastics has been illustrated for a wide range of chemicals in natural aquatic conditions.13 Though exposure routes of organic pollutant-enriched microplastics are varied, the toxicity is, for the most part, inversely correlated to the size of the particles: the smaller the particle, the greater the possibility for penetration into the organism, releasing toxic chemicals into the acidic gut. There are a number of toxicity mechanisms, including increased oxidative stress, genotoxicity, depletion of immune competence, impairment of key cell functioning, loss in reproductive performance, disorders in energy metabolism, and changes in liver physiology.14
THREATS TO HUMAN HEALTH Though the specific human health impacts of plastic in bodies of water remain unknown given the field’s recent emergence, many researchers have posited theories about human interaction with plastic waste. Anthropogenic marine debris, especially plastic, has been discovered in marine animals across multiple species and trophic levels, and researchers have investigated the presence of such debris in seafood. After analyzing the gastrointestinal contents of fish and shellfish intended for human consumption in American and Indonesian seafood markets, researchers noted the presence of plastic debris in 28% of fish and shellfish from Indonesia and 9% of fish and shellfish from the United States.15 Though the exact level of debris present varied among species, fish and shellfish from different trophic levels were implicated.15 Such exposure has a number of potential implications for human health; small anthropogenic debris have been shown to cause physical damage leading to cellular necrosis, inflammation, and lacerations of tissues in the gastrointestinal tract.15 Coupled with mechanisms such as biomagnification and bioaccumulation, anthropogenic debris such as plastic also have the potential to cause physical damage when ingested with seafood.
Figure 2: Researcher Jenna Jambeck’s map of per capita plastic waste generation by country.
POLICIES TO REDUCE SINGLE-USE PLASTIC WASTE AND THEIR EFFICACY Given the increasingly dire situation presented by anthropogenic plastic pollution, multiple actors have implemented national and international initiatives to reduce plastic waste and remedy existing pollution.
International
In international law, the International Convention for the Prevention of Pollution From Ships (MARPOL) was signed in 1973, although a complete ban on plastics disposal at sea was not enacted until 1988. Although 134 countries agreed to eliminate plastic disposal at sea, research has shown that marine debris has worsened since MARPOL was signed, a potential indication that the marine debris problem stems more from incorrect disposal of waste on land than at sea.16
National When considering the issue at the level of individual countries, little research has been done on the efficacy of regulating plastic on a national level.17 As such, a narrower scope is required, focused on policy regarding plastic bags. Governments all over the world have initiated bans on the sale of lightweight bags, charges to customers for lightweight bags, and taxes from stores who sell them.17 Some examples include local jurisdictions in North America, Australia, and the United Kingdom,
where bans and fees have been enacted. Some countries in Europe impose a fee per bag used. Germany and Denmark were early adopters of plastic bag regulations in most retail stores, in 1991 and 1994 respectively. However, since 2002, countries in Africa, Asia, and Europe have introduced bans (South Africa, Bangladesh, and India) or levies (Ireland) on plastic bags.17 In most cases, governments have undertaken national approaches. Several countries in Africa and Asia completely banned the use of plastic bags while others have implemented levies. Levies range in cost, frequency (i.e., Malaysia charges a levy on plastic bags on Saturdays only), and plastic bag quality (several countries have levies on bags below a minimum thickness).17 Generally, bans based on plastic bag thickness are inconsistent, making environmentally informed decisions difficult for consumers and retailers. Countries with coastal borders tend to discharge plastic into the oceans, with the largest quantities estimated to come from rapidly developing countries such as India and China.18 However, both India and China have already introduced bans on plastic bags. In 2002, India banned the production of plastic bags thinner than 20 μm to prevent clogging of municipal drainage systems and mortality of cows from ingesting plastic bags containing food, but enforcement of the ban remains a problem. In China, a total ban on the thinnest plastic bags (< 25 μm) and a fee on other plastic bags were introduced on June 1, 2008, which caused plastic bag use to fall 60 to 80% in Chinese supermarkets and resulted in 40 billion fewer bags used. Despite this, the use of plastic bags remains prevalent, particularly among street vendors and smaller stores.18
Educational Campaigns Beyond large-scale initiatives, measures to inform individuals of the dangers of
plastic pollution and encourage individual responsibility have also shown potential. A study recently found that school children in the UK significantly improved their understanding of the causes and negative impacts of marine litter after an educational intervention related to plastic marine debris.19 Children’s perceptions changed as they learnt more about marine litter and understood the causes, impacts, and solutions. More specifically, children’s recognition of the problem significantly increased after the intervention, as did their concern about the issue, resulting in more ecologically responsible behavior.19
CONCLUSION
When considering the myriad of contemporary issues threatening the planet, plastic waste pollution in marine environments has become one of the most prominent problems. The chemical composition and inherent properties of plastic render it resistant to degradation, especially in aquatic environments. Both national and international measures have been implemented to address this issue, to varying degrees of success. Plastic marine litter education campaigns, especially those targeting children to promote ecological consciousness, have potential implications in reducing plastic waste. By acting through both legislative and educational avenues to reduce plastic pollution, we can promote individual and societal responsibility to avert irreparable harm.
References [1] Jambeck, J. R., Geyer, R., Wilcox, C., Siegler, T. R., Perryman, M., Andrady, A., … Law, K. L. (2015, February 13). Plastic waste inputs from land into the ocean. Retrieved from https://science.sciencemag.org/CONTENT/347/6223/768.abstract. [2] Webb, H. K., Arnott, J., Crawford, R. J., & Ivanova, E. P. (2012, December 28). Plastic Degradation and Its Environmental Implications with Special Reference to Poly(ethylene terephthalate). Retrieved from https://www.mdpi.com/2073-4360/5/1/1. [3] Thompson, R. C., Moore, C. J., vom Saal, F. S., & Swan, S. H. (2009, July 27). Plastics, the environment and human health: current consensus and future trends. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2873021/. [4] Eerkes-Medrano, D., Thompson, R. C., & Aldridge, D. C. (2015, February 17). Microplastics in freshwater systems: A review of the emerging threats, identification of knowledge gaps and prioritisation of research needs. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S0043135415000858?via=ihub.
[5] Wu, W.-M., Yang, J., & Criddle, C. S. (2016, December 30). Microplastics pollution and reduction strategies. Retrieved from https://link.springer.com/article/10.1007/s11783017-0897-7. [6] Galafassi, S., Nizzetto, L., & Volta, P. (2019, July 22). Plastic sources: A survey across scientific and grey literature for their inventory and relative contribution to microplastics pollution in natural environments, with an emphasis on surface water. Retrieved from https://www.sciencedirect.com/science/article/pii/S0048969719334199. [7] Geyer, R., Jambeck, J. R., & Law, K. L. (2017, July 1). Production, use, and fate of all plastics ever made. Retrieved from https://advances.sciencemag.org/content/3/7/e1700782. [8] Wilcox, C., Mallos, N. J., Leonard, G. H., Rodriguez, A., & Hardesty, B. D. (2016, January 11). Using expert elicitation to estimate the impacts of plastic pollution on marine wildlife. Retrieved from https://www.sciencedirect.com/science/article/pii/S0308597X15002985. [9] Seltenrich, N. (2015, February). New link in the food chain? Marine plastic pollution and seafood safety. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4314237/.
[10] Lazar, B., & Gračan, R. (2010, October 30). Ingestion of marine debris by loggerhead sea turtles, Caretta caretta, in the Adriatic Sea. Retrieved from https://www.sciencedirect.com/science/article/abs/pii/S0025326X10004297. [11] Gregory, M. R. (2009, July 27). Environmental implications of plastic debris in marine settings-entanglement, ingestion, smothering, hangers-on, hitch-hiking and alien invasions. Retrieved from https://royalsocietypublishing.org/doi/full/10.1098/rstb.2008.0265. [12] Gomiero, A., Strafella, P., & Fabi, G. (2018, November 5). From Macroplastic to Microplastic Litter: Occurrence, Composition, Source Identification and Interaction with Aquatic Organisms. Experiences from the Adriatic Sea. Retrieved from https://www.intechopen.com/books/plastics-in-the-environment/from-macroplastic-tomicroplastic-litter-occurrence-composition-sourceidentification-and-interactio. [13] Rani, M., Shim, J., Jang, M., Al-Odaini, N. A., Song, Y. K., & Hong, S. H. (2015, September 2). Qualitative Analysis of Additives in Plastic Marine Debris and Its New Products. Retrieved from https://link.springer.com/article/10.1007/s00244-015-0224-x. [14] Gallo, F., Fossi, C., Weber, R., Santillo, D., Sousa, J., Ingram, I., … Romano, D. (2018). Marine litter plastics and microplastics and their toxic chemicals components: the need for urgent preventive measures. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/
PMC5918521/. [15] Rochman, C. M., Tahir, A., Williams, S. L., Baxa, D. V., Lam, R., Miller, J. T., … Teh, S. J. (2015, September 24). Anthropogenic debris in seafood: Plastic debris and fibers from textiles in fish and bivalves sold for human consumption. Retrieved from https://www.nature.com/ articles/srep14340. [16] Xanthos, D., & Walker, T. R. (2017, February 21). International policies to reduce plastic marine pollution from single-use plastics (plastic bags and microbeads): A review. Retrieved from https://www.sciencedirect.com/ science/article/pii/S0025326X17301650. [17] Schnurr, R. E. J., Alboiu, V., Chaudhary, M., Corbett, R. A., Quanz, M. E., Sankar, K., … Walker, T. R. (2018, October 12). Reducing marine pollution from single-use plastics (SUPs): A review. Retrieved from https://www.sciencedirect.com/ science/article/pii/S0025326X18307033. [18] Jambeck, J. R., Geyer, R., Wilcox, C., Siegler, T. R., Perryman, M., Andrady, A., … Law, K. L. (2015, February 13). Plastic waste inputs from land into the ocean. Retrieved from https://science.sciencemag.org/ CONTENT/347/6223/768.abstract. [19] Hartley, B. L., Thompson, R. C., & Pahl, S. (2014, November 26). Marine litter education boosts children's understanding and self-reported actions. Retrieved from https://www.sciencedirect.com/science/article/pii/ S0025326X14007334.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
M E D I C A L TO U R I S M
Medical Tourism's Impact on Rural Indian Citizens BY ANAHITA KODALI
WHAT IS MEDICAL TOURISM? In 2008, Henry Konczak, a 65-year-old musician and video producer living in Ohio, was diagnosed with a heart murmur and blood infection. Cleveland Clinic told him that, not counting fees from the cardiac surgeon, he would have to pay more than $130,000. Without medical insurance to foot some of the cost, Konczak was faced with a seemingly impossible choice: bankruptcy or death. He instead chose a third option: flying to India for surgery. The cost of his entire trip, including hotels, travel, and medical bills? 10,000 dollars1. Medical tourism involves citizens of one country traveling to other countries in order to receive medical care. Medical tourism has historically been undertaken by citizens of underdeveloped countries, who would travel to America or Europe to receive care from more modern and wealthy hospitals and doctors. Recently, however, a trend has emerged wherein citizens of highly developed countries choose to travel and receive care in developing nations rather than in their home countries2. In 2015, the US International Trade Commission estimated that between 150,000 and 320,000 Americans chose to become “medical tourists” and travel internationally to have medical procedures performed3. One of the most common reasons American citizens choose to travel for medical care is the cost of American healthcare. In 2016,
the US spent almost two times more than other high-income countries on healthcare, yet performed consistently below average on many health outcomes despite having no significant differences in healthcare utilization as compared to others4. Driving this disproportionate expenditure are the more expensive medical services in the US5. For example, one doctor told the story of his mother's issue with medical expenses in the US: after suffering from chest pain, she was told by doctors that a conservative estimate for the cost of cardiac catheterization (which amounted to one stent and one overnight stay at the hospital) would be 47,000 dollars. They decided to travel to Bangalore for medical treatment instead and ultimately paid 110,000 rupees (1,700 dollars) for the cost of a CABG and 5 nights of stay at the hospital20. In addition to the higher cost of services in America, there is also a widespread lack of affordable insurance. Between the years 1987 and 2008, the uninsured rate stayed relatively high and stable at about 15% [see Figure 1]; during the next 8 years, the country managed to reach record low uninsured rates. In 2016, only 8.8% of Americans went without insurance18. Between the years 2016 and 2018, the number of Americans without insurance increased by 1.2 million (about 45% of those uninsured cited high costs of insurance as the reason why)6. The high costs of medical services in the US, coupled with a lack of insurance for some,
Cover Image Source: Wikimedia Commons
"Medical tourism has historically been undertaken by citizens of underdeveloped countries, who would travel to America or Europe to receive care from more modern and wealthy hospitals and doctors."
Figure 1: This graph is representative of health insurance coverage in the US between the years 1987 and 2008. The data shows that, while the number of uninsured US citizens rose, the uninsured rate stayed relatively stable at around 15%. Source: Wikimedia Commons.
"While obvious choices for international care include first-world Western nations with socialized healthcare, India has emerged in the past few years as one of the world's premier medical tourism destinations."
has led many Americans to choose “medical tourism.” While obvious choices for international care include first-world Western nations with socialized healthcare, India has emerged in the past few years as one of the world’s premier medical tourism destinations. While medical tourism has clear benefits for the tourists themselves and benefits for the nation of India, juxtaposing the Indian government’s facilitation of medical tourism with the lack of public medical access for the poor local population forces us to stop and consider the effects of medical tourism on Indian citizens.
THE BENEFITS OF MEDICAL TOURISM There are several reasons why Americans choose to travel to India to receive medical care. In general, medical tourists highly prioritize cost and service quality when choosing where to travel7. As with care from most Asian medical tourist destinations, travelers on average can save about 50% on their medical costs if they go to India7.
Figure 2: This image is representative of the Ayurvedic doshas (or life forces). Ayurvedic medicine is based on a complete system of wellness, involving the three central doshas of vatha, pitha, and kapha. All matter is composed of the five outer elements: ether, earth, water, fire, and air. These doshas and elements lead to the five main tenets of Ayurvedic medicine: movement, cold, cohesion, transformation, and light. By following these tenets, people can live a healthy lifestyle. Source: Wikimedia Commons.
More
than just having cheap care, however, India’s standard of care in private hospitals rivals the standard of care in developed Western nations. India offers quick and sophisticated cardiac and orthopedic procedures to Americans who otherwise would have to wait months and pay hundreds of thousands of dollars8. The Indian government has further aided medical tourism by granting medical visas (M visas), which are given to travelers coming to India for medical treatment in recognized hospitals9. The Indian government has gained a lot by making India a top-tier medical tourist destination, particularly due to stimulation of the Indian economy. Between the years 2016 and 2018, medical tourists, who accounted for slightly under 5% of tourists coming into the country, provided about 470,000 crore rupees (over 65 billion US dollars) in profit for Indian hospitals. Additionally, after such income is invested back into the private medical sector, technological and medical improvements have allowed India’s healthcare system to modernize and acquire top-of-the-line medical equipment and doctors11. By smoothing the process of traveling to India for medical care through implementation of the M visa, and creating an easy-to-use online website available in several languages that is accessible to Westerners, the Indian government has certainly made the development of medical tourism a policy priority11. Medical tourism has accordingly been made a main focus of the government’s tourism branches. The Ministry of Tourism now recognizes medical tourism as a Niche Tourism Product (which designates it as a specific industry to promote more than others). Additionally, a National Medical and Wellness Tourism Board has been created to promote the use of natural Indian systems of medicine
Figure 3: This is an image of a hospital in Tamil Nadu, a rural area of India. From it, you can immediately see the problems that healthcare staffers would have, including a lack of good infrastructure (the road is unpaved) and a lack of physical space (the hospital building is small). Source: Wikimedia Commons
(some of the most common include Ayurveda and yoga [see Figure 2])11. The government is also trying to restructure the infrastructure of Indian airports to better serve tourists10. Health policies encourage hospitals to cater to and provide services for foreigners10. Finally, in partnership with foreign companies, India has built many specialty hospitals specifically created for medical tourists10.
PROBLEMS IN INDIAN MEDICAL INFRASTRUCTURE Healthcare in India can be divided into two separate systems: public and private. The public sector, run by the government, comprises a complex three-tier system. When broken down, these three tiers include subcenters, primary health centers (PHCs), and community health centers (CHCs). Subcenters are built in areas with 5,000 people or fewer, or in hilly and hard-to-reach areas with 3,000 people. They act as the first point of contact between communities and primary health care systems and handle issues such as maternal and child health, counseling, and disease management. These offices need to have at least 3 staffers (a midwife, a male health worker, and a female health worker) and are subsidized by the national government with welfare payments12. PHCs are built in areas with 30,000 people or fewer, or in hilly and hard-to-reach areas with 20,000 people. Their main goals are outbreak prevention and preventative medicine. These offices need to have at least 15 workers: a medical officer and 14 other staffers. They are maintained by state governments12. CHCs are built in areas with 120,000 people, or in hilly and hard-to-reach areas with 80,000 people. They have the most specialized
and technologically advanced facilities. These offices require a surgeon, physician, gynecologist or obstetrician, and pediatrician, along with 21 other staffers. They are also maintained by state governments12. The private sector is relatively unstructured. The government theoretically plays a key role in regulating it, but there is no main policy framework to universally regulate private medical companies, and existing laws are enforced poorly. As a consequence of this underregulation, private healthcare has received a disproportionate amount of subsidies, keeping desperately needed money from the public sector. When the government has attempted policy reform, there has been a significant amount of pushback from the private sector, and the private sector remains virtually unchecked11.
“When the government has attempted policy reform, there has been a significant amount of pushback from the private sector, and the private sector remains virtually unchecked.”
India’s rural communities present unique challenges to medical care providers. For one, there is a basic lack of equipment and access [see Figure 3]. Thousands of CHCs in the rural country operate with a severe lack of equipment, and interruptions in water and electrical supplies have taken a toll on their already suffering performance. A lack of the government buildings (separate from PHCs and CHCs) required to establish and direct construction of PHCs and CHCs has prevented certain areas of the country from having easy access to medical facilities. The enormous shortage of staffers further worsens this condition. Although each tier is legally required to maintain certain numbers of doctors, most well-trained doctors and technicians prefer to work in urban settings, causing a lack of proper medical care even when equipment and buildings are available to rural citizens. Even when equipment is available and staffers are well trained, since rural communities are generally poorer,
Figure 4: This graph shows the poverty rates in India in the year 2012. The deeper the red color, the more citizens of that state live below the poverty line. Many of the states that are the darkest red are in North India, the relatively underdeveloped half of the country. Overall, India’s poverty average was well above international levels, and the nation makes up a significant proportion of the world’s impoverished people.
to work in the private sector more, leaving fewer well-trained doctors for rural communities that are only served by public hospitals10. Based on these trends and the current status of private medicine, it is likely that the government will allocate fewer resources to the public sector in coming years. The limited financial resources reserved for public medicine will get redirected into growing the private sector when medical tourism becomes a priority14. The private sector’s land and subsidy demands are legitimized when the Indian government puts medical tourism above all else; thus, as the industry grows and more tourists flock to India, the private sector will likely continue to receive priority over the public sector in terms of funding.
Source: Wikimedia Commons
"The Indian government can offset the growing gap between public and private healthcare by diverting money generated from medical tourism back to the public sector to directly address the lack of infrastructure, equipment, and government buildings in rural India."
their facilities are traditionally outdated, so staffers cannot provide their best level of care, as outdatedness causes equipment to malfunction and makes hospitals generally more inaccessible13. On the contrary, private hospitals are frequently well kept, have good facilities, and are easily accessible to those living in urban areas. Based on the equipment, personnel, and performance of public and private sector medical facilities, the divergence in quality of care is blatantly obvious. The impact of this gap in the quality of care can be clearly seen in a map of poverty levels in India [see Figure 4]. Much of the poverty is concentrated in the North, and, as a result, there is a huge disparity between the levels of healthcare in the North and the South without even considering the impacts of medical tourism; the disparity arises because the North is more rural than the South and therefore has fewer hospitals19. Compared to the South, Northern households make inefficient use of medical services provided to them; in addition, Northern hospitals incurred more costs and received more patients over the course of one year19. Looking to the future, this gap will only be exacerbated by the growing prevalence of medical tourism in the country. Travelers who pay for their medical services out of pocket go to private medical centers to receive their medical care, thereby markedly increasing the income for the private sector and establishing it as a cornerstone of the Indian economy. This, coupled with the fact that private medicine is underregulated and generally has more funding to begin with, makes private hospitals more desirable places to work for trained doctors and technicians. Thus, medical staffers are choosing
Overall, medical tourism has clearly contributed to the disparity between the public and private medical sectors in India. Pre-existing problems for the public sector (like lack of access to well-trained medical staffers, money, and other resources) will only get worse as time goes on if the government continues to underregulate the private sector.
POSSIBLE SOLUTIONS The Indian government can offset the growing gap between public and private healthcare by diverting money generated from medical tourism back into the public sector to directly address the lack of infrastructure, equipment, and government buildings in rural India. This money could also serve to provide more resources and funding to rural medical centers and to modernize older equipment. This will make the centers, particularly PHCs and CHCs, attractive to well-trained doctors, who would be given the chance to work with the modern equipment they would encounter in private sector hospitals and also receive competitive salaries. Since so many of the decisions regarding PHCs and CHCs are made at the state and local level, the government needs to make an effort to work with local representatives rather than directly allocating funds15. Local officials can tell the government where its additional financial resources are required, making the use and allocation of funds much more effective. The government also needs to focus on creating equitable growth between the public and private sectors rather than allowing the private sector to grow rampantly. An important step towards this goal would be the creation of a national health insurance policy. This would allow the government to demonstrate its commitment to reducing poverty levels and
giving rural communities more access to better quality medicine. Given growing activism from the lower and middle classes to improve public medicine, including social activists in national discussions of healthcare would also allow the government to understand the needs of citizens at a local level16. Of course, barriers exist to closing the gap between the public and private sectors. For one, diverting funds from medical tourism to the public sector slows the growth of the medical tourism industry, leaving less money generated overall. Corruption also runs rampant within all levels of the Indian government17. A concentrated effort to synchronize local and national levels of government is therefore problematic. If corrupt officials at the local level are swayed by private businesses, then the information the national government needs to allocate more resources to local areas will never reach it. If private businesses are able to bribe officials at the national level, those officials have the power to stop certain research or the inclusion of local activists in discussions.
CONCLUSIONS Medical tourism has, in many ways, helped India. Besides boosting the Indian economy, medical tourism has helped make India globally renowned as one of the world’s premier sites for medical innovation and excellence. This exposure has also highlighted other attractions the country has to offer, including its rich culture and gorgeous landscapes. While there is due pessimism about the future of the Indian healthcare system, there are clearly steps that the Indian government can take in order to decrease the disparities between public and private medicine. As India continues to develop, the hope is that many of the problems regarding corruption will fade and that the country will achieve greater economic stability and allocate resources more effectively. Researchers, doctors, and patients alike can then hold much greater optimism about India’s future. References [1] Block, D. (2018, January 2). India's Hospitals Are Filling Up With Desperate Americans. Retrieved from https://foreignpolicy.com/2018/01/02/indias-hospitals-are-filling-up-with-desperate-americans/. [2] Horowitz, M. D., Rosensweig, J. A., & Jones, C. A. (2007). Medical Tourism: Globalization of the Healthcare Marketplace. MedGenMed, 9(4). [3] Chambers, A. (2015, August). Trends in U.S. Health Travel Services Trade.
[5] Fuchs, V. R. (2013). How and Why US Health Care Differs From That in Other OECD Countries. JAMA, 309(1), 33–34. doi: 10.1001/jama.2012.125458 [6] Tolbert, J., Orgera, K., Singer, N., & Damico, A. (2019, December 13). Key Facts about the Uninsured Population. [7] Sultana, S., Haque, A., Momen, A., & Yasmin, F. (2014). Factors Affecting the Attractiveness of Medical Tourism Destination: An Empirical Study on India - Review Article. Iranian Journal of Public Health, 43(7), 867–876. [8] Sengupta, A. (2005). The private health sector in India is burgeoning, but at the cost of public health care. The BMJ, 331(7526), 1157–1158. doi: 10.1136/bmj.331.7526.1157 [9] India Visa. (n.d.). Retrieved from https://india.travisa.com/VisaInstructions.aspx?CitizenshipID=US&CountryID=IN&TravelerTypeID=MED&ResidenceID=US&PartnerID=TA. [10] Government of India Ministry of Tourism. (2018, March 19). Over 4 Lakhs Foreign Tourist Arrivals (FTAs) in India for Medical Purpose during 2016: Tourism Minister. [11] Hazarika, I. (2010). Medical tourism: its potential impact on the health workforce and health systems in India. Health Policy and Planning, 25(3), 248–251. doi: 10.1093/heapol/czp050
“Besides boosting the Indian economy, medical tourism has helped make India globally renowned as one of the world's premier sites for medical innovation and excellence."
[12] Chokshi, M., Patil, B., Khanna, R., Neogi, S. B., Sharma, J., Paul, V. K., & Zodpey, S. (2016). Health systems in India. Journal of Perinatology, 36(Suppl 3), S9–S12. doi: 10.1038/jp.2016.184 [13] Agarwal, S. P. (2002). National Health Policy 2002: New perspectives. THE NATIONAL MEDICAL JOURNAL OF INDIA, 15(4), 213–214. [14] Mutalib, N., Ming, L. C., Yee, E., Wong, P., & Soh, Y. (2016). Medical Tourism: Ethics, Risks and Benefits. Indian Journal of Pharmaceutical Education and Research. 50, 261-270. doi: 10.5530/ijper.50.2.6. [15] Balarajan, Y., Selvaraj, S., & Subramanian, S. V. (2011). Health care and equity in India. Lancet, 377(9764), 505–515. doi: 10.1016/S0140-6736(10)61894-6 [16] Bloom, G., Kanjilal, B., & Peters, D. H. (2008). Regulating Health Care Markets In China And India. Health Affairs, 27(4), 952–963. doi: 10.1377/hlthaff.27.4.952 [17] Desai, R. D. (2018, March 7). India Continues To Rank Among Most Corrupt Countries In The World. [18] Barnett, J. C., & Berchick, E. R. (2018, July 25). Health Insurance Coverage in the United States: 2016 [19] Prinja, S., Kanavos, P., & Kumar, R. (2012). Health care inequities in north India: Role of public sector in universalizing health care. Indian Journal of Medical Research, 136(3), 421– 431. [20] Rao, S. R. (2014). A Tale of 2 Countries: The Cost of My Mother’s Cardiac Care in the United States and India. Annals of Family Medicine: American Academy of Family Physicians, 12(5), 470–472. doi: 10.1370/afm.1676
[4] Papanicolas, I., Woskie, L. R., & Jha, A. K. (2018). Health Care Spending in the United States and Other High-Income Countries. JAMA, 319(10), 1024–1039. doi: 10.1001/jama.2018.1150
Insurance Coverage in the United States: 2008", United States Department of Commerce: United States Census Bureau, Page 22, by C. DeNavas-Walt, B. D. Proctor, J. C. Smith. 2008. Retrieved from https://commons.wikimedia.org/wiki/File:U.S._Uninsured_and_Uninsured_Rate_(1987_to_2008).JPG.
Figure 2. Image of Ayurvedic doshas and elements. Reprinted from Wikimedia Commons, "Ayurveda humors." 2010. Retrieved from https://commons.wikimedia.org/wiki/File:Ayurveda_humors.png
Figure 3. Image of a hospital in Tamil Nadu, a rural area of India. Source: Wikimedia Commons.
Figure 4. Graph of poverty rates in India in 2012. Reprinted from Wikimedia Commons, "2012 Poverty distribution map in India by its states and union territories," by M. T. Hunter. 2012. Retrieved from https://commons.wikimedia.org/wiki/File:2012_Poverty_distribution_map_in_India_by_its_states_and_union_territories.svg
Rejected Alzheimer's Drugs Might Actually Work! BY ANIKETH YALAMANCHILI Figure 1: Proteins that display the structural connections of a patient with Alzheimer's. Source: Flickr
"In general, [Alzheimer's] is characterized by the destruction of memory and other important mental functions, a consequence of the brain's cells - and the connections between them - degenerating and dying." Figure 2: Portrait of Dr. Alois Alzheimer. Dr. Alzheimer was the scientist who discovered Alzheimer's disease in 1906 while studying the brain tissue of a woman who'd exhibited symptoms of Alzheimer's while alive; Dr. Alzheimer was a key figure in the advancement of neurological research in the 20th century. Source: Wikimedia Commons
INTRODUCTION Alzheimer’s disease, also known as senile dementia, is a horrifying predicament. A crippling confusion can beset affected individuals at any moment due to seemingly random memory loss. One such affected individual is named Harry Urban and has lived with Alzheimer’s disease for seven years. It has made his reality a living nightmare; Harry says that he does not know what to expect when he wakes up in the morning, and has “good days,” “bad days,” and “Alzheimer’s days.3” Alzheimer’s disease negatively affects Harry’s quality of life - his sudden memory loss disrupts any and all understanding of what Harry is or should be doing. Harry cannot touch anything that has to be plugged in, cannot go to a restaurant for fear of being overwhelmed by choices, cannot drive because he won’t know what lane to be in, and cannot go shopping because too many voices will drive him crazy.3
of death among older people specifically, ranking behind heart disease and cancer.2 Alzheimer’s disease is also the most common type of dementia, accounting for 50%-75% of dementia cases.4 Dementia leads to a loss of cognitive function, which affects the ability to think, remember, reason, and behave (to the extent that the capacity to carry out daily activities is compromised).2
So, what is Alzheimer’s disease exactly and how does it work? In general, it is characterized by the destruction of memory and other important mental functions, a consequence of the brain’s cells - and the connections between them - degenerating and dying. The sad fact of the matter is that there exists no cure; the only remedies are medication and management strategies to mitigate symptoms.1 This progressive, irreversible brain disorder is the sixth leading cause of death in the United States and the third leading cause
ALZHEIMER'S Alzheimer’s disease was named after the scientist who discovered it in 1906, Dr. Alois Alzheimer. Dr. Alzheimer discovered the disease when studying changes in the brain tissue of a woman who had exhibited “memory loss, language problems, and unpredictable behavior.2” A postmortem examination of the woman’s brain revealed atypical clumps and tangled bundles of fibers.2 The tangled bundles of fibers were actually “intraneuronal neurofibrillary tangles containing phosphorylated tau proteins,” and the atypical clumps were “extracellular neuritic plaques containing aggregated amyloid beta (Aβ) peptide.4” These amyloid plaques and tau tangles are key characteristics of Alzheimer’s disease. They signify the loss of the brain’s nerve cell connections, which hinders the transmission of messages between the brain and other organs. As further studies are conducted on changes occurring in the brains of Alzheimer’s patients, it has become evident that such neural changes can occur a decade before memory problems start to surface. It is in this preclinical stage of Alzheimer’s disease that people appear fine even as harmful changes occur in their brains; such changes include irregular proteins forming amyloid plaques and tau tangles and nerve cells losing their function, breaking their connections, and starting to die. Specifically, these changes occur first in the parts of the brain involved in making memories: the hippocampus and the entorhinal cortex. As more nerve cells die, more of the brain is affected by atrophy. The last stage of Alzheimer’s disease brings widespread destruction and significantly shrunken brain tissue.2 In terms of symptoms, memory issues are usually the first signs of Alzheimer’s disease-related cognitive impairment. Such memory loss is termed mild cognitive impairment (MCI) and is characterized by more memory issues than is normal for a patient’s age, but not to the extent that the symptoms interfere
with their daily lives. Additional symptoms of MCI include difficulty moving and an impaired sense of smell. Although older people with MCI have a large risk of developing Alzheimer’s disease, it is not definite, and some even regain normal cognition. MCI-like symptoms are not necessarily the first symptoms of Alzheimer’s disease; such symptoms are not the same in every person. For instance, some experience a “decline in non-memory aspects of cognition, such as word-finding, vision/spatial issues, and impaired reasoning or judgment, at the very early stages of Alzheimer’s disease.” Early diagnosis of Alzheimer’s disease requires more research; hopeful for a breakthrough, scientists are looking for biomarkers (“biological signs of disease detected in brain images, cerebrospinal fluid, and blood”) in the brains of those with MCI and others at risk of developing Alzheimer’s disease. Mild Alzheimer’s disease is indicated by heightened memory loss and other difficulties in cognition, leading to commonly seen problems such as “wandering and getting lost.” Moderate Alzheimer’s disease is indicated by damage in areas of the brain responsible for functions including “language, reasoning, sensory processing, and conscious thought,” which might make it hard to remember friends and family. Severe Alzheimer’s disease occurs when amyloid plaques and tau tangles have completely taken over and shrunk brain tissue, at which point a person is completely bedridden.2
"As more nerve cells die, more of the brain is affected by atrophy. The last stage of Alzheimer's disease brings widespread destruction and significantly shrunken brain tissue."
Other than MCI, risk factors for Alzheimer’s disease include age (as aging can diminish the brain), family history, and genetics (although genetic indications only account for 5% of Alzheimer’s cases). Alzheimer’s, as caused by any of the aforementioned factors, can only be definitively diagnosed around age 65, as middle-aged diagnoses of Alzheimer’s tend to be inaccurate.1
Figure 3: Image of a healthy brain on the left and a brain suffering damage from Alzheimer's disease on the right. The hippocampus and entorhinal cortex, areas of the brain important for memory, are shown to have atrophied because of amyloid plaques and tau tangles destroying nerve cells and nerve cell connections. Source: Wikimedia Commons
TREATMENT OPTIONS Alzheimer’s disease has no cure, and the complexity of the disease makes it likely that no single drug or treatment will be able to cure it. Existing treatments aim to preserve existing (albeit diminished) mental function, control the behavioral symptoms, and slow the progression of symptoms. While current drugs approved by the FDA for treating Alzheimer’s are far from adequate, they serve a noble purpose in allowing the victims of Alzheimer’s disease to feel as comfortable as possible while the disorder progresses. Most drugs and treatments target the starting and intermediate stages of Alzheimer’s disease, but none can stop it altogether.5
"Alzheimer's disease has no cure, and the complexity of the disease makes it likely that no single drug or treatment will be able to cure it."
For the treatment of mild to moderate Alzheimer’s disease, cholinesterase inhibitors are often prescribed. The cholinesterase inhibitors Razadyne® (galantamine), Exelon® (rivastigmine), and Aricept® (donepezil) work to reduce certain symptoms while controlling other behavioral ones. While there is no concrete consensus on how cholinesterase inhibitors work, it is believed they prevent the degradation of acetylcholine (a chemical in the brain involved in memory and thinking). Amyloid plaques present in those with Alzheimer’s disease accelerate this degradation by increasing the activity of cholinesterase, the enzyme that breaks down acetylcholine. However, the progression of Alzheimer’s disease causes the brain to produce progressively less acetylcholine, so at a certain point the cholinesterase inhibitors have no effect.5 For the treatment of moderate to severe Alzheimer’s disease, the NMDA antagonist Namenda® (memantine) is called for. The main purpose of this medication is to lessen symptoms so as to allow individuals to carry out their daily tasks for a longer time. Namenda works by regulating glutamate, another important neurotransmitter that can cause brain cell death in excess amounts. Because its mode of action is fundamentally different from that of the cholinesterase inhibitors, the two medications can be prescribed in tandem.5 In addition to Namenda, the FDA has approved Aricept®, the Exelon® patch (other cholinesterase inhibitors), and Namzaric® (a mixture of an NMDA antagonist and a cholinesterase inhibitor, combining Namenda® and Aricept®) to treat moderate to severe Alzheimer’s disease.5 Doctors will start Alzheimer’s patients at low doses of the aforementioned drugs; the dosage a patient receives is then increased
43
by the doctor a marginally after seeing how the patient reacts. It is suggested that some individuals might be better served by higher doses of cholinesterase inhibitors, for the minimum effective dose might not be all that effective for them. The drawback is that as the dosage increases, so too does the likelihood that side effects will occur, and as such, patients should always be watched carefully when started on a new drug.5 Each of the aforementioned medications have possible side effects, which include nausea, vomiting, diarrhea, and loss of appetite.5 While the aforementioned drugs deal more with decelerating the progress of Alzheimer’s symptoms related to mental function, treatment for the behavioral symptoms of Alzheimer’s disease is still being researched. The most common behavioral symptoms include, sleeplessness, wandering, agitation, anxiety, aggression, restlessness, and depression.5 However, there is a general consensus that medication should be used to address these symptoms only after other avenues not involving drugs have been exhausted. This is presumably because Alzheimer’s disease is dynamic and how symptoms are treated changes accordingly. Once drugs enter the equation, it can be hard to see whether they are having any effect, but risky still to stop taking them for fear of symptoms worsening.5
SOMETHING DIFFERENT Aducanumab, a drug intended to slow the progression of Alzheimer’s disease, was shelved in 2019 after discouraging initial results; the drug’s testing history has been a mixed bag. Aducanumab was designed to remove amyloid from the brain, yet preliminary data from two phase III trials did not predict that the drug would succeed in slowing the destruction of the brain and the progression of symptoms.6 That early prediction turned out to be premature, in part because the highest doses of Aducanumab had initially been limited for fear of side effects. When more extensive data from the two clinical trials were analyzed, patients receiving the highest doses of the drug appeared, in fact, to have benefited from it: individuals given the highest dose, 10 milligrams per kilogram of body weight, saw their disease progress at a 30 percent lower rate than those who received a placebo. Given these promising results, the biotechnology company Biogen plans to request FDA approval for Aducanumab in early 2020.6 What makes the results of the two clinical trials so exciting is that no current treatment for Alzheimer’s disease focuses on stopping damage to the brain; existing drugs instead tackle the symptoms, which can only be a temporary measure. Aducanumab has the potential to delay or halt the brain damage that gives rise to Alzheimer’s devastating symptoms.6 The trial results also lend credence to the hypothesis that the accumulation of amyloid, a sticky protein whose misfolding causes it to clump into the amyloid plaques found in the brains of those with Alzheimer’s disease, is a major initial step of the brain disorder.8 This point has been contested: some researchers argue that amyloid aggregation is not an important step in Alzheimer’s disease but rather a red herring. As an antibody, Aducanumab is designed to target amyloid fibrils (or plaques) for removal.6 During the trials, Biogen increased the doses of some participants who had been assigned lower doses because of potential side effects. Surveys measured 3,285 participants’ capabilities in “memory, orientation, personal care and problem solving.”6 In the first trial, the subgroup of 228 people receiving the highest dose saw a 30% slower progression of symptoms, though some mental skills were still lost over the span of eighteen months. In the second trial, the subgroup of 282 saw a 27% slower progression of symptoms than those who took a placebo. Brain scans also showed that, compared to placebo, those who took Aducanumab had fewer of the problematic amyloid and tau proteins.6
It is evident that further adjustment is needed on Biogen’s behalf. Scientists point out that Aducanumab’s clinical trials changed dosing halfway through and ended early. And as with current Alzheimer’s disease medications, the higher the dosage, the more likely the side effects: 40 percent of those taking the highest dose of Aducanumab displayed “brain swelling or bleeding.”6
A FUTURE FOR ADUCANUMAB As medical care has made remarkable leaps in its effectiveness at preserving human health compared to the standards of care at the start of the 20th century, people are living longer. This owes, in large part, to the development of highly effective treatments for infectious disease: gone are the massive epidemics that once struck swaths of the population (at least in developed countries). Yet the threat of Alzheimer’s disease today is as prevalent as ever because, even though medicine is better equipped to combat infectious diseases, chronic diseases remain ever-present. Treatments like Aducanumab are revolutionary because they signal another leap in medicine, one that promises cures for chronic diseases.
“As medical care has made remarkable leaps in its effectiveness at preserving human health compared to the standards of care at the start of the 20th century, people are living longer."
References [1] Alzheimer's disease. (2018, December 8). Retrieved from https://www.mayoclinic.org/diseases-conditions/alzheimers-disease/symptoms-causes/syc-20350447. [2] Alzheimer's Disease Fact Sheet. (n.d.). Retrieved from https://www.nia.nih.gov/health/alzheimers-disease-fact-sheet. [3] Botek, A.-M. (2012, October 10). Another World: People With Alzheimer's Share Their Perspectives. Retrieved from https://www.agingcare.com/articles/alzheimers-patients-share-their-experiences-153702.htm. [4] Ferrero, J., Williams, L., Stella, H., Leitermann, K., Mikulskis, A., O'Gorman, J., & Sevigny, J. (2016, June 20). First-in-human, double-blind, placebo-controlled, single-dose escalation study of aducanumab (BIIB037) in mild-to-moderate Alzheimer's disease. [5] How Is Alzheimer's Disease Treated? (n.d.). Retrieved from https://www.nia.nih.gov/health/how-alzheimers-disease-treated. [6] Sanders, L. (2019, December 17). A once-scrapped Alzheimer's drug may work after all. Retrieved from https://www.sciencenews.org/article/once-scrapped-alzheimers-drug-aducanumab-may-work-after-all. [7] Selkoe, D. J. (2013, September). The therapeutics of Alzheimer's disease: where we stand and where we are heading. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/25813842. [8] Staff, S. X. (2015, December 18). The case of the sticky protein. Retrieved from https://phys.org/news/2015-12-case-sticky-protein.html.
A New Frontier in Cancer Treatment: Immunotherapy and Adoptive T Cell Therapy BY ANNA BRINKS Cover Image: Anti-tumor immune response: Using a novel imaging technique called transparent tumor tomography that three-dimensionally visualizes the tumor microenvironment at single-cell resolution, researchers obtained this image from a mouse model of HER2-positive breast cancer. Shown are cytotoxic T cells (CD3 in yellow; CD8 in red; CD31 in blue) attacking the tumor after treatment with radiation and an immune checkpoint blockade therapy. New knowledge about the mechanism of inducing antitumor immune responses may lead to better treatments. Source: NCI Cancer Close up 2016 Collection, Creator: Steve-Young Lee
"Cancer has existed for as long as the human race itself."
INTRODUCTION: THE NATURE OF CANCER Immortality has captured our imagination for centuries. From literature’s bloodsucking, immortal vampires to religions’ concept of the immortal soul, people tend to envision a rather abstract rendition of immortality. Immortality exists today, but its reality is significantly less glamorous than the idea enshrined in humankind’s collective psyche. When renegade cells in the human body achieve immortality in the form of cancer, it portends a tragedy. The first immortal human cell line isolated in laboratories, dubbed HeLa, belonged to a young African American woman named Henrietta Lacks who died from aggressive cervical cancer at Johns Hopkins Hospital in 1951 [Figure I].11 Now a controversial example of ethical violations in medical research, her cells (which were used for research without her or her family’s consent) have aided in the development of the polio vaccine, therapeutics for leukemia, influenza, hemophilia, and Parkinson's disease, as well as advancing knowledge of cancer and genetics.11 The story of Henrietta Lacks is only one convolution in the complicated and long history of our fight against cancer. Cancer has existed for as long as the human race itself. Mummies in 400 A.D. bore its ravages, and its first description was recorded by ancient Egyptian physician
Imhotep in 2500 BC. However, for nearly a millennium after Imhotep, its influence was eclipsed in history books by the rampage of diseases such as tuberculosis and the bubonic plague.12 Scientific advances that arose as civilizations modernized revealed this latent killer by curing these more prominent historical diseases and increasing life expectancies. Cancer is, therefore, a disease characteristic of the modern era, as tuberculosis was of the nineteenth century: “Cancer is riddled with more contemporary images. The cancer cell is a desperate individualist, in every possible sense, a nonconformist, [it] asphyxiates us by filling our bodies with too many cells; it is consumption in its alternate meaning— the pathology of excess.”12 This “pathology of excess” required decades of research and innovation to understand on a biological level, and countless new mysteries and questions still remain to be explored. Originally, the humoral theory of illness claimed that disease was caused by an imbalance of one of four fluids in the human body: blood, phlegm, yellow bile, and black bile [Figure II].12 Depression and cancer were ascribed to the ominous “black bile.” The friction between a local and systemic (humoral) definition of cancer was a point of contention that called for radically different treatment approaches. Surgery on local tumors was considered to be a pointless exercise, as the
Figure I: Scanning electron micrograph of an apoptotic HeLa cell, a cell type in an immortal cell line used in scientific research. It is the oldest and most commonly used human cell line. The line was derived from cervical cancer cells taken in 1951 from Henrietta Lacks, a patient who eventually died of her cancer. Source: NIH Online Visuals. Creator: Tom Deerinck, NIGMS, NIH
tumors were thought to be only local eruptions of a systemic disease. After evidence for the black bile rationale failed to materialize, cancer theory underwent several iterations. Scientists chased viruses and then carcinogens as possible causes, indiscriminately testing thousands of chemicals and viruses to see if they could induce cancer in tissue samples or animal models. After revolutionary advancements in DNA sequencing capabilities, it finally became clear that cancer is caused by mutations in genes involved in cell growth and replication—proto-oncogenes, tumor suppressor genes, and DNA repair genes. With such mutations, the cell is able to grow uncontrollably and frequently invade other tissues and sites of the body, a condition called metastasis. Mutations can accumulate when DNA is perpetually damaged by carcinogens (such as cigarette smoke) or by errors that occur as cells copy DNA during mitosis (cell division). While the former may be preventable, the latter is random and inherent in the cellular processes that allow for growth, aging, regeneration, healing and reproduction. Cancer is therefore permanently “stitched into our genome.”12 The immortality wielded by HeLa cells is only one wily strategy cancer cells can use to aid their survival and proliferation. The six hallmark abilities of cancer cells include: self-sufficiency in growth signals (the ability to stimulate their own growth), insensitivity to anti-growth signals, evading apoptosis (cell death), sustained angiogenesis (stimulating the growth of blood vessels to supply nutrients to tumors), tissue invasion and metastasis, and limitless replicative potential (“immortality”).4
Additionally, cancer cells often alter their energy metabolism (potentially to better synthesize the macromolecules required to assemble new cells) and are capable of evading immune destruction [Figure III].5 These abilities are obtained through mutations in the DNA and result in adaptations to the physical environment. They are also the reason many existing chemotherapeutic agents fail after a certain time – the tumor mutates to develop resistance to the therapy. The resulting cell “lives desperately, inventively, fiercely, territorially, cannily, and defensively—at times, as if teaching us how to survive. To confront cancer is to encounter a parallel species, one perhaps more adapted to survival than even we are.”12
"The immortality wielded by HeLa cells is only one wily strategy cancer cells can use to aid their survival and proliferation."
Figure II: 13th century illustration showing the four humors and the veins Source: Wikipedia
Figure IV: Infection with certain types of human papillomavirus (HPV) is associated with various cancers. Researchers are working to understand the processes by which HPV can transform a healthy cell into a cancerous one. This image shows a cell transfected with the HPV-16 E5 oncoprotein. Fluorescent red color represents E5, green represents karyopherin beta-3, and co-localization of both proteins is shown in yellow. The circular dark object in the middle is a cell nucleus. Source: NCI Cancer Close up 2016 Collection, Creator: Ewa Krawczyk
Cancer has a monumental impact on the healthcare system and human life: in the United States, one in three women and one in two men will develop cancer during their lifetime. Overall, one quarter of American deaths and about 15 percent of all deaths worldwide are attributable to cancer.12 In 2017, the estimated national expenditure for cancer care in the United States was $147.3 billion.8 In response to this lethal illness, decades of work and billions of dollars have been dedicated to improving our understanding of cancer in order to develop more effective treatments. These efforts have resulted in a significant 26% fall in the overall cancer death rate in the United States from 1991 to 2015, with still much room to grow.8
IMMUNOTHERAPY: AN OVERVIEW
“One of the greatest challenges of treating cancer is that it is a mutiny of our own cells."
Figure III: The hallmarks of cancer. Green: sustaining proliferative signals. Brown: insensitivity to anti-growth signals. Pink: evading immune destruction. Blue: limitless replicative potential. Orange: tumor promoting inflammation. Black: tissue invasion and metastasis. Red: sustained angiogenesis. Blue: genome instability and cell mutations. Grey: evading apoptosis. Purple: deregulating cellular energetics. Source: Wikimedia Commons
One of the greatest challenges of treating cancer is that it is a mutiny of our own cells. As a result, significant effort has been devoted to developing a better understanding of cancer biology, since this can provide opportunities for more effective treatments that work to selectively exploit differences between normal cells and cancer cells. Surgery and radiation target local masses of cancer cells, removing or killing the entire affected area. Chemotherapy works on a systemic level to target fast-growing cells in the body. Hormone therapy can be used with cancers that rely on hormones to grow, such as prostate and breast cancers. Targeted therapies rely the most heavily on an intimate understanding of cancer biology and work by interfering with the specific proteins and pathways that allow cancer cells to proliferate. These specific proteins may differ widely in different types of cancer, and even within the same cancer type (for example, breast cancer can arise from multiple different mutations and therefore can have different proteins to target). Immunotherapy is an example of a targeted therapy that uses components of the immune system to attack cancer cells.7
The immune system is the body’s greatest tool in attacking diseases and infections. It is a complex network of specialized cells and tissues that can recognize, attack, and keep a record of foreign invaders such as bacteria and viruses. The immune system also plays an important role in cancer, as an estimated 15% of all human cancers worldwide may be attributed to viruses.6 For example, the Epstein-Barr virus, human papilloma virus (HPV), hepatitis B virus, and human herpes virus-8 are DNA viruses capable of triggering cancerous growth, and human T-lymphotropic virus type 1 and hepatitis C virus are RNA viruses that contribute to cancer [Figure IV].6 Vaccines available against these viruses provide an important opportunity to reduce the prevalence of their associated cancers; for instance, when the HPV vaccine was implemented, the prevalence of precancers caused by the HPV strains linked to cervical cancer dropped by 40 percent among vaccinated women.1 The immune system, however, is also critical in the other 85% of cancers not linked to viruses. It plays a role in policing the body’s tissues and recognizing abnormal cells. When mice were genetically engineered to be deficient in various components of the immune system, tumors arose more frequently and grew more rapidly than in mice with normal immune systems. It was also observed that both innate and adaptive components of the immune system participated in immune surveillance.5 Additionally, clinical epidemiology has shown that patients with colon and ovarian tumors that are heavily infiltrated with CD8+ cytotoxic T lymphocytes (CTLs) and natural killer (NK) cells have a better prognosis.5 Finally, some immunosuppressed organ transplant recipients have developed donor-derived cancers, suggesting that in the
seemingly tumor-free donors, the cancer cells were held in check and kept dormant by a fully functional immune system.5 Immunotherapies enhance the immune system’s ability to fight cancer and target tumor cells. Types of immunotherapy treatments include targeted antibodies (proteins that can be used to target specific markers on cancer cells), cancer vaccines (which elicit an immune response against specific cancer antigens), oncolytic virus therapy (which uses modified viruses to infect and destroy tumor cells), immunomodulators (which intervene in the immune system’s regulatory pathways), and finally adoptive cell therapy (which uses the immune system’s cells).3 Countless hours of research and clinical development support each respective treatment, with adoptive T cell therapy displaying special promise due to the key role T cells play in the immune system [Figure V].
ADOPTIVE CELL THERAPY: T CELLS T cells are a type of lymphocyte that develop in the thymus gland. They are an essential part of the immune system and can differentiate into several types of cells, each with a unique role. CTLs, or “killer” T cells, are able to kill virally infected cells as well as cancerous cells. CD4+ cells, or “helper” T cells, coordinate with the rest of the immune system and communicate with B cells (which also have specialized subtypes with different roles, such as forming the immune system’s memory or secreting cytokines). Regulatory T cells (Tregs) are immunosuppressive and help prevent autoimmune disease. Cancer cells may attempt to avoid immune responses by secreting immunosuppressive factors (such as TGF-β) or by recruiting immunosuppressive inflammatory cells such as Tregs that can suppress CTLs.5
There are many kinds of adoptive cell therapies that involve T cells. Tumor-infiltrating lymphocyte (TIL) therapy harvests the naturally occurring T cells that have already infiltrated the tumor, activates them, grows their numbers, and then reinfuses them into the patient to attack the cancer cells.3 For patients without T cells capable of recognizing their tumors, engineered T cell receptor (TCR) therapy equips the patient’s T cells with a new T cell receptor that allows them to target specific cancer antigens.3 Both of these therapies, however, can only target cancer cells whose antigens are presented on a specific protein called the major histocompatibility complex (MHC). Finally, chimeric antigen receptor (CAR) T cell therapy uses a synthetic chimeric antigen receptor that can recognize antigens not bound to MHCs [Figure VI].3 CAR T cells, however, can only interact with cell-surface proteins (not with proteins found in the interior of the cell). Both axicabtagene ciloleucel (Yescarta®) and tisagenlecleucel (Kymriah®) are current FDA-approved CAR T cell immunotherapies used with subsets of patients with lymphoma and leukemia.3 Many areas in immunotherapy are continuing to yield further insight into treatment options but still require future research. For example, T cells can lose their ability to kill cancer cells over time, a phenomenon called T cell exhaustion. Multiple recent publications have explored the mechanism behind this decrease in efficacy, identifying high levels of the transcription factors TOX, TOX2, and NR4A as important in the process. The exhaustion may be a result of epigenetic mechanisms, as TOX has been found to interact with several enzymes involved with
Figure V: Shown here is a pseudo-colored scanning electron micrograph of an oral squamous cancer cell (white) being attacked by two cytotoxic T cells (red), part of a natural immune response. Source: NCI Cancer Close Up 2016 Collection, Creator: Rita Elena Serda
Figure VI: This schematic shows the steps involved in CAR T cell therapy, a type of treatment in which a patient’s T cells are genetically engineered so they will bind to specific antigens on cancer cells and kill them. Source: National Cancer Institute
“Cancer is, after all, a blanket term to describe an array of abnormal cells that may manifest in hundreds of different ways."
chromatin folding.9 Engineering CAR T cells to lack both TOX and TOX2 in mouse trials resulted in treatment that was far more effective than standard CAR T cells.9 Continuing to explore ways of avoiding T cell exhaustion may be a crucial step in making treatments involving T cells more effective. Additionally, creative ways of administering T cells to the body are being explored; a recent finding revealed that nickel-titanium micromesh implants loaded with tumor-specific CAR T cells and implanted into mouse tumors fostered the rapid expansion of T cells, delivered a high density of T cells directly to the tumor, and significantly improved survival.2 These results show promise in the treatment’s speed, potency, and targeted nature. An improvement in all three of these core elements over the status quo is striking, and this approach is now being investigated as a possible improvement over intravenous delivery.
CONCLUSION Great strides have been made over the past century both in understanding cancer and in treating it, but there is still room for substantial progress. Despite past hopes, a singular, powerful cure for cancer is not within reach. Cancer, after all, is a blanket term describing an array of abnormal cells that may manifest in hundreds of different ways. Carcinomas begin in the skin or in tissues that line or cover organs. Sarcomas begin in connective and supportive tissues such as bone, cartilage, fat, and muscle. Leukemia begins in blood-forming tissues.10 These are only a subset of the major cancers, and each one has dozens of more specific iterations unique in their biology and tumor infrastructure. Even between individuals with the same cancer, each iteration may be drastically different and carry a different prognosis influenced by the internal body environment and the adaptations that have promoted proliferation. Despite this multiplicity, the hallmarks of cancer have highlighted commonalities in biological pathways that overlap in many cancers and have provided a unifying framework for carcinogenesis. This framework has facilitated the development of many targeted therapies, including immunotherapy treatments, which have numerous advantages in addressing some of the most challenging aspects of cancer. The immune system is necessarily highly precise, and therefore can be an effective way to target tumor cells without affecting normal cells.3 The immune system also mirrors cancer in its ability to evolve and adapt dynamically and continuously, allowing it to keep pace with a changing cancer cell. Finally, the immune
system’s memory of past threats can help prevent the future recurrence of cancer.3 Utilizing and optimizing our own natural defenses is an almost eloquent response to an enemy also derived from the self: the deadly subversion of our own cells. References [1] CDC Staff. (2019). Human Papillomavirus (HPV): Vaccinating Boys and Girls. Centers for Disease Control and Prevention. [2] Coon, M. E., Stephan, S. B., Gupta, V., Kealey, C. P., & Stephan, M. T. (2019). Nitinol thin films functionalized with CAR-T cells for the treatment of solid tumours. Nature Biomedical Engineering. https://doi.org/10.1038/s41551-019-0486-0 [3] CRI Staff. (2019). Immunotherapy. Cancer Research Institute. [4] Hanahan, D., & Weinberg, R. A. (2000). The Hallmarks of Cancer. Cell, 100(1), 57–70. https://doi.org/10.1016/S0092-8674(00)81683-9 [5] Hanahan, D., & Weinberg, R. A. (2011). Hallmarks of Cancer: The Next Generation. Cell, 144(5), 646–674. https://doi.org/10.1016/j.cell.2011.02.013 [6] Liao, J. B. (2006). Viruses and human cancer. The Yale Journal of Biology and Medicine, 79(3–4), 115–122. [7] Nature Staff. (2019). Cancer Immunotherapy. Nature. [8] NCI Staff. (2018). Cancer Statistics. National Cancer Institute. [9] NCI Staff. (2019, July 18). Improving Cancer Immunotherapy: Overcoming the Problem of “Exhausted” T Cells. [10] NCI Staff. (2019). NCI Dictionary of Cancer Terms. National Cancer Institute. [11] Shah, S. (2010). Henrietta Lacks’ story. The Lancet, 375(9721), 1154. https://doi.org/10.1016/S0140-6736(10)60500-4 [12] Mukherjee, S. (2010). The Emperor of All Maladies: A Biography of Cancer. Scribner.
The Effect of Deuterium Oxide on Maintaining Viability in Coliphage Bacteriophages at Low Temperatures to Model Live Attenuated Viral Vaccine Additives BY ANNIKA MORGAN
ABSTRACT Viral particles used in vaccines and viral therapies are often damaged by the molecular movement of their storage solution and require storage at low temperatures to reduce the velocity of the water molecules. Deuterium oxide (D2O) is made with an isotope of hydrogen that increases the density of water to 1.11 g/mL; when viral particles are stored in D2O, the increased mass reduces the molecular speed of the solution, reducing trauma to the particles and raising the temperature at which samples can be stored. A T4 bacteriophage was used to model how a viral particle reacts to its environment and deteriorates over time while stored in D2O versus deionized water. A sample of coliphage bacteriophages stored in D2O was compared to a sample stored in deionized water at 16 °C, and the infectivity titer of the samples was tracked over time using plaque assays. The sample stored in D2O showed significantly less deterioration over time, slowing the rate of degradation to 6% that of deionized water. D2O proved to be a more advantageous storage solution than deionized water in supporting the health of the phages and is a promising storage additive for viral samples. This solution has application in raising the storage temperature of live attenuated viral vaccines, such as the Ebola vaccine rVSV-EBOV, which often require low temperatures during transport and storage to remain effective and viable for administration to patients.
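The abstract's headline result, that D2O slowed degradation to 6% of the rate seen in deionized water, can be illustrated with a simple first-order decay sketch. The rate constant for water below is a hypothetical placeholder, not a value from this study; only the 6% ratio comes from the text.

```python
import math

def surviving_fraction(rate_per_day, days):
    """Fraction of infectious phage remaining, assuming first-order decay."""
    return math.exp(-rate_per_day * days)

# Hypothetical decay rate for phage stored in deionized water.
k_water = 0.10           # per day (illustrative only)
k_d2o = 0.06 * k_water   # D2O slowed degradation to 6% of water's rate

for days in (7, 30):
    water = surviving_fraction(k_water, days)
    d2o = surviving_fraction(k_d2o, days)
    print(f"day {days}: water {water:.2f}, D2O {d2o:.2f}")
```

Under these assumed rates, after 30 days roughly 5% of the water-stored sample would remain infectious versus about 84% of the D2O-stored sample, which is the qualitative gap the plaque-assay titers are meant to capture.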
INTRODUCTION Live vaccines are created by taking a sample of a virus and altering it so the body
recognizes it as an invader and the immune system makes antibodies against it, without the virus infecting the body's cells and causing illness. These antibodies stay in the body so that when the same pathogen is encountered again, the immune system can effectively fight it off. The base of the vaccine is still a live virus, and it needs to be kept under specific conditions to stay viable and infectious so the body can identify it as a viral particle to be eradicated. Some vaccines are very unstable and must be kept at very low temperatures to stay viable for long periods of time. Vaccines like this are very difficult to transport and strain the cold chain: the system for keeping a medication in ideal conditions from manufacturer to patient. Countries spend millions of dollars every year transporting medications and keeping them at the required conditions. If it cannot be verified that a medication or vaccine was kept under the proper conditions, it cannot be used on patients. Since it is so difficult to maintain temperatures such as -40 °C during transport, many vaccines are wasted because these conditions are not maintained correctly throughout the entire transportation of the vaccine from production to patient. It is also difficult to transport medications requiring stringent storage conditions to remote areas not accessible by plane or vehicle. Such areas are fairly common in sub-Saharan Africa, where reduced accessibility leads to a large decrease in mass vaccination because medications cannot be kept in the proper conditions to reach the patients who need them. There are many proposed solutions to the cold-chain problem, but for the
Cover Source: Wikimedia Commons
“Countries spend millions of dollars every year transporting medications and keeping them at the required conditions."
purposes of this paper, only vaccine additives will be considered as a possible solution. The purpose of storage-solution additives is to raise the temperature tolerance of the vaccine and to slow the degradation of the viral particles that make the vaccine effective in the first place. Bacteriophages (phages) are small viral particles that infect specific bacteria. Phages consist of three anatomical sections: the capsid head, the tail, and the tail fibers, which together allow the virus to infect its target bacterium. Since bacteriophages are analogous to the viruses that infect humans, they are a good model for how viruses and vaccines survive and react to their environment. Bacteriophage T4 specifically infects Escherichia coli and is used in this experiment to determine the degradation of the phage over time. Phages are also an adequate model for how another virus would respond to additives or changes to the solution in which it is stored.4,7
“D2O slows down this damaging movement while also providing a suitable environment for the virus because the increased mass of the deuterium reduces the molecule's velocity."
The T4 phage engages in the lytic cycle of reproduction when it is assayed; this cycle consists of five distinct steps for successful lysis. The first step is attachment, during which the proteins in the tail fibers of the phage bind to distinct receptors on the surface of the bacterium. These receptors are specific to each type of cell and act as an identifier to the phage; the receptor must line up with the proteins on the phage's tail for attachment to occur, which is why each phage can only infect particular types of bacteria. The second step is entry: the phage injects its DNA genome into the cell's cytoplasm for replication. The genome is stored inside the capsid head of the phage and travels through the tail before exiting through the hexagonal base plate into the host cell's cytoplasm. This leads to step three: DNA replication and protein synthesis. The phage genome is copied, and proteins are synthesized in the cytoplasm of the infected bacterium for the creation of new phages; the phage co-opts the bacterium's replication and synthesis machinery to create its own progeny. Once the necessary proteins and genome copies have been made, the fourth step commences: the assembly of new phages from the proteins and copied DNA. When the new phages are fully assembled, lysis, the fifth and final step in the lytic cycle, can occur. The new phages poke a hole in the plasma membrane and cell wall of the host bacterium, and water enters the cell, causing it to expand and eventually burst. This releases all of the newly
assembled phages into the extracellular fluid. These phages can then go on to infect nearby cells and continue this phage replication, spreading exponentially through a population of bacteria.1 This process is the molecular basis of the plaque assay analysis that is performed in this experiment to determine the original infectivity titer of the phage sample. The plaque assay procedure involves growing a lawn of bacteria on sterile agar and then infecting the lawn with the phage sample. The phages go into the lytic cycle and lyse groups of bacteria on the plate forming multiple plaques. These plaques are then counted and based on the starting dilution factor of the sample of virus, the original infectivity titer can be determined.9 Normally, when vaccines are stored in water, the molecular movement of the water particles damages the virus and reduces its infectivity, making the vaccine less viable and effective. Enveloped viruses such as Vesicular Stomatitis Virus (VSV) surround themselves with a small portion of the phospholipid bilayer of their host cell when they exit the cell during lysis. This envelope acts as a protection mechanism and can aid the virus in evading the host’s immune system because the organism’s own cell membranes are used as a viral envelope or covering. This layer can be damaged easily because it is not naturally part of the virus’s anatomy and therefore cannot be repaired or replaced by the virus. The layer is so delicate that the movement of water molecules around the outside is enough to rip the membrane, causing the virus to become inactive and no longer infectious. The Ebola vaccine uses VSV as the basis of the vaccine, so it also experiences damage from fast moving particles such as water molecules.6 D2O slows down this damaging movement while also providing a suitable environment for the virus because the increased mass of the deuterium reduces the molecule’s velocity, therefore reducing damage to the phage. 
Since D2O is made of a nonradioactive and benign isotope of hydrogen, it is completely safe in the body and behaves virtually the same as water, making it biocompatible in humans and safe for organisms or viruses. 8,4
EXPERIMENTAL METHODS The bacteriophages were stored in one solution of mainly D2O and one solution of deionized water. The viability of each phage sample was determined by performing a plaque assay, which shows the infectivity titer of the original phage sample.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
DEUTERIUM OXIDE
The original phage sample was stored in a solution of 3 mL tryptone broth at an infectivity titer of 10^4 particles per mL and was refrigerated until dilution so the sample did not degrade by an appreciable amount. The sample was diluted through serial dilution with deionized water: 1 mL of the original phage sample was placed in 9 mL of deionized water, and then 1 mL of this solution was placed in another 9 mL of deionized water. This process was completed one more time until the phage dilution was 10 particles per mL. 1 mL of this final solution was then placed into 9 mL of deionized water to make the deionized sample, and 1 mL was placed into 9 mL of heavy water to create the D2O sample. Both samples were stored in sterilized test tubes and immediately placed into the 16° C medical freezer at time 0. A plaque assay was performed to determine the infectivity titer over time for each of the samples. The tryptone base-layer agar was melted in a water bath at 97° C and then poured into four sterile petri dishes until a 5 mm thick layer coated the bottom. The petri dishes were left to harden and cool for 2 hours on a clean surface away from drafts. A tryptone Escherichia coli host solution was made by warming a 5 mL sample of tryptone broth to 37° C, placing it onto a tryptone slant agar host of Escherichia coli, and teetering the solution back and forth until the broth was cloudy with cells. The host sample was then placed into a sterile test tube, sealed, and refrigerated until needed. Two sterile tubes of 4 mL tryptone broth were brought to room temperature, and 0.5 mL of the tryptone Escherichia coli host solution was added to each tube before sealing. Both tubes were incubated at 37° C for 30 minutes, swirling each tube every 5 minutes to enhance growth.
After 30 minutes elapsed the samples were inoculated with 0.5 mL of each of the phage dilutions (one in deionized water and one in D2O ) to bring each tube up to 5 mL total. These test tubes were incubated for 20 minutes at 37° C, swirling every 5 minutes to enhance growth.
While the samples were incubating, two tubes of 2 mL tryptone soft agar were melted at 97° C and then cooled to 45° C for mixing with the samples. After the second incubation finished, 1 mL of each phage and bacteria solution and 0.3 mL of E. coli host solution were mixed with the soft agar and then immediately poured onto each petri dish, which was teetered to cover the plate. The dishes were quickly covered and left to sit and harden for 15 minutes before being placed lid down in the incubator for 16 hours. Once the petri dishes had incubated for 16 hours, they were removed and refrigerated for further analysis. Assays were analyzed under 40x magnification by counting the number of plaques in the field of view of a representative sample of the whole assay and then mathematically estimating the number of plaques on the entire assay plate. Once the number of plaques had been estimated for the entire plate, that number was divided by the original dilution (10) to get the infectivity titer of the sample; this is the value that was analyzed to determine degradation in each sample. Each dish was also photographed under 40x magnification to provide qualitative information about the size and shape of the plaques and thereby gauge the health of the phage sample. The infectivity titer and the magnified images of the phages were used in conjunction to determine how effective each solution was at protecting the bacteriophage from damage.
RESULTS
“The infectivity titer and the magnified images of the phages are used in conjunction to determine how effective each solution was at protecting the bacteriophage from damage."
Over the testing period of 10 days, both samples showed signs of degradation, both qualitatively and quantitatively. The sample stored in deionized water degraded at a higher rate than the sample stored in heavy water. At day 1, both samples produced large plaques that were bulbous and mostly round with smooth borders (Figure 1), but by day 3 the samples already showed signs of deterioration (Figure 2). The deionized sample produced much smaller plaques with jagged borders and irregular shapes. The D2O sample produced smaller plaques with slightly irregular borders but showed significantly less deterioration than the deionized sample. By day 10 (Figure 3) the deionized sample was nearly entirely deteriorated; the plaques were unrecognizable, so the sample’s infectivity could not be counted and was zero by default. The D2O sample produced small plaques but maintained their shape and even borders.
Figure 1: Day 1. Left: D2O sample. Right: Deionized sample.
Figure 2: Day 3. Left: D2O sample. Right: Deionized sample.
Figure 3: Day 10. Left: D2O sample. Right: Deionized sample.
Figure 5: Regression graph of change in infectivity titer over time (10 days).
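The assay arithmetic described above (scale a representative field-of-view plaque count up to the whole plate, then divide by the dilution factor of 10 to obtain the infectivity titer) can be sketched in a few lines of Python. The plaque count and the areas below are hypothetical, chosen only for illustration; the dilution factor of 10 is the one stated in the text.

```python
def estimate_plate_plaques(plaques_in_view: int, view_area_mm2: float,
                           plate_area_mm2: float) -> float:
    """Scale a representative field-of-view count up to the whole plate."""
    return plaques_in_view * (plate_area_mm2 / view_area_mm2)

def infectivity_titer(plate_plaques: float, dilution: float = 10.0) -> float:
    """Following the text's convention, divide the plate estimate by the dilution."""
    return plate_plaques / dilution

# Hypothetical numbers: 12 plaques counted in a 160 mm^2 field of view on a
# standard ~85 mm petri dish (~5675 mm^2 of agar surface).
plate = estimate_plate_plaques(plaques_in_view=12, view_area_mm2=160.0,
                               plate_area_mm2=5675.0)
print(round(infectivity_titer(plate)))  # estimated plaque-forming units (PFU)
```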
ANALYSIS Since the infectivity titer and qualitative health of the phage declined far more slowly in the D2O sample than in deionized water, one can see that storing the phage sample in D2O is preferable to deionized water.
"The D2O sample produced smaller plaques with slightly irregular borders but showed significantly less deterioration than the deionized sample."
Graphical analysis was performed to show the deterioration of the phages and the change in infectivity titer over time. Error bars allow for up to 20% error in infectivity titer to account for the possibility of phage miscount in each assay. Since no error bars overlap on the graphs, the difference in degradation is statistically significant and has an extremely low chance (<5% probability) of occurring by chance. By analyzing the slope of deterioration in the regression graph of infectivity titer (Figure 4), it was determined that the D2O sample degraded at approximately 100 phages per day while the deionized water sample degraded at 1700 phages per day. Each graph has regression and correlation coefficients of .95 and .99 respectively, indicating that the data are highly correlated and the graph is a valid way to determine degradation rates. The D2O sample degraded at 6 ± 1.2% of the rate at which the deionized sample deteriorated.
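The regression step can be illustrated with a short Python sketch. The titer series below are idealized, hypothetical straight lines built from the degradation rates reported above (about 100 and 1700 phages per day); they show how a least-squares slope yields the degradation rate and how the roughly 6% relative rate is obtained.

```python
import numpy as np

# Hypothetical, idealized daily titer series constructed from the rates
# reported in the analysis; real measurements would be noisy.
days = np.arange(0, 11)
d2o_titer = 4.0e4 - 100.0 * days      # D2O sample: ~100 PFU lost per day
deion_titer = 1.6e4 - 1700.0 * days   # deionized sample: ~1700 PFU lost per day

# Least-squares slope = degradation rate (negative means decline).
d2o_rate = np.polyfit(days, d2o_titer, 1)[0]      # approx. -100 PFU/day
deion_rate = np.polyfit(days, deion_titer, 1)[0]  # approx. -1700 PFU/day

# Relative rate: the D2O sample degrades at roughly 6% of the deionized rate.
relative = d2o_rate / deion_rate
print(round(relative * 100, 1))  # -> 5.9
```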
This is a substantial difference in degradation, showing that D2O is preferable to deionized water based on the infectivity titer over the testing time period. The qualitative health and infection effectiveness of the bacteriophage also decreased over the testing period. At day one (Figure 1) the phages in both samples displayed large rounded plaques on the agar, implying that each phage was very effective in infecting the bacteria and lysed the surrounding cells evenly. This shows that both samples started at the same qualitative health level and weren’t damaged to begin with. The D2O sample started at an infectivity of 4.0 x 10^4 PFU and the deionized sample started at an infectivity of 1.6 x 10^4 PFU. Even though each sample started at a different titer, degradation is measured relative to the starting titer, so the starting difference is accounted for in the calculations. By day 3 (Figure 2) the D2O sample plaques were smaller and had more variance in their edge shape but remained fairly rounded with even borders. This shows that the phages in this sample were slightly less healthy than at day 1 (even though the sample showed no appreciable infectivity titer degradation) because when the phage went through the lytic cycle in the agar, it did not expand equally in all directions. The day 3 results for D2O show a qualitative reduction in the health of the phage, but this reduction is very small compared to the degradation observed in the deionized water sample. This sample formed very small plaques with heavily irregular borders and also showed degradation in its infectivity titer (1.1 x 10^4 PFU). The irregular borders show the declining infectivity of the phage because the phage is not producing the even, circular lysis bloom of progeny that one would expect from
a healthy phage. Instead, it is lysing in heavily irregular patterns, indicating poor health and an inability to properly infect the bacterium. At day 10 (Figure 3) the D2O sample produced smaller plaques than the day 3 test and had a decline in infectivity titer (3.1 x 10^3 PFU). Despite the size difference, the plaques still maintained their mostly circular shape, meaning that the phages were still infectious and viable. The deionized water sample was completely degraded by day 10. There were no discernible plaques, and the sample formed random spots of infection with very irregular borders. Although the phages were still able to infect an extremely small number of bacteria, they were too weak to form a recognizable plaque. This sample was determined to be entirely degraded despite the minor spots of infection because, although there were a few viable phages, the sample was too unhealthy to be deemed viable in its entirety. If this sample had been an actual vaccine, it would be insufficient in alerting the patient’s immune system and therefore not potent enough to produce the desired effect of immunity. After the graphical analysis was performed on the results, it was found that the degradation difference was statistically significant and that the addition of D2O to a vaccine storage solution greatly reduced the rate of degradation of the sample, both in infectivity titer and in the qualitative health of the phage. The sample stored in D2O not only degraded more slowly than the deionized water sample, it also preserved the overall health of the individual phages better when analyzed under magnification.
CONCLUSIONS D2O is a viable additive for vaccine storage solutions and could aid in the reduction of problems that originate in the Cold-Chain. Since the main challenge facing mass vaccination in third world countries is the availability of vaccines, making transportation of these critical medications easier is a vital step in preventing epidemics. The Ebola outbreak of 2016 was a catastrophic global health crisis that led to the development of a promising new vaccine, rVSV EBOV. While the creation of a vaccine was beneficial, the vaccine still faces challenges in the Cold-Chain due to the extremely low temperatures that it requires. By increasing the length of time that the vaccine can remain viable and the temperature at which it can be stored, the vaccine can be made available to larger groups of people in need.
D2O’s ability to prolong vaccine potency is a possible solution to some of the barriers hindering mass vaccination in third world countries. This extension of storage time could allow more remote areas to be reached and fewer vaccines to be wasted due to improper transportation and storage. This experiment modeled how vaccine potency could be extended by storing it in D2O rather than the deionized water that is normally used. The degradation of the infectivity titer of each of the samples supported D2O as a more suitable storage solution than deionized water because the reduction in infectious phages was significantly lower than in the standard deionized water sample. D2O also supported the qualitative health and infection effectiveness of the phages, as seen through magnification analysis. D2O is also an ideal storage solution because it has nearly the same molecular interactions as water, so it can be combined with other additives to further increase stability.
“D2O's ability to prolong vaccine potency is a possible solution to some of the barriers hindering mass vaccination in third world countries."
SUGGESTIONS FOR FURTHER RESEARCH This additive needs to be tested with an actual sample of the rVSV EBOV vaccine to ensure that this experiment is an accurate model for how the vaccine would respond to being stored in heavy water. It then has to be tested under the many different conditions that would be present during transport of the vaccine. Vaccines are exposed to temperature fluctuations during transport, and D2O needs to be evaluated at different temperatures to determine first whether it is as effective at other temperatures, and second whether the solution can maintain its stabilizing effect while also tolerating temperature changes. If it can tolerate both of these factors, then an ideal temperature zone needs to be determined where D2O maintains the most infectious potency while enabling easy transportation of vaccines. This additive also needs to be tested in conjunction with other vaccine storage additives to check for any possible interactions. Although it is very unlikely that D2O would interfere with any molecular processes in the other vaccine stabilization additives, this still needs to be verified. In addition to testing for interactions between other additives and D2O, the safety of the additive needs to be assessed in humans. As
with other additive interactions, it is extremely unlikely that D2O could cause an adverse reaction in people due to its inert nature and its similar chemical makeup to water, but it still needs to be verified. The oral consumption of heavy water in humans has already been determined to be completely safe, but the safety of an intravenous route still needs to be confirmed.
“Finally, economic research must be conducted to determine the cost effectiveness of replacing deionized water with D2O in individual vaccine doses."
Finally, economic research must be conducted to determine the cost effectiveness of replacing deionized water with D2O in individual vaccine doses. Although D2O adds a cost to each vaccine dose, it would relieve significant financial stress from the transportation process and would allow fewer vaccines to be wasted. The costs and benefits of using D2O in place of deionized water need to be formally evaluated.
DEDICATION This paper is dedicated to my grandfather, Dr. Ken Rillings, who taught me to love chemistry from a young age, and to never stop questioning.
ACKNOWLEDGMENT I would like to thank Mr. John Davis of Yale Medical School for inspiring this project and introducing me to the issues facing the future of vaccine development, and my chemistry teachers, Dr. Katherine Nuzzo and Mr. Paul Testa, as well as my parents, for supporting me through this entire project and giving me the freedom to explore my interests and passions for microbiology and biochemistry.
References
1. Yap, M. L., & Rossmann, M. G. (2014). Structure and function of bacteriophage T4. Future Microbiology, 9(12), 1319–1327.
2. Kristensen, D. D., et al. (2016). Can thermostable vaccines help address cold-chain challenges? Results from stakeholder interviews in six low- and middle-income countries. Vaccine, 34(7), 899–904.
3. Pelliccia, M., Andreozzi, P., Paulose, J., D’Alicarnasso, M., Cagno, V., Donalisio, M., . . . Krol, S. (2016). Additives for vaccine storage to improve thermal stability of adenoviruses from hours to months. Nature Communications, 7, 13520. doi:10.1038/ncomms13520
4. Kutter, E., & Sulakvelidze, A. (Eds.). (2004). Bacteriophages: Biology and applications. CRC Press.
5. Wu, R., Georgescu, M. M., Delpeyroux, F., Guillot, S., Balanant, J., Simpson, K., & Crainic, R. (1995). Thermostabilization of live virus vaccines by heavy water (D2O). Vaccine, 13(12), 1058–1063.
6. Grabow, W. O. K. (2001). Bacteriophages: Update on application as models for viruses in water. Water SA, 27(2), 251–268.
7. Jofre, J., Lucena, F., Blanch, A. R., & Muniesa, M. (2016). Coliphages as model organisms in the characterization and management of water resources. Retrieved from https://www.mdpi.com/2073-4441/8/5/199
8. Crainic, R. (1996). The replacement of water with deuterium oxide significantly improves the thermal stability of the oral poliovirus vaccine. Retrieved 2019.
9. Ellis, E. L., & Delbrück, M. (1939, January 20). The growth of bacteriophage. Retrieved 2019.
A Current Understanding of Visual Attention BY ARJUN MIKLOS
INTRODUCTION The visual world is extremely complicated; walking into a room presents a multitude of visual stimuli that would seemingly be overwhelming to the sensory system. However, the brain parses through the overload of information, focusing attention only on a few locations in visual space. The process by which visual selection takes place has been heavily debated over the past 30 years. The main concern in developing a theory for visual selection is determining the control over selection that an individual possesses. This control is not complete—the brain often selects locations in the visual field for the eyes to focus on that are unaligned with the goals of the observer. For example, when a person scans a crowd for a friend, they inadvertently focus their attention on other individuals in clothing that stands out from the crowd before they finally locate their friend. By having an image of the friend’s appearance in mind, however, they can hold their attention on the friend when they ultimately cross their visual field. Clearly, there are elements of visual selection that are out of our control and elements that we can control. Determining this balance allows us to learn about how the brain analyzes the visual world. Defining the methods by which the brain allots visual attention has intriguing applications, especially within the field of photography. By keeping a model of visual selection in mind, photographers can predict how an audience will scan a photograph. Understanding what elements attract attention and the order in which viewers are most likely to scan through them allows photographers to
craft images that best express desired themes and feelings.
BOTTOM-UP CONTROL In the example discussed above, individuals in clothing that stands out from the crowd capture a person’s attention even though the person was not searching for them. This unintentional focus demonstrates bottom-up control of visual selection: when the attention of an individual is controlled by the physical properties of a scene without regard for his or her intentions3. Bottom-up control is based on the salience of features within a visual field. Elements that differ in terms of one or more features (e.g. luminescence, color, or shape) from their surroundings have local feature contrast and are said to be salient3. In essence, bottom-up control involves the brain calculating the relative salience of the items in the visual field, then automatically focusing attention on the most salient object, regardless of whether it was the goal of the observer to focus on that target. An irrelevant singleton paradigm study conducted at the TNO Institute for Perception elegantly demonstrates bottom-up control. In this study, subjects were tasked with finding a green circle surrounded by green squares. The display also included a distractor: a red square. Researchers found that the presence of the distractor slowed down the identification of the target, since the salient distractor causes attention to be focused at its location prior to that of the target5. Later studies repeated the experiment and found that in the first 60 milliseconds (ms), attention was focused on
Figure A: An fMRI brain scan. fMRI technology functions by measuring changes in blood flow to different portions of the brain. Cerebral blood flow has been linked to brain activity— when there are spikes in blood flow to certain areas of the brain, activity in those areas increases. Source: Wikimedia Commons
“The main concern in developing a theory for visual selection is determining the control over selection that an individual possesses."
the distractor. Only after 150 ms was attention focused on the desired target6.
“Top-down control allows for attention to be focused on objects according to the goals and intentions of the individual."
Recent studies have focused on mapping bottom-up control networks within the human brain. Using functional magnetic resonance imaging (fMRI) technology [Figure A], researchers have associated the ventral attention system, which is composed of regions in the temporoparietal junction (TPJ) and the ventral frontal cortex (VFC), with bottom-up control activities4,7. [Figure B]
TOP DOWN CONTROL Top-down control allows for attention to be focused on objects according to the goals and intentions of the individual3. With strong top-down control, individuals are able to ignore external salient stimuli and rapidly focus attention on a target. Top-down control can be demonstrated effectively with two experiments. The first experiment is an endogenous cueing study, in which observers are told to locate a known target and are shown the likely location of that target with an overlain arrow prior to the display being revealed. Observers who are cued to the likely location of the target locate it about 25 ms faster than observers without the cue8. Thus, when the observer is shown the arrow, they ignore salient distractors and limit attention to the likely location of the target. The second experiment that demonstrates top-down control is known as a conjunction search task, where the desired target is defined by multiple features. In a study conducted at Johns Hopkins University, observers were tasked with identifying a red letter “O” in a mixture of black O’s and red N’s. Researchers noticed that observers found the desired target at the same speed when more black O’s were added to the display9. The researchers hypothesized that this occurred because the
Figure B: Diagram illustrating the lobes of the brain. Areas of the brain associated with bottom-up control include the temporoparietal junction (TPJ), located at the intersection between the parietal and temporal lobes, and the ventral frontal cortex (VFC), located near the middle of the frontal lobe. Top-down control has been associated with the frontal eye fields (FEF), located at the top of the frontal lobe, and the superior parietal lobe (SPL), located at the top of the parietal lobe. Source: Wikimedia Commons
observers ignored all the black elements and restricted their search to only the red letters, demonstrating top-down control. Recently, fMRI technology has been used to map functions of the brain related to top-down control to the dorsal attention system, which is composed of the frontal eye fields (FEF) and the superior parietal lobe (SPL)4,7. [Figure B]
CURRENT MODEL FOR VISUAL ATTENTION One currently supported model of visual attention is the Neural Theory of Visual Attention (NTVA). In this model, visual attention consists of two waves of activity in the brain: the pre-attentive stage and the attentive stage10. The pre-attentive stage occurs prior to the allocation of attention. In this stage, the brain scans the entire visual field and computes a salience map that indexes the relative salience of objects at different locations in space11. In the attentive stage, the brain selects the location with the highest salience from the previously computed salience map for further processing. This selection process results in the allocation of focal attention1,3. Pre-attentive and attentive brain activity have been distinguished from each other. The pre-attentive stage consists of a feedforward sweep through the brain: activity in the brain builds from the low-level visual areas associated with stimuli response to the higher order visual centers involved in the analysis of items selected through visual selection, including the temporal, parietal, and frontal areas12. The attentive stage is characterized by recurrent processing, in which the higher-order areas influence the firing of neurons in the lower visual areas12. The pre-attentive stage can be thought of as bottom-up control. Here, the brain builds a saliency map and identifies the most salient object. The observer has no conscious input into object selection since the higher visual centers are not initially involved. In contrast, the attentive stage represents conscious, top-down control. The higher order visual centers analyze the object to determine if it is the target and communicate back to the lower visual centers. In essence, the NTVA model suggests that initially visual selection is driven by stimuli and is thus under bottom-up control and only later can the goals and intentions of the observer influence selection through top-down control1,3. 
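A toy sketch can make the two stages of the NTVA concrete. Below, a one-dimensional row of hypothetical item intensities stands in for a visual field: the "pre-attentive stage" computes each item's salience as its feature contrast with the rest of the display, and the "attentive stage" simply selects the peak of that map. This is an illustrative simplification of the model's idea, not its actual neural computation.

```python
import numpy as np

# Hypothetical 1-D "visual field": six items, one bright singleton at index 3.
field = np.array([0.2, 0.2, 0.2, 0.9, 0.2, 0.2], dtype=float)

# Pre-attentive stage: salience of each item = how much it differs from the
# mean of all other items (a crude stand-in for local feature contrast).
salience = np.empty_like(field)
for i in range(len(field)):
    others = np.r_[field[:i], field[i + 1:]]
    salience[i] = abs(field[i] - others.mean())

# Attentive stage: focal attention goes to the peak of the salience map,
# regardless of the observer's goals.
attended = int(np.argmax(salience))
print(attended)  # -> 3: the singleton captures attention
```

The singleton wins even if it is a distractor rather than the target, mirroring the irrelevant singleton results described earlier.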
EVIDENCE FOR THE NTVA Four main categories of evidence for the NTVA exist: behavioral studies, eye movement tracking, event-related potential (ERP) studies, and functional magnetic resonance imaging (fMRI). A main behavioral study in support of the model was the aforementioned irrelevant singleton paradigm. The study included a modification of the previously discussed red square experiment in which the red square distractor was replaced with a square colored in a shade similar to the target, making the distractor less salient than the target. Whereas previously the distractor had led to a slowing of about 20 ms in locating the target, in the trials with the less salient distractor there was no slowing, demonstrating that selectivity primarily depends on the saliency of the objects in the display5. This behavioral study demonstrates that focal attention is allocated to the feature that is the most salient and thus “pops out” from the rest of the display, even if this feature is a distractor and not the actual target5. Thus, bottom-up control initially drives selection, and top-down control cannot have an influence until later on. Eye movement tracking also provides useful evidence for the early dominance of bottom-up control. In a Vrije Universiteit Amsterdam study that presented observers with a display containing a target and a distractor that was more, equally, or less salient, early eye movements were shown to be completely stimulus driven. Only the salience of the distractor relative to the target determined which element received the observer’s attention13. Salience had no effect on eye movements occurring later on, demonstrating that these later movements were more goal-driven13. Thus, early eye movements are under bottom-up control, whereas later eye movements fall under top-down control.
The third category of evidence includes event-related potential (ERP) studies, which monitor the brain’s electrophysiological response to a stimulus using electroencephalography (EEG), a technique that places electrodes on the scalp to monitor brain activity [Figure C]. A common brain response that has been associated with attention selection is the N2pc, a large negative voltage in regions contralateral to an attended visual stimulus14. An ERP study by the University of London monitored the brain for N2pc responses when the brain was shown a display including a target and a distractor. Results demonstrated that an N2pc was observed for both the distractor and the target, with the distractor N2pc occurring first15.
Figure C: An example of an electroencephalography (EEG) test. Electrodes are placed over the subject’s scalp and monitor activity in different parts of the brain. EEG tests measure event-related potentials (ERPs), which track the brain’s response to a sensory stimulus. Source: Wikimedia Commons
A fourth category of evidence for the initial bottom-up dominance model comes from fMRI imaging, which measures brain activity by detecting changes in blood flow to different regions of the brain. A University of London study demonstrated that attention capture by a color singleton, or element that is highly salient due to its color, was associated with enhanced activity in the left lateral precentral gyrus of the frontal cortex and in areas of the superior parietal lobe (SPL)16. As SPL activity occurs after attentional shifts, the singleton must have captured the observer’s attention3,7. The study demonstrated a negative correlation between the behavioral interference by the color singleton and the strength of activity in the left lateral precentral gyrus of the frontal cortex, an area of the brain associated with top-down control16. In other words, individuals with greater top-down control activity were able to more rapidly determine that the color singleton was not the desired target. Still, the SPL activity demonstrates that initially the singleton did capture attention, reinforcing the early importance of bottom-up control.
“In other words, individuals with greater top-down control activity were able to more rapidly determine that the color singleton was not the desired target."
APPLICATION FOR PHOTOGRAPHY A University of Sussex study applied the understanding of the visual attention system to the analysis of photographs. Researchers
“An understanding of the visual attention center has many possible applications, but one particularly interesting one involves its application to photography."
presented images of natural scenes, both indoors and outdoors, to participants. The eye movements of the observers were tracked as they viewed the images. The study found that, in the first second of viewing, most observers selected similar fixation locations, but after several seconds, viewers selected different fixation targets2. Specifically, the greatest divergence in fixation locations was found after the first three seconds of viewing2. Also of note, the two features most strongly implicated in attracting early attention were contrast, or the difference in tone within elements of the image, and edge-content, or the presence of edges within an element of the image2. The researchers proposed that the early movements of the eye were driven by bottom-up control, as almost all viewers focused initially on the most salient elements, but later diverging eye movements were driven by the differing top-down control inputs of the viewers, which could have been based upon the observers’ different goals in looking at the image2. These conclusions are in line with the NTVA model’s emphasis on initial dominance by stimulus-driven bottom-up control followed by a later shift to goal-driven top-down control. The fact that the interpretation of photographs seems to follow these rules provides photographers with important information to help set up shots. Certain highly salient elements of the photo—especially areas of high contrast or high edge-content—will initially attract the attention of the viewer. If the photographer frames their image strategically, they can use salient items to guide the viewer through the image. This provides photographers a chance to best express their intended theme or message. After a few seconds, viewers will overcome bottom-up control and transition to top-down control, causing most viewers to focus on differing locations throughout the frame.
At this point, viewers turn their attention to areas that align with their beliefs, ideas, and emotions about the photograph and are able to make their own interpretations of the image.
CONCLUSION

The debate over our understanding of the visual attention system hinges on the balance between the brain automatically allocating attention—bottom-up control—and the role of the individual in actively selecting which locations to focus on—top-down control. Current behavioral, eye-movement-tracking, ERP, and fMRI studies provide evidence to support the Neural Theory of Visual Attention (NTVA). This model of visual attention proposes that the initial sweep of a visual field is entirely under bottom-up control. Only later—after attention has already automatically been allocated to a target—can top-down control take effect. An understanding of the visual attention center has many possible applications, but one particularly interesting one involves its application to photography. By knowing that all observers of photographs are initially under bottom-up control and are attracted to the same highly salient elements of images, photographers can compose shots to guide viewers through the frame. Eventually, as top-down control takes effect, viewers may look anywhere in the image, but for a brief time the photographer has an opportunity to express their desired theme by controlling exactly what the viewer will see.

References

[1] Theeuwes, J. (2010). Top-down and bottom-up control of visual selection. Acta Psychologica, 135(2), 77–99.
[2] Tatler, B., Baddeley, R., & Gilchrist, I. (2005). Visual correlates of fixation selection: effects of scale and time. Vision Research, 45(5), 643–659.
[3] Stigchel, S., Belopolsky, A., Peters, J., Wijnen, J., Meeter, M., & Theeuwes, J. (2009). The limits of top-down control of visual attention. Acta Psychologica, 132(3), 201–212.
[4] Petersen, S., & Posner, M. (2012). The attention system of the human brain: 20 years after. Annual Review of Neuroscience, 35(1), 73–89.
[5] Theeuwes, J. (1992). Perceptual selectivity for color and form. Perception & Psychophysics, 51(6), 599–606.
[6] Kim, M. S., & Cave, K. R. (1999). Top-down and bottom-up attentional control: On the nature of interference from a salient distractor. Perception & Psychophysics, 61, 1009–1023.
[7] Corbetta, M., & Shulman, G. L. (2002). Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience, 3, 201–215.
[8] Posner, M. I. (1980). Orienting of attention: the VIIth Sir Frederic Bartlett lecture.
Quarterly Journal of Experimental Psychology, 32, 3–25.
[9] Egeth, H. E., Virzi, R. A., & Garbart, H. (1984). Searching for conjunctively defined targets. Journal of Experimental Psychology: Human Perception and Performance, 10, 32–39.
[10] Bundesen, C., Habekost, T., & Kyllingsbaek, S. (2005). A neural theory of visual attention: Bridging cognition and neurophysiology. Psychological Review, 112, 291–328.
[11] Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97–136.
[12] Lamme, V. A. F., & Roelfsema, P. R. (2000). The distinct modes of vision offered by feedforward and recurrent processing. Trends in Neurosciences, 23, 571–579.
[13] Van Zoest, W., Donk, M., & Theeuwes, J. (2004). The role of stimulus-driven and goal-driven control in saccadic visual selection. Journal of Experimental Psychology: Human Perception and Performance, 30(4), 746–759.
[14] Kiss, M., Van Velzen, J., & Eimer, M. (2008). The N2pc component and its links to attention shifts and spatially selective visual processing. Psychophysiology, 45(2), 240–249.
[15] Hickey, C., McDonald, J. J., & Theeuwes, J. (2006). Electrophysiological evidence of the capture of visual attention. Journal of Cognitive Neuroscience, 18(4), 604–613.
[16] De Fockert, J., Rees, G., Frith, C. D., & Lavie, N. (2004). Neural correlates of attentional capture in visual search. Journal of Cognitive Neuroscience, 16, 751–759.
An Overview of Garbage Collection

BY BRANDON FENG

Cover Image Source: Pixnio
“It is a crucial – yet often overlooked – component of computer programming.”
Figure 1: Heap memory is often modeled by a graph, with the nodes representing objects. The garbage collector’s job is to find unreferenced nodes (such as node 13) and remove them from the heap. Source: Wikimedia Commons
61
GARBAGE COLLECTION: AN OVERVIEW

Garbage collection is a significant subset of automatic memory management. Automatic memory management refers to systems that automatically allocate and deallocate memory for the programmer; garbage collection, in particular, involves the “collection” of unused objects within a program and the freeing of the memory space allocated to them [1]. It is a crucial – yet often overlooked – component of computer programming. Some languages, especially lower-level languages such as C, have manual memory management. As the name suggests, manual memory management requires programmers to allocate and deallocate objects in memory themselves. While this method allows for greater control and optimization, it can be unsafe. If memory is not handled properly, memory leakage – memory remaining occupied by unused and unreachable objects – can occur. That is why many high-level programming languages, such as Python and Java, use garbage collectors to shield programmers from such leaks, even though collectors may sacrifice efficiency and flexibility, given that programmers have fewer opportunities to tailor memory usage to their specific programs [2].
Garbage collection was first adopted in the 1960s by Lisp, the first popular programming language to use a form of automatic memory management [3]. However, garbage collection processes remained relatively inefficient and were not widely used until the 1990s, when the rise of Java brought about a more efficient garbage collection system [4]. Now, most popular languages have some form of garbage collector. A few languages, most notably C and C++, still do not use garbage collectors, although third-party libraries provide lightweight collectors for them. Developing optimizations for garbage collection systems remains a popular area of computer science research.

AN INTRODUCTION TO MEMORY AND BASIC GARBAGE COLLECTION

Memory is allocated for a program in three ways. First, certain items have memory allocated to them that is intended to remain occupied for the duration of the entire program, such as global variables that are visible and accessible throughout the program rather than localized to a particular function. Second, some items have memory allocated on the runtime stack, which stores items of fixed memory size, such as primitives. Third, additional items have memory allocated in the heap, which stores items that do not have fixed memory sizes, such as arrays and objects. Managing the first two types of memory is easy; allocating heap memory proves far more complex and is what garbage collection manages [5].
Heap memory differs from stack memory in that accessing heap memory is less structured. While the stack follows a last-in-first-out (LIFO) order, in which memory is accessed in a defined sequence, heap memory is randomly accessible. Consequently, heap memory is often modeled by a graph, in which the nodes are the objects in memory and the edges represent references from one object to another [6]. A garbage collector’s job is to locate objects no longer attached to the graph and either remove or relocate them [Figure 1]. A very basic example of a garbage collector is the mark-and-sweep collector, regarded as the first garbage collector ever implemented; it was created by John McCarthy, the inventor of Lisp. This type of collection occurs in two phases: the mark phase and the sweep phase. During the mark phase, a graph traversal is performed in which all objects referenced in the program are marked, starting from a “root set” – segments of memory that are guaranteed to be referenced, such as global variables. Then, during the sweep phase, all objects that remain unmarked have their allocated memory freed [Figure 2] [7].
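The two phases described above can be sketched in a few lines of Python. This is a toy model, not a real collector: the heap is represented as a dictionary mapping object IDs to their outgoing references, mirroring the graph model above (the IDs, including the unreachable node 13, are invented for illustration).

```python
# Toy model of mark-and-sweep: the heap is a dict mapping object IDs to
# their outgoing references, and the root set names the IDs known to be live.

def mark_and_sweep(heap, roots):
    # Mark phase: depth-first traversal of the reference graph from the roots.
    marked = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in marked:
            marked.add(obj)
            stack.extend(heap[obj])  # follow outgoing references

    # Sweep phase: free every object that was never marked.
    for obj in list(heap):
        if obj not in marked:
            del heap[obj]
    return heap

# Example: object 13 is unreachable from the root set, so it is collected.
heap = {1: [2, 3], 2: [4], 3: [], 4: [], 13: [4]}
print(sorted(mark_and_sweep(heap, roots=[1])))  # [1, 2, 3, 4]
```

Note that the mark phase touches only reachable objects, while the sweep phase must visit the whole heap – one reason the pause grows with heap size.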
GARBAGE COLLECTION SCHEMES

Various other garbage collection schemes exist, most of which fall under three categories: reference counting, trace-based collection, and short-pause collection.

Reference counting is the most rudimentary. In reference counting, a “counter” is allocated for each object in a program, which increases and decreases depending on how the object is used. For example, the program increments the counter by 1 when the object is first allocated or when the object is passed in as a function parameter; conversely, it decrements the counter by 1 when a reference is dropped, for example when the function that received the object returns. If the count reaches 0, the memory allocated for the object is freed [8].

Trace-based collection is the most popular type of garbage collection. It involves “tracing” the path to an object and determining whether the object is still reachable [9]. The most elementary example of trace-based collection is the aforementioned mark-and-sweep collector, which involves tracing from a root set to find all object references. Another popular trace-based collector is the mark-and-compact collector. This collector follows the same procedure as mark-and-sweep when “marking” all referenced objects [10]. During the sweeping phase, however, it first compacts the objects in the heap, organizing fragmented bits of memory. This usually entails dividing the heap into three categories: objects to be freed, objects currently in use, and free space. Such organization makes clearing memory faster and removes needless overhead normally dedicated to finding open memory addresses. There are various ways of compacting objects, including using hash tables to organize new object addresses or separating marked and unmarked objects by dividing the heap in two – an approach known as Cheney’s algorithm [11].

Short-pause collection is sometimes considered a subset of trace-based collection, but it differs from mark-and-sweep and its counterparts. Mark-and-sweep is a “stop-the-world” algorithm, meaning that the entire program pauses in order for the garbage collector to execute its own tasks [12]. Short-pause approaches aim to reduce this pause time. There are two approaches: incremental (based on time) and partial (based on space). In an incremental algorithm, the garbage collector first marks the root set, then operates as the program executes, noting any possible changes in reachability and performing tracing concurrently with program execution. The program then pauses when tracing is complete so the collector can conduct its sweep phase [13]. In a partial algorithm, objects in the heap are divided and dealt with accordingly. For example, generational garbage collection determines the “age” of each object (how long the object has been allocated) and focuses on freeing newer objects first, because younger objects tend to be dereferenced sooner (i.e., the program no longer points to the object) [14].
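A reference-counting scheme along these lines can be simulated in Python. The class and method names here are invented for illustration; real reference counting happens inside the language runtime (as in CPython), not in user code.

```python
# Toy reference-counting scheme: each object carries a counter that is
# incremented when a new reference is created and decremented when one is
# dropped; when the counter hits zero, the memory is freed immediately.

class RefCounted:
    def __init__(self, name):
        self.name = name
        self.count = 0

    def add_ref(self):       # e.g. assignment, or passing as a parameter
        self.count += 1

    def release(self):       # e.g. a reference going out of scope
        self.count -= 1
        if self.count == 0:
            print(f"freeing {self.name}")

obj = RefCounted("buffer")
obj.add_ref()    # a variable points at the object
obj.add_ref()    # it is passed to a function
obj.release()    # the function returns
obj.release()    # count reaches zero: prints "freeing buffer"
```

A well-known weakness of this scheme is that two objects referencing each other keep each other’s counts above zero forever, so cycles are never freed – one reason tracing collectors remain necessary.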
“Various other garbage collection schemes exist, most of which fall under three categories: reference counting, trace-based collection, and short-pause collection.”
ASSESSING GARBAGE COLLECTION

There are several ways to assess the overall efficiency and efficacy of a garbage collector, the first of which is time. Pause times, reallocation times, and overall runtimes are crucial assessment metrics. Accuracy is also important – a garbage collector must limit memory leaks, avoid deleting objects still in use, and ensure the memory it allocates is not already occupied. Coupled with these questions of speed and accuracy are the overhead costs on heap memory of executing the collector and the complexity of the algorithm itself.
Figure 2: Pseudocode for the mark-and-sweep collection algorithm. Starting from the root set, it marks all referenced objects, then sweeps, freeing all unmarked objects and resetting marked objects to unmarked for the next sweep phase. Source: Wikimedia Commons
Figure 3: Manual memory management allows for much greater freedom. For example, C provides the functions malloc() and free(), which allow the programmer to allocate and free up memory in the heap. Of course, this freedom allows for more potential mistakes; in this example, a dangling pointer is left, in which memory allocated by malloc() is freed but the pointer address (indicated by *i) remains unchanged. Source: Wikimedia Commons
“Garbage collectors are decidedly complex and implementing them in the first place can be time consuming. And when implemented, they may cause performance issues and stalled execution times (among other problems) if not designed properly."
Additional metrics can include the collector’s impact on battery life, its compatibility across languages, its portability across different architectures, and its scalability with heap size [15].
THE DIFFICULTY OF GARBAGE COLLECTION

The fact that there is significant variation between garbage collection algorithms demonstrates an inherent difficulty of automatic memory management: there are many trade-offs and questions to consider when designing, implementing, and optimizing garbage collection systems. For any garbage collector, there are difficulties with implementation. Garbage collectors are decidedly complex, and implementing them in the first place can be time-consuming. Once implemented, they may cause performance issues and stalled execution times (among other problems) if not designed properly. They may also limit programmers’ freedom by preventing them from allocating memory as they please, which can hinder optimization efforts [Figure 3]. To this effect, much research has been dedicated to the design of manual memory management systems that possess the safety offered by garbage collectors [16]. Garbage collection also incurs significant overhead because it consumes extra computing power, using up to five times the memory of a manual memory management system to achieve the same execution speed [17]. Furthermore, there are trade-offs in the optimization of garbage collection algorithms that make a universal system a virtual impossibility. For example, stop-the-world algorithms such as mark-and-sweep are simple and present low memory overhead, but are hard to scale as program and heap sizes grow. On the other hand, algorithms such as incremental garbage collection consume much more computing
power, but are more scalable [18]. Small-scale programs that prioritize execution speed may be better suited to stop-the-world systems, while large-scale programs involving real-time, data-intensive processes, such as search engines, are willing to forgo speed and adopt short-pause systems to ensure near-constant execution. “Conservative” garbage collectors eliminate the risk of deleting in-use objects by opting to “mark” objects more frequently, increasing the risk of memory leakage, while non-conservative collectors do the opposite [19].
THE CURRENT STATE OF GARBAGE COLLECTION

The original, simpler approaches to garbage collection, such as Lisp’s mark-and-sweep, have become outdated as new methods continue to arise. Many of the most inventive new garbage collection systems are being built for the Java Virtual Machine (JVM), which has come to host an increasingly diverse suite of programming languages. The JVM is a virtual machine – an emulation of a real computer that allows languages targeting it to run on any computer the virtual machine supports. Two of these systems are Shenandoah and ZGC, which represent arguably the most significant garbage collection breakthroughs to date [Figure 4] [20]. Shenandoah and ZGC are similar in terms of their performance capabilities. Both are relatively slow in program throughput (the speed at which the program executes), with overhead averaging approximately 15%, a high number. However, both have very small pause times, typically of less than a millisecond [21]. These low pause times are partly enabled by the systems’ ability to operate while the program runs, utilizing concurrent compaction to make collections and memory reallocations during pause times faster and more precise [22]. Both collectors have especially notable compaction systems that operate concurrently with the program. The two collectors deploy different methods to move objects in memory
Figure 4: ZGC and Shenandoah, two of the JVM’s more impressive garbage collectors, were developed by Oracle (A) and Red Hat (B), respectively. Source: Flickr
without having to pause the program. Typically, doing so is unsafe; if a program tries to access an object using an outdated address after the object has been moved, it can lead to a crash or a memory error. Shenandoah solves this problem with a Brooks forwarding pointer. Every object in the program is assigned a pointer, originally pointing to the object’s own address. Whenever an object must be moved, the pointer is reassigned to the new address. Whenever the program accesses the object, it goes through the pointer, which will always be directed to the object no matter where the garbage collector moves it [23]. ZGC instead uses colored pointers. Rather than having pointers hold only addresses, ZGC adds 4 extra bits to the address that mark the pointer as good or bad. If bad, once the object is referenced, the garbage collector moves the object accordingly, then reverts the pointer’s bits back to “good” and updates the address [24]. These systems demonstrate popular trends in modern garbage collection: the preference for low pause times, the need for mutator-concurrent marking and reallocation, the emphasis on compaction, the reliance on high-powered computing (Shenandoah, for example, is only compatible with 64-bit architectures), and the need for cross-language compatibility achieved by designing for the JVM rather than for a particular language. That being said, as their slower runtimes indicate, both collectors are far from universally applicable. Other collectors, such as Go’s, are better suited for lower compile times [25]. There are also broader questions about the garbage collector’s relationship with the computer that have yet to be tackled. Research has been dedicated to extending garbage collection to other items in the OS, such as files, to optimize performance. There has also been focus on ensuring better relationships between the OS and the garbage collector.
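The forwarding-pointer idea described earlier can be illustrated with a small Python sketch. The class and function names are invented; a real implementation lives inside the JVM and operates on raw addresses, not Python objects.

```python
# Toy sketch of a Brooks forwarding pointer: every access goes through a
# one-slot indirection, so the collector can move an object at any time by
# retargeting that slot, without pausing the program.

class BrooksCell:
    def __init__(self, payload):
        self.payload = payload
        self.forward = self          # initially, the pointer targets itself

    def get(self):
        return self.forward.payload  # every read dereferences the pointer

def relocate(cell, new_payload):
    # The collector copies the object, then swings the forwarding pointer.
    cell.forward = BrooksCell(new_payload)

obj = BrooksCell("data @ 0x1000")
print(obj.get())                  # data @ 0x1000
relocate(obj, "data @ 0x2000")    # collector moves the object concurrently
print(obj.get())                  # data @ 0x2000 — same access path
```

The cost of this design is an extra indirection on every object access; as noted above, ZGC avoids the separate forwarding slot by packing its bookkeeping into spare pointer bits instead.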
For example, before its recent updates, Java’s G1 garbage collector was criticized for its page release issues, or its tendency to hold memory from being released back to the OS. Furthermore, there is the question of portability. The current focus on real-time applications – ranging from
collecting wearable sensor data to gathering high-volume computerized financial trading information – makes short pause times particularly important and ensures that a lower bound on efficiency will be a continual topic of research. That being said, the focus on high-powered computing brings forth a debate over the necessity of portability, and whether new garbage collectors such as Shenandoah should be able to operate on lower-powered 32-bit architectures [26]. Many trade-offs, optimization opportunities, and questions remain, ensuring that garbage collection, for all its recent improvements, will remain a central topic of research in computer science circles for years to come.

References

[1] Wilson, P. R. (1992, September). Uniprocessor garbage collection techniques. In International Workshop on Memory Management (pp. 1-42). Springer, Berlin, Heidelberg.
[2] Kedia, P., Costa, M., Parkinson, M., Vaswani, K., Vytiniotis, D., & Blankstein, A. (2017, June). Simple, fast, and safe manual memory management. In ACM SIGPLAN Notices (Vol. 52, No. 6, pp. 233-247). ACM.
[3] McCarthy, J. (1978). History of LISP. ACM SIGPLAN Notices, 13(8), 217-223.
[4] Byers, R. Garbage Collection Algorithms.
[5] Wilson, P. R. (1992, September). Uniprocessor garbage collection techniques. In International Workshop on Memory Management (pp. 1-42). Springer, Berlin, Heidelberg.
[6] Byers, R. Garbage Collection Algorithms.
[7] Zorn, B. (1990, May). Comparing mark-and-sweep and stop-and-copy garbage collection. In Proceedings of the 1990 ACM Conference on LISP and Functional Programming (pp. 87-98). ACM.
[8] Bevan, D. (1987). Distributed garbage collection using reference counting. In de Bakker, J. W., Nijman, A. J., & Treleaven, P. C. (Eds.), PARLE Parallel Architectures and Languages Europe. Lecture Notes in Computer Science, vol. 259. Springer, Berlin, Heidelberg.
“These systems help demonstrate popular trends in modern garbage collection: the preference for low pause times, the need for mutator-concurrent marking and reallocation, the emphasis on compaction, and the reliance on high-powered computing.”
[9] Plainfossé, D., & Shapiro, M. (1995). A survey of distributed garbage collection techniques. In Baker, H. G. (Ed.), Memory Management. IWMM 1995. Lecture Notes in Computer Science, vol. 986. Springer, Berlin, Heidelberg.
[10] Zorn, B. (1990, May). Comparing mark-and-sweep and stop-and-copy garbage collection. In Proceedings of the 1990 ACM Conference on LISP and Functional Programming (pp. 87-98). ACM.
[11] Birkedal, L., Torp-Smith, N., & Reynolds, J. C. (2004, January). Local reasoning about a copying garbage collector. In ACM SIGPLAN Notices (Vol. 39, No. 1, pp. 220-231). ACM.
[12] Boehm, H. J., Demers, A. J., & Shenker, S. (1991, June). Mostly parallel garbage collection. In PLDI (Vol. 91, pp. 157-164).
[13] Hughes, R. J. M. (1982). A semi-incremental garbage collection algorithm. Software: Practice and Experience, 12(11), 1081-1082.
[14] Boehm, H. J., Demers, A. J., & Shenker, S. (1991, June). Mostly parallel garbage collection. In PLDI (Vol. 91, pp. 157-164).
[15] Byers, R. Garbage Collection Algorithms.
[16] Kedia, P., Costa, M., Parkinson, M., Vaswani, K., Vytiniotis, D., & Blankstein, A. (2017, June). Simple, fast, and safe manual memory management. In ACM SIGPLAN Notices (Vol. 52, No. 6, pp. 233-247). ACM.
[17] Hertz, M., & Berger, E. D. (2005, October). Quantifying the performance of garbage collection vs. explicit memory management. In ACM SIGPLAN Notices (Vol. 40, No. 10, pp. 313-326). ACM.
[18] Gidra, L., Thomas, G., Sopena, J., & Shapiro, M. (2013). A study of the scalability of stop-the-world garbage collectors on multicores. ACM SIGPLAN Notices, 48(4), 229-240.
[19] Hertz, M., & Berger, E. D. (2005, October). Quantifying the performance of garbage collection vs. explicit memory management. In ACM SIGPLAN Notices (Vol. 40, No. 10, pp. 313-326). ACM.
[20] Pufek, P., Grgić, H., & Mihaljević, B. (2019, May). Analysis of garbage collection algorithms and memory management in Java. In 2019 42nd International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO) (pp. 1677-1682). IEEE.
[21] Maas, M., Asanović, K., & Kubiatowicz, J. (2018, June). A hardware accelerator for tracing garbage collection. In Proceedings of the 45th Annual International Symposium on Computer Architecture (pp. 138-151). IEEE Press.
[22] Flood, C. H., Kennke, R., Dinn, A., Haley, A., & Westrelin, R. (2016, August). Shenandoah: An open-source concurrent compacting garbage collector for OpenJDK. In Proceedings of the 13th International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools (p. 13). ACM.
[23] Flood, C. H., Kennke, R., Dinn, A., Haley, A., & Westrelin, R. (2016, August). Shenandoah: An open-source concurrent compacting garbage collector for OpenJDK. In Proceedings of the 13th International Conference on Principles and Practices of Programming on the Java Platform: Virtual Machines, Languages, and Tools (p. 13). ACM.
[24] Zhao, W. (2018). Understanding & Analyzing the Garbage-First Family of Garbage Collectors.
[25] Sibiryov, A. (2017). Golang's Garbage.
[26] Byers, R. Garbage Collection Algorithms.
The Role of Machine Learning in Allergy Medicine
BY CATHERINE ZHAO
BACKGROUND ON ALLERGIES

It is not uncommon anymore to see someone neatly arrange a couple of EpiPens on the dining table as they settle down at a restaurant, or swallow a Claritin tablet before venturing outside. Despite rising numbers on top of the 30% of adults and 40% of children in the United States who currently suffer from allergies, there is still no cure [3].

Allergies are one of the most common chronic diseases [3]. An allergic reaction is caused by the immune system’s overreaction to mundane substances [3]. These substances are known as allergens and include everyday items ranging from tree nuts to animal fur, pollen, and even certain medicines [1]. During an allergic reaction, the immune system produces an antibody of the Immunoglobulin E (IgE) family specific to the allergen, which then triggers inflammatory cells (mast cells, basophils, lymphocytes, dendritic cells, eosinophils, and neutrophils) to release chemicals such as histamine, leukotrienes, cytokines, and chemokines [14]. This process causes inflammation, resulting in dilated small blood vessels (a greater blood volume) that allow immune system cells to reach the area of infection [13]. Inflammatory responses can range in severity, from a runny nose – caused by the mucous membranes releasing more fluids when they are inflamed – to the potentially life-threatening anaphylaxis, in which histamine causes tightening of the airways [Figure 1, Figure 2] [3,4,13,28].

Prior to treatment, allergies and their allergens are diagnosed through skin and blood tests. Skin tests expose patients to a small amount of an allergen to examine the immune response, while blood tests measure the amount of IgE antibodies in the bloodstream after exposure [Figure 3] [16]. After diagnosis, the most common treatments are using medications to ease the symptoms of allergic reactions, administering an epinephrine shot to help reverse anaphylaxis, or immunotherapy, which involves gradual exposure to the allergen until immunity is developed [2,16,28]. Some treatments involve changes in living habits, for example eliminating allergens from living spaces, while others include exploring alternative medicines such as acupuncture [16]. Studies have tested the effects of acupuncture on reducing the allergic symptoms of pollen allergies and hay fever [12,30]. While the results have shown positive effects, scientists have come to a consensus that more research is needed to confirm the effects of acupuncture on allergies and its symptoms [12,30].

The lack of an established cure for allergies is a significant issue given that they are among the most common medical afflictions in the United States and affect around 25% of people around the world [7,9,32]. Nearly 10 percent of the United States population – 32 million people – have food allergies alone [19]. The severity of such allergies is increasing dramatically, with a 377% rise in the insurance claims for anaphylactic
“The lack of an established cure for allergies is a significant issue given that they are among the most common medical afflictions in the United States, and affect around 25% of people around the world.”
Figure 1: An example of the body’s immune response to an allergic reaction. An allergen-specific IgE antibody binds to the allergen and triggers inflammatory cells to release chemicals that cause inflammation at the affected area. Source: Wikimedia Commons
“Machine learning can therefore be used in disease identification and diagnosis, personalized medicine, drug discovery and manufacturing, clinical trial research, radiology, and epidemic outbreak prediction."
Figure 2: A diagram of the body’s immune response to an allergic reaction. The allergen enters the body and the allergen-specific IgE antibody binds to the allergen. This triggers inflammatory cells, such as mast cells, to release chemicals, such as histamine. These chemicals cause inflammation in various parts of the body, resulting in responses such as mucous secretion or pain and itchiness. The increased blood flow to the affected area from inflammation causes increased flow of immune cells to the affected area as well. Source: Wikimedia Commons
food allergen exposures from 2007 to 2016 [19]. The source of the rise in allergies over the last 50 years in developed countries has yet to be determined, but scientists believe that both genetics and environment play a part in their development [1,9,19]. A notable hypothesis for this increase, named the “hygiene hypothesis,” holds that as our living standards have improved, there is reduced exposure to infections and, instead, increased exposure to harmless environmental allergens, resulting in the development of immune responses to them [9].
ML IN MEDICINE

Machine learning stems from artificial intelligence, the idea that machines can learn from experience through data analysis and pattern recognition to perform subsequent tasks with greater precision [26]. A machine learning system is trained to recognize certain conditions by using algorithms to find patterns in large amounts of data, and this training is carried out in three different ways: supervised, unsupervised, and reinforcement learning [11,27]. In supervised learning, the data being processed has been labeled, so the machine’s guess at a data point’s categorical identity can be compared to its real identity, and the learning algorithm can be adjusted based on whether the guess was correct [11]. In unsupervised learning, the data has not been labeled, so the machine looks for patterns without knowing in advance which patterns exist [11]. In reinforcement learning, the machine learns through trial and error: it is rewarded or punished based on its decisions and aims to maximize its reward, with no initial idea of how to do so [11]. Machine learning can be used to identify patterns in data and improve decision-making in fields like medicine, where it can help improve the diagnosis and treatment of diseases [27].
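The supervised case can be made concrete with a deliberately tiny example: a 1-nearest-neighbor classifier in plain Python, which predicts a label for a new point by copying the label of the closest labeled training example. The feature values and labels below are invented toy data, not real medical measurements.

```python
# Minimal supervised-learning illustration: 1-nearest-neighbor classification.
# Because the training data is labeled, each prediction can be checked
# against a true label, which is what makes the learning "supervised".

def nearest_neighbor(train, query):
    # train: list of (feature_vector, label) pairs; query: feature_vector
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))  # squared distance
    return min(train, key=lambda pair: dist(pair[0], query))[1]

# Hypothetical labeled examples: (feature 1, feature 2) -> diagnosis label
train = [((0.9, 0.8), "allergic"), ((0.8, 0.9), "allergic"),
         ((0.1, 0.2), "not allergic"), ((0.2, 0.1), "not allergic")]

print(nearest_neighbor(train, (0.85, 0.75)))  # allergic
print(nearest_neighbor(train, (0.15, 0.25)))  # not allergic
```

Real clinical models use the same supervised recipe at a vastly larger scale: millions of labeled records, richer features, and models far more expressive than nearest-neighbor lookup.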
The medical field is especially ripe for machine learning because of the massive amount of data available, collected from medical research and development, physicians and clinics, patients, and caregivers8,20. Once this abundance of data has been classified and curated, the next step is to process it and make inferences that affect future patient outcomes8. Machine learning can therefore be used in disease identification and diagnosis, personalized medicine, drug discovery and manufacturing, clinical trial research, radiology, and epidemic outbreak prediction8. Machine learning algorithms have been used to identify skin cancer from dermoscopic images and to predict patient progression from pre-diabetes to type 2 diabetes from electronic health records29. Machine learning can accomplish these diagnoses by detecting patterns extracted from large data sets that cover millions of past occurrences of a particular disease10. Additionally, machine learning can speed up drug discovery and development through rapid modeling and prediction of the effects and properties of new drugs20. However, challenges still exist for the application of machine learning to healthcare. Despite the abundance of data, the data must be cleaned to remove errors and duplicates and organized extensively to be used accurately10. This is a new challenge for the healthcare industry; IT divisions and administrators at hospitals and laboratories are adapting and creating jobs to handle the task18. Additionally, startups are emerging to tackle this problem with novel technology solutions that automate data cleaning31. Beyond maintaining and organizing the existing data sets so that machine learning programs can extract accurate and efficient patterns, security has increasingly become a concern because of the amount of sensitive information being stored. Preventing malicious input in the data (which could eventually lead to a misdiagnosis) is yet another concern that has led to important security measures at healthcare facilities20.
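The kind of cleaning described (removing duplicates and incomplete records before a model ever sees the data) might look like this in outline. The field names and records are invented for illustration:

```python
# Minimal data-cleaning sketch: drop records with missing fields,
# then keep only one record per patient ID.

def clean_records(records, required=("patient_id", "age", "diagnosis")):
    seen, cleaned = set(), []
    for rec in records:
        if any(rec.get(field) is None for field in required):
            continue                      # discard incomplete records
        if rec["patient_id"] in seen:
            continue                      # discard duplicates
        seen.add(rec["patient_id"])
        cleaned.append(rec)
    return cleaned

raw = [
    {"patient_id": 1, "age": 34, "diagnosis": "allergic rhinitis"},
    {"patient_id": 1, "age": 34, "diagnosis": "allergic rhinitis"},  # duplicate
    {"patient_id": 2, "age": None, "diagnosis": "asthma"},           # missing age
    {"patient_id": 3, "age": 58, "diagnosis": "eczema"},
]
```

Real pipelines also reconcile conflicting entries, normalize units and codes, and audit for malicious or erroneous input, which is where much of the engineering effort goes.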
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
MACHINE LEARNING
ML IN ALLERGY MEDICINE Immunologists are at the intersection of allergy research and machine learning, applying machine learning techniques to improve our scientific understanding of allergies. Work in this field has led to the recognition that machine learning has multiple applications for allergy medicine, including better prediction of allergenic proteins and more accurate determination of the risk of developing allergies based on the relationships between IgE and allergens4,6,23,25. In one paper, Thomas Platts-Mills and Matthew Perzanowski explain how machine learning has been used to find patterns in allergen sensitization profiles, revealing that people with particular groupings of allergies have a higher likelihood of developing asthma, as well as similar responses to the same allergens23. Other studies, at Virginia Tech and the Medical University of Sofia, have used amino acid sequence patterns to predict the likelihood that a protein is an allergen, which can prevent new allergens from entering the food market4,5. These studies follow Food and Agriculture Organization of the United Nations (FAO) and World Health Organization (WHO) guidelines to find similarities between candidate protein sequences and large databases of known allergen proteins5. Other allergen prediction methods involve identifying structural motifs in allergens or finding IgE binding sites on allergens5,25. Another study relies on smartphones for patient self-reported data: the researchers who developed the mobile application doc.ai aim to answer the mysteries concerning allergies and their causes and cures by collecting patient data and using machine learning to find patterns in it6. With the results of doc.ai's first data trial in the fall of 2018, scientists were able to learn more about the triggers and symptoms of allergies for patients in the United States. The most common symptom was itching, followed by headaches, difficulty breathing, and wheezing22.
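The sequence-comparison approach these allergen predictors take can be illustrated with a k-mer overlap score, treating protein sequences as "text." The sequences below are invented placeholders, and real tools such as Allerdictor use far more sophisticated classifiers than this sketch:

```python
# Sketch of k-mer based sequence comparison: break amino acid sequences
# into overlapping substrings of length k, then score overlap (Jaccard).

def kmers(seq, k=3):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def max_similarity(query, known_allergens, k=3):
    """Highest k-mer overlap between a query protein and known allergens."""
    q = kmers(query, k)
    return max(jaccard(q, kmers(s, k)) for s in known_allergens)

# Invented toy sequences, not real allergen data.
known = ["MKVLILACLVA", "MGVFNYETETT"]
similar = "MKVLILACLTA"     # differs from the first entry by one residue
unrelated = "PPPQQQRRRSSS"
```

A candidate food protein scoring high against the known-allergen database would be flagged for further review before market entry.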
Machine learning also has the potential to improve the diagnosis and treatment of allergies. For example, the diagnosis of asthma has been improved by using machine learning to model combinations of symptoms and medical tests33, and there are ways to improve the diagnosis of allergies through the analysis of genetic markers as well15. In terms of existing treatment, one of the most effective treatments for allergies is immunotherapy. However, the applicability of allergen immunotherapy varies depending on the type of allergen and whether the patient is exposed to the actual allergen versus a cross-reacting component, a protein that is similar to the allergen17. Machine learning can parse the extensive array of clinical results to help identify which allergies are treatable via immunotherapy17. Finally, machine learning can be applied in patients' daily lives to improve their experience with allergies. Instead of applying machine learning to medical data, one study in Australia led by Jia Rong looked for patterns in tweets related to allergies24. The amount of data on social media platforms is constantly growing, and many users use these platforms to discuss health issues, including allergies24. This study observed a correlation between tweets about hay fever, an allergy to pollen, and the number of actual cases of hay fever in Australia24. Another application of machine learning to patients' daily lives is the app AI Allergy, which helps patients explore new cuisines and recipes based on their existing allergies, with the goal of improving quality of life while minimizing the challenge of avoiding allergens21.
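The tweet-surveillance study's central measurement, whether allergy-related tweet volume tracks reported cases, comes down to a correlation such as the one sketched here. The weekly counts are invented for illustration:

```python
import math

# Pearson correlation between two weekly time series, e.g. hay fever
# tweet counts versus reported hay fever cases.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented weekly counts for illustration only.
tweets = [12, 30, 45, 80, 60, 25]
cases = [100, 240, 400, 690, 520, 210]
r = pearson(tweets, cases)
```

A correlation near 1 would suggest tweet volume could serve as a cheap, near-real-time proxy for pollen allergy surveillance, which is the study's premise.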
Figure 2: An example of a skin allergy patch test. In the test, the skin is exposed to small amounts of the allergens. After a couple of days, the skin is examined to see whether it has reacted to each allergen. In this case, the patient shows a strong reaction to Balsam of Peru, diagnosing an allergy to it. Source: Wikimedia Commons
CONCLUSION, FUTURE IMPACT The rise of allergies around the world is a growing issue, especially with no cure in sight. In order to reach a cure, the first step is to identify the causes of allergies and of their recent rise. Identifying these causes would help indicate the next step: for example, according to the "hygiene hypothesis," whether early exposure to allergens could reduce the number of allergic disorders. The past methodologies for managing allergies—avoidance of allergens, temporary medications, and immunotherapy—cannot keep up with the growing number of allergies. However, new AI technologies are evolving and integrating with the field of medicine. Machine learning algorithms can quickly generate insights from large amounts of data, improving the diagnosis and treatment of various diseases, allergies included. Machine learning has already expanded our understanding of allergies through research, improvements in diagnosis and treatment, and analysis of patients' daily lives. In the future, with so much data on
allergies to analyze, more studies involving machine learning will likely emerge. The door is also open to more creative applications of machine learning, going beyond strictly medical data. In addition to the search for a cure, machine learning can help improve the lives of patients with allergies—affecting a patient's experience at grocery stores, restaurants, or in new climates and countries.
References
[1] American Academy of Allergy, Asthma, and Immunology Staff. Allergy Statistics. American Academy of Allergy, Asthma, and Immunology. https://www.aaaai.org/about-aaaai/newsroom/allergy-statistics
[2] American College of Allergy, Asthma, and Immunology Staff. Allergy Immunotherapy. American College of Allergy, Asthma, and Immunology. https://acaai.org/allergies/allergy-treatment/allergy-immunotherapy
[3] Asthma and Allergy Foundation of America Medical Scientific Council. (2015). Allergies. Asthma and Allergy Foundation of America. https://www.aafa.org/allergies.aspx
[4] Dang, H. X., & Lawrence, C. B. (2014). Allerdictor: fast allergen prediction using text classification techniques. Bioinformatics, 30(8), 1120-1128. https://doi.org/10.1093/bioinformatics/btu004
[5] Dimitrov, I. Application of machine learning techniques for allergenicity prediction. School of Pharmacy, Medical University of Sofia. http://scc.acad.bg/ncsa/documentation/Papers/IvDimitrov_abstract.pdf
[6] Doc.ai Staff. Digital Health Trial: Can Artificial Intelligence Predict Your Risk of Allergies? Doc.ai. https://doc.ai/datatrials/allergies/
[7] Elflein, J. (2019). Allergies in the U.S. – Statistics & Facts. Statista. https://www.statista.com/topics/3948/allergies-in-the-us/
[8] Faggella, D. (2019). 7 Applications of Machine Learning in Pharma and Medicine. Emerj. https://emerj.com/ai-sector-overviews/machine-learning-in-pharma-medicine/
[9] Galli, S. J., Tsai, M. & Piliponsky, A. M. (2008). The development of allergic inflammation. Nature, 454(7203), 445-454. https://doi.org/10.1038/nature07204
[10] Gharagyozyan, H. (2019). A Practical Application of Machine Learning in Medicine. Macadamian. https://www.macadamian.com/learn/a-practical-application-of-machine-learning-in-medicine/
[11] Hao, K. (2018). What is machine learning? MIT Technology Review. https://www.technologyreview.com/s/612437/what-is-machine-learning-we-drew-you-another-flowchart/
[12] Hauswald, B. & Yarin, Y. M. (2014). Acupuncture in allergic rhinitis. Allergo Journal International, 23(4), 115-119. https://doi.org/10.1007/s40629-014-0015-3
[13] Institute for Quality and Efficiency in Health Care Staff. (2018). What is an inflammation? Institute for Quality and Efficiency in Health Care. https://www.ncbi.nlm.nih.gov/books/NBK279298/
[14] Kubo, T., Morita, H., Sugita, K. & Akdis, C. A. (2017). Middleton's Allergy Essentials. Elsevier, Inc. https://doi.org/10.1016/B978-0-323-37579-5.00001-5
[15] Lallensack, R. (2019). Teen Inventor Designs Noninvasive Allergy Screen Using Genetics and Machine Learning. Smithsonian Magazine. https://www.smithsonianmag.com/innovation/teen-inventor-designs-noninvasive-allergy-screen-using-genetics-and-machine-learning-180971692/
[16] Mayo Clinic Staff. (2018). Allergies. Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/allergies/diagnosis-treatment/drc-20351503
[17] Melioli, G., Riccio, A., Ledda, S., Passalacqua, G. & Canonica, G. W. (2017). The Use of Emerging Technologies in Allergen Immunotherapy Management. EMJ Allergy Immunol, 2(1), 81-86. https://www.emjreviews.com/allergy-immunology/article/the-use-of-emerging-technologies-in-allergen-immunotherapy-management/
[18] Miliard, M. (2018). AI table stakes: clean and well-governed data. Healthcare IT News. https://www.healthcareitnews.com/news/ai-table-stakes-clean-and-well-governed-data
[19] Miller, K. (2019). Why Food Allergies Are Surging. Leapsmag. https://leapsmag.com/why-food-allergies-are-surging/
[20] Nature Staff. (2019). Ascent of machine learning in medicine. Nature Materials. https://doi.org/10.1038/s41563-019-0360-1
[21] Park, R., Ghosal, M., Dong, E. & Shah, S. (2019). AI Allergy. Devpost. https://devpost.com/software/ai-allergy
[22] Patel, C. & Manrai, A. (2019). What we're learning from the first thousand doc.ai research participants about allergies. Doc.ai. https://doc.ai/blog/what-were-learning-from-the-first-thousa/
[23] Platts-Mills, T. A. E. & Perzanowski, M. (2018). PLoS Med, 15(11). https://doi.org/10.1371/journal.pmed.1002696
[24] Rong, J., Michalska, S., Subramani, S., Du, J. & Wang, H. (2019). Deep learning for pollen allergy surveillance from twitter in Australia. BMC Medical Informatics and Decision Making, 19(208). https://doi.org/10.1186/s12911-019-0921-x
[25] Saha, S. & Raghava, G. P. S. (2006). AlgPred: Prediction of allergenic proteins and mapping of IgE epitopes. Nucleic Acids Research. https://www.researchgate.net/publication/6940812_AlgPred_Prediction_of_allergenic_proteins_and_mapping_of_IgE_epitopes
[26] SAS Institute Staff. Artificial Intelligence. SAS Institute, Inc. https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html
[27] SAS Institute Staff. Machine Learning. SAS Institute, Inc. https://www.sas.com/en_us/insights/analytics/machine-learning.html#machine-learning-workings
[28] Shah, S. (2015). How Epinephrine Reverses Anaphylaxis. Premier Allergy & Asthma. https://www.premierallergyohio.com/doctors-blog/how-epinephrine-reverses-anaphylaxis
[29] Sidey-Gibbons, J. & Sidey-Gibbons, C. (2019). Machine learning in medicine: a practical introduction. BMC Medical Research Methodology. https://doi.org/10.1186/s12874-019-0681-4
[30] Sifferlin, A. (2013). Is Acupuncture an Antidote for Allergies? TIME. http://healthland.time.com/2013/02/19/is-acupuncture-the-antidote-for-allergies/
[31] Spotless Data Staff. Data Cleaning of Healthcare Data. Spotless Data Ltd. https://spotlessdata.com/blog/data-cleaning-healthcare-data
[32] The Asthma and Allergy Center Staff. Allergy Statistics. The Asthma and Allergy Center. https://www.asthmaandallergycenter.com/article/allergy-statistics/
[33] Tomita, K., Nagao, R., Touge, H., Ikeuchi, T., Sano, H., Yamasaki, A. & Tohda, Y. (2019). Deep learning facilitates the diagnosis of adult asthma. Allergology International, 68(4), 456-461. https://doi.org/10.1016/j.alit.2019.04.010
SMALLPOX
Revisiting Smallpox Variolation, Vaccination, and Equination BY CHENGZI GUO
Smallpox, a devastating disease that killed 30% of the people it infected less than fifty years ago, was declared eradicated in 1980, following aggressive disease surveillance and owing to one of the first – and most effective – vaccines ever developed1. However, recent concerns over the reintroduction of the smallpox virus as a bioterrorism weapon, along with the virulence associated with the vaccine (following news of a lab technician who was exposed to a modified smallpox strain in a lab accident after turning down the smallpox vaccination), have prompted renewed interest in smallpox and the associated vaccine.
THE STORY OF EDWARD JENNER The story of the first vaccine usually goes something like this: the observant, slightly eccentric English physician Edward Jenner noticed that milkmaids never seemed to have the scars characteristic of smallpox survivors. They did occasionally catch cowpox from handling the udders of infected cows, which presented itself as a number of mild pocks and rashes on their hands, but the symptoms were nowhere near as serious as the debilitating and often lethal effects of smallpox2. Intrigued by the potential protective abilities of cowpox against the closely related smallpox virus, Jenner made several scratches on his gardener's eight-year-old son, James Phipps, and rubbed infected pustular material from one of the pocks on a milkmaid's hand into the incision. As expected, James fell ill
with cowpox for several days and recovered, after which Jenner infected the boy with the smallpox virus. Consistent with Jenner's hypothesis, and to his great relief, James did not develop smallpox. The smallpox vaccine used in the latest smallpox epidemics is, surprisingly, relatively unchanged from the original vaccine Jenner developed, which consisted of live, attenuated viral particles3. And while Jenner was the first to subject the concept of vaccination to a (not quite so ethical) scientific test, the vaccine's extraordinary efficacy can be attributed more to luck than to a prior understanding of immunology and virology. The vaccine virus passed down generation after generation, which Jenner termed vaccinia, is not actually the same strain as modern-day cowpox4, as popular lore and Jenner's own records would suggest. In fact, the origin of vaccinia remains shrouded in obscurity3.
THE VARIOLA VIRUS The causative agent of smallpox, the variola virus (VARV), has infected approximately 10% of all of humankind throughout history, with 300 million people affected in the 20th century alone. Highly contagious and most common in children, the virus can spread via direct person-to-person contact as well as aerosol droplets. Once infected, there is a 30% fatality rate, and even if a patient is lucky enough to survive, the disease often leaves them severely scarred and
Cover: Vaccinia virus, a member of the orthopoxvirus genus, seen under transmission electron microscopy. Source: Wikimedia Commons
occasionally disfigured or blinded.
Figure 2: Records from a three-volume text on pox rashes, prevention, and prophylaxis written by Sun Qishun (Qing period, 1644-1911), dated 1817, describing the practice of smallpox variolation. (A) One method of variolation involved obtaining virus from a smallpox sore on an infected patient and transferring it to a healthy individual. (B) A variolation knife used to make superficial incisions in the skin. The text cautions against using undue force, which may cause excessive bleeding and wash the pox germs away from the site of variolation. Source: Wikimedia Commons
Particularly devastating was the virus's historical and cultural impact on the Incas, Aztecs, and Native Americans, who encountered it when it was brought over by Europeans to the New World. In contrast to European, Middle Eastern, African, and Asian populations, which had encountered numerous smallpox epidemics and other closely related diseases, indigenous people in the Americas had never encountered poxviruses and therefore were ill-prepared to mount a strong immune response against the virus. This immunological naivety accounts for the rapid viral dissemination and high fatality rates of the disease among indigenous people. In what can be considered the first use of biological weapons, the disease led to extensive depopulation and weakening of Native American tribes5. The poxviruses, or Poxviridae, to which the variola virus (VARV), vaccinia virus (VACV), and cowpox virus (CPXV) belong, comprise a large family of double-stranded DNA (dsDNA) viruses with a brick-shaped outer envelope measuring approximately 240 by 300 nm, making them one of the largest known families of viruses (Figure 1)6. Poxviruses also have highly elaborate replication mechanisms. Unlike most dsDNA viruses, which hijack host proteins involved in DNA replication, poxviruses encode their own replication and transcription machinery7. Upon entry into a cell by membrane fusion, viral DNA replication and gene expression are carried out in a controlled, coordinated fashion that leads to morphogenesis into different infectious forms specialized for particular modes of transmission (for instance, cell-to-cell spread
or large-scale dissemination)8,9. In contrast to cowpox, which can infect a broad range of mammalian host species, the natural reservoir of VARV is restricted to humans10. The virus's inability to survive outside the human host, combined with an effective vaccine, enabled smallpox to become the first and only human infectious disease ever eradicated (though there are concerns that virus persisting in corpses buried in permafrost could escape as climate change thaws the ground).
THE HISTORY OF VARIOLATION AND VACCINATION
Evidence of "variolation" or "inoculation" has been documented in China, Africa, and the Middle East long before Jenner's experiment in 1796. Variolation can be traced back at least a century earlier to parts of the Ottoman Empire, where dried pox scab powder or pus from a patient with a mild case of smallpox was transferred with needles to non-infected individuals11. The practice may have originated in ancient China, where records dating back to the 1500s describe stuffing a cotton ball soaked in smallpox pus up the nose of a susceptible person (Figure 2)12. The Chinese and Turkish methods were not completely risk-free, however. A small proportion (2-3%) of individuals came down with the disease following inoculation, and syphilis and hepatitis were sometimes unintentionally transmitted, but the risk was much lower than the one-in-four chance of being infected during an epidemic11. Nor was Jenner the first person to introduce the practice to Britain. Lady Mary Wortley Montagu, wife of a British diplomat in Constantinople, learned of the practice while accompanying her husband on a trip to Turkey. Having lost her brother to smallpox and being a smallpox survivor herself, Montagu subjected her son to inoculation. Upon returning to Britain, Montagu encouraged other wealthy and royal families to get inoculated. The rising popularity of inoculation among nobles also gave birth to the original anti-vaccination movement, motivated more by anti-Royalist sentiment and suspicion of the practice's "barbarian" Constantinople origin than by legitimate concern over the practice itself11,13. The practice of "vaccination" with cowpox did not arise until the mid-1700s, when farmer Benjamin Jesty protected his family from a smallpox epidemic with pus from a cow infected with cowpox13.
Jenner, who took the lion's share of credit for smallpox vaccination and touted his title as the "father of vaccination," can be credited with conducting well-documented studies showing
the connection between cowpox infection and protection from smallpox. He also determined that cowpox maintained its immunogenicity when passed from human to human14. His published manuscripts eventually convinced the scientific community that vaccination was both significantly safer and more effective than variolation11,14.
THE ORIGIN OF VACCINIA
For almost two centuries after Jenner's discovery, the scientific community equated vaccinia (VACV), the virus constituting the smallpox vaccine, with cowpox (CPXV). While the name derives from "variolae vacciniae," meaning smallpox of the cow, Jenner actually performed multiple inoculations and noted that the most effective involved a chain of transmission from horse to cow to human15. Complicating the situation is the fact that horsepox, which has only ever been reported in Europe, is now believed to be extinct15. As a member of the British Royal Society put it in 1889, "What is vaccinia? Is it cow-pox, or horse-pox...or horse-pox cow-pox, or small-pox cow-pox?"16 Some progress was made when British researcher Allan Downie showed in 1939 that cowpox and vaccinia exhibited biochemical differences in agglutination, neutralization, and complement fixation and therefore could not possibly be the same virus15,17. DNA endonuclease restriction site mapping beginning in the late 1970s revealed that VACV belonged to the same Poxviridae family and Orthopoxvirus genus but was otherwise genetically distinct from other orthopoxviruses (OPVs)18,19. The genome of CPXV, in contrast, is larger; in fact, CPXV is unique in having the largest, and hence most "complete," genome of all poxviruses20. An ancestral CPXV has therefore been hypothesized to be the most likely common ancestor of all poxviruses, with other viruses evolving from it via point mutations, truncation, and complete deletion of genes20. Using poxvirus-specific gene set prediction and sequence comparison techniques, researchers found that no single
poxvirus species has acquired genes not found in CPXV and that the CPXV genome contains every gene present in all other poxvirus genomes21. Reductive evolution, defined by gene loss, likely facilitated speciation by limiting emerging viruses to narrower host ranges, eventually leading to the divergence of poxvirus species as genetic crossover became more limited with growing reproductive isolation.
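The gene-content claim above, that every other poxvirus genome's gene set is contained in CPXV's, is at its core a set-containment check. The gene names below are placeholders, not real poxvirus annotations:

```python
# Sketch of the gene-content comparison: if evolution proceeded by gene
# loss from a CPXV-like ancestor, every other genome's gene set should
# be a subset of the CPXV gene set.

def consistent_with_gene_loss(ancestor_genes, descendant_genomes):
    return all(genes <= ancestor_genes for genes in descendant_genomes.values())

# Placeholder gene sets for illustration only.
cpxv = {"g1", "g2", "g3", "g4", "g5", "g6"}
genomes = {
    "VACV": {"g1", "g2", "g4"},
    "VARV": {"g1", "g3", "g4", "g5"},
    "HSPV": {"g1", "g2", "g3", "g4", "g5"},
}
```

A single gene present in some poxvirus but absent from CPXV would falsify the pure gene-loss model, which is why the published comparison across all sequenced genomes is notable.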
EVOLUTION OF VACCINIA VIRUS
It is thought that a virus similar to modern-day cowpox may be the common ancestor of the poxvirus family, but building the evolutionary tree for different poxvirus members has proven more difficult. Recently, full-genome sequencing confirmed the hypothesis that vaccinia is a separate virus and also revealed that CPXV is a polyphyletic group (one in which not all species derive from a single common ancestor, in contrast to a monophyletic group) consisting of three to five lineages, some of which are more closely related to VACV than others4,22,23. Sequencing of viral genomes has also shown that the VACVs used in 19th-century vaccination efforts were not uniform: researchers have identified genetically distinct strains24,25. For instance, a single vial of Dryvax vaccine (a specific smallpox vaccine) contained four different VACV variants24.
Figure 3: Schematic of viral recombination from co-infection of a host cell by two different strains. Recombination of viral DNA within the cell can lead to the generation of new hybrid viruses with potentially new immunogenicity and pathogenicity profiles, such as the ability to infect host species neither of the original strains could. Source: Wikimedia Commons
Despite understanding the genetics behind vaccinia, elucidating the evolutionary relationships between extant vaccinia strains and identifying a common ancestral strain is complicated by several factors. One is the ease with which recombination events can occur across closely related OPVs and across different vaccinia strains. When two different viruses coinfect a cell, they may swap genetic information, resulting in the formation of new, hybrid viruses (Figure 3). One study simulating coinfection in vitro with two vaccinia strains found that parental DNA was
swapped an average of 18 times per genome, resulting in heterologous progeny with many new combinations of genes and potentially altered immunogenicity and pathogenicity profiles26. Sub-optimal and frequently unstandardized vaccine manufacturing processes could also have led to horizontal gene transfer via intentional or accidental mixing of strains in human and animal hosts27. A 1963 survey of "vaccine farms" conducted by the WHO revealed 67 different producers spanning 45 countries passaging the virus through cow, sheep, rabbit, water buffalo, or embryonated egg before the WHO took steps to modernize and standardize the process15,28. Despite high genetic diversity across strains, most of it derives from single-nucleotide polymorphisms (SNPs) and in-frame deletions (or insertions) of short tandem repeats, possibly a result of low selective pressure in vaccine manufacturing (as opposed to the higher selective pressure on CPXV in animal passaging)24,29,30. Because imperfect homology is tolerated in recombination events, SNPs do not significantly obstruct mixing in coinfected cells24.
Moreover, there is no known natural host for vaccinia, so it is difficult to determine whether vaccinia-like viruses (VLVs) found in nature are derived from another OPV or from a stray virus that escaped during vaccine manufacturing15. Vaccinia's broad host range enables a vaccinated individual to accidentally transmit the virus to contact animals, including rodents, livestock, and domestic pets31. Though transmission events are rare, escapee strains have been able to establish animal-animal transmission cycles independent of human hosts. For example, scientists have detected epizootic outbreaks among cows and horses in southern Brazil32,33,34. Vaccinia circulating in populations of domestic and indigenous animals can be grouped into multiple strains that are genetically distinct and differ in virulence potential, and these are likely to diverge further as genetic elements are shuffled or new ones are introduced35,36. Furthermore, extensive geographic mixing through vaccine distribution and livestock transportation in the 19th century, especially between Europe and the Americas, makes evolutionary mapping both a scientific and a historical endeavor37,38. As infectious agents were passed around the world, detailed records suggest that some strains may have been introduced to the same region multiple times38. Meanwhile, feral escapee strains continued to evolve in parallel and could have been re-introduced into the human population via zoonotic infection, as
seen by the emergence of VACV outbreaks from contact with infected animals39,40. Despite rapid genetic shuffling and acquisition of mutations, understanding the relationships between VACV strains is not impossible, thanks to full-genome sequencing. Additionally, while recombination occurs very efficiently under certain conditions, physical constraints limit intracellular DNA mixing: each virus replicates within a membrane-enclosed factory called a virosome, and two virosomes must collide and fuse for recombination to occur. About 20% of virosomes never fuse, so recombination has not been so complete that evolutionary relationships are fully obscured24,41.
VACCINATION, EQUINATION, OR BOTH?
Using recombination detection algorithms and genomic alignment of the few remaining linked markers, researchers have constructed phylogenetic trees illustrating evolutionary relationships both among extant VACV strains and between VACV and other poxviruses (Figure 4). Interestingly, full-genome sequencing of the French Lister VACV strain revealed remnants of two CPXV genes, while sequencing of several Dryvax strains revealed the presence of three genes absent in all other vaccinia except two Dryvax clones and the horsepox virus (HSPV), which is believed to have once been prevalent in Europe but is now extinct15,41. In fact, only a single horsepox virus, derived from diseased Mongolian horses in 1976, has ever been isolated and sequenced42. With a 212 kilobase pair genome, it is the only OPV other than CPXV (220 kbp) with a genome larger than 200 kbp42,43. Phylogenetic analysis of conserved regions indicates that HSPV is closely related to sequenced vaccinia isolates and rabbitpox, suggesting that it is a vaccinia-like virus42. Further evidence that at least a subset of smallpox vaccine strains were derived from HSPV came from a 2015 study that placed several vaccinia strains (VACV-IOC and the Brazilian field strains Cantagalo virus and Serro 2 virus) with HSPV in a novel, independent phylogenetic cluster after sequence comparison revealed that they shared several common nonfunctional virulence gene fragments (Figure 4)25. Next-generation sequencing of a smallpox vaccine manufactured in 1902 by the American company H.K. Mulford revealed that the core genome had 99.7% similarity to HSPV, confirming the direct involvement of HSPV in Mulford's manufacturing process44. Since there is no evidence of naturally occurring HSPV in North America, the virus most likely came
Figure 4: Simplified depiction of proposed phylogenetic relationships, showing the newly identified cluster containing horsepox virus and a subset of vaccinia strains. Sequences were obtained via full-genome sequencing and the maximum clade credibility tree was constructed by Bayesian inference as detailed in Megdalia et al. All VACV strains are shown in green font and branch lengths are arbitrary.
from Europe44. From a historical perspective, the presence of HSPV in the modern vaccine is unsurprising. Jenner himself documented obtaining virus from "grease," a type of equine dermatitis that causes infectious lesions on horse hooves and may have been confused with HSPV41. In the early days of smallpox vaccination, vaccination and equination were often used interchangeably, and two Italian doctors specifically communicated to Jenner about the practice of equination15,41. While the evidence suggests that HSPV is present in at least one subset of vaccinia strains, the origins of the different vaccinia strains and their relationships with CPXV and HSPV have yet to be accurately mapped. If poxvirus evolution by gene loss is generalizable across all members of the family, the fact that CPXV and HSPV have the largest poxvirus genomes seems to support the hypothesis that VACV derived from CPXV- and HSPV-like viruses. More genomic sequencing of HSPV, however, will be needed before drawing conclusions – a challenging endeavor given that HSPV is seemingly extinct. And in any case, the possibility that an extinct, naturally occurring vaccinia virus recombined with CPXV and HSPV to give rise to extant VACV strains cannot be ruled out without further investigation.
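Similarity figures like the "99.7%" reported for the 1902 Mulford vaccine come from comparisons that, at their simplest, reduce to percent identity over aligned sequence. Here is a toy version that skips real alignment and uses invented fragments:

```python
# Percent identity between two pre-aligned sequences of equal length:
# the fraction of positions at which the characters match.

def percent_identity(a, b):
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return 100.0 * matches / len(a)

# Invented aligned fragments for illustration only.
vaccine_core = "ATGGCGTTACCG"
hspv_core = "ATGGCGTTACCA"    # one mismatch, at the final position
```

In practice, whole-genome comparisons first align the sequences (handling insertions, deletions, and rearrangements) before counting matches, but the resulting headline number is this same ratio.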
NEED FOR FURTHER THERAPEUTIC AND PROPHYLACTIC DEVELOPMENT
While the risk of smallpox infection is now extremely low (notwithstanding the case of a bioterrorism), the increasing proportion of unvaccinated individuals has led to increased prevalence of zoonotic infections from other OPVs such as VACV, CPXV and monkeypox (MPXV). Despite the high efficacy of the vaccinia vaccine used during past smallpox epidemics, fully replicative strains, even attenuated ones, come with a risk of adverse
medical complications that have led to significant host morbidity45. Infants, the elderly, and immunocompromised individuals are the most susceptible to vaccine-triggered side effects or VACV infection. An increasing number of OPV outbreaks through contact with VACV-infected livestock and pets (or laboratory accidents with vaccinated animals) have been reported worldwide in the past decade39,46–48. The growing susceptibility of the general population and the risk of zoonotic infection with VACV or another OPV call for reconsideration of current vaccination policy and development of safer, more protective vaccines and antiviral agents.
Developing smallpox vaccines today faces the additional obstacle that smallpox disease no longer exists in humans. Evaluation and licensing rely on animal models of closely related OPVs, but no single animal-virus combination can fully recapitulate the disease profile of variola in humans49. Interestingly, scientists from the University of Alberta synthesized an infectious horsepox virus vaccine from DNA fragments that exhibited lower virulence than VACV vaccines but still protected mice from a lethal dose of virus, suggesting potential for further development as an alternative to VACV-based vaccines50. Research on not only smallpox but also other enveloped viruses, such as cytomegalovirus, influenza virus, HIV, and hepatitis C virus, has given the field an increasing understanding of the driving forces behind the elicitation of a protective immune response. Therefore, the ideal smallpox vaccine may shift away from attenuated viral particles toward non-infectious subunits that can induce effective immune responses without the concerns posed by whole viral particles. Vaccines involving rational immunogen design of viral proteins or nucleic acids encoding pox
genes have shown promise in vitro and in limited animal studies, though whether they present a clear advantage over the current vaccine remains to be seen10,51. In either case, despite the eradication of the virus, continued research toward vaccine development would be advantageous in case of intentional or accidental release of smallpox back into the population.
ACKNOWLEDGMENTS
The author would like to thank Dr. David A. Leib for helpful comments and suggestions, as well as Sam Neff for copyediting.

References
1. Damon, I. K., Damaso, C. R. & McFadden, G. Are We There Yet? The Smallpox Research Agenda Using Variola Virus. PLoS Pathog. 10, e1004108 (2014).
2. Edward Jenner - Jenner Institute. https://www.jenner.ac.uk/edward-jenner.
3. Damaso, C. R. Revisiting Jenner's mysteries, the role of the Beaugency lymph in the evolutionary path of ancient smallpox vaccines. The Lancet Infectious Diseases vol. 18 e55–e63 (2018).
4. Dabrowski, P. W., Radonić, A., Kurth, A. & Nitsche, A. Genome-Wide Comparison of Cowpox Viruses Reveals a New Clade Related to Variola Virus. PLoS One 8, e79953 (2013).
5. Patterson, K. B. & Runge, T. Smallpox and the Native American. American Journal of the Medical Sciences vol. 323 216–222 (2002).
6. Baxby, D. Poxviruses. Medical Microbiology (1996).
7. Condit, R. C., Moussatche, N. & Traktman, P. In A Nutshell: Structure and Assembly of the Vaccinia Virion. Advances in Virus Research vol. 65 31–124 (2006).
8. Moss, B. Poxvirus DNA replication. Cold Spring Harb. Perspect. Biol. 5, (2013).
9. Alzhanova, D. & Früh, K. Modulation of the host immune response by cowpox virus. Microbes and Infection vol. 12 900–909 (2010).
10. Melamed, S., Israely, T. & Paran, N. Challenges and achievements in prevention and treatment of smallpox. Vaccines vol. 6 (2018).
11. Hager, T. Ten Drugs: How Plants, Powders, and Pills Have Shaped the History of Medicine. (Abrams Press, 2019).
12. Boylston, A. The origins of inoculation. J. R. Soc. Med. 105, 309–313 (2012).
13. Riedel, S. Edward Jenner and the History of Smallpox and Vaccination. Baylor Univ. Med. Cent. Proc. 18, 21–25 (2005).
14. Smith, K. A. Edward Jenner and the small pox vaccine. Frontiers in Immunology vol. 2 (2011).
15. Esparza, J., Schrick, L., Damaso, C. R. & Nitsche, A. Equination (inoculation of horsepox): An early alternative to vaccination (inoculation of cowpox) and the potential role of horsepox virus in the origin of the smallpox vaccine. Vaccine vol. 35 7222–7230 (2017).
16. Taylor, H. H. What is Vaccinia? Br. Med. J. 2, 951–952 (1889).
17. Downie, A. W. A study of the lesions produced experimentally by cowpox virus. J. Pathol. Bacteriol. 48, 361–379 (1939).
18. Muller, H. K. et al. Comparison of five poxvirus genomes by analysis with restriction endonucleases HindIII, BamI and EcoRI. J. Gen. Virol. 38, 135–147 (1978).
19. Esposito, J. J. & Knight, J. C. Orthopoxvirus DNA: A comparison of restriction profiles and maps. Virology 143, 230–251 (1985).
20. Hatcher, E. L., Hendrickson, R. C. & Lefkowitz, E. J. Identification of Nucleotide-Level Changes Impacting Gene Content and Genome Evolution in Orthopoxviruses. J. Virol. 88, 13651–13668 (2014).
21. Hendrickson, R. C., Wang, C., Hatcher, E. L. & Lefkowitz, E. J. Orthopoxvirus Genome Evolution: The Role of Gene Loss. Viruses 2, 1933–1967 (2010).
22. Carroll, D. S. et al. Chasing Jenner's Vaccine: Revisiting Cowpox Virus Classification. PLoS One 6, e23086 (2011).
23. Mauldin, M. R. et al. Cowpox virus: What's in a name? Viruses 9, (2017).
24. Qin, L., Upton, C., Hazes, B. & Evans, D. H. Genomic Analysis of the Vaccinia Virus Strain Variants Found in Dryvax Vaccine. J. Virol. 85, 13049–13060 (2011).
25. Medaglia, M. L. G. et al. Genomic Analysis, Phenotype, and Virulence of the Historical Brazilian Smallpox Vaccine Strain IOC: Implications for the Origins and Evolutionary Relationships of Vaccinia Virus. J. Virol. 89, 11909–11925 (2015).
26. Qin, L. & Evans, D. H. Genome Scale Patterns of Recombination between Coinfecting Vaccinia Viruses. J. Virol. 88, 5277–5286 (2014).
27. Garcel, A., Crance, J. M., Drillien, R., Garin, D. & Favier, A. L. Genomic sequence of a clonal isolate of the vaccinia virus Lister strain employed for smallpox vaccination in France and its comparison to other orthopoxviruses. J. Gen. Virol. 88, 1906–1916 (2007).
28. Singh, R. K., Balamurugan, V., Bhanuprakash, V., Venkatesan, G. & Hosamani, M. Emergence and reemergence of vaccinia-like viruses: Global scenario and perspectives. Indian Journal of Virology vol. 23 1–11 (2012).
29. Osborne, J. D. et al. Genomic differences of Vaccinia virus clones from Dryvax smallpox vaccine: The Dryvax-like ACAM2000 and the mouse neurovirulent Clone-3. Vaccine 25, 8807–8832 (2007).
30. Coulson, D. & Upton, C. Characterization of indels in poxvirus genomes. Virus Genes 42, 171–177 (2011).
31. Lima, M. T. et al. An update on the known host range of the Brazilian vaccinia virus: An outbreak in buffalo calves. Front. Microbiol. 10, (2019).
32. Damaso, C. R. A., Esposito, J. J., Condit, R. C. & Moussatché, N. An emergent poxvirus from humans and cattle in Rio de Janeiro state: Cantagalo virus may derive from Brazilian smallpox vaccine. Virology 277, 439–449 (2000).
33. Quixabeira-Santos, J. C., Medaglia, M. L. G., Pescador, C. A. & Damaso, C. R. Animal movement and establishment of vaccinia virus Cantagalo Strain in Amazon Biome, Brazil. Emerg. Infect. Dis. 17, 726–729 (2011).
34. Campos, R. K. et al. Assessing the variability of Brazilian Vaccinia virus isolates from a horse exanthematic lesion: Coinfection with distinct viruses. Arch. Virol. 156, 275–283 (2011).
35. Ferreira, J. M. S. et al. Virulence in Murine Model Shows the Existence of Two Distinct Populations of Brazilian Vaccinia virus Strains. PLoS One 3, e3043 (2008).
36. Felipetto Cargnelutti, J. et al. Vaccinia viruses isolated from cutaneous disease in horses are highly virulent for rabbits. Microb. Pathog. 52, 192–199 (2012).
37. Baxby, D. The Jenner bicentenary: the introduction and early distribution of smallpox vaccine. FEMS Immunol. Med. Microbiol. 16, 1–10 (1996).
38. Franco-Paredes, C., Lammoglia, L. & Santos-Preciado, J. I. The Spanish Royal Philanthropic Expedition to Bring Smallpox Vaccination to the New World and Asia in the 19th Century. Clin. Infect. Dis. 41, 1285–1289 (2005).
39. Abrahão, J. S. et al. Outbreak of severe zoonotic vaccinia virus infection, Southeastern Brazil. Emerg. Infect. Dis. 21, 695–698 (2015).
40. Moussatché, N., Damaso, C. R. & McFadden, G. When good vaccines go wild: Feral Orthopoxvirus in developing countries and beyond. Journal of Infection in Developing Countries vol. 2 156–173 (2008).
41. Sánchez-Sampedro, L. et al. The evolution of poxvirus vaccines. Viruses vol. 7 1726–1803 (2015).
42. Tulman, E. R. et al. Genome of Horsepox Virus. J. Virol. 80, 9244–9258 (2006).
43. Qin, L., Favis, N., Famulski, J. & Evans, D. H. Evolution of and Evolutionary Relationships between Extant Vaccinia Virus Strains. J. Virol. 89, 1809–1824 (2015).
44. Schrick, L. et al. An Early American Smallpox Vaccine Based on Horsepox. N. Engl. J. Med. 377, 1491–1492 (2017).
45. Mota, B. E. F. et al. Adverse events post smallpox vaccination: Insights from tail scarification infection in mice with vaccinia virus. PLoS One 6, (2011).
46. Cardeti, G. et al. Fatal outbreak in Tonkean macaques caused by possibly novel orthopoxvirus, Italy, January 2015. Emerg. Infect. Dis. 23, 1941–1949 (2017).
47. Lu, B. et al. Outbreak of vaccinia virus infection from occupational exposure, China, 2017. Emerg. Infect. Dis. 25, 1192–1195 (2019).
48. Antwerpen, M. H. et al. Use of next generation sequencing to study two cowpox virus outbreaks. PeerJ 2019, (2019).
49. Chapman, J. L., Nichols, D. K., Martinez, M. J. & Raymond, J. W. Animal models of orthopoxvirus infection. Vet. Pathol. 47, 852–870 (2010).
50. Noyce, R. S., Lederman, S. & Evans, D. H. Construction of an infectious horsepox virus vaccine from chemically synthesized DNA fragments. PLoS One 13, e0188453 (2018).
51. Moss, B. Smallpox vaccines: Targets of protective immunity. Immunol. Rev. 239, 8–26 (2011).
Speeding up Neuro-generation in the Human Body: An Overview of Garbage Collection
BY DANIEL CHO
Cover Image: The Brain. Source: Wikimedia Commons
INTRODUCTION
The nervous system is divided into two components: the central nervous system (CNS), which consists of the brain and spinal cord, and the peripheral nervous system (PNS), which consists of all the nerves that branch out from the CNS. Less protected by the body's skeletal structure, the PNS is more likely to be damaged. Peripheral neuropathy, nerve damage in the PNS, can develop due to a wide variety of factors, the most common being physical injury, diabetes, vascular problems, and autoimmune diseases1. Whether it develops from a genetic disorder or acquired damage, peripheral nerve damage leads to symptoms including muscle weakness, pain, loss of the ability to feel touch, and uncontrollable autonomic nervous system responses such as excess sweating and heat intolerance1. In response to such damage, the body typically regenerates its nerves2. This regenerative process occurs mostly in the PNS; nerve regeneration (neuro-regeneration) in the CNS is unlikely because of its non-permissive growth environment, in which myelin-associated inhibitors expressed by oligodendrocytes limit axon growth3. Neuro-regeneration depends on a close interplay between axons and glial cells3. However, neuro-regeneration in the PNS is slow and often incomplete4. There is, therefore, a growing need for research into methods of enhancing the rate and effectiveness of neuro-regeneration. Current studies have brought forth a few ideas, including electrical stimulation and gene therapy5,6. This paper investigates these methods and provides an overarching analysis of the current state of neuro-regeneration research.
HOW DOES NERVE REGENERATION HAPPEN IN OUR BODIES?
The leading cause of peripheral neuropathy is physical injury (e.g., a sports injury, car crash, or war injury), which produces varying degrees of nerve damage1. The nerve fibers or the myelin sheath insulating them may be damaged, or the nerve can be severed completely. When a nerve in the body is cut or crushed, the damaged axon undergoes a process called Wallerian degeneration. Wallerian degeneration is a necessary prerequisite for neuro-regeneration, as it creates a pro-growth environment for the damaged nerve. A normal intact axon is enwrapped by myelinating Schwann cells, with fibroblasts scattered between nerve fibers; the fibroblasts aid in producing nerve growth factor7. Once the tissue is damaged, the Schwann cells' innate injury-sensing mechanism decreases the synthesis of myelin lipids. Owing to reduced myelin production, the myelin sheaths separate from the axons and deteriorate into bead-like structures. The Schwann cells then degrade their own myelin, while macrophages assemble at the lesion site to phagocytose the myelin debris7. Opsonins assist the macrophages by labeling myelin debris for removal. Wallerian degeneration thus ensures that the nerve is properly "cleaned" and prepared for regeneration to occur. When a nerve experiences damage, nerve growth factor (NGF) levels increase as more NGF mRNA is transcribed. Macrophages play a large role in stimulating Schwann cells and fibroblasts to produce NGF8. NGFs are an important set of proteins that lead to axonal regeneration [Figure A]. NGF synthesis is driven by regeneration-associated genes (RAGs), such as ATF-3, Sox11, and Cap-23, which induce
expression of proteins that help regenerate a damaged nerve3. The exact mechanism linking these newly synthesized proteins to the act of nerve regeneration is still under investigation. Dr. Vargas and Dr. Barres of Stanford University conducted a literature review of existing work on Wallerian degeneration and found that certain growth factors are expressed at specific levels at different stages of neuro-regeneration to maximize nerve growth following nerve injury7. Acquiring a fuller understanding of the roles that different growth factors play is of great importance for the future of neuro-regenerative research. Axonal regeneration is often hindered in the CNS by putative inhibitors associated with CNS myelin and by a lack of growth factors to promote myelin sheath growth and subsequent nerve regeneration9,10. Because neuro-regeneration in the CNS is so much less likely, current research focuses on neuro-regeneration in the PNS.
NERVE DAMAGE IN THE STATUS QUO
Peripheral neuropathy, in which damaged nerves lose the ability to properly send messages to and from the brain and spinal cord, is a pressing issue that currently affects over 20 million Americans1. Individuals of all demographics, from war veterans to people managing diabetes, can develop peripheral neuropathy. As mentioned earlier, peripheral neuropathy leads to symptoms (such as muscle weakness, numbness, and the inability to feel touch) that can become lifelong complications. Long-term cases of peripheral neuropathy have a negative impact on an individual's quality of life and well-being11. Moreover, this pain is correlated
with higher levels of depression12. Considering the lifelong problems afflicting patients with neuro-degenerative disorders, peripheral nerve injuries have a significant social impact. Consequently, researchers are working hard to find ways to both speed up and enhance the process of neuro-regeneration. For now, most patients dealing with long-term peripheral neuropathy have treatment options that address specific symptoms. Medication and mechanical aids are the most frequently prescribed methods to help patients tolerate the pain brought about by nerve damage1. Individuals experiencing severe complications from peripheral neuropathy typically undergo surgery. Surgery is a good option when nerve growth is too slow or impossible given the severity of the injury (typically a complete tear in the nerve). Specifically, there are two methods of surgery that involve tissue transfer. First, surgeons can use autografts, the transfer of tissue from one spot to another on the patient's own body. Second, surgeons can use allografts, a transplant of tissue from one person to another, for patients who need peripheral nerve reconstruction due to extensive loss of nerve tissue13 [Figure B].
CURRENT RESEARCH: CLINICAL TRIALS
There are currently very few commercially and clinically available treatments for neuro-regeneration. However, two have been well developed and heavily researched. The first is tacrolimus, an immunosuppressant. In mouse models, tacrolimus brings about a more rapid onset of functional recovery and accelerates nerve regeneration overall by affecting intracellular calcium flux and cell-cycle regulation. Tacrolimus influences the internal signaling pathway of a damaged
Figure A: A diagram of the general mechanism of neuro-regeneration. Injury to the nerve triggers a signaling pathway that ultimately leads to the transcription of regeneration-associated genes. Source: Wikimedia Commons
Figure B: Structure of a nerve cell. Source: Wikimedia Commons
nerve to stimulate enhanced production of NGFs. However, due to its immunosuppressive nature, tacrolimus is difficult to test in humans, as it compromises immune function. Long-term usage increases the risk of infection, drug toxicity, and other complications14. Still, many researchers are hopeful that tacrolimus can act as a catalyst for additional research into medications that enhance neuro-regeneration in the body, particularly for patients who receive allografts or autografts due to severe nerve damage. In addition, Dr. Gordon of the University of Alberta, a specialist in neuroscience, has begun to investigate electrical stimulation as a means to increase the rate of nerve repair and recovery in human subjects. Many studies have found that electrical stimulation accelerates axonal regeneration in laboratory animals. A randomized controlled study in human subjects, in which subjects were assigned to one of several clinical treatments, identified that electrical stimulation of the cAMP pathway enhances the transcription of cytoskeletal proteins and cytokines, both of which play a role in nerve regeneration. Through the activation of PKA and the cAMP response element binding protein, cAMP activates the transcription of cytoskeletal proteins required to accelerate nerve regeneration15. The study demonstrated positive outcomes for patients treated with brief electrical stimulation post-surgery15. Electrical stimulation has also been found to upregulate the expression of growth factors in Schwann cells through the activation of voltage-gated calcium channels4. However, little research has analyzed neuro-regeneration in individuals already dealing with chronic peripheral neuropathy.
In terms of treating peripheral neuropathy through tissue transfer, researchers are trying to find alternatives to autografts and allografts because of their taxing nature for the individual the tissue is obtained from. Namely, acellular nerve grafts and tissue-engineered nerve grafts are currently being developed by biomedical engineers as alternatives to live tissue. In theory, these artificial nerve grafts are preferable because they can be uniquely modified to fit a patient's needs, have better incorporation rates into damaged nerves, reduce the demand for tissue donors, and assist the optimal delivery of neurotrophic factors, since they are artificially synthesized in response to a specific patient's injury16 [Figure C].
CURRENT RESEARCH: GENE THERAPY
Gene therapy is an experimental technique that involves altering a DNA sequence to treat disease. In theory, gene therapy can serve a few purposes: replace a mutated gene, inactivate a mutated gene, or introduce a new gene17. As of now, there are two main methods of carrying out gene therapy. First, non-homologous end joining (NHEJ) introduces indel mutations at the target site18. Second, homology-directed repair (HDR) mends double-strand breaks by offering a homologous repair template for the CRISPR-Cas9 gene-targeting system to introduce specific changes at a chosen site in the genome19. In neuro-regeneration, researchers are looking to see whether genes can be constructed to promote the expression of neurotrophic factors, proteins supporting the growth of neurons20. Gene therapy has been mainly analyzed in the scope of adeno-associated viruses (AAVs). Different AAVs prefer to infect different
Figure C: An AAV carrying the therapeutic cargo binds to the cell membrane and enters the cell through endocytosis. The vector reaches the nucleus, where it injects the artificially synthesized gene. The cell takes this genetic information and creates a new protein. Source: Wikimedia Commons
cell types. Interestingly, these viruses infect humans but have no inherent pathological consequences for the human body. They are currently being used as vectors for gene therapy: because they are naturally capable of inserting genetic material into the human genome, they cause sustainable, long-term changes to the DNA. Identifying an AAV that targets cells of the nervous system is a critical step in the development of gene therapy for neuro-regeneration. AAVs have been used to infect brainstem neurons, fluorescently labeling their axons to make axon regeneration measurable21. Researchers are beginning to investigate how these AAVs can be used to alter the gene expression of certain growth factors. In theory, AAVs acting as carriers for therapeutic cargo will be able to change the intrinsic state of neurons and thereby modify various signaling pathways22. AAVs are one mode of therapy in which researchers are actually focusing on neuro-regeneration in the CNS, a much harder endeavor given its anti-growth environment. However, when looking at the PNS, there is still room for improvement. Lentiviral vectors have proven effective at integrating their genetic information into the host cell genome in mouse models but run the risk of harming the targeted cells6. AAV vectors have seldom been used in PNS neuro-regeneration studies, since most researchers and physicians believe existing treatment options are adequate. However, it is hypothesized that they could work as an effective gene-delivery tool for Schwann cells6.
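The contrast between the two repair outcomes described in this section, error-prone NHEJ versus template-directed HDR, can be caricatured in a short, purely illustrative sketch. The sequences, cut site, and deletion sizes below are all made up; real repair involves enzymatic machinery this toy model ignores.

```python
# Toy model (illustration only) of the two DNA-repair outcomes discussed
# above: NHEJ introduces a small indel at the cut site, while HDR splices
# a supplied repair template across the break. All sequences are invented.
import random

def cut(seq: str, site: int) -> tuple:
    """Model a double-strand break at `site`."""
    return seq[:site], seq[site:]

def nhej(seq: str, site: int, rng: random.Random) -> str:
    """Error-prone rejoining: delete 1-3 bases at the break."""
    left, right = cut(seq, site)
    return left + right[rng.randint(1, 3):]

def hdr(seq: str, site: int, template: str) -> str:
    """Template-directed repair: insert the homologous template precisely."""
    left, right = cut(seq, site)
    return left + template + right

gene = "ATGACCGTTGAA"
rng = random.Random(0)
print(nhej(gene, 6, rng))    # shorter than the original: an indel mutation
print(hdr(gene, 6, "GGG"))   # precise, predictable insertion from template
```

The design point the sketch mirrors is the one in the text: NHEJ's output is unpredictable (useful for knocking a gene out), while HDR's output is determined by the template (useful for introducing a chosen change).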
FUTURE IMPLICATIONS
The main challenge now is finding a way
to translate research on neuro-regenerative therapy from animal models (mainly mice) to human clinical trials. The applicability of these therapies to the human body has yet to be tested extensively due to the novelty of the neuro-regeneration field. Additionally, it is now important to begin looking for molecules that can act as carriers to deliver the necessary therapeutic cargo, molecules that can accurately navigate through the body and find the damaged nerves. While AAVs are an emerging contender as viral vectors for gene therapy, there is still a lot of interest in suitable non-viral vectors. Non-viral vectors are a much safer option given their lack of infectious capabilities, and AAVs, though non-pathogenic, are too expensive to be reasonable for most people. Non-viral vectors such as cationic polymers, lipids, and engineered nanoparticles are in the works. Unfortunately, clinical trials for gene therapy are heavily skewed toward the use of viral vectors, and it is still uncertain if and how researchers can effectively deliver genetic material to target cells through non-viral vectors22. Thus, a clearer understanding of their pathways and impacts on the human body will be of huge importance in developing a more cost-effective and safer method of gene therapy that can help tackle peripheral neuropathy and even nerve damage in the CNS.
CONCLUSION
Neuro-regeneration is an emerging field in healthcare. Due to its novelty, there is still much work to be done to understand how nerve regeneration occurs and how the process can be enhanced. Currently, however, there is a clear understanding that the body has processes in
place to promote nerve regeneration. Wallerian degeneration is a necessary prerequisite, cleaning up debris while promoting the production of various growth factors that help the damaged nerves regrow. Current research also suggests that speeding up nerve regeneration is, in fact, a feasible goal: clinical strategies and gene therapies developed in mouse models have shown resounding success. Now, the challenge is to find ways to safely and effectively translate this research to the human body and develop treatment options that are commercially available. Though nerve damage does not affect as broad a patient population as cancer or heart disease, it is still a highly prevalent issue that afflicts many individuals with long-term symptoms. Given the complexity of the PNS and the CNS, it will be interesting to see what the future holds.

References
[1] Peripheral Neuropathy Fact Sheet. (2019). National Institute of Neurological Disorders and Stroke.
[2] Symptoms of Peripheral Neuropathy. (2016). The Foundation for Peripheral Neuropathy.
[3] Huebner EA, Strittmatter SM. Axon regeneration in the peripheral and central nervous systems. Results Probl Cell Differ. 2009;48:339–351. doi:10.1007/400_2009_19
[4] Grinsell D, Keating CP. (2014). Peripheral Nerve Reconstruction after Injury: A Review of Clinical and Experimental Therapies. BioMed Research International. doi:10.1155/2014/698256
[5] Gordon T. (2016). Strategies to promote peripheral nerve regeneration: electrical stimulation and/or exercise. European Journal of Neuroscience, 43.
[6] Hoyng SA, de Winter F, Tannemaat MR, Blits B, Malessy MJ, Verhaagen J. Gene therapy and peripheral nerve repair: a perspective. Front Mol Neurosci. 2015;8:32. doi:10.3389/fnmol.2015.00032
[7] Vargas ME, Barres BA. Why Is Wallerian Degeneration in the CNS So Slow? Annu Rev Neurosci. 2007;30(1):153-179. doi:10.1146/annurev.neuro.30.051606.094354
[8] Heumann R, Korsching S, Bandtlow C, Thoenen H. Changes of nerve growth factor synthesis in nonneuronal cells in response to sciatic nerve transection. J Cell Biol. 1987;104(6):1623–1631. doi:10.1083/jcb.104.6.1623
[9] He Z, Koprivica V. The Nogo signaling pathway for regeneration block. Annu Rev Neurosci. 2004;27(1):341-368. doi:10.1146/annurev.neuro.27.070203.144340
[10] Furusho M, Dupree JL, Nave K-A, Bansal R. Fibroblast Growth Factor Receptor Signaling in Oligodendrocytes Regulates Myelin Sheath Thickness. J Neurosci. 2012;32(19):6631. doi:10.1523/JNEUROSCI.6005-11.2012
[11] Wojtkiewicz DM, Saunders J, Domeshek L, Novak CB, Kaskutas V, Mackinnon SE. Social impact of peripheral nerve injuries. Hand (N Y). 2015;10(2):161–167. doi:10.1007/s11552-014-9692-0
[12] Bailey R, Kaskutas V, Fox I, Baum CM, Mackinnon SE. Effect of Upper Extremity Nerve Damage on Activity Participation, Pain, Depression, and Quality of Life. Journal of Hand Surgery. 2009;34(9):1682-1688. doi:10.1016/j.jhsa.2009.07.002
[13] Ray WZ, Mackinnon SE. Management of nerve gaps: autografts, allografts, nerve transfers, and end-to-side neurorrhaphy. Exp Neurol. 2010;223(1):77–85. doi:10.1016/j.expneurol.2009.03.031
[14] Tung TH. Clinical strategies to enhance nerve regeneration. Neural Regen Res. 2015;10(1):22–24. doi:10.4103/1673-5374.150641
[15] Gordon T, Amirjani N, Edwards DC, Chan KM. Brief post-surgical electrical stimulation accelerates axon regeneration and muscle reinnervation without affecting the functional measures in carpal tunnel syndrome patients. Experimental Neurology. 2010;223(1):192-202. doi:10.1016/j.expneurol.2009.09.020
[16] Patel N, Lyon K, Huang J. An update–tissue engineered nerve grafts for the repair of peripheral nerve injuries. Neural Regen Res. 2018;13(5):764. doi:10.4103/1673-5374.232458
[17] What is Gene Therapy. (2019). Genetics Home Reference.
[18] Zaboikin M, Zaboikina T, Freter C, Srinivasakumar N. (2017). Non-Homologous End Joining and Homology Directed DNA Repair Frequency of Double-Stranded Breaks Introduced by Genome Editing Reagents. PLoS ONE 12(1): e0169931. doi:10.1371/journal.pone.0169931
[19] Hahn F, Eisenhut M, Mantegazza O, Weber APM. Homology-Directed Repair of a Defective Glabrous Gene in Arabidopsis With Cas9-Based Gene Targeting. Frontiers in Plant Science. 2018;9:424. doi:10.3389/fpls.2018.00424
[20] de Winter F, Hoyng S, Tannemaat M, et al. Gene therapy approaches to enhance regeneration of the injured peripheral nerve. European Journal of Pharmacology. 2013;719(1-3):145-152. doi:10.1016/j.ejphar.2013.04.057
[21] Williams RR, Pearse DD, Tresco PA, Bunge MB. The assessment of adeno-associated vectors as potential intrinsic treatments for brainstem axon regeneration. J Gene Med. 2012;14(1):20–34. doi:10.1002/jgm.1628
[22] Jayant RD, Sosa D, Kaushik A, et al. Current status of non-viral gene therapy for CNS disorders. Expert Opin Drug Deliv. 2016;13(10):1433–1445. doi:10.1080/17425247.2016.1188802
BIOFUELS
BBB: Better Biofuel Business
BY DEV KAPADIA
“Because we are now running out of gas and oil, we must prepare quickly for a third change to strict conservation and to the use of coal and permanent renewable energy sources, like solar power.” President Jimmy Carter addressed the nation with these words in a 1977 speech on his concerns about sustainable energy1. Carter's words are prescient today, except that the situation has since worsened: we are now running out of coal as well. Energy is required for many functions necessary to sustain human life. Food production, for instance, uses agricultural technology that requires fuel, which may not be available to developing nations if fuel prices continue to rise due to shortages2. Earth is projected to run out of oil by the end of 2052, gas by the end of 2060, and coal by the end of 20903. Because these resources take hundreds, even thousands, of years to form, we cannot simply wait for more to be produced. We need another option. A discussion of potential alternative energy sources twenty years ago would have included three candidates: chemical energy derived from the absorption of sunlight, nuclear reactions that release energy upon fusion, and thermomechanical energy from wind, water, or geological sources4. In recent years, however, another alternative has demonstrated efficacy similar to that of fossil fuels while being sustainable and environmentally friendly: biofuels. Biofuels encompass any source of energy produced
from renewable organic matter, meaning any biological material that is living or was once living. Chief among these is biomethane, which has emerged as a promising and profitable development. Biomethane exemplifies the qualities that make biofuels attractive: sustainability, affordability, and efficacy. If biomethane is an environmentally friendly, sustainable, and effective energy source, what is stopping its implementation? While biomethane can provide a reliable source of energy in the future, current concerns regarding cost and efficacy are preventing its adoption as a legitimate alternative energy source5. These concerns are valid, but those who hold them are often unaware of the benefits biomethane can bring with further research and development. This article elaborates on the production of biomethane, offers a brief analysis of the potential profitability of investing in it, and concludes with a discussion of current barriers to biomethane acceptance, including a refutation of commonly held concerns regarding biofuels.
Cover Image: The production of biogas. Source: Wikimedia Commons
BIOMETHANE PRODUCT 1: BIOGAS PRODUCTION

To picture the bright future of biomethane as an effective alternative energy source, a basic understanding of its production is necessary.
Biogas is an intermediate in the production of biomethane. While no exact composition can be given for biogas, as it varies with the composition of the inputs, it is generally about 60% methane and about 40% carbon dioxide, with the remainder a mixture of trace gases6. Biogas can be used as a fuel directly, but doing so is ill-advised because its impurities can cause mechanical deterioration7. Therefore, while there is substantial research into optimizing biogas production, there is also plenty of research into upgrading biogas into a much more useful form known as biomethane. Two primary methods are used for biogas production: anaerobic digestion of organic waste by anaerobic bacteria, and chemical breakdown of biodegradable landfill waste by chemical reactions and microbes, especially anaerobic methanogens. The latter is less efficacious, as it depends on the composition and amount of waste as well as a variety of environmental factors8. The former can turn biomass (e.g., farm wastes or energy crops) into biogas under a variety of environmental conditions, including psychrophilic (12-16 °C), mesophilic (35-37 °C), or thermophilic (55-60 °C) conditions6. Anaerobic digestion is the breakdown of organic matter by obligate anaerobic bacteria. The production of biogas via anaerobic digestion can be split into three stages: hydrolysis, acidogenesis, and methanogenesis. During hydrolysis, the goal is to convert insoluble complex organic molecules into soluble components. This is achieved when hydrolytic enzymes, secreted by microbes, hydrolyze the complex organic molecules (e.g., proteins, lipids, and carbohydrates) into monomers. This step can limit the rate of the anaerobic digestion process, which leads industrial manufacturers to add chemical reagents to expedite it6.
Figure A: The setup for the water wash system. Biogas enters the high-pressure column while water flows from the top. Methane gas exits the top while water filled with CO2 enters the second column, is stripped of its CO2 and then re-enters into the first tube9. Source: Biocycle
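The three digestion stages described in this section reduce to a handful of overall reactions. The sketch below is a simplified textbook summary, with cellulose and glucose standing in for the complex feedstock; these equations are illustrative and are not drawn from the cited sources:

```
Hydrolysis:      (C6H10O5)n + n H2O  ->  n C6H12O6                 (e.g. cellulose to glucose)
Acidogenesis:    C6H12O6 + 2 H2O     ->  2 CH3COOH + 2 CO2 + 4 H2
Methanogenesis:  CO2 + 4 H2          ->  CH4 + 2 H2O               (hydrogenotrophic)
                 CH3COOH             ->  CH4 + CO2                 (acetoclastic)
```

Note that both methanogenesis routes end in methane, but the acetoclastic route also regenerates CO2, which is one reason raw biogas always carries a large CO2 fraction.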
During the next step, acidogenesis, acetogenic (acid-forming) bacteria produce simple organic acids from the monomers of the previous step. The most typical of these are acetic acid (CH3COOH), butyric acid (C4H8O2), ethanol (C2H5OH), and propionic acid (CH3CH2COOH). The reactions that produce these acids give off carbon dioxide and hydrogen gas, and once the acids are produced, they can undergo the last process that eventually produces biogas6. The last step of biogas production is methanogenesis. This step can proceed either by reducing carbon dioxide with hydrogen or by cleaving acetic acid molecules to generate both carbon dioxide and methane. Because of the limited hydrogen concentration in anaerobic digestion environments, cleaving acetic acid produces the majority of the methane, despite carbon dioxide reduction being the more efficient process. After methanogens have reduced carbon dioxide and cleaved acetic acid, the resulting gas is a mixture of methane and carbon dioxide with trace amounts of hydrogen, hydrogen sulfide, ammonia, siloxanes, and other substances. Because the anaerobic process produces methane along with several other gases, the resulting biogas is impure and requires several further steps to eventually produce biomethane6.

BIOMETHANE PRODUCT 2: BIOGAS UPGRADING

The conversion of biogas to biomethane is commonly known as “biogas upgrading.” The proportions of the contents of biogas depend on the pH (acidity) of the environment along with the nature of the substrate. Usually, the methane proportion is around 60% and the carbon dioxide proportion around 40%, with the rest comprising trace amounts of various other compounds; however, methane is the only gas useful for energy production, and it will compose almost all of the final biomethane. In fact, natural gas specifications in some countries require that the methane content of biomethane be 95% or higher. Four methods of biogas upgrading are currently accepted industry practice: water wash, pressure swing adsorption (PSA), amine
scrubbing, and membrane separation. The water wash technique is currently the most widely used technology for biogas upgrading. It relies on the increased solubility of CO2 and H2S relative to methane. Two columns are used, one at high pressure (6-10 bar) and the other at low pressure (2.5-3.5 bar). These columns are usually filled with packing material to increase the transfer of the gases and liquids in the process. A variety of packing materials can be selected, and there is much past and current research investigating the most effective material for gas and liquid transfer. Biogas is injected into the bottom of the highly pressurized column while water is pumped in from the top. Because CO2 and H2S are more soluble than methane, these compounds dissolve into the downward-flowing water along with small amounts of methane, leaving the large majority of the methane to rise to the top and be harvested as biomethane. Because CO2 stays dissolved only while pressure is maintained and escapes when the pressure drops, the difference in pressure between the columns is the key to recovering CO2 from the water. The water containing CO2, H2S, and methane is then transferred to the low-pressure column, where the decrease in pressure causes the CO2 to bubble out of the water and escape from the top of the second column into either a separate recovery tank or the atmosphere. Because of the large amounts of water required for this process, many companies use “regenerative absorption” to reclaim clean water from the dirty water left over in the second column. Regenerative absorption is carried out by decompressing the gas-filled water in another column held at atmospheric pressure; the dissolved gases are released, and the water is then diverted back to the high-pressure column8. The next most common method of biogas upgrading is pressure swing adsorption, or PSA. Firstly, it is important to note the
difference between absorption (the dissolving of a fluid within a liquid or solid absorbent), which is vital in the amine scrubbing process, and adsorption (the adherence of molecules to a surface), which is the key to PSA. PSA is based on the principle that pressurized gases are attracted to solid surfaces, so increasing the pressure inside the columns draws more gas onto the solid surfaces within them. Usually four columns are used for the process, and multiple columns are almost always used to ensure the process remains continuous. In PSA, compressed biogas enters a highly pressurized tank containing adsorption material, such as zeolites or mesoporous silicas, that selectively retains CO2. Because high pressure causes gases (especially CO2 in this case) to adhere to the adsorbing material, the adsorbent selectively retains the CO2 along with trace amounts of the other gases already present in small proportions (N2, O2, H2O, and H2S), leaving the methane to rise and escape from the top of the column to a holding tank. Once the adsorption material is saturated with gases, the biogas feed moves on to the next column and the same process continues there. Meanwhile, the pressure in the saturated column is decreased and the gases trapped on the adsorption material are released8. A third method of biogas upgrading is amine scrubbing. The setup is similar to water washing in that there are two columns, but this method uses an absorption (not adsorption) column, from which the methane escapes, kept at a lower temperature (about 30 °C), while the other column, which strips the CO2, is kept at a higher temperature (upwards of 100 °C). In the absorption column, biogas flows in from the bottom while the aqueous amine solution flows down from the top. CO2 chemically reacts with the amine solution and binds to it, along with H2S.
The amine solution, now carrying CO2 and H2S, is then directed to the stripping column while methane escapes from the top of the absorption column. The high temperatures in the stripping column break the chemical bonds between the CO2 and the amine solution, allowing the CO2 to escape to the atmosphere or be reused. The amine solution is then cooled and rerouted back into the absorption tank for reuse8.
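Both the water wash and the amine column exploit the fact that CO2 and H2S dissolve in water far more readily than methane does. A rough Henry's-law estimate makes the point; the solubility constants and the 8 atm column pressure below are approximate illustrative values, not figures from the cited sources:

```python
# Approximate Henry's-law solubilities in water at ~25 °C, in mol/(L·atm).
# Rough literature values, used here only for illustration.
SOLUBILITY = {"CO2": 3.4e-2, "CH4": 1.4e-3, "H2S": 1.0e-1}

def dissolved(gas, partial_pressure_atm):
    """Equilibrium concentration (mol/L) of a gas dissolved in water."""
    return SOLUBILITY[gas] * partial_pressure_atm

# A hypothetical column at 8 atm fed 60/40 biogas:
# p(CH4) = 0.60 * 8 = 4.8 atm, p(CO2) = 0.40 * 8 = 3.2 atm
co2 = dissolved("CO2", 3.2)
ch4 = dissolved("CH4", 4.8)
print(f"CO2: {co2:.3f} mol/L, CH4: {ch4:.4f} mol/L ({co2 / ch4:.0f}x more CO2)")
```

Even though methane's partial pressure is higher, roughly sixteen times more CO2 ends up in the water, which is exactly the separation the column relies on.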
Figure B: The setup for the water wash system. Biogas enters the high-pressure column while water flows from the top. Methane gas exits the top while water filled with CO2 enters the second column, is stripped of its CO2 and then re-enters into the first tube9. Source: Biocycle
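Whatever the method, the target is the same: remove enough CO2 to push the methane fraction past the roughly 95-96% pipeline specification. The arithmetic below is a deliberately simplified check that assumes a 60/40 CH4/CO2 feed, ignores trace gases, and hypothetically loses no methane along the way:

```python
def methane_purity(ch4_frac, co2_frac, co2_removed):
    """Methane fraction of the product after removing a share of the CO2.
    Assumes no methane slip and no gases other than CH4 and CO2."""
    co2_left = co2_frac * (1.0 - co2_removed)
    return ch4_frac / (ch4_frac + co2_left)

# How much of the CO2 must a 60/40 biogas upgrader capture?
for removed in (0.80, 0.90, 0.94):
    print(f"{removed:.0%} of CO2 removed -> {methane_purity(0.6, 0.4, removed):.1%} CH4")
```

Removing 90% of the CO2 still leaves the product below 94% methane; reaching the 96%+ purities quoted for these processes requires capturing roughly 94% of the CO2, which is why CO2 selectivity and methane slip dominate upgrader design.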
The last method of biogas upgrading is membrane separation. The key to this method is the difference in diffusion rates across a membrane held under a pressure difference. Usually, biogas flows through a solid membrane out of which CO2 diffuses, leaving only methane at the end. Because stripping the CO2 is the focus of this process, the biogas is pre-treated to remove H2S and water. In many polymeric membranes, the CO2 diffuses out because of its higher permeability, a result of its higher diffusion coefficient, owing to its smaller size and higher condensability compared to methane, which is relatively large and nonpolar. High pressure is maintained within the membrane to keep the methane inside, leaving the CO2 on the outer regions of the membrane. Ideally, the difference between the CO2 and methane permeabilities will be large, so that most of the methane is retained8.

Figure C: The setup for the amine upgrading process. Biogas is pumped into the bottom of the low-temperature column along with an amine solution that is pumped in from the top. The amine solution absorbs the contents of the gas except methane, which escapes to the top. The amine solution is then pumped into the second, high-temperature column, expels the gases, and is then cooled and re-pumped into the first column9. Source: Biocycle

Figure D: Membrane gas upgrading setup. Pressurized biogas is pumped into one side of the membrane. Because of the high pressure within the membrane, CO2 selectively diffuses out as the compressed biogas moves through, leaving methane to exit at the other side9.

No matter which biogas upgrading process is used, all of them produce quality biomethane of at least 96% methane composition if executed properly. Therefore, if these methods prove economically lucrative for the corporations that choose to produce energy from biomethane, then biomethane can and should be considered a sustainable, effective, and affordable alternative energy source for the future.

ECONOMICS OF BIOMETHANE

It is clear that biomethane can be produced reliably, and it is already produced in many plants around the world. Furthermore, it has proven just as effective as wind energy in cutting down on greenhouse gas emissions, while avoiding wind generation's major problem: a highly variable output that at times cannot come close to matching the variable demand for energy.
Unfortunately, just because biomethane is effective and clean does not guarantee its acceptance into society. It must also be financially viable enough to incentivize companies to produce it; otherwise they risk losses and will ultimately exit the market. To evaluate profitability objectively, the Discounted Cash Flow (DCF) method, a common technique for valuing companies and assessing the profitability of potential projects, can be applied. In the DCF method, only cash inflows and outflows, both present and future, are considered, with a discount rate used to express future cash in present-day terms. The discount exists because inflation, and the option to invest in virtually risk-free assets such as US Treasury bonds, make a dollar now worth more than the same dollar amount in future years. For instance, $100 put in a bank today might grow into $120 in the future; that $120 is a "nominal value." To represent that future money accurately today, we must count the $120 as only $100, its "real value" in economic terms, because that is what it is worth right now when we are making calculations. In other words, we take future money and subtract all the value that could have been earned by investing or spending that money today. Once the cash flows are discounted, the net present value (NPV) can be calculated: if it is positive, the investment has a higher probability of generating cash than of losing money; if it is negative, the investment runs a greater risk of being unprofitable11. Cucchiella, D'Adamo, and Gastaldi's case study of investment in biomethane in 2015 Italy can be used as a model to assess the viability of biomethane.
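The DCF machinery described above fits in a few lines of code. The cash flows and discount rates below are invented for illustration and are not figures from the Italian study:

```python
def npv(rate, cashflows):
    """Net present value: cashflows[0] is the upfront outlay (negative),
    cashflows[t] is the net cash received at the end of year t."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# A made-up plant: 1,000 invested now, 150 per year back for 10 years.
flows = [-1000.0] + [150.0] * 10
print(f"NPV at 8%: {npv(0.08, flows):.1f}")   # barely positive
print(f"NPV at 10%: {npv(0.10, flows):.1f}")  # negative
```

The same cash flows flip from profitable to unprofitable as the discount rate rises, which is why subsidy levels and the cost of capital dominate the profitability results discussed next.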
In 2015, there were multiple candidate substrates for biomethane production, including the organic fraction of solid urban waste (FORSU), energy crops, and mixed feedstock (30% energy crops and 70% cattle and pig manure). PSA was the cheapest upgrading
Source: Biocycle
method at the time, so for the purposes of this discussion it is assumed to be the means of production. To account for economies of scale, maintenance costs for small biomethane production plants (100 m3/h and 250 m3/h) are assumed to be a higher percentage of the initial investment than for larger plants (500 m3/h and 1000 m3/h), at 8% and 5% of the initial investment, respectively. Conversely, total labor costs are larger for larger plants due to increased overhead and number of workers. Lastly, government subsidies for biomethane are considered according to the proposed value of 1231 m3 of methane in biomethane equating to one issuance of 20-year Certificates of Immission of Biofuel in Consumption (CICs); to test profitability under different subsidy environments, the value of the CICs is varied10. Several valuable implications can be drawn from the results. The first is that economies of scale are vital to the profitability of biomethane production: 11% of cases are profitable for plants producing 100 m3/h, 25% for 250 m3/h, 44% for 500 m3/h, and 58% for 1000 m3/h. Furthermore, these plants, regardless of size and substrate, are more profitable if they sell energy directly to clients rather than through third parties. Additionally, increasing subsidies can turn an unprofitable plant into a profitable one; this is especially important considering that the study found every substrate-production configuration unprofitable without subsidies. Lastly, the profitability of biomethane can be significantly increased if FORSU is used as opposed to energy crops and mixtures10.
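The economies-of-scale effect is easy to see numerically. Only the 8% and 5% maintenance shares follow the assumptions above; the capital costs are hypothetical placeholders:

```python
# Hypothetical capital costs per plant size; only the maintenance shares
# (8% for small plants, 5% for large) follow the assumptions above.
PLANTS = {
    100:  {"capex": 1.0e6, "maint_share": 0.08},   # keys are plant size in m3/h
    1000: {"capex": 6.0e6, "maint_share": 0.05},
}

for size_m3h, p in PLANTS.items():
    yearly_maint = p["capex"] * p["maint_share"]
    per_unit = yearly_maint / size_m3h
    print(f"{size_m3h:>4} m3/h plant: {per_unit:,.0f} per year of maintenance per m3/h")
```

Even with a capital cost six times higher, the large plant spends less than half as much on maintenance per unit of capacity, consistent with the study's finding that profitability rises sharply with plant size.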
This is not surprising, as using these wastes in biomethane production creates a cycle in which waste is turned into energy, producing more waste that can in turn produce more energy11! Governments should therefore consider biomethane subsidies when determining how to promote biomethane plant growth. Similarly, corporations should open highly efficient plants that sell biomethane derived from FORSU waste directly to clients to maximize profits and the longevity of the business.

FINAL THOUGHTS

With pressure on our entire society to identify and pursue viable alternatives amid the energy crisis, biomethane can and should be discussed as a potential avenue for clean and sustainable energy production. There are already cost-effective methods that corporations can use to produce biogas, and methods to extract a significant amount of methane from that biogas. All of this can be economically feasible under the right conditions of subsidies, substrates, and production output. Having said that, there are still major roadblocks in current public opinion that could prevent the acceptance of biomethane. Firstly, there is concern regarding the cost of these biofuels5. While the economic feasibility of biomethane has been established, the main concern is where the capital for subsidies will come from. Public financing may be available, but it likely will not be enough to stimulate and sustain the necessary research and development in the biomethane industry today. IRENA (the International Renewable Energy Agency) points to the need for the private sector to supply capital as well12. The organization notes that pension funds and insurance companies have capital available for new clean energy investment. While these investors might be wary of high initial capital costs, energy projects, especially those likely to see high demand for their novelty and environmental friendliness, generate sufficient revenues to offset the cost of capital12. As fossil fuels begin to run out in earnest, biomethane production companies will be left as an attractive investment.
Figure E: Table of the GHG emissions per fuel source in the study (values are in CO2 eq/km)10. Source: DENA
"With pressures on our entire society to identify and pursue viable alternatives for the energy crisis, biomethane can and should be discussed as a potential avenue for clean and sustainable energy production."
While economic viability is a widespread concern, there is also concern that biofuel production consumes too much of the food and water supply, taking away resources that could be used for food and drink5. In biomethane production, however, this concern is not supported: from a profitability standpoint, the most sensible input substrate for companies is waste, not crops or other food items. Additionally, much of the water used in biogas upgrading is cleaned and reused via pressure manipulation, mitigating the additional water requirements of biomethane production.
Figure F: Table documenting the profitability of biomethane plants using energy crop feedstock10.
"Therefore, while there is public concern over multiple aspects of biofuel production and usage, many such opinions are simply uninformed about the various aspects of biomethane in particular that can benefit the public."
Lastly, the public has demonstrated concern that the environment would be negatively affected overall if biofuels were to replace traditional transportation fuels, on the grounds that the production process is not clean enough5. However, these respondents fail to take into account studies showing that ethanol produced as a biofuel is more environmentally friendly than current ethanol production. Furthermore, research is constantly being done on improving the efficiency of the process and controlling waste products so as to release as few greenhouse gases into the environment as possible. Therefore, while there is public concern over multiple aspects of biofuel production and usage, many such opinions are simply uninformed about the aspects of biomethane in particular that can benefit the public. However, biomethane is still in need
of more time and capital to have a lasting positive effect on our environment. IRENA projects that biofuels in general can help limit the global mean temperature rise to below 2 °C if investment and subsequent research in biofuels doubles from the current rate12. The Organisation for Economic Co-operation and Development (OECD) estimates that the largest sources of private capital in developed countries alone amount to over USD 90 trillion, with around USD 2.8 trillion per annum of capital from pension funds and insurance companies available for new clean energy investment12. With private capital available and biofuels now showing efficacy and profitability, biomethane can and should be the answer to our increasingly concerning energy shortage.

References

1. New York Times. (1977, April 19). Transcript of Carter's Address to the Nation About Energy Problems.
2. Pimentel, D., Hurd, L. E., Bellotti, A. C., Forster, M. J., Oka, I. N., Sholes, O. D., & Whitman, R. J. (1973, November 2). Food Production and the Energy Crisis.
3. MAHB. (2019, May 23). When Fossil Fuels Run Out, What Then?
Figure G: Table documenting the profitability of biomethane plants using mixed substrates (30% energy crops, 70% livestock manure) in the study10.
4. Dresselhaus, M. S., & Thomas, I. L. (2001, November 15). Alternative energy technologies.
5. Cacciatore, M., Scheufele, D., & Shaw, B. (2012, September 28). Labeling renewable energies: How the language surrounding biofuels can influence its public acceptance.
6. Molino, A., Nanna, F., Ding, Y., Bikson, B., & Braccio, G. (2012, August 23). Biomethane production by anaerobic digestion of organic waste.
7. Last, S. (2017, February 23). What is the Difference Between Biogas and Biomethane?
8. Aguilar-Virgen, Q., Taboada-González, P., & Ojeda-Benítez, S. (2014). Analysis of the feasibility of the recovery of landfill gas: a case study of Mexico. Journal of Cleaner Production, 79, 53–60.
9. Greene, P. (2018, February 8). Basics Of Biogas Upgrading.
10. Cucchiella, F., D'Adamo, I., & Gastaldi, M. (2015). Profitability Analysis for Biomethane: A Strategic Role in the Italian Transport Sector. International Journal of Energy Economics and Policy, 5(2), 440–449.
11. Cucchiella, F., D'Adamo, I., Gastaldi, M., & Miliacca, M. (2018, May 20). A profitability analysis of small-scale plants for biomethane injection into the gas grid.
12. IRENA. (2016, June). Unlocking Renewable Energy Investment: The role of risk mitigation and structured finance.
DUJS
Dartmouth Undergraduate Journal of Science ESTABLISHED 1998
ARTICLE SUBMISSION

What are we looking for? The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research: This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.

Review: A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.

Features (Reflection/Letter/Essay/Editorial): Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Guidelines:
1. The length of the article should be under 3,000 words.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on the diagrams.
For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide
Dartmouth Undergraduate Journal of Science Hinman Box 6225 Dartmouth College Hanover, NH 03755 dujs@dartmouth.edu
ARTICLE SUBMISSION FORM* Please scan and email this form with your research article to dujs@dartmouth.edu
Undergraduate Student: Name:_______________________________ School _______________________________
Graduation Year: _________________
Department _____________________
Research Article Title: ______________________________________________________________________________ ______________________________________________________________________________ Program which funded/supported the research ______________________________ I agree to give the Dartmouth Undergraduate Journal of Science the exclusive right to print this article: Signature: ____________________________________
Faculty Advisor: Name: ___________________________
Department _________________________
Please email dujs@dartmouth.edu comments on the quality of the research presented and the quality of the product, as well as if you endorse the student's article for publication. I permit this article to be published in the Dartmouth Undergraduate Journal of Science: Signature: ___________________________________
*The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.
Visit our website at dujs.dartmouth.edu for more information
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE Hinman Box 6225 Dartmouth College Hanover, NH 03755 USA http://dujs.dartmouth.edu dujs@dartmouth.edu