Dartmouth Undergraduate Journal of Science - Fall 2021


DUJS

Dartmouth Undergraduate Journal of Science

Fall 2021 | Vol. XXIII | No. 3

VITAL

REMEMBERING THE IMPORTANCE OF SCIENTIFIC COMMUNICATION

Doctors Under the Microscope: An Informative Look at Coping with Death in Health Care

Pg. 5

Meta Learning: A Step Closer to Real Intelligence

Pg. 35

Climate Change and its Implications for Human Health

Pg. 80



Letter from the President

Dear Reader,

Many of us like to believe that science exists in a neutral bubble. However, it has become clear that in the modern day, scientific research is increasingly taking on deeply historic and political roles across many fields. Science, it seems, is not purely an intellectual or academic practice. Rather, scientists must report their findings in a way that persuades their audience, regardless of what they choose to study. The theme of this journal is Vital, reflecting our editors’ and writers’ belief that the task of scientific communication – one that goes so often ignored – has become more important than ever.

In the fall of 2021, 54 students wrote for DUJS, ultimately producing 11 individual articles and 5 team articles. You will find a variety of scientific and social-scientific approaches to various topics, including machine learning and traffic, diabetes and kidney disease, and the history and current state of virology. Of particular note are the four members of the class of 2025 who undertook the difficult task of writing individual articles in their freshman fall. Ethan Liu provided extensive evidence suggesting that hyperphosphorylated tau may be a potential culprit for Alzheimer’s disease. Jean Yuan explored the implications of radiation therapies in cancer treatment. Shawn Yoon explained why soil microbes must not be ignored when studying climate change. Ujvala Jupalli wrote about how lactose intolerance has become a “norm,” both in the US and globally. These students wrote thoughtful and well-researched articles without being able to draw on foundational STEM knowledge from previous college-level courses, an achievement that is certainly worthy of recognition and celebration.

I hope that the wide range of articles shows you the Journal's deep commitment to producing good scholarship across various scientific disciplines. Perhaps more importantly, I hope you notice that in these articles you will not find just a laundry list of scientific facts. Rather, each student made a subtle yet impassioned argument about a topic they care about deeply. I thank you for taking the time to read each article carefully and to think critically about the story that each writer is trying to tell, and I hope that within the 156 pages of this edition, you are inspired to research and write your own scientific tale.

Sincerely,
Anahita Kodali

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community and beyond by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EXECUTIVE BOARD

President & Acting Editor-in-Chief: Anahita Kodali '23

Chief Copy Editors: Daniel Cho '22, Dina Rabadi '22, Kristal Wong '22, Maddie Brown '22

EDITORIAL BOARD

Managing Editors: Alex Gavitt '23, Andrew Sasser '23, Audrey Herrald '23, Carolina Guerrero '23, Eric Youth '23, Georgia Dawahare '23

Assistant Editors: Callie Moody '24, Caroline Conway '24, Grace Nguyen '24, Jennifer Chen '23, Matthew Lutchko '23, Miranda Yu '24, Owen Seiner '24

STAFF WRITERS

Abigail Fischer '23, Andrea Cavanagh '24, Anyoko Sewavi '23, Ariela Feinblum '23, Benjamin Barris '25, Brooklyn Schroeder '22, Callie Moody '24, Cameron Sabet '24, Camilla Lee '22, Carolina Guerrero '23, Caroline Conway '24, Carson Peck '22, Daniela Armella '24, David Vargas '23, Declan O’Scannlain '23, Dev Kapadia '23, Elaine Pu '25, Emily Barosin '25, Ethan Litmans '24, Ethan Liu '25, Ethan Weber '24, Evan Bloch '24, Frank Carr '22, Jake Twarog '24, Jean Yuan '25, John Zavras '24, Julian Franco Jr. '24, Juliette Courtine '24, Justin Chong '24, Kate Singer '24, Kevin Staunton '24, Lauren Ferridge '23, Lily Ding '24, Lord Charite Igirimbabazi '24, Matthew Lutchko '23, Miranda Yu '24, Nathan Thompson '25, Nishi Jain '21, Owen Seiner '24, Rohan Menezes '23, Sabrina Barton '24, Salifyanji Namwila '24, Sarah Lamson '24, Shawn Yoon '25, Shuxuan (Elizabeth) Li '25, Soyeon (Sophie) Cho '24, Sreekar Kasturi '24, Tanyawan Wongsri '25, Tyler Chen '24, Ujvala Jupalli '25, Vaani Gupta '24, Vaishnavi Katragadda '24, Valentina Fernandez '24, Zachary Ojakli '25, Zoe Chafouleas '24

SPECIAL THANKS

Dean of Faculty, Associate Dean of Sciences, Thayer School of Engineering, Office of the Provost, Office of the President, Undergraduate Admissions, R.C. Brayshaw & Company

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs.dartmouth.science@gmail.com
Copyright © 2021 The Trustees of Dartmouth College


Table of Contents

Individual Articles

Doctors Under the Microscope: An Informative Look at Coping with Death in Health Care Ariela Feinblum '23 & Dr. Yvon Bryan, Pg. 5

Potential Culprit of Alzheimer’s Disease - Hyperphosphorylated Tau Ethan Liu '25, Pg. 12


Machine Learning in Freeway Ramp Metering Jake Twarog '24, Pg. 19

Radiation Therapy and the Effects of This Type of Cancer Treatment Jean Yuan '25, Pg. 24

The Effects of Climate Change on Plant-Pollinator Communication Kate Singer '24, Pg. 30


Meta Learning: A Step Closer to Real Intelligence Salifyanji Namwila '24, Pg. 35

Microbial Impacts from Climate Change Shawn Yoon '25, Pg. 43

Ultrasound Mediated Delivery of Therapeutics Soyeon (Sophie) Cho '24, Pg. 47


Pathophysiology, Diagnosis, and Treatment of Heat-Induced Hives: Cholinergic Urticaria Tyler Chen '24, Pg. 55

Lactose Intolerance as the “Norm” Ujvala Jupalli '25, Pg. 66

Understanding The Connection Between Diabetes and Kidney Disease: Are SGLT-2 Inhibitors the “Magic Bullet”? Valentina Fernandez '24, Pg. 71


Team Articles

Climate Change and its Implications for Human Health Pg. 80

History and Current State of Virology Pg. 98

HIV/AIDS and Their Treatments Pg. 120


The Psychedelic Renaissance Pg. 136

Vegetarianism Debate Pg. 148




Doctors Under the Microscope: An Informative Look at Coping with Death in Health Care

BY ARIELA FEINBLUM '23 & DR. YVON BRYAN, MD

Cover Image Source: Unsplash

Image 1: A distraught doctor. Image Source: Unsplash


Introduction

The COVID-19 pandemic has affected pre-med undergraduate students in many ways; though we often focus on the pandemic preventing students from shadowing doctors, it has also directed many undergraduates’ attention to the death that doctors encounter in their work. This has been an eye-opening experience for many, but doctors have had to deal with patients dying since long before the pandemic. Death can be very upsetting for doctors, as portrayed in Image 1. As an undergraduate student and aspiring doctor myself, I believe it is essential to understand how doctors deal with this difficult aspect of their jobs. Many undergraduate students do not think about how they will personally deal with the inevitable deaths of future patients. There are also many different types of patient deaths, and it is important for undergraduates to think about such factors before they become practicing medical professionals. Deaths caused by suicide or murder can be very troubling and tragic. Other deaths, such as those of patients who die with the assistance of euthanasia, can be seen as ending the patients’ pain and suffering. A patient's death does not have to be violent for a doctor to have difficulty coping. Factors like the patient's age can affect how hard it is for the doctor to process that patient’s passing (Rhodes-Kropf et al., 2005). I used the field of anesthesiology as a lens into the greater field of medicine. To understand how difficult some cases are for doctors to deal with, I asked Dr. Yvon Bryan, a pediatric anesthesiologist at Dartmouth-Hitchcock Medical Center, about his experience in dealing with patient deaths.

Through my discussions with Dr. Bryan, as well as through a literature review, I came across some important factors influencing how doctors conceptualize and deal with death. First, it is important for undergraduate students to understand the physiology of how a patient dies. Many doctors intellectualize death by thinking about the biological processes that led to a patient's ultimate death, compartmentalizing death’s physical properties from its philosophical implications. Next, pre-med students should know that different types of death tend to affect doctors differently. Factors like the patient's age and cause of death can affect how doctors view and personally handle a specific patient’s death. There are even fields in medicine that revolve around death, such as palliative care or euthanasia assistance. In this paper, I will discuss some of the issues affecting how doctors cope with patient deaths (Danticat, 2017).

Loss of Vital Signs: The Numerical Intellectualization of Death

There are key physical systems to focus on when it comes to understanding death from a biological perspective. These systems include the cardiovascular, pulmonary, endocrine, and neurological systems. To understand how these systems work together, we can use the analogy of the body as a car. In the cardiovascular system, the heart is like an engine, pumping blood and making the body work. The pulmonary system is like the exhaust, taking in air and expelling carbon dioxide. The endocrine system works to maintain the body's hormonal balance. Finally, the neurological system directs all other systems and collects sensory information to assess when functional adjustments are necessary (Dr. Bryan, personal communication, October 2021).

In medical school, students often spend their first year learning about human anatomy and their second year learning about potential bodily abnormalities. On a basic level, the main systems can malfunction in different ways. Learning how the brain works is different from understanding dementia. When the cardiovascular system malfunctions, myocardial ischemia may occur; when the pulmonary system malfunctions, the lungs may suffer from bronchitis or emphysema. It is important to note that there is a difference between anatomy and physiology and the qualitative experience of a patient, and it may be troubling to separate the anatomy of the patient from their experience. In other words, a person's lived experience is different from their biology. For example, a person’s brain is often viewed as a structure in quantitative form, while their mind is viewed as a qualitative entity of who they are as an individual. It is important to note, though, that many cognitive scientists and doctors believe that every aspect of one’s mind is a result of the workings of the physical brain; many still challenge this view on religious grounds (Trivate et al., 2019). Death may be viewed as the ultimate lack of function of one's anatomy or physiology, but it is also important to recognize the ambiguity between the dying cells and organs and the lack of the individual’s life. It may sometimes be difficult for students to understand the transition between life, dying, and death. One explanation is that dying is a lack of action. While one is living and healthy, they have their organs and necessary systems active.

Image 2: The cardiovascular system. Image Source: Unsplash

However, life and death are not a completely binary phenomenon. There are many examples that convey how these grey zones may present themselves in patients. First, when a patient has a traumatic brain injury (TBI), they traditionally have had relatively normal brain function before their injury. Depending on the part of the brain that is damaged, patients can have long- or short-term memory loss, trouble with attention, or even trouble recognizing objects (Goldstein, 2014). These types of injuries may not affect the main systems that the patient depends on to live, like the cardiovascular or endocrine system, but they can nonetheless affect patients' abilities to function in their daily lives. For example, if patients cannot remember their way home or where they put things, TBIs can be extremely debilitating and sometimes even dangerous. These patients are alive but cannot live their lives the same way they did prior to their brain injury.

"... I believe it is essential to understand how doctors deal with this difficult aspect of their jobs."

Another example demonstrating the non-binary nature of life and death is abnormalities in one's liver or kidneys. These abnormalities can present themselves in various ways, such as a change in personality or in the ability to function in one’s daily life. One example of how this grey area can present itself is in patients with Type 1 or Type 2 diabetes. Those with diabetes experience a slow deterioration of their anatomical and physiological systems, and diabetes can lead to the requirement for dialysis treatment or kidney transplants for some patients. People may begin to lose the physical ability to live without support, as well as a sense of who they are and their qualitative experience of the world (Mertig, 2012).

"Physicians are passionate about saving lives, even when it means risking their own."

Overall, when examining death from a physiological perspective, it is also important to distinguish between brain physiology and the functions or states that the brain's physiology facilitates, as many anesthesiologists do in monitoring patients. This is an important topic when looking at patient death through the lens of anesthesiology. A patient can technically have their biological systems intact but have changes in brain function: a patient's brain state can range, for example, from minimal sedation to full unconsciousness under anesthetic agents. In these cases, many doctors may not see the patient as awake in the same way they do a patient who can communicate with them. This sedated state puts patients in a grey area between life and death for some anesthesiologists. This raises the question of how doctors view neurological conditions, such as dementia, that may cause people to lose much of who they previously were, and brings up questions of how physicians deal with, or internalize, the impact of their patients' conditions on themselves. These questions pertain to doctors in all fields of medicine and are particularly common in the field of anesthesiology (Dr. Bryan, personal communication, October 2021).

Cause of Death: Qualitative Impact on Physicians

Something I learned through speaking with Dr. Bryan is that physicians often have no way of knowing just how deeply a patient's death is going to affect them until they experience that patient's death. Different types of patient deaths can also make a person question various aspects of their personal beliefs. For example, when a doctor has a pediatric cancer patient, it can cause them to question the unfairness of life. These types of questions stem from seeing so much suffering in such young patients and their families. Children may die from many different things, such as cancer, infection, and accidents. Pediatric patient deaths can be especially hard for physicians due to the realization of the years of life the patient has lost. Specifically, when a child dies, there is lost potential, and one inevitably thinks about what type of person that child may have been. The child has lost their life, and with this comes the loss of the future years the child would have experienced (Granek et al., 2016; Dr. Bryan, personal communication, October 2021).

More generally, there are many different emotions and thoughts associated with doctors' grief over a patient. Doctors are troubled by the loss of life. They may also feel guilty because they made a mistake, or because they wish they had done more to try to save the patient. There are also potential legal and financial ramifications that physicians may encounter in addition to the other difficult parts of a patient's death. This can be true regardless of a doctor's actual ability to save a patient, which can lead to feelings of helplessness. For example, a child may come in with a severe trauma, and even though there was no way to save the child, the physician may still feel bad for not being able to save them. Even when a doctor does everything they can, they may still feel guilty about a patient's death. Many doctors take patients’ deaths and try to learn something from them. Physicians often have a hard time knowing when to stop trying to save a patient. Additionally, especially during the time of COVID-19, many doctors have put their lives at risk in order to treat patients. One may not traditionally think of doctors as risking their lives to save others, but physicians are passionate about saving lives, even when it means risking their own. This makes it even more difficult to accept the death of a patient whom a physician worked hard to save (Kostka et al., 2021; Dr. Yvon Bryan, personal communication, October 2021).

Several factors play a part in how a patient's death may affect their doctor. A primary factor is whether the death was unexpected. Unexpected deaths may take a larger emotional toll on many doctors than expected deaths do. If a patient suffering from terminal cancer dies, it may be sad, but less difficult to process because it was expected. If a patient comes in from an accident and dies, this is often harder for doctors to deal with. This is especially true for children who die suddenly. However, not all deaths are tragedies. Some patients may have lived full lives and died from an illness later in life. In these cases, it may be easier for doctors to come to terms with their deaths (Dr. Yvon Bryan, personal communication, October 2021; Jones & Finlay, 2014).



Case Studies

In this section, I will use several case studies to try to understand how various patient deaths affect doctors differently and the wisdom a doctor may gain from specific patients they lose. Dr. Bryan shared with me various cases he has encountered. He also explained what he learned from each case and how it may have changed how he thought about things in his own life. As a future physician, I am trying to learn how the body works and understand the mechanistic aspects of the human body as well as its physiology. What I have learned is that the body is the sum of many parts. The cases below illustrate not just the loss of an organ system, but the loss of an entire life. Understanding the physiology of how someone dies and how one might try to save them is not the same as the tragedy that surrounds a patient's death (Dr. Bryan, personal communication, October 2021).

When talking about anesthesia specifically, it is important to remember that anesthesiologists spend most of their time with their patients unconscious. This means that the patient does not necessarily show their humanity for the majority of the time that the anesthesiologist is with them. Dr. Bryan told me how he reflects on cases and the importance of seeing patients not just as having a type of issue but rather as human beings. All the patient deaths had a common theme: each death taught something about humanity (Dr. Bryan, personal communication, October 2021).

The first case was his first patient death, in his third year of medical school. A doctor’s first patient death is always significant. In this case, the patient, a farmer, came into the emergency room and ultimately died later during hospitalization.

This was difficult for Dr. Bryan, as he spent time with the patient and saw their humanity. Initially, the patient was just a human being who came into the emergency room with a problem. Dr. Bryan’s later familiarity with the patient made the death harder to deal with on a personal level. When describing the case, he stated that this patient essentially gave him a tour of the hospital because of all the complications the patient experienced. This specific patient was in the hospital for about a month before he died from vascular disease and end-organ failure. One of the main lessons I learned from this case was that the most difficult part of seeing a patient die is sometimes watching the humanity be lost from the patient (Dr. Bryan, personal communication, October 2021).

Next, he told me about a case involving a two-year-old boy who was attacked and bitten, unprovoked, by a dog: the grandfather's rottweiler. The boy came in with bite injuries on his thigh, face, and head. Before going into the operating room, the boy was not thought to be at high risk of dying. This element of the case made it more difficult, as the death was unexpected. During surgery, the doctors found that a bone chip had lodged in the sagittal sinus of the boy's brain. When the neurosurgeon tried to take it out, the child bled to death in the operating room. This case was particularly difficult for Dr. Bryan for many reasons. The unexpected nature of the patient’s death meant that he had no time to reflect on the case before going into the operating room. The case was also very tragic and complicated, as it was the grandfather's dog that was ultimately responsible for the boy's death. The odds of dying from a dog bite are low, which made the death more surprising and therefore more difficult.

Image 3: An operating room. Image Source: Unsplash


Table 1: The impact of death and what we can learn from it based on my interviews with Dr. Bryan. Table Source: Created by Author



Another, more obvious factor in this case was the age of the patient, in addition to the cause of death. The boy was just two years old when he died. The boy also died after being bitten by a pet, an animal many of us interact with every day (Dr. Bryan, personal communication, October 2021; Gazoni et al., 2008).

How Does the Student Learn from the Wisdom of Death?

Through my research and conversations with Dr. Bryan, I learned a lot about how doctors deal with patient deaths. The main thing I have learned is that thinking about how I, a future doctor, will deal with death is helpful but will never fully prepare me for a patient's death. Doctors cannot tell how a patient's death will affect them until after it happens; for this reason, thinking about how one will deal with it is important but insufficient preparation for the real experience. Each doctor comes with their own personal baggage, opinions, and history, and doctors can sometimes react irrationally to patient deaths because of these factors. For example, Dr. Bryan never allowed his children to go on trampolines after having seen a patient whose injury was caused by a trampoline accident. Never allowing his children on a trampoline because of a single patient may not have been rational, yet this was his reaction. Another example Dr. Bryan gave me was how he never allowed his children to have a dog. Many dogs are not dangerous; nonetheless, Dr. Bryan did not allow his children to have a dog after seeing a child die because of one. A doctor's personal history will inevitably influence how they handle the loss of each individual patient. Generally, not all deaths are the same, and some will affect doctors more than others (Gazoni et al., 2012).

Another important thing I learned from Dr. Bryan is that thinking about patient deaths involves understanding the humanity of people and not just seeing a patient as a set of numbers. Patient deaths can have qualitative, long-term effects on their loved ones as well as on the doctors who treated them. This is a very important point to recognize. Dr. Bryan taught me that patient deaths begin to change you, and that is completely normal. Patient deaths can have lasting psychological impacts on doctors, prompting them to think about what they really value in life. Doctors also learn from patient deaths. Doctors see the value in enjoying life firsthand because they watch patients lose the ability to do the things those patients enjoy. Death can happen quickly, even in just a minute, though the period leading to death can involve a slow decline with a drastic decline at the very end. Overall, this paper raises many questions and considerations for undergraduate students considering a career in medicine. It also aims to better prepare students to shadow doctors in the future (Zheng et al., 2018).

"... thinking about patient deaths involves understanding the humanity of people and not just seeing a patient as a set of numbers."

References


Danticat, E. (2017). The art of death: Writing the final story. Graywolf Press.

Gazoni, F. M., Amato, P. E., Malik, Z. M., & Durieux, M. E. (2012). The impact of perioperative catastrophes on anesthesiologists: Results of a national survey. Anesthesia and Analgesia, 114(3), 596–603. https://doi.org/10.1213/ANE.0b013e318227524e

Gazoni, F. M., Durieux, M. E., & Wells, L. (2008). Life after death: The aftermath of perioperative catastrophes. Anesthesia and Analgesia, 107(2), 591–600. https://doi.org/10.1213/ane.0b013e31817a9c77

Goldstein, E. B. (2014). Sensation and perception. Wadsworth.

Granek, L., Barrera, M., Scheinemann, K., & Bartels, U. (2016). Pediatric oncologists' coping strategies for dealing with patient death. Journal of Psychosocial Oncology, 34(1–2), 39–59. https://doi.org/10.1080/07347332.2015.1127306

Jones, R., & Finlay, F. (2014). Medical students’ experiences and perception of support following the death of a patient in the UK, and while overseas during their elective period. Postgraduate Medical Journal, 90(1060), 69–74. https://doi.org/10.1136/postgradmedj-2012-131474

Kostka, A. M., Borodzicz, A., & Krzemińska, S. A. (2021). Feelings and emotions of nurses related to dying and death of patients – A pilot study. Psychology Research and Behavior Management, 14, 705–717. https://doi.org/10.2147/PRBM.S311996

Mertig, R. G. (2012). Nurses’ guide to teaching diabetes self-management (2nd ed.). Springer.

Rhodes-Kropf, J., Carmody, S. S., Seltzer, D., Redinbaugh, E., Gadmer, N., Block, S. D., & Arnold, R. M. (2005). “This is just too awful; I just can’t believe I experienced that...”: Medical students’ reactions to their “most memorable” patient death. Academic Medicine, 80(7), 634–640.

Trivate, T., Dennis, A. A., Sholl, S., & Wilkinson, T. (2019). Learning and coping through reflection: Exploring patient death experiences of medical students. BMC Medical Education, 19(1), 451. https://doi.org/10.1186/s12909-019-1871-9

Zheng, R., Lee, S. F., & Bloomer, M. J. (2018). How nurses cope with patient death: A systematic review and qualitative meta-synthesis. Journal of Clinical Nursing, 27(1–2), e39–e49. https://doi.org/10.1111/jocn.13975



Potential Culprit of Alzheimer’s Disease: Hyperphosphorylated Tau

BY ETHAN LIU '25

Cover Image: Neurofibrillary tangles in the hippocampus of an old person with Alzheimer-related pathology. Image Source: Wikimedia Commons

Abstract

Alzheimer’s disease (AD) is an age-related neurodegenerative disorder and the leading cause of dementia. Since the first report of the disease by Dr. Alois Alzheimer in 1906, extensive studies have revealed several pathological features, including extracellular neuritic plaques containing Aβ and intracellular neurofibrillary tangles composed mainly of hyperphosphorylated tau. According to the Aβ hypothesis, which has dominated the field over the last two decades, the primary cause of neurodegeneration in AD is the abnormal accumulation or disposal of Aβ. However, the successive failures of several clinical trials targeting Aβ-associated pathology have raised doubts about its role in neurodegeneration. It is therefore worthwhile to revisit the importance of another essential protein, tau, in the pathogenesis of AD to explore more strategies for treating this devastating disorder. This report will review the physiological and pathological functions of tau and its relation to neurodegeneration in AD. Additionally, recent evidence supporting the tau hypothesis will be addressed.

Introduction

Alzheimer’s disease (AD) is the most common neurodegenerative disorder associated with aging. It is clinically characterized by dementia followed by other cognitive impairments, such as aphasia (impaired ability to understand or express speech), agnosia (inability to interpret sensations), apraxia (inability to perform learned movements), and behavioral disturbance. The disease affects approximately 10% of individuals older than 65 and about 33% of individuals 85 and older. Progressive neuronal loss caused by the disease is associated with the accumulation of insoluble fibrous materials within the brain, both intracellularly and extracellularly. The extracellular deposits consist of aggregated beta-amyloid protein (Aβ), which is derived from a precursor protein called β-amyloid precursor protein (APP) that undergoes sequential cleavages by proteinases (Hebert et al., 2003). Aβ is a short peptide, and, although it is initially nonfibrillar, it progressively transforms into fibrils called neuritic plaques. Intracellularly, cell bodies and apical dendrites of neurons are occupied by intraneuronal filamentous inclusions, which are called neurofibrillary tangles (NFTs). NFTs mainly consist of hyperphosphorylated forms of the microtubule-associated protein (MAP) tau (Serrano-Pozo et al., 2011).

Image 1: Diseased vs. healthy neuron. Image Source: Wikimedia Commons

Although Alzheimer’s disease is pathologically characterized by the presence of both neuritic plaques and NFTs, recent evidence indicates that neurofibrillary tangles could have a more significant impact on the progression of the disease. First, the extent and topographical distribution of neurofibrillary lesions strongly correlate with the degree of dementia. The progression of NFTs in AD follows a typical spatiotemporal pattern, which can be characterized in six neuropathological stages. During stages I and II, the superficial cell layers of the trans-entorhinal and entorhinal regions are affected. Patients are still clinically unimpaired at these stages. In stages III and IV, severe NFTs develop extensively in the hippocampus and the amygdala, with limited growth in the association cortex. In stages V and VI, significant neurofibrillary lesions appear in the isocortical association areas. The patient meets the criteria for the neuropathological diagnosis of AD during these two stages (Braak et al., 2014). The development of neurofibrillary lesions contrasts with the development of Aβ deposits. Aβ deposits show a density and distribution pattern that varies significantly among individuals and is not a reliable indicator of Alzheimer’s disease. In contrast, the gradual and patterned development of neurofibrillary lesions allows scientists to gauge the progression of the disease (Serrano-Pozo et al., 2011). Furthermore, although neuritic plaques are exclusive to AD patients, Aβ deposition can be observed in healthy individuals. Neurofibrillary lesions are observed only in neurodegenerative disorders such as AD, amyotrophic lateral sclerosis (ALS), Down syndrome, and postencephalitic parkinsonism (Rodrigue et al., 2009). Therefore, it has been theorized that neurofibrillary lesions have a more significant impact on Alzheimer’s patients than neuritic plaques. This paper provides a summary of the physiological and pathological roles of tau in AD and discusses the advancement of the tau hypothesis in understanding AD disease progression.

"The disease affects approximately 10% of individuals older than 65 and about 33% of individuals 85 and older."

Tau Protein and its Physiological Function

Tau is a phosphoprotein and belongs to the microtubule-associated protein (MAP) family. Microtubules are structures in the cell cytoskeleton that support neurons with structural integrity and cellular transport. Tau is an intrinsically disordered protein (lacking a fixed three-dimensional shape) and is naturally unfolded and soluble under normal conditions. Although it is primarily found in neurons in the central nervous system, a relatively high concentration of tau is present in the heart, kidneys, lungs, skeletal muscles, pancreas, and testis. Trace amounts can also be found in the adrenal gland, stomach, and liver (Gu et al., 1996). In the adult brain, there are six tau isoforms: products of alternative mRNA splicing from a single gene, MAPT. The six isoforms, each ranging from 352 to 441 amino acid residues, differ in the number of N-terminal inserts (0N, 1N, or 2N) and the presence of three or four microtubule-binding repeats (3R or 4R) near the C-terminal end. Each isoform has a different function on microtubules, and they are not equally expressed in cells. Tau is developmentally regulated, since the abundance of the different isoforms changes as people age.
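To make the isoform nomenclature concrete, the following minimal Python sketch enumerates the six combinations of N-terminal inserts and microtubule-binding repeats. The per-isoform residue counts are standard values from the tau literature, added here for illustration; the text above states only the overall 352–441 range.

```python
# The six human CNS tau isoforms combine an N-terminal insert count
# (0N, 1N, 2N) with a microtubule-binding repeat count (3R, 4R).
# Residue lengths are standard literature values; the article itself
# states only the overall 352-441 range.
TAU_ISOFORMS = {
    "0N3R": 352,  # shortest isoform
    "1N3R": 381,
    "2N3R": 410,
    "0N4R": 383,
    "1N4R": 412,
    "2N4R": 441,  # longest isoform
}

for name, residues in TAU_ISOFORMS.items():
    inserts, repeats = name[:2], name[2:]
    print(f"{name}: {inserts} insert(s), {repeats} repeat domains, {residues} residues")
```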

"More work is still required to understand the factors promoting tau hyperphosphorylation and the nucleating steps."

Tau plays an essential role in regulating microtubule dynamics and serves as a potent promoter of tubulin polymerization in vivo. The protein increases the rate of association and decreases the rate of dissociation of tubulin proteins at the growing ends of microtubules. It also reduces the dynamic instability of microtubules (the tendency for microtubules to change lengths rapidly) by preventing catastrophe, a period in which microtubules rapidly shorten in response to stimuli (Barbier et al., 2019). These phenomena have been observed in living cells: when tau is injected into fibroblasts, an increase in microtubule mass and increased microtubule resistance to depolymerizing agents are observed (Drubin & Kirschner, 1986). Other than directly binding to microtubules, tau proteins also bind to other cytoskeletal components, such as spectrin and actin filaments. These interactions may allow microtubules to interconnect with other cytoskeletal components, such as neurofilaments, to change cell shape and flexibility (Mietelska-Porowska et al., 2014). Additionally, tau serves as a postsynaptic scaffold protein that modulates actin binding to substrates by regulating kinases that phosphorylate actin (Sharma et al., 2007). Overall, under normal physiological conditions, tau is involved in the regulation of microtubule dynamics, spatial arrangement, and cell stability, which affects a variety of cellular functions.

Tau Phosphorylation and Aggregation

The phosphorylation of tau is regulated by kinase and phosphatase activity, although the specific kinases (which phosphorylate proteins) and phosphatases (which dephosphorylate proteins) involved are still under investigation. Phosphorylated forms of tau are high in fetuses and decrease with age due to phosphatase activation. Glycogen synthase kinase 3 (GSK3), cyclin-dependent kinase 5 (CDK5), and microtubule-affinity-regulating kinase (MARK) are promising candidates currently under study for regulating the phosphorylation of tau. Previous studies showed that the inhibition of GSK3 by the addition of lithium chloride decreased tau phosphorylation and lowered levels of aggregated and insoluble tau in vivo, suggesting that GSK3 could be a drug target for treating AD (Noble et al., 2005). In contrast to kinases, phosphatases decrease phosphorylated forms of tau. One study found that when protein phosphatase-2A (PP2A) activity is suppressed by its endogenous inhibitors, I1PP2A and I2PP2A, there is an increased level of phosphorylated tau (Chen et al., 2008). Therefore, disturbances in the balance between kinase and phosphatase activities are suspected to contribute to the hyperphosphorylated status of tau protein and its toxicity to neurons.

Image 2: Healthy MAP2 and tau in neurons. MAP2 is stained green, tau is stained red, and DNA is stained blue. Tau is also present in the yellow regions, where its red stain superimposes with the green MAP2 stain. Image Source: Wikimedia Commons

In terms of pathological status, hyperphosphorylated tau that dissociates from microtubules becomes unbound tau in the cytoplasm. It has been speculated that this increased hyperphosphorylated cytosolic tau promotes a conformational change in favor of tau misfolding and aggregation. NFTs are mainly made up of paired helical filaments (PHFs), which are composed of tau in an abnormally phosphorylated state. All six isoforms have been detected in neurofibrillary lesions at varying levels. With the help of cryo-electron microscopy, a precise microscopy technique which involves cooling the sample to cryogenic temperatures, it was discovered that the 3R tau isoforms assemble into twisted, paired, helical-like filaments, while the 4R tau isoforms assemble into straight filaments (Fitzpatrick et al., 2017). Although the stepwise process generating tau aggregation is not well understood, one hypothesis for a nucleation-dependent assembly mechanism suggests that tau dissociates from the microtubules and undergoes a conformational change that makes it more prone to misfolding. Early deposits of tau, which promote the formation of pretangles, are suggested to lack β-sheet structures, since they cannot be detected by histochemistry, which involves staining the protein with dyes that detect specific structures. However, later deposits of tau do have β-sheets detectable by histochemistry, suggesting a structural transition. Recent evidence suggests that this structural change may be caused by tubulin dimers and oligomers, conglomerations of microtubule subunits. After efficient filament addition and elongation, the mature PHFs are eventually formed (Meraz-Rios et al., 2010). More work is still required to understand the factors promoting tau hyperphosphorylation and the nucleating steps.

Image 3: Brain sections showing tau protein (brown spots). From left to right: 1) the brain of a healthy 65-year-old; 2) the brain of John Grimsley, a former NFL linebacker who suffered eight concussions and died at age 45; 3) the brain of a 73-year-old boxer who suffered from extreme dementia pugilistica. Image Source: Flickr

Tau Hypothesis in the Pathogenesis of AD and Its Recent Updates

The tau hypothesis is one of the two major hypotheses explaining the pathogenesis and disease progression of AD (the other being the Aβ hypothesis). It states that tau phosphorylation and aggregation are the primary cause of neurodegeneration in AD. First, the topographical distribution of NFTs can characterize the progression of AD. Clinicopathological studies have demonstrated that the amount and distribution of NFTs correlate with the severity and the duration of dementia, while the distribution of Aβ does not (Kametani & Hasegawa, 2018). Second, it has been demonstrated that tau-related pathology occurs before the onset of Aβ accumulation. According to recent findings from Dr. Braak and Dr. Del Tredici, who outlined the original six stages of AD (previously described in this paper), the staging system has been revised to include the progression of pretangles formed in subcortical and trans-entorhinal regions of the brain prior to the development of NFT stage I. During these stages, tau lesions are not accompanied by Aβ deposits (Braak & Del Tredici, 2014). Third, animal studies have implicated tau in mediating Aβ-induced neurodegeneration. Knocking out the tau gene in APP/PS1 (amyloid precursor protein/presenilin 1) mice conferred protection not only against memory impairment, but also against synaptic loss and premature death. APP/PS1 mice that lacked tau had smaller plaque burdens than age-matched APP/PS1 mice that expressed tau (Leroy et al., 2012). Fourth, tau can elicit neuronal toxicity independently of Aβ. The pathological accumulation of tau within different cell types is responsible for a heterogeneous class of neurodegenerative diseases termed tauopathies. Tau-mediated toxicity is extensively supported by findings from familial frontotemporal dementia (formerly known as frontotemporal dementia with parkinsonism linked to chromosome 17, or FTDP-17) (Goedert et al., 2017).

"Tau-mediated toxicity is extensively supported by findings from familial frontotemporal dementia ..."

With the advancement of AD studies through genetic animal models, the Tau P301S mouse model demonstrated synaptic deficits in the hippocampus before the formation of NFTs. Another tau mouse model (rTg4510), which conditionally expresses P301L mutant tau, showed cell loss without the presence of NFTs, suggesting that early pathological species other than NFTs mediate cellular toxicity. The puzzle of how NFTs are formed is being solved. One step toward solving this puzzle is identifying an intermediate species between tau monomers and NFTs, called tau oligomers. With the help of atomic force microscopy, granular oligomers were identified in Braak stage 0 brain samples. These oligomeric species were suggested to increase with the advancement of pre-symptomatic stages, possibly explaining the synaptic degeneration that precedes NFTs. Oligomers were also detected in the P301L transgenic mice and rTg4510 mice (Meraz-Rios et al., 2010). While the results are still inconclusive, these oligomeric species could be the potential cause of NFT formation.
Image 4: Model mouse brain with neurofibrillary tangles (blue). Neurons are stained green, while blood vessels are red. Image Source: Flickr



Another recent update to the tau hypothesis has emphasized the propagation of tau lesions through a prion-like mechanism, or “seeding” concept. Prions are misfolded proteins that convert normal proteins into their misfolded shapes. Although there is no evidence supporting tau as an infectious entity, its capacity to transform healthy tau species into aggregation-prone species resembles the cell-to-cell transmission of prions in vivo. Guo and Lee (2011) demonstrated that minute quantities of misfolded preformed fibrils could rapidly yield large amounts of filamentous inclusions resembling NFTs when the misfolded preformed fibrils were introduced into tau-expressing cells. The aggregates they produced have highly ordered β-pleated sheet structures and resemble the NFTs found in AD patients based on several analyses, including immunofluorescence, amyloid dye binding, immuno-EM, and biochemical analysis. These results indicate that preformed fibrils can potentially act as a prion to transform healthy tau species into NFTs (Guo & Lee, 2011). The hypothesis of tau-induced seeding was supported by numerous studies conducted in both cell and animal models. In these studies, tau seeds – derived from brain homogenates of tauopathy patients, symptomatic tau transgenic mice, or from recombinant tau in vitro – generated tau aggregate-bearing transfected cells. In all instances, tau aggregates were competent for seeding, though to different degrees (Goedert et al., 1989). To demonstrate that tau aggregates engage in prion-like behavior, more evidence is required to elucidate the mechanisms of cellular uptake, templated seeding, and subsequent intercellular transfer that induce similar aggregation in recipient cells.

Conclusion

Tau is a natively unfolded protein, and its ability to oligomerize and aggregate is regulated by post-translational modifications, with hyperphosphorylation being the most detrimental change. The pathological tau species confer their neurotoxicity through both gain-of-function and loss-of-function mechanisms, which have been the foundations of the tau hypothesis in AD. A growing body of data further identifies tau oligomers as the conformers mediating neurodegeneration. Anti-amyloid (Aβ) therapy has been the focus of the field for the past 25 years, but due to a lack of clinical efficacy, scientists have recently focused on anti-tau treatments (Vaz & Silvestre, 2020). Currently, there are studies targeting kinases involved in tau phosphorylation, such as glycogen synthase kinase-3 beta (GSK-3β) (Lauretti et al., 2020). Several drugs, including lithium, a treatment for bipolar disorder, have been considered for their ability to inhibit GSK-3β (Carmassi et al., 2016). The prevention of tau aggregation is also being researched using Methylthioninium Blue (also called Methylene Blue) or its derivatives, which have been shown to prevent aggregation in vitro (Wischik et al., 2015). There are also studies on tau clearance, which removes abnormal tau through immunotherapy; this approach is still early in the clinical process and requires more time to be tested (Bittar et al., 2020). However, all of these therapies remain in phase II or III clinical trials. Hopefully, some of them will be successful and bring us closer to understanding and defeating this mysterious disease.

References

Barbier, P., Zejneli, O., Martinho, M., Lasorsa, A., Belle, V., Smet-Nocca, C., Tsvetkov, P. O., Devred, F., & Landrieu, I. (2019). Role of tau as a microtubule-associated protein: Structural and functional aspects. Frontiers in Aging Neuroscience, 11, 204. https://doi.org/10.3389/fnagi.2019.00204

Bittar, A., Bhatt, N., & Kayed, R. (2020). Advances and considerations in AD tau-targeted immunotherapy. Neurobiology of Disease, 134, 104707. https://doi.org/10.1016/j.nbd.2019.104707

Braak, H., & Braak, E. (1995). Staging of Alzheimer's disease-related neurofibrillary changes. Neurobiology of Aging, 16(3), 271–278; discussion 278–284.

Braak, H., & Del Tredici, K. (2014). Are cases with tau pathology occurring in the absence of Abeta deposits part of the AD-related pathological process? Acta Neuropathologica, 128(6), 767–772. https://doi.org/10.1007/s00401-014-1356-1

Carmassi, C., Del Grande, C., Gesi, C., Musetti, L., & Dell'Osso, L. (2016). A new look at an old drug: Neuroprotective effects and therapeutic potentials of lithium salts. Neuropsychiatric Disease and Treatment, 12, 1687–1703. https://doi.org/10.2147/NDT.S106479

Chen, S., Li, B., Grundke-Iqbal, I., & Iqbal, K. (2008). I1PP2A affects tau phosphorylation via association with the catalytic subunit of protein phosphatase 2A. Journal of Biological Chemistry, 283(16), 10513–10521. https://doi.org/10.1074/jbc.M709852200

Drubin, D. G., & Kirschner, M. W. (1986). Tau protein function in living cells. Journal of Cell Biology, 103(6 Pt 2), 2739–2746. https://doi.org/10.1083/jcb.103.6.2739

Fitzpatrick, A., Falcon, B., He, S., Murzin, A. G., Murshudov, G., Garringer, H. J., Crowther, R. A., Ghetti, B., Goedert, M., & Scheres, S. (2017). Cryo-EM structures of tau filaments from Alzheimer's disease. Nature, 547(7662), 185–190. https://doi.org/10.1038/nature23002

Goedert, M., Eisenberg, D. S., & Crowther, R. A. (2017). Propagation of tau aggregates and neurodegeneration. Annual Review of Neuroscience, 40, 189–210. https://doi.org/10.1146/annurev-neuro-072116-031153

Goedert, M., Spillantini, M. G., Jakes, R., Rutherford, D., & Crowther, R. A. (1989). Multiple isoforms of human microtubule-associated protein tau: Sequences and localization in neurofibrillary tangles of Alzheimer's disease. Neuron, 3(4), 519–526. https://doi.org/10.1016/0896-6273(89)90210-9

Goedert, M., Spillantini, M. G., Potier, M. C., Ulrich, J., & Crowther, R. A. (1989). Cloning and sequencing of the cDNA encoding an isoform of microtubule-associated protein tau containing four tandem repeats: Differential expression of tau protein mRNAs in human brain. EMBO Journal, 8(2), 393–399.

Gu, Y., Oyama, F., & Ihara, Y. (1996). Tau is widely expressed in rat tissues. Journal of Neurochemistry, 67(3), 1235–1244. https://doi.org/10.1046/j.1471-4159.1996.67031235.x

Guo, J. L., & Lee, V. M. (2011). Seeding of normal tau by pathological tau conformers drives pathogenesis of Alzheimer-like tangles. Journal of Biological Chemistry, 286(17), 15317–15331. https://doi.org/10.1074/jbc.M110.209296

Harada, A., Oguchi, K., Okabe, S., et al. (1994). Altered microtubule organization in small-calibre axons of mice lacking tau protein. Nature, 369, 488–491. https://doi.org/10.1038/369488a0

Hebert, L. E., Scherr, P. A., Bienias, J. L., Bennett, D. A., & Evans, D. A. (2003). Alzheimer disease in the US population: Prevalence estimates using the 2000 census. Archives of Neurology, 60(8), 1119–1122. https://doi.org/10.1001/archneur.60.8.1119

Hutton, M., Lendon, C. L., Rizzu, P., Baker, M., Froelich, S., Houlden, H., Pickering-Brown, S., Chakraverty, S., Isaacs, A., Grover, A., Hackett, J., Adamson, J., Lincoln, S., Dickson, D., Davies, P., Petersen, R. C., Stevens, M., de Graaff, E., Wauters, E., van Baren, J., … Heutink, P. (1998). Association of missense and 5'-splice-site mutations in tau with the inherited dementia FTDP-17. Nature, 393(6686), 702–705. https://doi.org/10.1038/31508

Hyman, B. T., Phelps, C. H., Beach, T. G., Bigio, E. H., Cairns, N. J., Carrillo, M. C., … Montine, T. J. (2012). National Institute on Aging-Alzheimer's Association guidelines for the neuropathologic assessment of Alzheimer's disease. Alzheimer's & Dementia, 8(1), 1–13. https://doi.org/10.1016/j.jalz.2011.10.007

Kametani, F., & Hasegawa, M. (2018). Reconsideration of amyloid hypothesis and tau hypothesis in Alzheimer's disease. Frontiers in Neuroscience, 12, 25. https://doi.org/10.3389/fnins.2018.00025

Lauretti, E., Dincer, O., & Praticò, D. (2020). Glycogen synthase kinase-3 signaling in Alzheimer's disease. Biochimica et Biophysica Acta: Molecular Cell Research, 1867(5), 118664. https://doi.org/10.1016/j.bbamcr.2020.118664

Leroy, K., Ando, K., Laporte, V., Dedecker, R., Suain, V., Authelet, M., Héraud, C., Pierrot, N., Yilmaz, Z., Octave, J. N., & Brion, J. P. (2012). Lack of tau proteins rescues neuronal cell death and decreases amyloidogenic processing of APP in APP/PS1 mice. The American Journal of Pathology, 181(6), 1928–1940. https://doi.org/10.1016/j.ajpath.2012.08.012

Meraz-Rios, M. A., Lira-De Leon, K. I., Campos-Pena, V., De Anda-Hernandez, M. A., & Mena-Lopez, R. (2010). Tau oligomers and aggregation in Alzheimer's disease. Journal of Neurochemistry, 112(6), 1353–1367. https://doi.org/10.1111/j.1471-4159.2009.06511.x

Mietelska-Porowska, A., Wasik, U., Goras, M., Filipek, A., & Niewiadomska, G. (2014). Tau protein modifications and interactions: Their role in function and dysfunction. International Journal of Molecular Sciences, 15(3), 4671–4713. https://doi.org/10.3390/ijms15034671

Noble, W., Planel, E., Zehr, C., Olm, V., Meyerson, J., Suleman, F., Gaynor, K., Wang, L., LaFrancois, J., Feinstein, B., Burns, M., Krishnamurthy, P., Wen, Y., Bhat, R., Lewis, J., Dickson, D., & Duff, K. (2005). Inhibition of glycogen synthase kinase-3 by lithium correlates with reduced tauopathy and degeneration in vivo. Proceedings of the National Academy of Sciences of the United States of America, 102(19), 6990–6995. https://doi.org/10.1073/pnas.0500466102

Rodrigue, K. M., Kennedy, K. M., & Park, D. C. (2009). Beta-amyloid deposition and the aging brain. Neuropsychology Review, 19(4), 436–450. https://doi.org/10.1007/s11065-009-9118-x

Serrano-Pozo, A., Frosch, M. P., Masliah, E., & Hyman, B. T. (2011). Neuropathological alterations in Alzheimer disease. Cold Spring Harbor Perspectives in Medicine, 1(1), a006189. https://doi.org/10.1101/cshperspect.a006189

Sharma, V. M., Litersky, J. M., Bhaskar, K., & Lee, G. (2007). Tau impacts on growth-factor-stimulated actin remodeling. Journal of Cell Science, 120(Pt 5), 748–757. https://doi.org/10.1242/jcs.03378

Vaz, M., & Silvestre, S. (2020). Alzheimer's disease: Recent treatment strategies. European Journal of Pharmacology, 887, 173554. https://doi.org/10.1016/j.ejphar.2020.173554

Wischik, C. M., Staff, R. T., Wischik, D. J., Bentham, P., Murray, A. D., Storey, J. M., Kook, K. A., & Harrington, C. R. (2015). Tau aggregation inhibitor therapy: An exploratory phase 2 study in mild or moderate Alzheimer's disease. Journal of Alzheimer's Disease, 44(2), 705–720. https://doi.org/10.3233/JAD-142874



Machine Learning in Freeway Ramp Metering

BY JAKE TWAROG '24

Cover Image: A ramp meter on the Sylvan westbound entrance to US-26 in Portland, Oregon. There is another meter, not pictured, on the other side. "STOP HERE ON RED" is illuminated when the ramp meter is active. Image Source: Wikimedia Commons

Introduction

Despite their purpose as a means of mass transportation, freeways are known for the antithesis of that goal: traffic. They are susceptible to interruption and delay, which has a cascading effect and can keep congestion lingering long after its initial source has been resolved. Traffic is difficult to predict, even with new, modernized equipment for measuring network flow, such as dedicated probe vehicles and smartphones (Zhang, 2015). In addition, after traffic becomes dense, it is hard to return flow to stable levels. Traffic generated by highway entrances, called "on-ramps," can be especially problematic at peak hours. On-ramps, particularly when their access is controlled by a traffic light, send large groups of cars known as "platoons" onto the highway. These platoons interfere with the ongoing flow of the freeway, causing significant jams. One technique that has been used to combat this is called ramp metering.

Ramp metering was introduced in the United States in 1963 on I-290 in Chicago and has spread outward since then to other major urban areas (Yang, 2019). Despite being similar in appearance to traffic lights, ramp meters functionally act quite differently; they often lack yellow lights, and when there is more than one lane, each lane gets its own light. Although often frustrating for drivers due to the addition of another light to a freeway commute, ramp metering is proven to help reduce on-ramp traffic by breaking up platoons. In the presence of either a reduction in flow or a bottleneck, reducing the number of vehicles that can access the freeway at once significantly helps reduce their impacts (Haboian, 1995). This paper will discuss the algorithms behind ramp metering and how they can improve its implementation and use.

Fixed Parameter Algorithms

To maximize the efficiency of on-ramp flow, the use of ramp meters must be tightly optimized to avoid delays when traffic on the freeway is light. Many different types of algorithms have been used to determine both when a ramp meter's active periods should be and what its light timings should be. Two existing strategies for ramp metering are known as the RWS strategy and the ALINEA strategy.

Image 1: Ramp metering is used internationally in addition to in the U.S. Here, an on-ramp in Auckland, New Zealand sports a two-lane ramp metering system. Image Source: Wikimedia Commons

The RWS strategy is a simple strategy that computes the number of vehicles allowed to enter the freeway at each time step k as a flow rate, r(k). Metering systems collect data about traffic flow through induction loops integrated into the freeway and the on-ramps themselves, which each of the strategies can use. If the critical capacity has not yet been exceeded, the allowed flow is the difference between the last measured upstream freeway flow and the downstream critical capacity. A certain minimum ramp flow is specified so that the on-ramp is never completely halted. The downside to this approach is that minor disturbances in traffic, which are often hard to correct for, can heavily skew the data. For this reason, a smoothed version of the upstream flow is used instead of the raw values (Knoop, 2018).

The ALINEA strategy focuses more on the downstream conditions than the upstream ones (Knoop, 2018). It attempts to maintain a specific downstream quantity, such as traffic flow or occupancy, at a set-point value, ô. Variations of this algorithm incorporate traffic density or other factors to avoid downstream bottlenecks (Knoop, 2018). ALINEA is one of the more effective methods at preventing freeway slowdowns but can run into difficulties keeping queues short on on-ramps (Ghanbartehrani, 2020). Ultimately, both of these strategies on their own fall somewhat short due to their use of fixed parameters, which need to be set at a specific, distinct value for each on-ramp. It is difficult and intensive for engineers to calculate what would be most optimal for each ramp, costing money and time. Fixed parameters also do not correct for all edge cases. For these reasons, adaptive methods prove to be more effective in many scenarios.


"It is difficult and intensive for engineers to calculate what would be most optimal for each ramp, costing lots of money and time."

The ALINEA strategy focuses more on downstream conditions than upstream ones (Knoop, 2018). It attempts to hold a measured downstream traffic quantity at a target value, ô. Variations of this algorithm incorporate traffic density or other factors to avoid downstream bottlenecks (Knoop, 2018). ALINEA is one of the more effective methods at preventing freeway slowdowns but can have difficulty keeping queues short on on-ramps (Ghanbartehrani, 2020). Ultimately, both of these strategies fall somewhat short on their own because they use fixed parameters, which must be set to a specific, distinct value for each on-ramp. It is difficult and labor-intensive for engineers to calculate the optimum for each ramp, costing a great deal of money and time, and fixed parameters do not correct for all edge cases. For these reasons, adaptive methods prove more effective in many scenarios.
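For reference, the widely cited form of the ALINEA feedback update adjusts the previous metering rate in proportion to the gap between a downstream measurement (classically, occupancy) and its setpoint. The gain and rate bounds below are illustrative assumptions, and they are exactly the kind of site-specific fixed parameters whose tuning burden is discussed above:

# Classic ALINEA-style feedback update: nudge the metering rate toward a
# downstream setpoint. K_R and the rate bounds are assumed example values.

def alinea_rate(prev_rate, setpoint, measured, K_R=70.0,
                r_min=200.0, r_max=1800.0):
    """Return the next metering rate in veh/h (all constants are illustrative)."""
    rate = prev_rate + K_R * (setpoint - measured)
    return min(max(rate, r_min), r_max)  # clamp to feasible ramp flows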

Machine Learning

"Freeway development has historically displaced minorities and underrepresented groups, tearing through parts of cities that once housed marginalized communities."

An intelligent algorithm could prove to be a good solution for the drawbacks of fixed parameters. It would also be far more cost-effective and applicable to a wide range of scenarios, without having to be finely tuned for the dramatically different conditions on various freeways. A team of researchers headed by Saeed Ghanbartehrani and Anahita Sanandaji of Ohio University proposed using real historical data to train an algorithm that factors in unpredictable situations when dictating the timings for ramp meters. Their methodology focused on four main machine learning modules: data refinement and selection; creating a regression; clustering; and creating a ramp metering algorithm (Ghanbartehrani, 2020). One section of the I-205 freeway, an auxiliary Interstate in Oregon, was used to train the algorithm. They gathered data throughout the week on the number of vehicles entering the ramp at five-minute intervals, which formed a pattern with two peaks on weekdays (morning and evening rush hours) and one peak in the afternoon on weekends. In the regression stage, the team created a model to predict the volume of traffic, Vol(t*), from Time(t), Occupancy(t), Speed(t), and Vol(t). This model consistently mirrored the actual data with considerable accuracy, making it a good choice for the algorithm (Ghanbartehrani, 2020). For the clustering step, two k-means clustering approaches were used. The first clustered on time and Δvol/Δt to identify traffic phases, which helped the algorithm compare the data it detected with a baseline and adjust for anomalies like accidents. The second clustered traffic type and the rate at which traffic was expected to change, and so was based only on the change in volume with respect to time; this allowed the algorithm to select the correct values and model for the specific traffic state (Ghanbartehrani, 2020).
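A hedged sketch of the clustering step, using scikit-learn's k-means on time of day and the rate of change of volume; the feature construction and the number of clusters are assumptions, and the authors' exact pipeline may differ (Ghanbartehrani, 2020):

# Sketch of phase identification via k-means on (time, dvol/dt).
import numpy as np
from sklearn.cluster import KMeans

def label_traffic_phases(time_of_day, volume, n_phases=4):
    """Cluster five-minute observations into traffic phases from (time, dvol/dt)."""
    dvol_dt = np.gradient(volume)                 # rate of change of volume
    features = np.column_stack([time_of_day, dvol_dt])
    return KMeans(n_clusters=n_phases, n_init=10).fit_predict(features)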


Comparing the results with a standard ALINEA scenario showed that ALINEA generated 8% more red lights than the proposed algorithm (Ghanbartehrani, 2020). The proposed algorithm also reduced the maximum ramp queue length significantly, although the average length was similar and the two had comparable overall flow. These results are promising, as they show that even relatively unsophisticated machine learning techniques can compete with ALINEA. The algorithm can also be deployed far more inexpensively, as it requires only a few weeks of traffic data. Having a short data collection period is a large advantage when constructing ramp meters: Xiaobo Ma et al. (2020) studied the challenges that transportation departments face in collecting data under standard conditions and argued that six to eight weeks of data are often needed. This is often infeasible for fixed ramp metering techniques due to budget or time constraints, which forces engineers to rely on less accurate data, decreasing ramp metering efficiency (Ma et al., 2020).

Other Adaptive Algorithms

Another adaptive technique was studied by Kwangho Kim and Michael J. Cassidy of UC Berkeley, who proposed a ramp metering strategy based on kinematic wave theory, modeling traffic as a wave. They asserted that four main effects, occurring in sequence, cause slowdowns when platoons of vehicles enter the freeway, and that these effects can be exploited to minimize overall traffic. The first is the "pinch effect," in which merging and diverging maneuvers near a ramp form a jam. This jam then propagates upstream like a wave and can cause another, even more restrictive jam if it encounters an upstream interchange. This triggers the "catch effect," as the bottleneck becomes "caught" at the new location. The new jam continues to propagate upstream slowly while the original jam dissipates, creating an expanding pocket of free flow between them. This pocket has positive consequences, dubbed the "driver memory" effect and the "pumping" effect, allowing higher ramp inflow within the pocket as drivers adopt shorter headways (Kim & Cassidy, 2012). To exploit these positive effects, the researchers tested an unconventional metering logic that intentionally allowed traffic to slow substantially but then maximized the recovery period created by the driver memory and pumping effects, allowing the pocket to persist for as long as possible. The results of this study showed a 3% gain over other alternatives in long-run discharge flow, which saved up to 300 vehicle-hours of travel for commuters in the study environment alone. It also reduced on-ramp queues. The one downside relative to other adaptive algorithms is that it could inhibit access to upstream off-ramps, since upstream flow is intentionally reduced for a duration of time (Kim & Cassidy, 2012).

Conclusion

Despite the positive effects introduced by the new techniques, a consistent factor limiting the effectiveness of ramp metering is queue length. Geometric restrictions imposed by geography and existing infrastructure present significant challenges, primarily because ramp meters are often installed on existing ramps that were rarely designed for them. Because these ramps are frequently not long enough, designers must sacrifice either queue space or post-meter acceleration distance (Yang, 2019). However, there are ways to optimize for this, such as introducing an area-wide system instead of isolated ramp meters (Perrine, 2015). In addition, freeways have significant adverse effects on nearby communities. Freeway development has historically displaced minorities and underrepresented groups, tearing through parts of cities that once housed marginalized communities. Living in proximity to urban freeways has been linked to reduced socioeconomic status and even adverse birth outcomes (Genereux, 2008). Ramp metering, as a primary strategy for increasing the number and density of cars on freeways, exacerbates their environmental and societal impacts. While increasing traffic flow is good for commuters who use a car, reducing the number of cars on the road through improved public transportation is even more effective, though it requires disincentivizing car use (Wiersma, 2017). Overall, integrating modernized techniques such as machine learning into ramp metering can increase the effectiveness of an already useful strategy. While not without drawbacks, mitigating traffic on freeways can help make them more efficient.

Image 2: Los Angeles, Route 101. Los Angeles is known for its extensive network of freeways around and within the city. Unfortunately, the construction of freeways heavily negatively impacted minority communities in the city, much like most others in the United States. Image Source: Wikimedia Commons

References

Genereux, M., Auger, N., Goneau, M., & Daniel, M. (2008). Neighbourhood socioeconomic status, maternal education and adverse birth outcomes among mothers living near highways. Journal of Epidemiology & Community Health, 62(8), 695-700. https://doi.org/10.1136/jech.2007.066167

Ghanbartehrani, S., Sanandaji, A., Mokhtari, Z., & Tajik, K. (2020). A novel ramp metering approach based on machine learning and historical data. Machine Learning and Knowledge Extraction, 2(4), 379-396. https://doi.org/10.3390/make2040021

Haboian, K. A. (1995). A case for freeway mainline metering. Transportation Research Record, (1494), 11-20. https://onlinepubs.trb.org/Onlinepubs/trr/1995/1494/1494-002.pdf

Kim, K., & Cassidy, M. J. (2012). A capacity-increasing mechanism in freeway traffic. Transportation Research Part B: Methodological, 46(9), 1260-1272. https://doi.org/10.1016/j.trb.2012.06.002

Knoop, V. L. (2018). Ramp metering with real-time estimation of parameters. In 2018 IEEE Intelligent Transportation Systems Conference: November 4-7, Maui, Hawaii (pp. 3619-3626). IEEE.

Ma, X., Karimpour, A., & Wu, Y.-J. (2020). Statistical evaluation of data requirement for ramp metering performance assessment. Transportation Research Part A: Policy and Practice, 141, 248-261. https://doi.org/10.1016/j.tra.2020.09.011

Perrine, K. A., Lao, Y., Wang, J., & Wang, Y. (2015). Area-wide ramp metering for targeted incidents: The additive increase, multiplicative decrease method. Journal of Computing in Civil Engineering, 29(2), 04014038. https://doi.org/10.1061/(asce)cp.1943-5487.0000321

Wiersma, J., Bertolini, L., & Straatemeier, T. (2017). Adapting spatial conditions to reduce car dependency in mid-sized 'post growth' European city regions: The case of South Limburg, Netherlands. Transport Policy, 55, 62-69. https://doi.org/10.1016/j.tranpol.2016.12.004

Yang, G., Tian, Z., Wang, Z., Xu, H., & Yue, R. (2019). Impact of on-ramp traffic flow arrival profile on queue length at metered on-ramps. Journal of Transportation Engineering, Part A: Systems, 145(2), 04018087. https://doi.org/10.1061/jtepbs.0000211

Zhang, L., & Mao, X. (2015). Vehicle density estimation of freeway traffic with unknown boundary demand-supply: An interacting multiple model approach. IET Control Theory & Applications, 9(13), 1989-1995. https://doi.org/10.1049/iet-cta.2014.1251



Radiation Therapy and the Effects of This Type of Cancer Treatment BY JEAN YUAN '25 Cover Image: This specific machine uses external beam radiation therapy in which the radiation oncologist can target a certain area of the body with radiation and damage the cells in that area. Image Source: Flickr


What is Radiation Therapy?

Radiation oncology is the field of medicine in which doctors study and treat cancer with various methods of radiation therapy. Cancer cells, abnormal cells that are no longer responsive to the body's signals controlling cellular growth and death, divide without control, leading to the invasion of nearby tissues (O'Connor & Adams, 2010). Radiation therapy targets these cancer cells with regulated doses of high-energy radiation, with the goal of killing them while minimizing the radiation dose to normal cells, thereby preserving the organs and other healthy tissues. The therapy damages the DNA of cancer cells by ionization, causing the cells to lose the ability to reproduce and eventually die (Baskar et al., 2012). Ionization occurs when radiation deposits enough energy in a part of the body to eject electrons from atoms or molecules, ultimately damaging cells in that area. However, this kind of therapy also affects the division of cells in normal tissues. Damage to normal cells can cause unwanted side effects, such as radiation sickness (nausea, vomiting, diarrhea, etc.) and secondary cancers/malignancies. A radiation oncologist is responsible for finding the right balance between destroying cancer cells and minimizing damage to normal cells.

Radiation therapy does not kill cancer cells immediately. A treatment session takes only ten to fifteen minutes, but it can take up to eight weeks to see results, and cancer cells continue to die for months after treatment (Baskar et al., 2012; National Cancer Institute, n.d.).

Types of Radiation Therapy

There are two types of radiation therapy: external beam radiation therapy and internal beam radiation therapy (Baskar et al., 2012; National Cancer Institute, n.d.). The type administered depends on several factors: the type of cancer, the location of the cancerous tumor, the health and medical history of the patient, the patient's previous treatments, and the patient's age (National Cancer Institute, n.d.). In external beam radiation therapy, a machine administers radiation to the affected area, sending beams from various directions to target the cancerous tumor. This type of radiation therapy is a spot treatment and only treats a specific part of the body (Washington & Leaver, 2015).


It is commonly used on cancers of the breast, lung, prostate, colon, head, and neck. There are many types of external beam radiation therapy (Schulz-Ertner & Jäkel, 2006), and each relies on a computer to analyze the tumor in order to create an accurate treatment program. They include 3-D conformal radiation therapy (3DCRT), intensity-modulated radiation therapy (IMRT), image-guided radiation therapy (IGRT), and stereotactic body radiation therapy. 3-D conformal radiation therapy shapes the radiation beams to match the contours of the tumor, allowing accurate targeting with less room for error and limiting the tumor's growth. Intensity-modulated radiation therapy uses linear accelerators to deliver high-energy beams with pinpoint accuracy, precisely distributing radiation to the cancerous tumor while trying to prevent damage to healthy tissue (Baskar et al., 2012). Image-guided radiation therapy is used when the area to be treated is in close proximity to a critical structure, such as a moving organ; it takes images and scans of the tumor to help doctors position the radiation beams in the correct spot. Lastly, stereotactic radiation therapy uses 3-D imaging to focus high-dose radiation beams on the cancerous tumor while trying to preserve as much healthy tissue as possible. Because of the high dose, some surrounding tissue will be damaged, but the overall amount of healthy tissue damaged is reduced. This type has shown positive results in treating early-stage lung cancer in patients unfit for surgery (Baskar et al., 2012; Sadeghi et al., 2010; Xing et al., 2005; National Cancer Institute, n.d.).

Internal radiation therapy, more commonly known as brachytherapy, is more invasive: an oncologist places radioactive material in or in close proximity to the tumor. This implant emits a high dose of radiation directly at the tumor while keeping the nearby tissue comparatively safe. The benefit of brachytherapy is the ability to deliver high doses of radiation to the tumor without damaging the surrounding healthy tissues.


This method harms as few healthy cells as possible (Washington & Leaver, 2015; National Cancer Institute, n.d.). If an implant is delivered, the type of implant depends on the type of cancer; if a permanent implant is administered, its radioactivity fades over time. Cancers it is used to treat include prostate, gynecological, and breast cancers (Sadeghi et al., 2010; National Cancer Institute, n.d.).

Dosage

Since normal cells are also exposed to radiation during treatment of the cancerous cells, determining the maximum dose that can safely be administered is very important. Multiple smaller doses of radiation are less damaging to normal cells than a single dose equivalent to the same total (Xing et al., 2005; National Cancer Institute, n.d.). The therapeutic ratio, which compares the concentration at which a drug becomes toxic with the concentration at which it is effective, becomes an important parameter when weighing the probability of complications in healthy tissue against the probability of controlling the tumor (Tamarago et al., 2015). A key factor in the development, safety, and optimization of a treatment is ensuring a balance between the toxic and effective concentrations, which is why the therapeutic ratio is an integral part of radiation therapy (a standard formulation is sketched at the end of this section). There must also be a balance between the dose prescribed, the tumor volume, and the organ-at-risk (OAR) tolerance levels, an approach called isotoxic dose prescription (IDP) (Zindler et al., 2018). IDP allows the prescribed radiation dose to strike a predefined balance between normal tissue complication probability (estimates of the dose tolerance of specific organs and tissues) and the probability of controlling the tumor (Washington & Leaver, 2015; Zindler et al., 2018). The goal of the radiation therapy plays a large role in what kind of therapy is needed. When only a small dose is needed, usually to ease cancer symptoms, the treatment is defined as palliative care, in which the goal is to prevent or treat symptoms or side effects of cancer as early as possible in order to improve quality of life and reduce pain (National Cancer Institute, n.d.). Too much radiation can cause unwanted side effects because radiation can kill nearby healthy cells. There is a limit to how much radiation a body can receive: usually about 400 millisieverts over a lifetime and about 20 millisieverts annually (National Cancer Institute, n.d.; Hall et al., 2006).

"There are two types of radiation therapy: external beam radiation therapy and internal beam radiation therapy."



Any more than the recommended amount of radiation can lead to long-term problems, such as a secondary cancer: one that either originated from the first cancer and metastasized to another location in the body, or a second type of cancer caused by the treatment of the first (the more common case with radiation therapy). Extreme amounts of radiation can even lead to death. The quantity of radiation can be determined by the concentration of radiation photons and the energy of the individual photons. If one area of the body has received the limit of radiation, another part of the body can only be treated with radiation therapy if it is far enough from the previous area (Sadeghi et al., 2010; National Cancer Institute, n.d.).
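To make this balance quantitative, two standard formulations are often used; these are textbook conventions rather than expressions taken from the sources above. The therapeutic index compares the median toxic and effective doses, and the linear-quadratic biologically effective dose captures why many small fractions spare normal tissue relative to one large dose of the same total:

\[ \mathrm{TI} = \frac{\mathrm{TD}_{50}}{\mathrm{ED}_{50}}, \qquad \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right) \]

Here n is the number of fractions, d is the dose per fraction, and α/β is a tissue-specific constant. Late-responding healthy tissue typically has a low α/β, so splitting the same total dose n·d into smaller fractions lowers the biological effect on normal tissue more than on the tumor.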

"Radiation therapy can be used as the only treatment for certain cancers, or it may be used in conjunction with chemotherapeutic agents and/or surgery."

Dosage is more of a concern in internal radiation therapy, which has two types of treatment: high-dose rate (HDR) brachytherapy and low-dose rate (LDR) brachytherapy (Xing et al., 2005; National Cancer Institute, n.d.). In HDR brachytherapy, the patient is treated for several minutes at a time with a powerful radiation source inserted in the body. The source is removed after ten to twenty minutes, and the treatment can be repeated a few times per day or once a day over a course of a few weeks, depending on the type of cancer and the treatment needed. In LDR brachytherapy, an implant is left in for one to a few days, during which it emits lower doses of radiation over a longer period, and is then removed; patients usually stay in the hospital while being treated. If permanent implants are needed, they are harmless once the radioactivity has decayed, and there is no need to take them out (Sadeghi et al., 2010; National Cancer Institute, n.d.).

Radiation Therapy Alongside Other Cancer Treatments

Radiation therapy can be used as the only treatment for certain cancers, or it may be used in conjunction with chemotherapeutic agents and/or surgery (Baskar et al., 2012; National Cancer Institute, n.d.; American Society of Clinical Oncology, n.d.). The point at which radiation therapy is administered depends on the type of cancer and on whether the therapy is meant to ease or to treat the cancer symptoms. About 50% of all cancer patients receive radiation therapy (Baskar et al., 2012). Medical oncologists and radiation oncologists work together to combine chemotherapy and radiation therapy into a treatment plan in which chemotherapy weakens the cancer cells so that radiation therapy can work effectively (Peters et al., 2000). Radiation may also be given before surgery to shrink the cancer, which can make the tumor easier to remove during the procedure (National Cancer Institute, n.d.). Intraoperative radiation therapy, which combines surgery and radiation, administers radiation to the tumor during surgery using either an external or internal radiation beam; it allows surgeons to move healthy tissue out of the way before administering the radiation, preserving vital organs (Khaira et al., 2009).

Side Effects of Radiation Therapy

Side effects of radiation therapy depend on the type of cancer, the location of the cancer, the radiation therapy dosage, and the patient's general health.

Image 2: An image of the machine that can execute external beam radiation therapy. This machine uses beams from three types of particles: photons, protons, and electrons (Baskar et al., 2012; National Cancer Institute, n.d.). Photon beams "do not stop" until they reach the cancerous tumor, travelling through normal tissue as well. Proton particles can also reach deep tumors, but they "do not scatter radiation" as photon beams do (National Cancer Institute, n.d.). However, these machines carry a higher price, making their use very limited. Electron particles cannot travel as deep as photon and proton beams and are mainly used for surface tumors (Schulz-Ertner et al., 2006; Washington & Leaver, 2015; National Cancer Institute, n.d.). Image Source: Flickr



Image 3: This is an image of Brachytherapy in the prostate. Radioactive seeds, or pellet, are implanted into the prostate gland in order to kill the prostate cancer. These seeds are able to emit high or low amounts of radiation depending on the cancer. Image Source: Wikimedia Commons

Common site-specific side effects of radiation therapy on the head and neck include nausea, mouth sores, hair loss, tooth decay, and difficulty swallowing (Coia & Tepper, 1995; American Society of Clinical Oncology, n.d.). Side effects of radiation therapy on the chest include difficulty swallowing, shortness of breath, nipple soreness, shoulder stiffness, cough, fever, and radiation fibrosis (Gross, 1977; American Society of Clinical Oncology, n.d.). Radiation fibrosis is caused by permanent lung scarring from untreated radiation pneumonitis, an inflammation of the lung due to radiation therapy aimed at the chest. Radiation pneumonitis is seen in around 5 to 15 percent of patients with breast cancer, lung cancer, or mediastinal tumors (Berkey, 2010; American Society of Clinical Oncology, n.d.). Side effects of radiation therapy on the stomach and abdomen include loss of appetite, nausea, vomiting, bowel cramping, and diarrhea (Coia & Tepper, 1995; American Society of Clinical Oncology, n.d.). These side effects make it difficult to eat, which is detrimental to the healing process since the body uses a great deal of energy during radiation therapy; patients need to consume a higher-than-normal amount of calories and protein to maintain body weight during therapy. Side effects of radiation therapy on the pelvis include diarrhea, rectal bleeding, loss of bladder control, bladder irritation, sexual problems, and lowered sperm count or changes in menstruation (Berkey, 2010; Coia & Tepper, 1995; American Society of Clinical Oncology, n.d.). These reactions begin around the second or third week of treatment and can last several weeks after treatment ends. Palliative care is used here to ease these symptoms (Baskar et al., 2012; American Society of Clinical Oncology, n.d.).

Common physical side effects of radiation therapy include skin changes and fatigue (Berkey, 2010; American Society of Clinical Oncology, n.d.). Skin changes may include dryness, itching, and blisters, depending on which part of the body received the therapy (radiation dermatitis); this is a common side effect of radiation therapy to the breast, prostate, head, and neck. These side effects usually decrease a few weeks after treatment ends. However, some effects, such as a second cancer, become long-term effects: a second cancer is a new cancer that develops due to the treatment, usually around 10 to 15 years after the initial treatment. Common emotional side effects include coping with uncertainty, stress, anger, anxiety, depression, fear, and guilt; their prevalence ranges from 0 to 60 percent, depending on the population studied (Berkey, 2010). Side effects vary depending on the person and the type of cancer, which is why it is important to stay alert when receiving treatment and to be in touch with a healthcare team to watch for these side effects. These side effects arise because the radiation damages not only cancer cells but healthy cells as well. It is also important to stay in touch with a dietitian to keep caloric and protein intake up and maintain weight (Berkey, 2010; Peters et al., 2000; National Cancer Institute, n.d.; American Society of Clinical Oncology, n.d.).



References

Baskar, R., Lee, A. K., Yeo, R., & Yeoh, K. W. (2012, February 27). Cancer and radiation therapy: Current advances and future directions. International Journal of Medical Sciences. Retrieved from https://www.medsci.org/v09p0193.htm

Berkey, F. J. (2010, August 15). Managing the adverse effects of radiation therapy. American Family Physician, 82(4), 381-388. Retrieved from https://www.aafp.org/afp/2010/0815/p381.html

Brachytherapy to treat cancer. (n.d.). National Cancer Institute. Retrieved from https://www.cancer.gov/about-cancer/treatment/types/radiation-therapy/brachytherapy

Coia, L. R., & Tepper, J. E. (1995, March 30). Late effects of radiation therapy on the gastrointestinal tract. International Journal of Radiation Oncology Biology Physics, 31(5), 1213-1236. Retrieved from https://www.redjournal.org/article/0360-3016(94)00419-L/pdf

Curative radiation therapy. (n.d.). National Cancer Institute. Retrieved from https://training.seer.cancer.gov/treatment/radiation/therapy.html

External beam radiation therapy for cancer. (n.d.). National Cancer Institute. Retrieved from https://www.cancer.gov/about-cancer/treatment/types/radiation-therapy/externalbeam

Gross, N. J. (1977, January 1). Pulmonary effects of radiation therapy. Annals of Internal Medicine, 1-12. https://doi.org/10.7326/0003-4819-86-1-81

Hall, J. D., Godwin, M., & Clarke, T. (2006, August 10). Lifetime exposure to radiation from imaging investigations. Official Publication of The College of Family Physicians of Canada. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1781500/

Khaira, M., Mutamba, A., Meligonis, G., Rose, G. E., Plowman, P. N., & O'Donnell, H. (2009, July 8). The use of radiotherapy for the treatment of localized orbital amyloidosis. Taylor & Francis Online, 27(6), 432-437. https://doi.org/10.1080/01676830802350216

O'Connor, C. M., & Adams, J. U. (2010). Essentials of Cell Biology. Cambridge, MA: NPG Education, 97-98. Retrieved from https://web.iitd.ac.in/~amittal/SBL101_Essentials_Cell_Biology.pdf

Peters, W. A. III, Liu, P. Y., Barrett, R. J. II, Stock, R. J., Monk, B. J., Berek, J. S., Souhami, L., Gringsby, P., Gordon, W. Jr., & Alberts, D. S. (2000). Concurrent chemotherapy and pelvic radiation therapy compared with pelvic radiation therapy alone as adjuvant therapy after radical surgery in high-risk early-stage cancer of the cervix. Journal of Clinical Oncology, 18(8), 1606-1613. Lippincott Williams and Wilkins. Retrieved from https://ascopubs.org/doi/abs/10.1200/JCO.2000.18.8.1606

Radiation therapy to treat cancer. (n.d.). National Cancer Institute. Retrieved from https://www.cancer.gov/about-cancer/treatment/types/radiation-therapy

Sadeghi, M., Enferadi, M., & Shirazi, A. (2010, November 29). External and internal radiation therapy: Past and future directions. Journal of Cancer Research and Therapeutics, 6(3), 239-248. Retrieved from https://www.cancerjournal.net/text.asp?2010/6/3/239/73324

Schulz-Ertner, D., Jäkel, O., & Schlegel, W. (2006, September 27). Radiation therapy with charged particles. Seminars in Radiation Oncology, 16(4). https://doi.org/10.1016/j.semradonc.2006.04.008

Side effects of radiation therapy. (2020, September). American Society of Clinical Oncology (ASCO), Cancer.Net. Retrieved from https://www.cancer.net/navigating-cancer-care/how-cancer-treated/radiation-therapy/side-effects-radiation-therapy

Tamarago, J., Le Heuzey, J. Y., & Mabo, P. (2015, April 15). Narrow therapeutic index drugs: A clinical pharmacological consideration to flecainide. European Journal of Clinical Pharmacology. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4412688/

Understanding radiation therapy. (2020, September). American Society of Clinical Oncology (ASCO), Cancer.Net. Retrieved from https://www.cancer.net/navigating-cancer-care/how-cancer-treated/radiation-therapy/side-effects-radiation-therapy

Washington, C. M., & Leaver, D. T. (2015). Principles and Practice of Radiation Therapy (4th ed.). Elsevier Mosby. 156, 293, 536. Retrieved December 2, 2021.



Xing, L., Thorndyke, B., Schreibmann, E., Yang, Y., Li, T. F., Kim, G. Y., Luxton, G., & Koong, A. (2005, December 21). Overview of image-guided radiation therapy. Medical Dosimetry, 1-22. https://doi.org/10.1016/j.meddos.2005.12.004

Zindler, J. D., Schiffelers, J., & Lambin, P. (2018, January 18). Improved effectiveness of stereotactic radiosurgery in large brain metastases by individualized isotoxic dose prescription: An in silico study. Strahlenther Onkol, 194, 560-569. https://doi.org/10.1007/s00066-018-1262-x



The Effects of Climate Change on Plant-Pollinator Communication BY KATE SINGER '24 Cover Image: European honeybee extracts nectar. Image Source: Wikimedia Commons


Introduction

It is common knowledge that the global climate is changing due to anthropogenic influences. Increased CO2 levels in the atmosphere will lead to an increase in the global average temperature (Ekwurzel et al., 2017), which will have many negative impacts on the environment and its inhabitants. The timing of the biological cycles of many species is often correlated with environmental cues that are being affected by climate change. When the cycles of two interacting species are correlated with different cues, there can be a misalignment of interconnected biological processes. This phenomenon of timing discrepancies between biological cycles is known as phenological mismatch; it is largely driven by climate change and affects species across taxa (Visser & Gienapp, 2019). Mismatched phenological processes can also impede interspecies communication, particularly communication between plants and pollinators. Plants normally communicate with their pollinators through modalities such as olfactory, visual, or electric signaling. These signals allow pollinators to locate the flower and its pollen stores (Sun et al., 2018), as well as determine whether the flower has previously been visited by another pollinator and has consequently been depleted of its resources (Clarke et al., 2013). The changing climate is also altering the visual and olfactory signals produced by flowers. While it is clear that these changes are occurring, their broader implications are still unknown.

Phenological Mismatch

The timing of many organisms' life history activities, or phenology, depends on environmental cues such as temperature or day length. Should these cues cease to be reliable due to changes in the climate, these processes will not occur at the optimal time and the overall fitness of the organism will decrease (McNamara et al., 2011). Hutchings et al. (2018) illustrate this decrease in plant and pollinator fitness due to phenological mismatch through observation of the spider orchid, Ophrys sphegodes, and the solitary mining bee, Andrena nigroaenea. Optimal pollination relies on male bees emerging from hibernation right before the orchids bloom, and females emerging after the bloom, because these orchids create a scent bouquet that mimics the mating pheromones of a female bee.


Image 1: Spider orchids. Image Source: Wikimedia Commons

Since the orchid blooms before the female bees emerge and give off the true mating pheromone, the males are attracted to the flowers, which allows for pollination. While the timings of orchid bloom and the emergence of male and female bees all depend on temperature, they are not equally affected by changes in it. For every 1°C increase in the average spring temperature, the date of emergence of female solitary mining bees advances by 15.6 days, the date of emergence of male bees advances by 9.2 days, and the date of orchid bloom advances by only 6.4 days (Robbirt et al., 2014). This means that warming temperatures have a greater effect on female bee emergence than they do on the date of flowering. If female bees emerge at the same time as or before the orchids bloom, the males are less likely to be deceived by the flower, and pollination rates decrease (Hutchings et al., 2018). This study shows how instances of temporal discrepancy have increased over time in correlation with the increasing global temperature, which has led to a decline in populations of spider orchids. The trend of phenological mismatch caused by climate change preventing plants from effectively communicating with their pollinators is seen in many other plant-pollinator relationships as well (Kudo, 2019; Thompson, 2010). This process can be detrimental to plant species with specialized pollinators, as with the spider orchid and the solitary mining bee, since such plants have no other pollinators to make up for the timing discrepancy if their primary pollinator is absent (Hutchings et al., 2018). The plants therefore experience low rates of pollination, which ultimately leads to a decrease in the populations of the plants, and potentially the pollinators, as a result.
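The size of the mismatch follows directly from the rates above; the small calculation below is an illustrative exercise using the per-degree advances reported by Robbirt et al. (2014), with an assumed example warming value, not code from any of the cited studies:

# Illustrative arithmetic from the per-degree advances in Robbirt et al. (2014).
warming_c = 1.0                      # assumed deg C increase in mean spring temperature
female_advance = 15.6 * warming_c    # days earlier that female bees emerge
male_advance = 9.2 * warming_c       # days earlier that male bees emerge
bloom_advance = 6.4 * warming_c      # days earlier that orchids bloom

# The deceptive-pollination window shrinks as females catch up to the bloom:
print(female_advance - bloom_advance)  # 9.2 days of relative shift per deg C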

Changes in Visual Signaling

Species not experiencing phenological mismatch will also encounter difficulties in communicating with their pollinators because of climate change. Temperature can affect flowers' visual displays, used to attract pollinators from far away, in multiple ways. Sullivan and Koski (2021) found that levels of anthocyanin, a floral pigment associated with blues, pinks, and purples, decreased in response to higher temperatures. As temperatures increase due to anthropogenic causes, many flower species will therefore be less pigmented and less attractive to their pollinators, which will likely lead to a decrease in pollination. Conversely, the team found that plants experiencing drought-like conditions, but not necessarily increased temperatures, will show higher levels of pigmentation. Since climate change will affect different regions in different ways, it is important to note that the effects described in this paper will not affect every species equally (Sullivan & Koski, 2021). This pattern of varied responses across flower species also appears in the paper by Koski et al. (2020) examining changes in the UV absorption of flowers in response to climate change. Flowers with exposed anthers, or pollen stores, have increased levels of UV-absorbing pigmentation when exposed to the increased UV levels caused by ozone depletion; this pigmentation absorbs the radiation in the petals so that it does not damage the exposed pollen. The opposite is true for flowers with anthers concealed by petals. In response to increased temperatures,

"This phenomenon of timing discrepancies of biological cycles is known as phenological mismatch."



"What is clear, however, is that specialized plant pollinator relationships are more at risk and declines in their populations because of climate change hindering communication and pollination are already being observed."

flowers with concealed anthers showed a decrease in pigmentation. For these flowers, it is important to reduce heat absorption, as the petals enclosing the anthers create a "micro-greenhouse" that can cause the pollen to experience heat damage (Koski et al., 2020). While the effects differ across taxa, climate change will alter the visual signals plants use to communicate with pollinators. In some species, flowers will become more conspicuous and therefore see increased pollination rates, while the opposite may be true for others.

Changes in Olfactory Signaling

Visual signals are not the only way in which flowers attract pollinators. Olfactory signals are used to attract pollinators that are close to the flower, and these signals are also being altered by the changing climate. Increased temperatures have been shown to decrease volatile emissions in some species of flower (Cna'ani et al., 2014), making them less attractive to pollinators. Conversely, other studies have shown that volatile emissions increase as temperatures rise, until a temperature threshold is reached and the plant begins experiencing major heat stress (Farré-Armengol et al., 2014). This temperature maximum is about 40°C, meaning only species in particularly warm climates will express this trend. Since these studies used different test species and obtained different results, one can conclude that the change in the release of volatile compounds will differ between species, as was the case with the UV pigmentation levels discussed above. Additionally, each individual volatile compound of which a flower's scent bouquet is composed reacts differently to increased temperatures. This means that the composition of the scent bouquet is altered in addition to the change in volatility. The composition of a volatile scent bouquet aids pollinators in identifying plant species, meaning that any changes in the makeup of a flower's olfactory signal will decrease a pollinator's ability to recognize the flower. As with phenological mismatch, this is particularly damaging to specialists who rely on specific scent bouquets to locate the correct flower (Farré-Armengol et al., 2014). Failure to do so would result in decreased pollination, which, for specialized pollination relationships, could be detrimental to plant populations.

Changes in Electric Signaling

Flowers normally carry a negative charge relative to the atmospheric electric field, while honeybees and other flying pollinators tend to be positively charged. These charges naturally fluctuate and differ between environments. When a bee lands on a flower, it transfers some of its positive charge to the flower, which makes the flower's electric field less negative. While this change is not permanent, the field does remain altered for a short period of time, and the more bees that interact with the flower, the longer the altered charge will last. When a bee approaches a flower, it can sense this charge using the hairs on its body. This means that if a bee encounters a flower with an altered electric field, it can determine that the flower has previously been visited by other individuals and has likely been depleted of its nectar and pollen. The bee can then use this information to move to a different flower instead of wasting its time and energy at a flower that bears no reward (Clarke et al., 2017).

Conclusion

Given the wide body of literature on the subject, it is undeniable that climate change will alter how plants and animals communicate with one another.

Image 2: Raphanus sativus. Image Source: Wikimedia Commons



Image 3: Interactions between bee, flower, and atmospheric electric field cannot be separated, as each influences the others. Image Source: Clarke et al., 2017

However, further study is needed to fill the many gaps in the knowledge of this field. Since the effects of climate change will be so diverse, there is still much that is unknown about how communication between plants and pollinators will be altered. In fact, because new modalities through which species communicate are still being discovered, the ways in which climate change will alter these signals have yet to be explored. The electric field through which bees and flowers communicate was only discovered within the last decade (Clarke et al., 2013). If climate change does not affect the charge itself, it would be interesting to examine whether climatic conditions impact a pollinator's ability to detect the charge. Additionally, since climate change impacts each sensory modality differently, it would be beneficial to examine how important each modality is for successful and frequent pollination. What is clear, however, is that specialized plant-pollinator relationships are more at risk, and declines in their populations, driven by climate change hindering communication and pollination, are already being observed (Hutchings et al., 2018). Furthermore, the greater impact of this change is still unknown. In the worst-case scenario, this could mean a decline in plant populations due to lack of successful pollination; if this decline is severe enough, then pollinator populations would also decline, with major consequences for global ecology and food supply. Luckily, this outcome is highly unlikely, since most plants do not rely solely on specific pollinators. In the best-case scenario, the change in signal modalities would be insignificant, and pollinators would be able to carry on pollination without hindrance. The true consequences of these changes in plant-pollinator communication will likely lie somewhere in between. This is especially true considering that not all changes in communication due to climate change will be negative. As discussed earlier, some flowers will have increased pigmentation and some will produce stronger olfactory signals (Koski et al., 2020; Farré-Armengol et al., 2014), both of which make the flowers more attractive to pollinators. With the variation in signal modalities and their responses to climate change across taxa, it is difficult to claim that we will see a significant decrease in most plant and pollinator populations due to changes in their communication. Continuing to monitor these changes and their effects on populations will be important for better predicting the severity of anthropogenic climate change and understanding how to protect species potentially at risk.

References

Barragán-Fonseca KY, van Loon JJA, Dicke M, Lucas-Barbosa D (2019) Use of visual and olfactory cues of flowers of two brassicaceous species by insect pollinators. Ecological Entomology. 45(1):45-55

Byers KJRP, Bradshaw HD Jr, Riffell JA (2014) Three floral volatiles contribute to differential pollinator attraction in monkeyflowers (Mimulus). Journal of Experimental Biology. 217(4):614-623

Clarke D, Morley E, Robert D (2017) The bee, the flower, and the electric field: electric ecology and aerial electroreception. Journal of Comparative Physiology A, Neuroethology, Sensory, Neural, and Behavioral Physiology. 203(9):737-748

Clarke D, Whitney H, Sutton G, Robert D (2013) Detection and learning of floral electric fields by bumblebees. Science. 340(6128):66-69



Cna’ani A, et al. (2014) Petunia x hybrida floral scent production is negatively affected by hightemperature growth conditions. Plant, Cell & Environment. 38(7):1333-1346

Robbirt KM, Roberts DL, Hutchings MJ, Davy AJ (2014) Potential disruption of pollination in a sexually deceptive orchid by climatic change. Current Biology. 24(23):2845-2849

Dunes, E. (2016). Spider Orchids [photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Spider-orchids_(30733033192).jpg

Severns, J. (2006). European honeybee extracts nectar [photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:European_honey_bee_extracts_nectar.jpg

Ekwurzel B et al. (2017) The rise in global atmospheric CO2, surface temperature, and sea level from emissions traced to major carbon producers. Climatic change. 144:579-590

Solga MJ, Harmon JP, Ganguli AC (2014) Timing is Everything: An Overview of Phenological Changes to Plants and Their Pollinators. Natural Areas Journal. 34(2):227-234

Farré-Armengol G, Filella I, Llusià J, Niinemets Ü, Peñuelas J (2014) Changes in floral bouquets from compound-specific responses to increasing temperatures. Global Change Biology. 20(12):3660-3669

Sullivan CN, Koski MH (2021) The effects of climate change on floral anthocyanin polymorphisms. Proceedings of the Royal Society B. 288(1946)

Hutchings MJ, Robbirt KM, Roberts DL, Davy AJ (2018) Vulnerability of a specialized pollination mechanism to climate change revealed by a 356-year analysis. Botanical Journal of the Linnean Society. 186(4):498-509

Koski MH, MacQueen D, Ashman TL (2020) Floral pigmentation has responded rapidly to global change in ozone and temperature. Current Biology. 30(22):4425-4431

Kenpei. (2007). Raphanus sativus [photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Raphanus_sativus3.jpg

Sun S, Leshowitz MI, Rychtář J (2018) The signalling game between plants and pollinators. Nature. 8(6686)

Thompson JD (2010) Flowering phenology, fruiting success and progressive deterioration of pollination in an early-flowering geophyte. Philosophical Transactions of the Royal Society B. 365(1555):3187-3199

Visser ME, Gienapp P (2019) Evolutionary and demographic consequences of phenological mismatches. Nature Ecology & Evolution. 3(6):879-885

Kudo G, Cooper EJ (2019) When spring ephemerals fail to meet pollinators: mechanism of phenological mismatch and its impact on plant reproduction. Proceedings of the Royal Society B. 286(1904) McNamara JM, Barta Z, Klaassen M, Bauer S (2011) Cues and the optimal timing of activities under environmental changes. Ecology Letters. 14(12): 1183–1190. Miller-Rushing AJ, Høye TT, Inouye DW, Post E (2010) The effects of phenological mismatches on demography. Philosophical transactions of the Royal Society of London. 365(1555): 3177–3186 Radchuk V, Reed T, Teplitsky C, van de Pol M, Charmantier A, Hassall C, et al. (2019) Adaptive responses of animals to climate change are most likely insufficient. Nature Communications. 10(3109)



Meta Learning: A Step Closer to Real Intelligence BY SALIFYANJI NAMWILA '24 Cover Image: A Deep Neural Network. Deep learning, a technique in machine learning, has become crucial in the domain of modern machine interaction, search engines, and mobile applications, revolutionizing modern technology by mimicking the human brain and enabling machines to reason independently. Even though the concept of deep learning extends to various industries, certain machine learning (ML) approaches that employ deep learning, such as supervised learning, require huge amounts of input data, which is expensive to create, especially for specialized task domains. The challenge, then, is for data scientists and ML engineers to create actionable real-world algorithms that are most effective for given tasks even with limited input data. This is where meta learning pitches in. Image Source: Wikimedia Commons


A Brief Introduction to Machine Learning, Deep Learning, and Neural Networks

Machine learning, deep learning, and neural networks are all sub-fields of Artificial Intelligence (AI) and computer science. Machine learning is a branch of AI that focuses on the use of data and algorithms to imitate the way that humans learn; it allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so (Burns, 2021). Deep learning is a sub-field of machine learning, and neural networks are a sub-field of deep learning. Deep learning and machine learning differ in how each algorithm learns. Deep learning eliminates some of the manual human intervention required in the process and enables the use of larger datasets by automating much of the feature extraction process. Feature extraction in deep/machine learning aims to reduce the number of features in a dataset by creating new features from the existing ones that summarize most of the information contained in the original set (Ippolito, 2019). Classical machine learning, on the other hand, is more dependent on human intervention.

Human experts determine the set of features used to distinguish between data inputs, usually requiring more structured data to learn from (IBM Cloud Education, 2020). Neural networks, or artificial neural networks (ANNs), consist of node layers: an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to others and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network (Jiaxuan et al., 2020). The 'deep' in deep learning refers to the depth of layers in a neural network. A neural network with more than three layers, inclusive of the input and output layers, can be considered a deep learning algorithm or a deep neural network (IBM Cloud Education, 2020); a neural network with only two or three layers is just a basic neural network. Some methods used in supervised learning include neural networks, logistic regression, support vector machines (SVMs), random forests, and more.
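A minimal sketch of the node behavior just described, a weighted sum passed through a threshold; the input values, weights, and threshold are arbitrary illustrations, and real networks use smooth activations and learn their weights from data:

import numpy as np

def artificial_neuron(inputs, weights, threshold):
    """Fire (output 1) only if the weighted sum of inputs exceeds the threshold."""
    activation = np.dot(inputs, weights)  # weighted sum of incoming signals
    return 1.0 if activation > threshold else 0.0

# Example: 0.5*0.9 + 0.8*0.2 = 0.61 > 0.5, so the node activates.
print(artificial_neuron(np.array([0.5, 0.8]), np.array([0.9, 0.2]), 0.5))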

Some Types of Machine Learning

Machine learning classifiers fall into three primary categories:


Image 1: An Artificial Neural Network. The depth of the hidden layer determines whether it is a deep artificial neural network or not. W1…Wn are weights applied to the output of an individual node before data is sent to the next layer. Image Source: Wikimedia Commons

supervised machine learning, unsupervised machine learning, and semi-supervised learning. Supervised learning is defined by its use of labeled datasets to train algorithms to classify data or predict outcomes accurately. Labeled data is a designation for pieces of data that have been tagged with one or more labels identifying certain properties, characteristics, classifications, or contained objects (Wikipedia: Labeled data in machine learning); unlabeled data, on the other hand, are not tagged. As input data is fed into the model, it adjusts its weights until the model has been fitted appropriately. This is part of the cross-validation process, a statistical technique for testing the performance of a machine learning model throughout the whole dataset, which ensures the model avoids overfitting (when a model learns "noise," or irrelevant information within a dataset, making its performance on unseen data inaccurate) and underfitting (when a model is so simple that it cannot establish the dominant trend within the data, resulting in training errors and poor performance). Unsupervised learning uses machine learning algorithms to analyze and cluster unlabeled datasets. These algorithms discover hidden patterns or data groupings without the need for human intervention. Algorithms used in unsupervised learning include neural networks, naïve Bayes probabilistic methods, k-means clustering, and more. Semi-supervised learning, on the other hand, is a comfortable medium between supervised and unsupervised learning. During training, it uses a smaller labeled dataset to guide classification and feature extraction from a larger, unlabeled dataset. This way, semi-supervised learning can solve the problem of not having enough labeled data to train a supervised learning algorithm (IBM Cloud Education, 2020).
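A hedged scikit-learn sketch contrasting the first two categories; the dataset and model choices are arbitrary illustrations rather than methods taken from the sources cited above:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Supervised: labeled data, performance checked by cross-validation.
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())

# Unsupervised: the same features with the labels withheld; k-means
# discovers groupings without human intervention.
print(KMeans(n_clusters=3, n_init=10).fit_predict(X)[:10])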

Machine learning technology has many applications, including speech recognition, online customer service chatbots, recommendation engines, and computer vision, which make our lives easier. However, machine learning algorithms face challenges such as the need for large training datasets, high operational costs due to the many trials required during the training phase, and the long time those trials take to find the model that performs best on a given dataset. Enter meta learning, another subset of machine learning, which tackles these challenges by optimizing learning algorithms and finding better-performing ones.

Introduction to Meta Learning

Meta learning, in simple terms, means learning to learn. In machine learning, it refers to learning algorithms (called meta machine learning algorithms, or meta learners) that learn from the output of other machine learning algorithms, which in turn learn from data (Triantafillou et al., 2019). Accordingly, meta learning requires the presence of other learning algorithms that have already been trained on data. These meta learners make predictions by taking the output from existing machine learning algorithms as input and predicting a number or class label.
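One concrete instantiation of learning from other learners' outputs is stacked generalization, sketched below with scikit-learn; this is an illustration of the idea rather than the only form meta learning takes:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Base learners are trained on the data; the meta-learner (final_estimator)
# is trained on their predictions.
base_learners = [("rf", RandomForestClassifier()), ("svm", SVC())]
meta = StackingClassifier(estimators=base_learners,
                          final_estimator=LogisticRegression(max_iter=1000))
print(meta.fit(X, y).score(X, y))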

"Machine Learning is a branch of AI that focuses on the use of data and algorithms to imitate the way that humans learn; it allows software applications to become more accurate at predicting outcomes without being explicitly programmed to do so"

Traditional machine learning uses a large training dataset exclusive to a given task to train a model and fails abruptly when very few data points are available. This is a very involved process, and it contrasts with how humans take in new information and learn new skills. Human beings do not need a large pool of examples to learn a skill; we learn very quickly and efficiently from a handful of examples. Drawing inspiration from how humans learn, meta learning attempts to automate away these traditional machine learning challenges. It seeks to apply machine learning to learn the most suitable parameters and algorithms for a given task, producing a versatile AI model that can learn to perform various tasks without having to be trained from scratch.


though many researchers and scientists only recently believed that meta learning could get us closer to achieving artificial general intelligence, meta learning research dates as far back as the late 20th century.

Research History of Meta Learning

" Like in machine learning, the solution in meta learning is to “learn it.""

With the advancement of artificial intelligence technology, meta learning has experienced three stages of development. The first stage can be traced back to the early 1980s, when Donald B. Maudsley put forward the concept of "meta learning" for the first time. Maudsley regarded meta learning as the synthesis of hypothesis, structure, change, process, and development, and described it as "the process by which learners become aware of and begin to control their already internalized perception, research, learning, and growth habits" (Maudsley, 1980). In 1985, John Biggs used meta learning to describe the specialized application of metacognition (the study of memory-monitoring and self-regulation, meta-reasoning, awareness, and self-awareness) in student learning, and believed that meta learning is a subprocess of metacognition. In 1988, Philip Adey and Michael Shayer combined meta learning with physics and proposed a new method of teaching physics based on meta learning ideas (Adey et al., 1988).

In the early 1990s, the development of meta learning entered its second stage as the concept slowly penetrated the field of machine learning. The issue of identifying which algorithm, from a set of complementary algorithms, can optimize performance on a given task is technically called algorithm selection. Researchers at the time recognized that algorithm selection is itself a learning task; as this insight took root in the machine learning discipline, a brand-new field of meta learning gradually formed (Giraud-Carrier et al., 2004). In 1990, Rendell and Cho innovatively proposed a method to characterize classification problems using the meta learning idea and conducted experiments verifying that these classification features have a positive impact on machine learning algorithm behavior, such as accuracy and speed (Rendell et al., 1990). However, this idea was mostly used in psychology at that time. In 1992, David Aha further expanded the meta learning idea: for a given dataset with characteristics, say C1, C2, ..., Cn, the rule of preferring an algorithm, say A1, over another algorithm, say A2, is obtained by a rule-based learning algorithm. This idea was


first applied to the field of machine learning (Aha et al., 1992). In 1993, Chan and Stolfo proposed that meta learning is a general technique for combining multiple learning algorithms, and used meta learning strategies to combine classifiers (machine learning algorithms used to assign a class label to a data input) trained by different algorithms. Experiments showed that these meta learning strategies and algorithms are more effective than the other experimental strategies and algorithms (Chan et al., 1993). In 1994, with the European development of STATLOG (comparative testing of statistical and logical learning), a large-scale classification algorithm comparison project, the meta learning idea gradually attracted the attention of many scholars (King et al., 1995).

In the 21st century, researchers in the field of machine learning have paid increasing attention to the application of meta learning ideas for algorithm selection, and the development of meta learning has entered its third stage. In 2002, Vilalta and Drissi proposed that meta learning and base learning differ in their scope of adaptation: meta learning studies how to dynamically select the correct bias by accumulating metaknowledge (that is, knowledge about knowledge), while the bias of base learning is fixed a priori. In 2017, Finn et al. innovatively proposed the Model-Agnostic Meta Learning (MAML) algorithm, which is compatible with any model trained with gradient descent (an optimization algorithm used to find the parameter values of a function that minimize a cost function, best used when parameters cannot be calculated analytically) and is suitable for a variety of different learning problems, including classification and regression, among others. In 2020, Raghu et al. proposed an Almost No Inner Loop (ANIL) algorithm based on MAML; this algorithm has better computational performance than ordinary MAML on standard image benchmarks. Still, many other researchers have proposed various meta learning models that employ more advanced machine learning approaches like reinforcement learning, a technique in which a learning agent is able to perceive and interpret its environment, take actions, and learn through trial and error.

Some Approaches to Meta-Learning Algorithms

Meta learning is not so different from basic machine learning. In any machine learning task, data samples are given in the form of input-output pairs, say (X, Y), drawn from some underlying distribution, and the goal is to arrive at a function which can predict, say, some Y values for some



Image 2: Diagram of the model-agnostic meta learning algorithm (MAML), which optimizes for a representation θ that can quickly adapt to new tasks. Image Source: Wikimedia Commons

unseen data X coming from the same distribution. In meta learning, the given data is a set of different tasks, which are regarded as samples from some underlying task distribution, and the goal is to come up with a function which can perform well on unseen tasks coming from that same distribution. Like in machine learning, the solution in meta learning is to "learn it." Each task is a pair of training data (coming from the same distribution) and a loss function. Loss functions measure how far an estimated value is from its true value; here they are computed from the output of a machine learning algorithm, which is the input in meta learning. Performing well on a task means having a low loss on unseen data that comes from the same distribution as the task's training data. Each unseen task also has its own training data, which can be used to enhance performance on that task. In meta learning, one way of solving a task is using the given data to learn some parameters of a prediction function that solves that task (Lahoti, 2018). Hence the objective is to come up with some initial set of parameters for a function which, when updated with training data from an unseen task, performs well on that task. Several approaches are used to learn these parameters, including Optimized Meta Learning and Model-Agnostic Meta Learning (MAML).
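Stated compactly, this "learn a good initialization" objective can be written as follows. The notation is ours, but the formulation follows the MAML paper cited below (Finn et al., 2017): θ are the shared initial parameters, α is the inner-step learning rate, and each task T_i is sampled from the task distribution p(T).

```latex
% Find an initialization \theta whose one-step, per-task adaptation \theta'_i
% achieves low loss on that task's held-out data (after Finn et al., 2017):
\min_{\theta} \sum_{\mathcal{T}_i \sim p(\mathcal{T})}
    \mathcal{L}_{\mathcal{T}_i}\bigl(\theta'_i\bigr),
\qquad
\theta'_i = \theta - \alpha \, \nabla_{\theta} \mathcal{L}_{\mathcal{T}_i}(\theta)
```

The inner update θ′ uses each task's training data, while the outer loss is evaluated on that task's unseen data, mirroring the train/test split described above.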

Optimized Meta Learning

Several learning models have many hyperparameters that can be optimized. A hyperparameter is a parameter that is defined before the start of the learning process and is used to control the learning process. Hyperparameters have a direct impact on the quality of the training process (Wikipedia: Hyperparameter (Machine Learning), 2021), which means that choosing and tuning them dramatically affects how well the algorithm learns. With the ever-increasing complexity of models, however, especially neural networks, a challenge arises: models are becoming increasingly difficult to configure. For a small model, human engineers can optimize the handful of configuration parameters through experimentation. Yet deep neural networks have hundreds of hyperparameters; such a system is too complicated for humans to optimize fully. There are many ways to optimize hyperparameters, among them grid searching and random searching. The grid search method makes use of a manually predetermined set of hyperparameter values: every possible combination in the grid is tried, and the model keeps the best-performing combination. However, this method is referred to as traditional since it is very time consuming and inefficient (Stalfort, 2019). Random search, unlike grid search, replaces this exhaustive process with random sampling: the model tries random combinations of values and tests each for accuracy (Stalfort, 2019). Since the search is random, the model may miss a few potentially optimal combinations; on the upside, it uses much less time than grid search and often gives near-ideal solutions. Random search can outperform grid search provided that only a few hyperparameters are required to optimize the algorithm.
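As a concrete illustration of the two strategies, the sketch below runs both searches over a small support-vector classifier. It is our example only: the dataset, parameter ranges, and budget are invented for demonstration and do not come from the article's sources.

```python
# Hedged sketch: grid search (exhaustive) vs. random search (sampled)
# for hyperparameter tuning with scikit-learn. Search spaces are illustrative.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Grid search: try every combination in a manually predetermined grid
# (here 3 x 3 = 9 candidates, each evaluated with 3-fold cross-validation).
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": [1e-4, 1e-3, 1e-2]}, cv=3)
grid.fit(X, y)

# Random search: sample the same budget of combinations from continuous ranges.
rand = RandomizedSearchCV(
    SVC(),
    {"C": loguniform(1e-2, 1e2), "gamma": loguniform(1e-5, 1e-1)},
    n_iter=9, cv=3, random_state=0,
)
rand.fit(X, y)

print("grid best:  ", grid.best_params_, round(grid.best_score_, 3))
print("random best:", rand.best_params_, round(rand.best_score_, 3))
```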

Model-Agnostic Meta Learning (MAML)

Even though random search generally performs better than grid search, we can do better at arriving at initialization parameters for a representation function that work well on unseen data. Model-Agnostic Meta Learning (MAML) achieves this goal and is by far more effective than grid search or random search approaches. MAML looks for the initialization parameters that work best for task samples from the task distribution using gradient descent (Finn et al., 2017). Gradient descent minimizes a given function by iteratively moving in the direction of steepest descent; it is used to update the parameters of the model (Andrychowicz et al., 2016). The meta learner seeks to find an initialization that is not only useful for adapting to various tasks, but can also be adapted quickly (in a small number of steps) and efficiently (using only a few examples). Image 2 shows a visualization. Suppose we are seeking a set of parameters θ that are highly adaptable. During the course of meta learning (the bold line), MAML optimizes for the set of parameters such that, when a gradient step is taken with respect to a particular task Li (the gray lines), the parameters end up close to the optimal parameters θi* for task Li (Finn et al., 2017). This approach is quite simple and has several advantages. It does not make any assumptions on the form of the model. It is quite efficient: no additional parameters are introduced for meta learning, and the learner's strategy uses a known optimization process (gradient descent) rather than having to come up with one from scratch (Weng L., 2018). Lastly, it can easily be applied to several domains, including classification and regression (Rahul B., 2021).
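To make the inner/outer structure concrete, here is a deliberately tiny sketch of the idea on one-parameter linear regression tasks (each task is a random slope to fit). This is our toy illustration, not Finn et al.'s implementation: real MAML backpropagates through the inner update, whereas this sketch substitutes a finite-difference meta-gradient, which is only practical because θ is a single number.

```python
# Toy MAML-style loop: learn an initialization theta that adapts to new
# linear-regression tasks (y = a * x) in one gradient step. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """One task: a random slope a, with train and held-out (x, y) sets."""
    a = rng.uniform(-2.0, 2.0)
    x_tr, x_te = rng.normal(size=10), rng.normal(size=10)
    return (x_tr, a * x_tr), (x_te, a * x_te)

def loss(theta, x, y):
    return np.mean((theta * x - y) ** 2)

def adapted(theta, x, y, alpha=0.1):
    """Inner loop: one gradient step on the task's training data."""
    grad = np.mean(2 * (theta * x - y) * x)
    return theta - alpha * grad

def meta_loss(theta, tasks):
    """Outer objective: post-adaptation loss on each task's held-out data."""
    return np.mean([loss(adapted(theta, *tr), *te) for tr, te in tasks])

theta, beta, eps = 5.0, 0.05, 1e-4
for _ in range(200):
    tasks = [sample_task() for _ in range(8)]
    # Finite-difference stand-in for backpropagating through the inner step.
    g = (meta_loss(theta + eps, tasks) - meta_loss(theta - eps, tasks)) / (2 * eps)
    theta -= beta * g

print("meta-learned initialization:", round(theta, 3))
```

After a few hundred meta-steps, θ settles at a value from which a single inner gradient step fits any freshly sampled slope reasonably well on average, which is exactly the behavior the bold line and gray lines in Image 2 depict.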

Meta Learning in the Field of Robot Learning

In the rapidly developing field of robotic learning (Azizi, 2020), the application prospects of meta learning are very broad. In robot operation skills especially, meta learning, as a 'learning to learn' method, has made good progress (Peters, 2008). Image 3 demonstrates the effectiveness of MAML in training robot learners with only a few episodes (i.e., batches of tasks) of adaptation. For the mass-voltage task, the initial meta-policy steered the robot significantly to the right because the extra mass and voltage change unbalanced the robot's body and leg motors. After 30 episodes of adaptation using an evolutionary-strategies MAML method, however, the robot straightens its walking pose, and after 50 episodes, the robot can balance its body completely and is able to walk longer distances (Xingyou et al., 2020). These observations further reinforce how indispensable MAML is, especially in robot training. With the development of robot technology in the fields of home, factory, national defense, and outer-space exploration (Tan et al., 2013), the autonomous operation ability of robots has attracted growing public attention, and it is expected that robots can replace humans in more complex multi-domain tasks. Therefore, there is a need to develop more advanced methods for robots to learn operating skills. However, improving the ability of robots to learn operating skills autonomously and quickly remains a major issue in the field of robot learning (Ji et al., 2021). In addressing this challenge, Finn et al. proposed a Meta-Imitation Learning (MIL) method by extending MAML to imitation learning (Finn et al., 2017). This method allows robots to master new skills with only one demonstration, which improves robot learning efficiency, whether in simulation or in visual demonstration experiments on a real robot platform; the ability of this method to learn new

Image 3: Qualitative changes during the adaptation phase of learning under the mass-voltage task. Image Source: Wikimedia Commons



Image 4: Meta-Imitation Learning and Domain-Agnostic Learning on a robot learner. The left image shows the robot learner being taught the task; note the bowl in which the peach is being delivered. The right image shows how the robot learner imitates the action perfectly even after the positions of the receiver plates are shuffled randomly. Image Source: Wikimedia Commons

tasks has been verified, and it is far superior to the latest imitation learning methods.


Still, new robust meta learning approaches in robotics keep being developed. For instance, Yu et al. proposed a Domain-Adaptive Meta Learning (DAML) method that allows the learning of cross-domain correspondences, so that robot learners can visually recognize and manipulate new objects after observing only a single video demonstration by a human user, achieving the effect of one-shot learning (Yu et al., 2018). This way, meta learning can not only help robots with imitation learning, but also cultivate the ability of robots to learn to learn.

Conclusion

Meta learning opportunities present themselves in many ways and can be embraced using a wide spectrum of learning techniques. Every time we try to learn a certain task, whether successfully or not, we gain useful experience that we can leverage to learn new tasks. We should never have to start entirely from scratch. Instead, we should systematically collect our 'learning exhaust' and learn from it to build automated machine learning systems that continuously improve over time, helping us tackle new learning problems ever more efficiently. The more new tasks we encounter, and the more similar those tasks are to earlier ones, the more we can tap into prior experience, to the point that most of the required learning has already been done beforehand. The ability of computer systems to store virtually infinite amounts of prior learning experience (in the form of metadata) opens a wide range of opportunities to use that experience in completely new ways, and we are only starting to learn how to learn from prior experience effectively. Yet this is a worthy goal: learning how to learn any task empowers us far beyond knowing how to learn specific tasks.

Meta learning not only solves problems encountered by conventional artificial intelligence but also addresses machine learning's prediction accuracy and efficiency problems in data science. Consequently, the role of meta learning in artificial intelligence is indispensable. With traditional machine learning techniques paving the way to meta learning, researchers are now able to use meta learning models across tasks instead of training new data and new models for different tasks from scratch (Huisman, 2021). At the same time, since the birth of artificial intelligence in the 1950s, people have always wanted to build machines that learn and think about big data like humans do (Lake, 2019). The meta learning approach, based on the idea that "it is better to teach someone how to fish than to give them a fish" and dedicated to helping artificial intelligence learn how to learn from big data, is the ideal way to realize this goal, and it is a new driving force in the development of artificial intelligence. In the future, researchers will also make breakthroughs in more challenging directions. Perhaps the million-dollar question that remains is: how can we use knowledge about learning (i.e., metaknowledge) to improve the performance of learning algorithms? The answer to this question is the key to progress in this field and will continue to be the subject of in-depth research.


"In the rapidly developing field of robotic learning, the application prospects of meta learning are very broad."

References

Adey, P., et al. (1988). Strategies for Meta-Learning in Physics. Physics Education. https://iopscience.iop.org/article/10.1088/0031-9120/23/2/005/meta

Aha, D. W. (1992). Generalizing from Case Studies: A Case Study. Proceedings of the 9th International Workshop on Machine Learning, Aberdeen, UK. https://www.sciencedirect.com/science/article/pii/B9781558602472500061

Andrychowicz, M., et al. (2016). Learning to Learn by Gradient Descent. arXiv:1606.04474. https://arxiv.org/abs/1606.04474

Azizi, A. (2020). Applications of Artificial Intelligence Techniques to Enhance Sustainability of Industry 4.0: Design of an Artificial Neural Network Model as Dynamic Behavior Optimizer of Robotic Arms. Complexity, 2020, e8564140. https://www.hindawi.com/journals/complexity/2020/8564140/

Biggs, J. B. (1985). The Role of Meta-Learning in Study Processes. British Journal of Educational Psychology. https://bpspsychub.onlinelibrary.wiley.com/doi/abs/10.1111/j.2044-8279.1985.tb02625.x

Burns, E. (2021). In-Depth Guide to Machine Learning in the Enterprise. https://searchenterpriseai.techtarget.com/In-depth-guide-to-machine-learning-in-the-enterprise

Finn, C., et al. (2017). Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Proceedings of the 2017 International Conference on Machine Learning, Sydney, Australia. http://proceedings.mlr.press/v70/finn17a

Giraud-Carrier, C., et al. (2004). Introduction to the Special Issue on Meta-Learning. Machine Learning. https://link.springer.com/article/10.1023/B:MACH.0000015878.60765.42

Hartman, T. (2019). Meta-Modelling Meta Learning. https://medium.com/datathings/meta-modelling-meta-learning-34734cd7451b

Huisman, M., et al. (2021). A Survey of Deep Meta-Learning. Artificial Intelligence Review. https://scholar.google.com/scholar_lookup?title=A%20survey%20of%20deep%20meta-learning&author=M.%20Huisman&author=J.%20N.%20van%20Rijn&author=&author=A.%20Plaat&publication_year=2021

Lahoti, S. (2018). What is Meta Learning? https://hub.packtpub.com/what-is-metalearning/

Madhava, S. (2017). Deep Learning Architectures. https://developer.ibm.com/articles/cc-machine-learning-deep-learning-architectures/

Raghu, A., et al. (2019). Rapid Learning or Feature Reuse? Towards Understanding the Effectiveness of MAML. https://arxiv.org/abs/1909.09157

Rahul, B. (2021). How to Run Model-Agnostic Meta Learning (MAML) Algorithm. https://towardsdatascience.com/how-to-run-model-agnostic-meta-learning-maml-algorithm-c73040069810

Rendell, L., et al. (1990). Empirical Learning as a Function of Concept Character. Machine Learning. https://link.springer.com/article/10.1007/BF00117106

Rozantsev, A., et al. (2018). Beyond Sharing Weights for Deep Domain Adaptation. IEEE Transactions on Pattern Analysis and Machine Intelligence. https://scholar.google.com/scholar_lookup?title=Beyond%20sharing%20weights%20for%20deep%20domain%20adaptation&author=A.%20Rozantsev&author=M.%20Salzmann&author=&author=P.%20Fua&publication_year=2018

Santoro, A., et al. (2016). Meta-Learning with Memory-Augmented Neural Networks. Google DeepMind. http://proceedings.mlr.press/v48/santoro16.pdf

Stalfort, J. (2019). Hyperparameter Tuning Using Grid Search and Random Search: A Conceptual Guide. https://medium.com/@jackstalfort/hyperparameter-tuning-using-grid-search-and-random-search-f8750a464b35

Triantafillou, E., et al. (2019). Meta-Dataset: A Dataset of Datasets for Learning to Learn from a Few Examples. https://arxiv.org/abs/1903.03096

Vilalta, R., et al. (2002). A Perspective View and Survey of Meta-Learning. Artificial Intelligence Review. https://link.springer.com/article/10.1023/A:1019956318069

Weng, L. (2018). Meta-Learning: Learning to Learn Fast. https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html

Xingyou, S., et al. (2020). Exploring Evolutionary Meta-Learning in Robotics. Google AI Blog. https://ai.googleblog.com/2020/04/exploring-evolutionary-meta-learning-in.html

Xu, Z., et al. (2019). Meta-Learning via Weighted Gradient Update. IEEE Access. https://ieeexplore.ieee.org/abstract/document/8792107

You, J., et al. (2020). Graph Structure of Neural Networks. International Conference on Machine Learning (pp. 10881–10891). http://proceedings.mlr.press/v119/you20b.html



Microbial Impacts from Climate Change BY SHAWN YOON '25 Cover Image: Microbial species living in soil are beneficial to nutrient distribution but can be harmful in large proportions. Image Source: Pacific Northwest National Laboratory, Creative Commons License

Introduction

Representative Concentration Pathway (RCP) visual models have been designed to predict the levels of carbon-induced warming by 2100. Industrial activity and government policies in the status quo do not limit carbon emissions to the extent that they would promote significant changes in global warming. The model currently has multiple pathways, or directions, numbered from 2.6 to 8.5, that the world could take depending on changes in carbon emissions. RCP 2.6 would be the best outcome, with the world developing into one where emissions are net-negative. Currently, the model predicts an increase of up to 4 degrees Celsius along pathway RCP 8.5, the worst of the outcomes derived from current warming levels. This is a stark contrast to the best possible outcome, RCP 2.6, which predicts under 2 degrees Celsius of total warming by 2100 (Moore et al., 2013). Such visual models are crucial in depicting the current trajectory of carbon emissions, and that trajectory is certainly cause for alarm. Carbon sinks, abiotic stores of large amounts of carbon, are aggravated by further warming; the result is greater releases of carbon into the atmosphere. This could create multiple positive feedback loops among several different carbon sinks, including the ocean, vegetation, atmosphere, and soil. Such feedback loops would entail the processes of increasing


carbon dioxide in these sinks. With greater temperatures, the sinks release greater levels of carbon, resulting in an intensifying cycle over time. The most prominently researched sink remains the atmosphere, with the United Nations Environmental Commission requiring nations to set their own limits on greenhouse gas emissions. In its current state, soil stores about 3,000 petagrams (Pg) of carbon, a third of the world's carbon; in contrast, the atmosphere holds around 600 Pg. Storing immense amounts of carbon in the soil, the second largest sink behind the oceans, is important to avoid unfavorable outcomes such as RCP 8.5. For this to occur, researchers have suggested, the carbon in the soil must be maintained or even increased in percentage (Pries et al., 2017). Unfortunately, carbon in soils is prone to being emitted back into the atmosphere because the sinks are connected: the atmosphere is directly connected to soils, as it is to the oceans, and carbon from respiration is deposited there. This emission occurs mainly via soil respiration and is greatly influenced by soil temperature, demonstrating how such carbon sinks act as a network (Reth et al., 2005). With optimal temperatures, microbes can better maintain homeostasis and respire to greater extents, increasing carbon dioxide emissions. The impacts of soil as a mitigating factor of climate


Image 1: Estimated changes in temperature under RCP pathway 8.5. Image Source: Wikimedia Commons

change cannot be emphasized enough. While atmospheric levels of carbon dioxide are alarming in the status quo, and must certainly be addressed, the aerobic processes within the soil must also be recognized. Soil acts as a sink, storing carbon to a great extent, much like the oceans and the atmosphere. However, unlike carbon in these other sinks, carbon stored within the soil plays no active role in creating threats for humans. Carbon in the atmosphere and oceans presents clear challenges for humans and other lifeforms: examples include the creation of smog, the intensification of the greenhouse effect, and the acidification of the oceans, which can force organisms out of their natural habitats (as seen in the phenomenon of coral bleaching) (Kumaraswamy, 2009). This further emphasizes the effectiveness of soil as a carbon sink, and why it must be maintained, given the dire situation of decreasing biodiversity and ecological services brought about by climate-driven feedback loops and warming temperatures. Understanding why soil horizons, the distinct layers of soil, act as such an effective sink, and how they can be sustained, is key to discussing positive feedback loops induced by global warming. Microbial respiration has been found to decrease as the depth of the soil horizon increases, with 30% of respiration occurring in the top layers. Given the amount of soil mass on the Earth's surface, however, the soil carbon content cannot be overlooked (Fang & Moncrieff, 2005).

Microbial Activity in Soil

While carbon can be removed from the soil through anthropogenic means, such as fracking or fossil fuel mining, this article focuses on the natural processes that drive emission from soil layers without human influence. These include microbial activity. Microbes within the soil, specifically mycorrhizal microbes, are responsible for aerobic respiration, which takes nutrients and organic material from the top layers of the soil and re-releases carbon back into the atmosphere upon digestion. This respiration is similar to the aerobic processes of plants and animals, in which organisms take in oxygen from the atmosphere and convert it into carbon dioxide that is released into the environment. It must be noted that most ecosystem respiration takes place within the soil (Ryan & Law, 2005). Such processes are easily noticeable through decomposition, which is a key indicator of soil respiration. Microbes responsible for decomposition are found in areas with varying soil organic material (SOM), as soil carbon is mainly derived from decomposing flora and fauna or from carbon leaching from organisms living within the soil horizons. Given that this varies by biome, levels of decomposition and respiration vary by location. Temperature is crucial for the homeostasis of microbes living within soil, and it can also create physical barriers. For example, tundra and boreal biomes have less microbial productivity than temperate deciduous forests or tropical rainforests, given this variance in organic carbon within the top layers of soil (Raich, 2000). Thus, microbes are dormant in colder conditions: unable to access decomposing material, their respiration is greatly limited compared to warmer biomes with higher decomposition rates. Frozen or rocky soils can also

"The impacts of soil as a mitigating factor of climate change cannot be emphasized enough."



prevent microbes from accessing this carbon-rich matter. Furthermore, SOM tends to be higher in agricultural climates, due to increased substrate and nutrient supplies from plants to the soil. This would suggest that heterotrophic respiration (the emission of carbon from the metabolic processes of organisms) from plants and soils is heightened during growing seasons, demonstrating the effect of climate on microbial respiration (Kou et al., 2007). The main difference between these biomes and microclimates leads to the next segment on soil respiration: the effects of warming on microbial respiration.

Effects of Soil Microbes on Biomes

"There is a considerable need to address the issue of soil respiration, considering that global warming is only exacerbating the effects of emission of carbon dioxide into the atmosphere from the soil horizons."

Moderate levels of microbial respiration are key to creating productive ecosystems because they distribute nutrients via decomposition. Research has found that respiration cycles are critical in recycling the carbon and nitrogen used in plant processes in wooded and grassland communities (McCulley et al., 2004). While moderate respiration is important for maintaining high levels of primary productivity, excessive soil respiration leads to unbalanced levels of carbon dioxide in the atmosphere. This is especially alarming since anthropogenic influences are already contributing to high levels of atmospheric carbon through various industrial activities. Such warming from greenhouse gases may have disastrous effects on soil carbon emission: greater warming within the soil brings microbes, which are usually dormant in colder temperatures, closer to their optimum temperatures. Positive feedback loops are thus intensified: soils release carbon dioxide, which acts as a greenhouse gas and warms the soil, eventually leading to greater microbial respiration within the soil. This

effect is intensified especially with the thawing of permafrost soils due to global warming, unfreezing soil that has largely kept microbes dormant in colder boreal regions. Experiments conducted by Caitlin Hicks Pries, current professor of ecology at Dartmouth College, and her team found that tundra ecosystems, upon the melting of previously frozen layers, were losing carbon uptake capacity that had existed since the early Holocene, the period comprising the past 11,700 years. Reductions of the carbon stock within deeper, previously frozen layers were measured using radiocarbon dating, indicating the necessity of mitigating the effects of climate change on soil respiration (Pries et al., 2012).

Conclusion

There is a considerable need to address the issue of soil respiration, considering that global warming is only exacerbating the emission of carbon dioxide into the atmosphere from the soil horizons. Developing a deeper understanding of how to control soil respiration and SOM will be key to curbing this contribution of carbon dioxide, especially considering that twice the carbon content of the atmosphere is present within the top meter of soils (Rustad et al., 2000). Research by Dartmouth professor Caitlin Hicks Pries on the whole-soil carbon flux (carbon respiration from all soil layers) shows that if climate change follows the path of projected model RCP 8.5, with an increase of 4 degrees Celsius, soil respiration could increase by anywhere from 34 to 37% (Pries et al., 2017). Before we reach this point, the topic of soil respiration must be addressed, especially in relation to areas of higher temperatures and vegetation. Given the correlational relationship between microbial respiration and atmospheric carbon

Image 2: Image of microbes within soil under a microscope. Image Source: Ohio State University, Creative Commons License



Image 3: Current levels of carbon dioxide emissions in 2021. Image Source: NRDC, Creative Commons License

levels, it may be ideal to first curb the emissions of greenhouse gases to slow the positive feedback loops aggravating respiratory levels. Directly counteracting carbon respiration by recognizing areas of high microbial activity could also address several carbon sinks at once, providing solutions to the issue of the carbon flux.

References Fang, C., & Moncrieff, J. B. (2005). The variation of soil microbial respiration with depth in relation to soil carbon composition. Plant and Soil, 268(1/2), 243–253. http://www.jstor.org/ stable/24124489 Kou, T., Zhu, J., Xie, Z., Hasegawa, T., & Heiduk, K. (2007). Effect of elevated atmospheric CO2 concentration on soil and root respiration in winter wheat by using a respiration partitioning chamber. Plant and Soil, 299(1/2), 237–249. http://www.jstor.org/stable/24127961 Kumaraswamy, S. (2009). Soil as a source and/or sink for carbon. Current Science, 96(5), 634–634. http://www.jstor.org/stable/24104547 McCulley, R. L., Archer, S. R., Boutton, T. W., Hons, F. M., & Zuberer, D. A. (2004). Soil Respiration and Nutrient Cycling in Wooded Communities Developing in Grassland. Ecology, 85(10), 2804–2817. http://www.jstor.org/ stable/3450439

Climate, 26(23), 9291–9312. http://www.jstor. org/stable/26193475 Pries, Caitlin E. Hicks, et al. “Holocene Carbon Stocks and Carbon Accumulation Rates Altered in Soils Undergoing Permafrost Thaw.” Ecosystems, vol. 15, no. 1, Springer, 2012, pp. 162–73, http:// www.jstor.org/stable/41413975. Pries, Caitlin E. Hicks, et al. (2017). The WholeSoil Carbon Flux in Response to Warming. Science, vol. 355, no. 6332, 2017, pp. 1420–1423., https://doi.org/10.1126/science.aal1319. Raich, J. W., & Tufekcioglu, A. (2000). Vegetation and Soil Respiration: Correlations and Controls. Biogeochemistry, 48(1), 71–90. http://www.jstor. org/stable/1469553 Reth, S., Reichstein, M., & Falge, E. (2005). The effect of soil water content, soil temperature, soil pH-value and the root mass on soil CO2 efflux – A modified model. Plant and Soil, 268(1/2), 21–33. http://www.jstor.org/stable/24124471 Rustad, L. E., Huntington, T. G., & Boone, R. D. (2000). Controls on Soil Respiration: Implications for Climate Change. Biogeochemistry, 48(1), 1–6. http://www.jstor.org/stable/1469549 Ryan, M. G., & Law, B. E. (2005). Interpreting, Measuring, and Modeling Soil Respiration. Biogeochemistry, 73(1), 3–27. http://www.jstor. org/stable/20055185

Moore, J. K., Lindsay, K., Doney, S. C., Long, M. C., & Misumi, K. (2013). Marine Ecosystem Dynamics and Biogeochemical Cycling in the Community Earth System Model [CESM1(BGC)]: Comparison of the 1990s with the 2090s under the RCP4.5 and RCP8.5 Scenarios. Journal of

FALL 2021

46


Ultrasound Mediated Delivery of Therapeutics BY SOYEON (SOPHIE) CHO '24 Cover Image: A mouse’s brain after intravenous administration of experimental nanoparticles that can bypass the blood-brain barrier (BBB). Cell nuclei are shown in blue, blood vessels in red, and human cancer cells in green. These experimental nanoparticles are one of the current therapeutic efforts, such as ultrasound, to administer drugs across the BBB. Image Source: Wikimedia Commons

Introduction

Ultrasound is defined as sound waves with frequencies too high to be audible by humans. Because sound waves, lacking ionizing radiation, are much safer than other methods used for imaging, ultrasound has been used for diagnostic imaging of fetuses in pregnant women as well as of swelling, infection, or other forms of pain in internal organs. Also, unlike x-rays, ultrasound scans clearly show soft tissues, which explains their widespread use. However, applications of ultrasound outside imaging have been explored in the past several years. Non-imaging uses of ultrasound employ the frequencies between the audible region (less than 20 kHz) and the diagnostic ultrasound region (more than 10 MHz) (Mitragotri, 2005). This large range of ultrasound frequencies allows for many different uses outside of imaging, including the delivery of therapeutic drugs to patients.

Effects of Ultrasound Cavitation

The direct and indirect effects of ultrasound waves allow for a wide range of ultrasound applications in therapeutics. The primary effect is that the ultrasound waves directly interact with media such as cells and tissues through periodic oscillations of a frequency and amplitude determined by the ultrasound source (Mitragotri, 2005). As for secondary effects, ultrasound increases temperature as the medium absorbs the oscillating sound waves; this most affects tissues with a higher absorption coefficient for ultrasound waves, such as bone rather than muscle tissue (Suslick, 1988). A more significant secondary effect comes from cavitation, which is the cleavage, growth, and oscillation of microbubbles hit by ultrasound waves. Microbubbles have a diameter of around 3 µm and are inserted into the site of interest through intravenous injection. In addition to assisting the delivery of drugs and gene products, microbubbles can also act as contrast agents for imaging, since they oscillate in response to ultrasound much more than cells do (Blomley et al., 2001). Despite initial concerns about having gaseous spaces within the blood vessels, clinical studies have demonstrated no safety hazards from microbubbles, due to their small size (Nanda et al., 1997). There are two major types of cavitation, depending on the stability of the oscillatory motions of the microbubbles. Stable cavitation involves stable,


Image 1: An ocular ultrasound of a large retinoblastoma tumor, a malignant tumor in the retina, within the eye of a three-year-old boy. Image Source: Wikimedia Commons

repeating oscillations of the microbubbles, and it is the main mechanism for acoustic streaming. In acoustic streaming, the oscillations produce fluid velocities circling around each microbubble, and for high-amplitude oscillations, these velocities induce shear stresses on surrounding tissue (Pitt et al., 2004). These shear stresses can cause the lysis of red blood cells (also called hemolysis) and of vesicles like liposomes (Rooney, 1970; Marmottant & Hilgenfeldt, 2003). Acoustic streaming at the small scale of these shear stresses is referred to as microstreaming. In inertial cavitation, microbubbles grow and collapse in irregular patterns, unlike the repeating oscillations of stable cavitation; because of the collapses, it is also called collapse cavitation. Inertial cavitation is the main mechanism for sonochemistry, shock waves, and liquid microjets. Sonochemistry occurs as the sudden collapse of microbubbles quickly raises the temperature inside them and induces chemical reactions like free radical generation; this relates sonochemistry to sonodynamic therapy. Shock waves are also caused by the sudden collapse of microbubbles and can induce drug transport, since microbubble collapse disturbs the surrounding tissue and the membranes that may regulate the entrance of foreign molecules like drugs. Shock waves contribute both to sonophoresis, also called phonophoresis, and to sonoporation, also called transient membrane permeabilization. Sonophoresis is the transdermal delivery of topically applied drugs, and it occurs as shock waves increase the permeability of the skin. Sonoporation is the impermanent damage to cell membranes and tissues by shock waves, which enhances the delivery of drugs. Liquid microjets are formed when a microbubble collapses near a surface, which cleaves the gas bubble and shoots liquid into the surrounding tissue, also inducing drug transport by causing stress and thus relating to sonophoresis (Mitragotri, 2005). One consequence is that non-target cells may also undergo lysis when the microjets hit them at high velocities (Nyborg, 2001). Generally, these mechanisms and resulting phenomena (sonodynamic therapy, sonophoresis, and sonoporation) disrupt surrounding tissue through oscillations and collapses of the ultrasound-driven microbubbles. They also contribute to a major type of application: ultrasound mediated delivery of drugs or gene therapy products. For ultrasound-mediated delivery, microbubbles are injected with the nanoparticles to be delivered inside the bubbles, inducing stress in the tissue surrounding the medium and increasing cavitation (Chowdhury et al., 2017). Building on these mechanisms and phenomena, this review will discuss the following: the enhanced uptake of drugs through ultrasound, molecules in ultrasound-mediated therapeutics, and other applications including the disruption of the blood-brain barrier (BBB), transdermal drug delivery, and more.

"... applications of ultrasound outside imaging have been explored in the past several years."

"Sonoporation and local release are both mechanisms that can jointly increase drug concentrations at the site of interest, while being minimally invasive to the surrounding non-site tissue."

Enhanced Uptake of Drugs Through Ultrasound

Ultrasound enhances the uptake of drugs through two main mechanisms: sonoporation and local release. The first mechanism, sonoporation, occurs when the shock waves from inertial cavitation temporarily weaken cell membranes and tissues, which enhances the previously limited delivery of drugs through the membranes (Mitragotri, 2005). Sonoporation is also called transient membrane permeabilization, and its effects can range from an increase in the permeability of membranes to cell death, microvascular hemorrhage, and rearrangement of tissue structure, depending on intensity (Miller et al., 2002). Thus, its effects need to be controlled when delivering DNA or drugs to a specific



Image 2: An image demonstrating the structures of a liposome, a micelle, and a bilayer sheet, all of which are structures that phospholipids form in aqueous solution. The white circles represent the hydrophilic heads, and the yellow strands represent the lipophilic tails. Image Source: Wikimedia Commons

site. The second mechanism, local release (also known as targeted delivery), delivers drugs inside a carrier vesicle to a site, instead of drugs in their original form (Pitt et al., 2004). Ultrasound is applied to the site of interest, resulting in two major processes for local release from the vesicle. First, the shock waves from the inertial cavitation of the microbubbles expand through the fluid in the site; if the carrier is close to the source of the shock wave, the shear stress from the shock wave may be enough to rupture the vesicle and release the drug (Sundaram et al., 2003; Marmottant & Hilgenfeldt, 2003). Second, carrier vesicles denser than water, such as liposomes, are pulled towards the microbubbles through convection, a process driven by acoustic pressure. Acoustic pressure increases the shearing stress applied to the vesicles by the shock waves, because the closer the vesicles are to the microbubbles, the higher the shear stress (Nyborg, 2001; Guzmán et al., 2003). Because local release can specifically target the cells (e.g., tumor cells) in a site of interest, it embodies the concept of local delivery. Sonoporation and local release are both mechanisms that can jointly increase drug concentrations at the site of interest, while being minimally invasive to the surrounding non-site tissue. Some other mechanisms have been tested for their effect on drug uptake. Large-scale convection from ultrasound beams, rather than repeated oscillations, builds convective motion in the vascular system that increases the rate of drug delivery in in vitro systems (Starritt et al., 1989). However, acoustic streaming is less applicable to in vivo systems because the blood vessels already have rapid convective motion, and outside the vascular system, fluids are not abundant enough for the convective motion to form (Pitt et al., 2004).

Molecules in Ultrasound Mediated Therapeutics

Drugs transported without carriers are called free drugs. Many of the molecules (e.g., drugs, genes) in ultrasound mediated therapeutics are transported in carriers, partially because mechanisms like local release work when ultrasound is applied specifically to the carriers. Furthermore, carriers prevent the molecules from interacting with tissue outside the targeted site. One type of carrier is a polymeric carrier, which consists of a polymer with a specific site that is degraded by extreme pH, aerobic conditions, or enzymes at the delivery site (Howard Jr et al., 1997). Polymeric carriers have been tested in specific studies, although more work is needed to apply them in wider settings (Pitt et al., 2004). Another type of carrier includes lipophilic vesicles like liposomes and micelles. Liposomes have lipophilic membranes and hydrophilic centers, while micelles have lipophilic centers; this allows both to carry lipophilic drugs and only liposomes to encapsulate hydrophilic drugs in their centers (Ning et al., 1994; Husseini et al., 2000; Pitt et al., 2004). As previously mentioned, liposomes and micelles have a higher density than water, meaning they are pulled towards microbubbles through convective motion and are more easily disturbed by ultrasound. Among liposomes, cationic liposomes are more effective gene carriers because the negatively charged DNA is more firmly bound to them until the ultrasound-mediated cavitation releases the gene at the specific site



(Koch et al., 2000). Microbubbles themselves are also effective carriers for molecules like genes. Gene-carrying microbubbles are created by applying ultrasound to a surfactant (a substance that reduces a liquid's surface tension) surrounded by gas and the gene to be transferred (Pitt et al., 2004). After these gene-carrying microbubbles are injected into a vessel close to the site, ultrasound is initially applied at low intensity for imaging. Once the microbubbles arrive at the site, the ultrasound intensity is increased, and inertial cavitation causes the microbubbles to collapse and open up pathways through the vessel walls (Price & Kaul, 2002; Miura et al., 2002).

Other Applications

Some other applications of ultrasound in therapeutics include the disruption of the blood-brain barrier (BBB), a semipermeable layer made up of endothelial cells, pericytes, and astrocytes. The BBB prevents the passage of large molecules, those with a molecular weight above 40 kilodaltons, between the brain and the surrounding blood vessels; many drugs weigh more than 40 kilodaltons, preventing easy movement into the brain (Daneman & Prat, 2015; Mehta et al., 2017). However, following the intravenous injection of microbubbles, focused ultrasound (FUS) at a frequency of around 220 kHz (220,000 cycles per second) can be used to expand and contract the microbubbles in the capillaries, which causes cavitation in the capillaries and disrupts the BBB (Banerjee et al., 2021; Wu et al., 2021). This opens more routes for delivering drugs, such as modified tight junctions through the walls of the capillaries (Burgess et al., 2015).

Transdermal drug delivery, or sonophoresis, is another application of ultrasound outside traditional drug delivery. Because the skin's layers prevent macromolecules like proteins from entering the inner parts of the body, ultrasound is focused on the desired region to permeabilize the skin and allow drugs to pass through it into the site. It provides an alternative to drug delivery via needles, especially at frequencies lower than 100 kHz (close to the audible region) (Boucaud et al., 2002). Sonophoresis was first successfully tested in 1954 for digital polyarthritis (the inflammation and stiffness of five or more joints) by delivering the hormone hydrocortisone using ultrasound (Fellinger & Schmidt, 1954). The effects of sonophoresis at higher frequencies (above 1 MHz) have been clinically tested and used to deliver drugs like salicylic acid and lidocaine, while other drugs, like insulin for diabetes and oligonucleotides for skin inflammation, have been preclinically tested (Boucaud et al., 2002). In particular, the delivery of salicylic acid was tested at frequencies greater than 1 MHz, which increased salicylic acid transport when ultrasound was applied for 20 minutes to guinea pigs (Bommannan et al., 1992). Vaccine delivery through sonophoresis has also been explored, given that the skin's immune cells readily receive vaccines delivered transdermally (Mitragotri & Kost, 2004).

Conclusion

Different types of ultrasound mediated delivery of therapeutics are at varying stages of development. For example, low-intensity pulsed ultrasound (LIPUS) treatments for bone fractures and nonunions are in clinical stages, while ultrasound mediated delivery of

Image 3: A schematic image of lipid vesicles, or liposomes, with red hydrophilic heads and black lipophilic tails. The green circles represent the carried particles, which could be neurotransmitters, dye, drugs, or nanoparticles. Image Source: Wikimedia Commons



proteins or DNA is still in the development stages. Protein delivery work has mostly focused on the delivery of insulin to address diabetes, as well as some other regulatory hormones (Mitragotri et al., 1995). One major limitation to transitioning to advanced trials is that it is difficult to determine the appropriate ultrasound intensity for therapeutics delivery (Pitt et al., 2004). The ultrasound needs to be strong enough that protein is transported through the epidermal layer, but not so strong that tissue is permanently damaged. Furthermore, safety concerns arise from the biological consequences of ultrasound treatments (Nyborg, 2001). Sufficient stable or inertial cavitation from the ultrasound may successfully permeabilize the membrane, but it may also damage cell functions, going against the purpose of helping cells.

One possible solution is to make the carriers more accepted by the human immune system. Carriers made from native proteins like albumin, a plasma protein made by the liver that transports hormones and enzymes, would appear less foreign and pass the membrane more easily (Shohet et al., 2000; Spada et al., 2021). Also, carriers attached to polymers like poly(ethylene oxide) (PEO) are called polymersomes. These polymersomes transport macromolecules as liposomes do while not stimulating phagocytes in the immune system, and they are more stable than liposomes against thermal transitions (Lee et al., 2001; Li et al., 2006). Similarly, targeting molecules such as antibodies could be bound to microbubbles by the pairing of complementary binders on antibodies and microbubbles (Takalkar et al., 2004). Thus, the ultrasound intensity needed to transport therapeutics would be lower than for carriers without these modifications. More research on modifications to carriers or ultrasound intensity would help explore ultrasound mediated delivery of a wider variety of therapeutics. It is anticipated that clinical applications of this technique will greatly affect how we treat conditions ranging from diabetes to bone fractures.

"Vaccine delivery through sonophoresis has ... been explored, given that the skin’s immune cells readily receive the vaccines delivered transdermally."

References

Banerjee, K., Núñez, F. J., Haase, S., McClellan, B. L., Faisal, S. M., Carney, S. V., Yu, J., Alghamri, M. S., Asad, A. S., Candia, A. J. N., Varela, M. L., Candolfi, M., Lowenstein, P. R., & Castro, M. G. (2021). Current Approaches for Glioma Gene Therapy and Virotherapy. Frontiers in Molecular Neuroscience, 14, 621831. https://doi.org/10.3389/fnmol.2021.621831

Blomley, M. J. K., Cooke, J. C., & Unger, E. C. (2001). Science, medicine, and the future: Microbubble contrast agents: a new era in ultrasound. BMJ, 322(7296), 1222–1225. https://doi.org/10.1136/bmj.322.7296.1222

Image 4: An image of a sonophoresis procedure, through which a clinician administers drugs on a patient’s skin and transdermally delivers therapeutics. Image Source: Wikimedia Commons

Bommannan, D., Okuyama, H., Stauffer, P., & Guy, R. H. (1992). Sonophoresis. I. The use of high-frequency ultrasound to enhance transdermal drug delivery. Pharmaceutical Research, 9(4), 559–564. https://doi.org/10.1023/A:1015808917491

Boucaud, A., Garrigue, M. A., Machet, L., Vaillant, L., & Patat, F. (2002). Effect of sonication parameters on transdermal delivery of insulin to hairless rats. Journal of Controlled Release, 81(1–2), 113–119. https://doi.org/10.1016/S0168-3659(02)00054-8

FALL 2021

Burgess, A., Shah, K., Hough, O., & Hynynen, K. (2015). Focused ultrasound-mediated drug delivery through the blood–brain barrier. Expert Review of Neurotherapeutics, 15(5), 477–491. https://doi.org/10.1586/14737175.2015.1028369 Carvalho, D. C. L., & Cliquet, A. (2004). The Action of Low-intensity Pulsed Ultrasound in Bones of Osteopenic Rats. Artificial Organs, 28(1), 114–118. https://doi.org/10.1111/j.15251594.2004.07091.x Chowdhury, S. M., Lee, T., & Willmann, J. K. (2017). Ultrasound-guided drug delivery in



Cook, S. D., Salkeld, S. L., Popich-Patron, L. S., Ryaby, J. P., Jones, D. G., & Barrack, R. L. (2001). Improved Cartilage Repair After Treatment With Low-Intensity Pulsed Ultrasound. Clinical Orthopaedics and Related Research, 391, S231–S243. https://doi.org/10.1097/00003086-200110001-00022

Daneman, R., & Prat, A. (2015). The Blood–Brain Barrier. Cold Spring Harbor Perspectives in Biology, 7(1), a020412. https://doi.org/10.1101/cshperspect.a020412

Doan, N., Reher, P., Meghji, S., & Harris, M. (1999). In vitro effects of therapeutic ultrasound on cell proliferation, protein synthesis, and cytokine production by human fibroblasts, osteoblasts, and monocytes. Journal of Oral and Maxillofacial Surgery, 57(4), 409–419. https://doi.org/10.1016/S0278-2391(99)90281-1

Fellinger, K., & Schmidt, J. (1954). Klinik und Therapie des chronischen Gelenkrheumatismus. Maudrich, Vienna, Austria, 549–552.

Guzmán, H. R., McNamara, A. J., Nguyen, D. X., & Prausnitz, M. R. (2003). Bioeffects caused by changes in acoustic cavitation bubble density and cell concentration: A unified explanation based on cell-to-bubble ratio and blast radius. Ultrasound in Medicine & Biology, 29(8), 1211–1222. https://doi.org/10.1016/S0301-5629(03)00899-8

Howard, W. A., Bayomi, A., Natarajan, E., Aziza, M. A., El-Ahmady, O., Grissom, C. B., & West, F. G. (1997). Sonolysis Promotes Indirect Co–C Bond Cleavage of Alkylcob(III)alamin Bioconjugates. Bioconjugate Chemistry, 8(4), 498–502. https://doi.org/10.1021/bc970077l

Husseini, G. A., El-Fayoumi, R. I., O'Neill, K. L., Rapoport, N. Y., & Pitt, W. G. (2000). DNA damage induced by micellar-delivered doxorubicin and ultrasound: Comet assay study. Cancer Letters, 154(2), 211–216. https://doi.org/10.1016/S0304-3835(00)00399-2

Ilyashenko, I. (2009). Phonophoresis procedure. Wikimedia Commons. https://upload.wikimedia.org/wikipedia/commons/9/93/%D0%A4%D0%BE%D0%BD%D0%BE%D1%84%D0%BE%D1%80%D0%B5%D0%B7.png

Koch, S., Pohl, P., Cobet, U., & Rainov, N. G. (2000). Ultrasound enhancement of liposome-mediated cell transfection is caused by cavitation effects. Ultrasound in Medicine & Biology, 26(5), 897–903. https://doi.org/10.1016/S0301-5629(00)00200-3


Lee, J. C.-M., Bermudez, H., Discher, B. M., Sheehan, M. A., Won, Y.-Y., Bates, F. S., & Discher, D. E. (2001). Preparation, stability, and in vitro performance of vesicles made with diblock copolymers. Biotechnology and Bioengineering, 73(2), 135–145. https://doi.org/10.1002/bit.1045

Li, L., AbuBaker, O., & Shao, Z. J. (2006). Characterization of Poly(Ethylene Oxide) as a Drug Carrier in Hot-Melt Extrusion. Drug Development and Industrial Pharmacy, 32(8), 991–1002. https://doi.org/10.1080/03639040600559057

Marmottant, P., & Hilgenfeldt, S. (2003). Controlled vesicle deformation and lysis by single oscillating bubbles. Nature, 423(6936), 153–156. https://doi.org/10.1038/nature01613

MDougM. (2007). Lipid vesicles. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Lipid_vesicles.svg

Mehta, A. M., Sonabend, A. M., & Bruce, J. N. (2017). Convection-Enhanced Delivery. Neurotherapeutics, 14(2), 358–371. https://doi.org/10.1007/s13311-017-0520-4

Miller, D. L., Pislaru, S. V., & Greenleaf, J. F. (2002). Sonoporation: Mechanical DNA delivery by ultrasonic cavitation. Somatic Cell and Molecular Genetics, 27(1/6), 115–134. https://doi.org/10.1023/A:1022983907223

Mitragotri, S. (2005). Healing sound: The use of ultrasound in drug delivery and other therapeutic applications. Nature Reviews Drug Discovery, 4(3), 255–260. https://doi.org/10.1038/nrd1662

Mitragotri, S., Blankschtein, D., & Langer, R. (1995). Ultrasound-Mediated Transdermal Protein Delivery. Science, 269(5225), 850–853. https://doi.org/10.1126/science.7638603

Mitragotri, S., & Kost, J. (2004). Low-frequency sonophoresis. Advanced Drug Delivery Reviews, 56(5), 589–601. https://doi.org/10.1016/j.addr.2003.10.024

Miura, S., Tachibana, K., Okamoto, T., & Saku, K. (2002). In vitro transfer of antisense oligodeoxynucleotides into coronary endothelial cells by ultrasound. Biochemical and Biophysical Research Communications, 298(4), 587–590. https://doi.org/10.1016/S0006-291X(02)02467-1



Mourad, P. D., Lazar, D. A., Curra, F. P., Mohr, B. C., Andrus, K. C., Avellino, A. M., McNutt, L. D., Crum, L. A., & Kliot, M. (2001). Ultrasound Accelerates Functional Recovery after Peripheral Nerve Damage. Neurosurgery, 48(5), 1136–1141. https://doi.org/10.1097/00006123-200105000-00035

Nanda, N. C., & Carstensen, E. L. (1997). Echo-enhancing agents: Safety. In N. C. Nanda, R. Schlief, & B. B. Goldberg (Eds.), Advances in Echo Imaging Using Contrast Enhancement (pp. 115–131). Springer Netherlands. https://doi.org/10.1007/978-94-011-5704-9_6

Natebw. (2006). Retinoblastoma ultrasound. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Retinoblastoma_ultrasound.jpg

Ning, S., Macleod, K., Abra, R. M., Huang, A. H., & Hahn, G. M. (1994). Hyperthermia induces doxorubicin release from long-circulating liposomes and enhances their anti-tumor efficacy. International Journal of Radiation Oncology*Biology*Physics, 29(4), 827–834. https://doi.org/10.1016/0360-3016(94)90572-X

Nyborg, W. L. (2001). Biological effects of ultrasound: Development of safety guidelines. Part II: General review. Ultrasound in Medicine & Biology, 27(3), 301–333. https://doi.org/10.1016/S0301-5629(00)00333-1

Pitt, W. G., Husseini, G. A., & Staples, B. J. (2004). Ultrasonic drug delivery – a general review. Expert Opinion on Drug Delivery, 1(1), 37–56. https://doi.org/10.1517/17425247.1.1.37

Price, R. J., & Kaul, S. (2002). Contrast Ultrasound Targeted Drug and Gene Delivery: An Update on a New Therapeutic Modality. Journal of Cardiovascular Pharmacology and Therapeutics, 7(3), 171–180. https://doi.org/10.1177/107424840200700307

Rooney, J. A. (1970). Hemolysis Near an Ultrasonically Pulsating Gas Bubble. Science, 169(3948), 869–871. https://doi.org/10.1126/science.169.3948.869

Shohet, R. V., Chen, S., Zhou, Y.-T., Wang, Z., Meidell, R. S., Unger, R. H., & Grayburn, P. A. (2000). Echocardiographic Destruction of Albumin Microbubbles Directs Gene Delivery to the Myocardium. Circulation, 101(22), 2554–2556. https://doi.org/10.1161/01.CIR.101.22.2554

Spada, A., Emami, J., Tuszynski, J. A., & Lavasanifar, A. (2021). The Uniqueness of Albumin as a Carrier in Nanodrug Delivery. Molecular Pharmaceutics, 18(5), 1862–1894. https://doi.org/10.1021/acs.molpharmaceut.1c00046

Starritt, H. C., Duck, F. A., & Humphrey, V. F. (1989). An experimental investigation of streaming in pulsed diagnostic ultrasound beams. Ultrasound in Medicine & Biology, 15(4), 363–373. https://doi.org/10.1016/0301-5629(89)90048-3

Suslick, K. S. (Ed.). (1988). Ultrasound: Its chemical, physical, and biological effects. VCH.

Takalkar, A. M., Klibanov, A. L., Rychak, J. J., Lindner, J. R., & Ley, K. (2004). Binding and detachment dynamics of microbubbles targeted to P-selectin under controlled shear flow. Journal of Controlled Release, 96(3), 473–482. https://doi.org/10.1016/j.jconrel.2004.03.002

Villarreal, M. R. (2007). Phospholipids aqueous solution structures. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Phospholipids_aqueous_solution_structures.svg

Watanabe, Y., Matsushita, T., Bhandari, M., Zdero, R., & Schemitsch, E. H. (2010). Ultrasound for Fracture Healing: Current Evidence. Journal of Orthopaedic Trauma, 24(Supplement 1), S56–S61. https://doi.org/10.1097/BOT.0b013e3181d2efaf

Wu, S.-K., Tsai, C.-L., Huang, Y., & Hynynen, K. (2020). Focused Ultrasound and Microbubbles-Mediated Drug Delivery to Brain Tumor. Pharmaceutics, 13(1), 15. https://doi.org/10.3390/pharmaceutics13010015

Wyatt, E., & Davis, M. (2017). Nanoparticles in Brain Metastases. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Nanoparticles_in_Brain_Metastases_(26155288848).jpg





Pathophysiology, Diagnosis, and Treatment of Heat-Induced Hives: Cholinergic Urticaria

BY TYLER CHEN '24

Cover Image: On the left is the general immune response that leads to wheal formation – the characteristic symptom of chronic urticaria. With cholinergic urticaria, increases in body temperature lead to this immune response through multiple mechanisms. On the right is the ribbon for chronic urticaria awareness. Image Source: Wikimedia Commons

Introduction

There are not many conditions that are as commonplace yet largely unheard of as cholinergic urticaria (CU). In fact, it was not until 1924 that CU – referred to as "urticaria calorica" at the time – was first documented by Duke (Duke, 1924). In essence, CU is an abnormal sensitivity to heat. Those afflicted with CU produce numerous small (usually <5 mm in diameter), short-lived pruritic wheals (itchy hives) – most frequently on the upper trunk and proximal limbs – when the body's core temperature rises from some heat stimulus (examples include physical or mental exertion, eating spicy foods, and taking hot showers) (Kim et al., 2014). The symptoms subside quite quickly – typically within an hour or so – after the core body temperature lowers. Even so, the condition may significantly impair quality of life, especially for athletes, whose body temperatures frequently rise through routine physical exertion. Due to its characteristic clinical presentation, diagnosing CU is relatively straightforward. However, the underlying pathophysiology of CU remains uncertain.


Clinically, CU can be categorized into four subtypes: 1) CU with Poral Occlusion (CuPO), 2) CU with Acquired, Generalized Hypohidrosis (CuAGH), 3) CU with Sweat Hypersensitivity (CuSH), and 4) idiopathic CU (iCu) – all of which are further detailed over the course of this paper. These four subtypes have distinct presentations and underlying pathophysiologies that lead to the urticarial reaction. In all cases, CU can be diagnosed fairly easily through either an exercise test or some other heat stimulus (Commens & Greaves, 1978). However, further distinguishing between the four types of CU requires additional diagnostic tools. Moreover, because the pathophysiologies of the four subtypes vary, treatments for each are distinct as well, necessitating precise diagnosis of the specific type of CU. The disease itself is fairly common across the general population, with most cases seen among young adults. Different studies have aimed to determine the prevalence of CU in the young adult population, but there is no consensus on the true figure.


For example, a study of university and high school students in Germany found that 11.2% of the young adult population displayed whealing consistent with CU following a heat stimulus (Zuberbier et al., 1994) – a figure that vastly differs from the 4.16% overall prevalence found in a study of the young adult population in India (Godse et al., 2013). This article seeks to document the current collective understanding of the underlying pathophysiologies of CU, the diagnostic tools used to diagnose and differentiate between the four categories of CU, and the methods presently available to treat CU.

Pathophysiology

The current pathophysiology of CU remains largely unclear. However, insights have been derived for each subtype.

a. Cholinergic Urticaria with Poral Occlusion (CuPO)

A few cases suggest that CU can be caused by poral occlusion (PO) – the clogging of sweat pores. In these cases, histological examination of the skin surrounding the wheals revealed occlusion of the upper parts of the sweat ducts. This occlusion was largely due to both hyperkeratinization – the formation of keratotic plugs in the superficial acrosyringium (the intraepidermal portions of the sweat ducts) – and dilatation of the sweat ducts themselves (Chinuki et al., 2011; Nakamizo et al., 2012). From this, it is hypothesized that PO causes sweat to leak out of the sweat ducts and into the surrounding dermis.

The contents of sweat – which include numerous enzymes, a renin-like substance, secretory IgA, IgE, and cytokines such as interleukin α and β and interleukin 8 – then induce a local inflammatory response, which in turn produces the wheals consistent with CU (Kobayashi et al., 2002). In such cases, those affected by CuPO consistently see a seasonal trend in their symptoms, with a peak during the winter and a low during the summer (Rho, 2006). This can be attributed to dryer skin during the winter: with dryer skin due to lower temperatures and humidity, the skin may hyperkeratinize to protect itself (excess keratin increases cohesion between, and accumulation of, dead skin cells, obstructing the superficial acrosyringium), leading to a higher incidence of PO and thus CU. The flip side is also true: higher temperatures and humidity during the summer reduce hyperkeratinization of the intradermal portions of the sweat ducts, resulting in a lower incidence of CU.

b. Cholinergic Urticaria with Acquired, Generalized Hypohidrosis (CuAGH)

Image 1: A male patient is seen here displaying CU on the volar aspect of his forearm. The characteristic pinpoint-sized, numerous, pruritic wheals are seen clearly across the arm, along with some mild erythema (redness) in response to a heat stimulus. Image Source: Wikimedia Commons


CU has also long been associated with acquired, generalized hypohidrosis (AGH) – a condition in which individuals have difficulty sweating. Autoimmunity to sweat glands, autoimmunity to acetylcholine receptors [acetylcholine binds to the M3-muscarinic acetylcholine receptors (M3MARs) in the eccrine glands, which produce an immediate sweat response], degeneration of post-ganglionic sympathetic skin nerve fibers (the nerve fibers that carry the sympathetic signals for acetylcholine release), and even PO (as previously discussed) have all been proposed as potential causes of AGH (Hu et al., 2018; Nakamizo et al., 2012). Even with these hypotheses, however, the general pathophysiology linking AGH to CU remains unclear. Although the exact mechanism is unknown, recent studies of the association between AGH and CU have shed light on the roles and functions of M3MARs within the eccrine glands. In patients with CU with hypohidrosis/anhidrosis, there appears to be lower expression of acetylcholinesterase – the enzyme responsible for breaking down acetylcholine into acetic acid and choline – and of M3MAR in eccrine gland epithelial tissue (Sawada et al., 2014). In normal patients, acetylcholine released within eccrine gland epithelial tissue will either bind to the M3MARs on the sweat ducts to stimulate the secretion of sweat or bind to acetylcholinesterase to be broken down.

"Clinically, we can categorize CU into 4 subcategories: 1) CU with Poral Occlusion (CuPO), 2) CU with Acquired, Generalized Hypohidrosis (CuAGH), 3) CU with Sweat Hypersensitivity (CuSH), and 4) idiopathic CU..."



"... CU is ultimately rooted in the body’s hypersensitivity to heat..."

Image 2: A cross-section of skin with sudoriferous (sweat) glands. Here, the sweat glands are all specifically eccrine sweat glands, since the glands open directly to the surface of the skin via the sweat ducts, as opposed to apocrine sweat glands, which open into the hair follicle. The five layers – stratum corneum, stratum lucidum, stratum granulosum, stratum mucosum, and stratum germinativum – form the epidermal layer of the skin, and the portion of the sweat duct located within these layers is known as the acrosyringium. The superficial acrosyringium refers to the portion of the acrosyringium near the outer surface of the skin, within the top portions of the stratum corneum. In patients with CuPO, elevated keratin levels result in higher cohesion between keratinocytes (90% of skin cells) in the stratum corneum, leading to an overaccumulation of skin cells surrounding the superficial acrosyringium. With a large enough accumulation, the keratinocytes cap the opening to the outer surface of the skin – forming a keratotic plug. Image Source: Wikimedia Commons

However, those with hypohidrosis or anhidrosis have skin with lower expression of both of these proteins. As a result, acetylcholine fails to bind to M3MAR due to its reduced expression, and also fails to be fully degraded due to the lower amount of acetylcholinesterase available. This excess acetylcholine then overflows onto adjacent mast cells, which store a variety of chemical mediators and are responsible for controlling local inflammatory responses such as allergy or hypersensitivity reactions. In response to the presence of acetylcholine, mast cells degranulate, or release their chemical mediators, stimulating the inflammatory response and producing the wheals characteristic of CU (Takahashi et al., 1992). However, some individuals afflicted with this type of CU show no expression of M3MAR even on mast cells (Nakamizo et al., 2012), indicating that molecules beyond acetylcholine may be involved in CU. The cause of this underexpression of both M3MAR and acetylcholinesterase remains unknown, but it does appear to be related to the concentration of several chemokines associated with Atopic Dermatitis – another skin condition that is closely related to CU.

Specifically, expression of the chemokines CCL2/MCP-1, CCL5/RANTES, and CCL17/TARC within eccrine gland epithelial tissue affected by Atopic Dermatitis was significantly higher, attracting CD4+ and CD8+ T cells and mast cells to the eccrine gland epithelial tissues (Sawada et al., 2014). In turn, the presence of these T cells is thought to cause the eccrine glands to limit the expression of M3MAR and acetylcholinesterase, which promotes the urticarial condition alongside the increased presence of mast cells.





Image 3: An insight into the pathways underlying the urticarial reaction in CuAGH. Dysfunctional and/or underexpressed M3MAR not only leads to decreased sweating but also results in an overflow of acetylcholine onto nearby mast cells, which stimulates their degranulation and leads to the urticarial reaction. Additionally, some patients report pain – not just pruritus – with CuAGH, which can also be attributed to acetylcholine overflow: excess acetylcholine stimulates sensory nerves, causing pain. Image Source: Wikimedia Commons

c. Cholinergic Urticaria with Sweat Hypersensitivity (CuSH)

Some cases have also described patients with CU displaying a hypersensitivity to their own sweat, indicating that, in some cases, the onset of CU may be rooted in an allergic reaction to sweat. Upon intradermal injection of their own sweat, patients with CuSH had an immediate skin reaction, indicating a strong association between this subtype of CU and hypersensitivity to sweat (Adachi et al., 1994). This hypersensitivity to autologous sweat antigen has been well documented among patients with Atopic Dermatitis (eczema). In fact, the hypersensitivities to sweat in CU and in Atopic Dermatitis appear to be virtually the same, leading to the belief that the mechanism behind the urticarial reaction may be similar to that of Atopic Dermatitis (Tanaka et al., 2006). In patients with Atopic Dermatitis, an IgE specific for the sweat antigen led to the degranulation of basophils and mast cells, which induced the inflammatory response and urticarial reaction. A similar mechanism is projected for CuSH, suggesting that CU and Atopic Dermatitis may share hypersensitivities to the same antigens within sweat – even if their clinical presentations are distinct (Takahagi et al., 2009).

d. Idiopathic Cholinergic Urticaria (iCu)

Although most individuals with CU can be diagnosed with one of the three aforementioned subtypes, there are some cases in which the urticarial reaction cannot be pinpointed to any one of the causes previously mentioned. In such cases, the methods used to differentiate between PO, AGH, and sweat hypersensitivity have been exhausted, and the patient is diagnosed with iCu (Nakamizo et al., 2012).


Clinical Diagnosis


a. Diagnosis of Cholinergic Urticaria

Since CU is ultimately rooted in the body's hypersensitivity to heat, diagnosing CU in the most general sense (i.e., without distinguishing between subtypes) can be carried out fairly easily. As is common in other forms of hypersensitivity testing, provocation testing is used to determine the presence of CU. Patients are typically placed in a hot bath, asked to perform a certain exercise, or placed in a hot box to elevate their body temperature, and their physical response is noted (Fukunaga et al., 2018). However, even given a response consistent with CU (pinpoint-sized wheals, erythema, etc.), CU must still be differentiated from Food-Dependent Exercise-Induced Anaphylaxis (FDEIA) and Localized Heat Urticaria (Fukunaga et al., 2018). Differentiating between FDEIA and CU is relatively simple, as the former requires the ingestion of food and a specific IgE response to the causative food antigen. Skin prick tests, measurements of the particular IgE, and other provocation methods can be used to distinguish between FDEIA and CU. Differentiating between Localized Heat Urticaria and CU is also simple. While CU is a systemic urticarial reaction caused by a significant increase in core body temperature, Localized Heat Urticaria is the development of itchy wheals limited to the portions of skin that were exposed to heating (Fukunaga et al., 2002). As a result, Localized Heat Urticaria can be tested for with more localized heat provocation, typically involving cylinders of hot water that heat a specific area rather than general heat stimuli aimed at elevating the body's core temperature.



In addition to heat stress methods, 0.05 mL intradermal injections of 0.02% methacholine chloride (acetyl-β-methylcholine chloride) can be used to induce an urticarial reaction in those with CU (Commens & Greaves, 1978). Methacholine has a structure similar to acetylcholine – the neurotransmitter thought to be the primary substance triggering CU. However, since acetylcholine itself is unstable in solution, methacholine is the more suitable option for routine CU testing. When methacholine is injected into the skin, the formation of the characteristic wheals is used to establish a diagnosis of CU. However, only around 51% of cases present with wheals upon methacholine injection, implying that a lack of flare-up cannot definitively rule out CU. Likewise, 0.05 mL intradermal injections of 0.002% carbamylcholine chloride (carbachol) can be used to similar effect (Schwartz, 2021).
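As a rough sense of scale for these injections – assuming the percentages refer to weight per volume, the usual convention for solution concentrations – the methacholine dose works out to only a few micrograms:

```latex
0.02\%~\text{w/v} = \frac{0.02~\text{g}}{100~\text{mL}} = 0.2~\text{mg/mL},
\qquad
0.05~\text{mL} \times 0.2~\text{mg/mL} = 0.01~\text{mg} = 10~\mu\text{g}
```

By the same arithmetic, the carbachol injection, at a tenfold lower concentration, would deliver about 1 µg per dose.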

b. Differential Diagnosis between Subtypes of Cholinergic Urticaria

Beyond the confirmation of CU, it is necessary to pinpoint which type of CU afflicts the patient, as that will guide the treatment plan. Because the underlying pathophysiologies of the four subtypes differ, several methods are used to differentiate between the categories:

i. Intradermal Injection of Autologous Sweat

An intradermal injection of a diluted sample of the patient's sweat can provide insight into which subtype of CU the patient has. The pathophysiology of CuAGH relies on the leakage of acetylcholine and other substances into nearby tissue rather than on a direct immune response to the sweat itself. For those with CuSH, by contrast, an intradermal injection of a diluted sample of autologous sweat will produce an inflammatory response. The sample must be diluted to below 1/10, as even healthy individuals will occasionally produce an inflammatory and urticarial response to more highly concentrated intradermal injections of autologous sweat (Fukunaga et al., 2005). Thus, a positive reaction (defined by the development of a wheal and/or erythema of a significant size predetermined by the dosage and dilution of the injection) implies hypersensitivity to sweat and can be used to support a diagnosis of CuSH.

ii. Intradermal Injection of Cholinergic Agents

Acetylcholine is a major contributor to some types of CU (CuAGH in particular). As such, an intradermal injection of a cholinergic agent (a molecule with a structure similar to acetylcholine) can be expected to produce an urticarial response in certain cases of CU. With this method, both CuAGH and CuSH produce an urticarial reaction (Nakamizo et al., 2012). On the contrary, there will typically be no urticarial reaction among those with CuPO, since it is the sweat content itself – not the presence of acetylcholine or other cholinergic agents – that leads to the urticarial reaction.

iii. Hypohidrosis/Anhidrosis Testing

Image 4: A visual depiction of the mast cell degranulation process within the sweat allergy pathway. 1. Antigen (sweat antigen). 2. Immunoglobulin E antibody (IgE). 3. High-affinity IgE receptors on mast cells (FcεRI receptors). 4. Preformed mediators (e.g., histamine, proteases, chemokines, and heparin). 5. Granules. 6. Mast cell body. 7. Newly formed mediators (e.g., prostaglandins, leukotrienes, thromboxane, PAF). IgE antibodies specific to a particular sweat antigen first bind to the sweat antigen and then to FcεRI receptors on mast cells or basophils, driving the degranulation of different mediators – histamine in particular – which stimulates the inflammatory response and results in the urticarial reaction. Image Source: Wikimedia Commons



As evident in its name, CuAGH presents with hypohidrosis. This can be used to differentiate it from the other subtypes of CU, since CuSH does not present with hypohidrosis. To diagnose hypohidrosis, a simple sweat test is carried out. Sweat can be easily visualized with topical indicators such as iodinated starch and sodium alizarin sulphonate (Chia & Tey, 2013), which undergo a dramatic color change when they encounter the moisture of sweat. After the topical indicator is applied, a thermoregulatory sweat test is carried out: the patient is placed in a hot box, placed under a thermal blanket, or asked to exercise. If the topical indicator fails to change color after some time, the patient has hypohidrosis. Hypohidrosis itself is also a separate disorder, and further steps can be taken to localize and pinpoint its cause.

iv. Histological Examination

Histological examination of the skin tissue surrounding the wheals can also provide valuable insight into the subtype of CU. This method is primarily useful for CuPO, as the main pathophysiology behind this type of CU is hyperkeratinization of the superficial acrosyringium and dilatation of the sweat ducts themselves. Histological examination of the epidermal layer of the skin around the wheal can verify whether the superficial acrosyringium is obstructed by a keratin plug, which would indicate CuPO. Furthermore, in cases of CuAGH, the sweat glands themselves are occasionally atrophic – which can also be verified through histological examination (Nakamizo et al., 2012).

v. Idiopathic Cholinergic Urticaria (iCu)

The previous four methods are effective in verifying or affirming a certain subtype of CU.


However, when intradermal injections of autologous sweat and of cholinergic agents do not elicit a response, the patient is verified not to have hypohidrosis, and histological examination of the local epithelial tissue reveals nothing indicative, the patient is diagnosed with iCu.
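Taken together, these tests amount to a small decision procedure. The sketch below is one possible distillation of that workflow into code; the ordering of checks and the reduction of each test to a single boolean are illustrative assumptions, not a clinically validated algorithm:

```python
# A minimal sketch of the differential workflow described above, assuming CU has
# already been confirmed by general heat provocation. The check ordering and the
# boolean simplification of each test are assumptions for illustration only.

def classify_cu_subtype(poral_occlusion_on_histology: bool,
                        hypohidrosis_on_sweat_test: bool,
                        reacts_to_autologous_sweat: bool,
                        reacts_to_cholinergic_agent: bool) -> str:
    if poral_occlusion_on_histology:
        # A keratotic plug obstructing the superficial acrosyringium points to
        # CuPO; CuPO patients typically do NOT react to cholinergic agents.
        return "CuPO"
    if hypohidrosis_on_sweat_test:
        # Topical indicator fails to change color during thermoregulatory testing.
        return "CuAGH"
    if reacts_to_autologous_sweat:
        # Wheal/erythema after a diluted (<1/10) autologous sweat injection.
        return "CuSH"
    if reacts_to_cholinergic_agent:
        # Seen in both CuAGH and CuSH, so on its own this cannot settle a subtype.
        return "indeterminate: repeat or extend testing"
    # All differentiating tests exhausted without an indicative result.
    return "iCu"


if __name__ == "__main__":
    # Example: no plug on histology, normal sweating, positive sweat injection.
    print(classify_cu_subtype(False, False, True, True))  # -> CuSH
```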

Treatments

a. Cholinergic Urticaria with Poral Occlusion (CuPO)

Since the main cause of CuPO is the PO itself, steps should be taken to reduce the hyperkeratinization that ultimately leads to the urticarial reaction. Taking hot baths and exercising can help the body sweat more, which in turn can improve the symptoms of this type of CU (Kobayashi et al., 2002). Repeated sweating prevents the formation of keratotic plugs in the intraepidermal portions of the sweat ducts and thus prevents the onset of CU. In fact, the seasonality of CuPO is largely tied to this, as increased sweating during the summer deters hyperkeratinization of the sweat ducts, leading to a lower incidence of CuPO during the summer months. In addition to bathing or inducing more sweating, individuals can apply keratolytic agents, which deter the formation of keratotic plugs (Nakamizo et al., 2012).

b. Cholinergic Urticaria with Acquired, Generalized Hypohidrosis (CuAGH)


Combating CuAGH is more nuanced. The onset of AGH plays a large role in the pathophysiology of this form of CU, so treatment varies depending on its cause. When the cause of AGH is the destruction of sweat glands by autoimmune disease, a high dosage of corticosteroids can be effective (Bito et al., 2012). Although the exact mechanism by which corticosteroids improve symptoms remains unclear, the treatment is thought to decrease the lymphocyte infiltrate around the sweat glands, allowing M3MAR to be re-expressed. This, in turn, leads to improvements in sweating and in CU.

Image 5: Methacholine (left) vs. acetylcholine. Both have very similar structures, except that methacholine carries an extra methyl group. This extra methyl group makes methacholine selective for muscarinic acetylcholine receptors over nicotinic acetylcholine receptors, allowing more selective receptor stimulation than direct injection of acetylcholine. Additionally, the structural change makes methacholine significantly less susceptible to hydrolysis and breakdown: acetylcholine can be broken down by several non-specific cholinesterases, including acetylcholinesterase, whereas methacholine can only be hydrolyzed by acetylcholinesterase, and at a much lower rate than acetylcholine. Image Source: Wikimedia Commons

"... it is necessary to pinpoint which type of CU afflicts the patient, as that will guide the treatment plan."

Beyond systemic steroid therapy, the cornerstone of pharmacological treatment for CuAGH is antihistamine drugs. The first line of therapy typically involves H1-antagonists.



"Overall, further investigation into CU and the efficacy of different drugs against CU can help elucidate some of the uncertainty surrounding treatment plans and improve the current understanding of the first three subtypes of CU, as well as provide insight into the pathophysiology of iCu."

Image 6: The chemical structure of the antihistamine cetirizine, marketed as Zyrtec (shown on the right). Cetirizine is a highly selective H1-antagonist – it specifically targets histamine H1-receptors, outcompeting histamine in binding to the receptor (Portnoy & Dinakar, 2004). More specifically, it is an inverse agonist: when it binds to the histamine H1-receptor, it produces an effect opposite to the one histamine would have produced. Cetirizine also has anti-inflammatory effects independent of the H1-receptor: it regulates the release of chemokines and cytokines, thereby regulating the inflammatory response (Walsh, 1994; Albanesi et al., 1998), and it limits eosinophil chemotaxis, further restraining inflammation (Boone et al., 2000). Zyrtec-D is a combination of cetirizine and pseudoephedrine, with the former providing allergy relief and the latter acting as a decongestant. Image Source: Wikimedia Commons

Patients generally see only a mild to moderate improvement in symptoms from standard doses (Fukunaga et al., 2018). Even so, H1-antagonists are generally considered effective. In particular, the H1-antagonist cetirizine (Zyrtec) was found to be especially effective in relieving CU, leading to significant reductions in wheal formation, pruritus, and erythema (Zuberbier et al., 1995). Increasing the dosage of H1-antagonists can further improve symptoms, though this effect was seen in fewer than half of patients. When increasing the dosage did not provide further relief, lafutidine – an H2-antagonist – was found to be effective in those with refractory CU (Hatakeyama et al., 2016). Studies of various anticholinergic drugs and other treatments have also shown promise. One study found oral administration of scopolamine butylbromide (an anticholinergic drug) to be effective in cases where patients were resistant to H1-antagonists and other conventional antihistamines (Tsunemi et al., 2003). Another case found a combination of propranolol (a β-adrenergic blocker), cetirizine, and montelukast to be effective in treating CU when an antihistamine-only regimen failed to produce long-term relief (Feinberg and Toner, 2008). Botulinum toxin injections have also shown potential, with one individual finding relief from CU following a Botox injection (Sheraz & Halpern, 2013). However, this relief was only temporary: the neuromuscular blocking effect of botulinum toxin, which decreases acetylcholine release, wears off over time as new axons regenerate and new neuromuscular connections form, leading to a resurgence in acetylcholine levels and in CU. Danazol has also been found to be very effective in treating CU. Multiple studies have indicated that Danazol therapy greatly reduces rates of whealing in young men afflicted with CU (Wong et al., 1987).

In individuals with CU, serum levels of α1-antichymotrypsin – a protease inhibitor responsible for inhibiting mast cell chymases and the neutrophil proteinase cathepsin G – are depressed, which is thought to promote CU through the delayed inactivation of these inflammatory proteases (Kalsheker, 1996). Danazol can greatly increase the serum levels of several protease inhibitors, and serum levels of α1-antichymotrypsin rise significantly after taking Danazol – alongside significant symptomatic improvement. In addition to further establishing Danazol's effectiveness, this implies that depressed levels of α1-antichymotrypsin are linked pathologically with the release of histamine and the subsequent development of wheals and pruritus (Alsamarai et al., 2012). Despite Danazol's effectiveness in mitigating CU, however, it should be avoided entirely or given with extreme caution to females and children, since it is an attenuated androgen – a hormone responsible for regulating the development of male characteristics (e.g., testosterone).

c. Cholinergic Urticaria Associated with Sweat Hypersensitivity (CuSH)

Treatments for CuSH largely resemble those for CuAGH from a pharmacological perspective. However, the first line of treatment is typically a desensitization protocol to autologous sweat, intended to reduce the severity of the urticarial response: the sweat antigen to which the individual is highly sensitive is purified and then used in a rapid desensitization process commonly employed for other kinds of hypersensitivities (Kozaru et al., 2011). Beyond allergen desensitization, the treatment options for CuSH closely resemble the typical pharmacological treatments used for CuAGH. In some cases of CuSH, anti-IgE therapy can be effective in reducing symptom severity.





A well-known example of anti-IgE therapy is omalizumab, a recombinant humanized monoclonal antibody that has been shown to be potentially effective in treating CuSH among other urticarias (Metz et al., 2008). Omalizumab binds to the same site on IgE that binds the high-affinity IgE receptors, or FcεRI, located on mast cells, basophils, and antigen-presenting dendritic cells (Chang et al., 2007). As a result, sweat antigen–IgE complexes are unable to bind to FcεRI, which prevents the cross-linking of FcεRI via these complexes and thus prevents the degranulation of basophils and mast cells (Siraganian, 2004). This inhibition of degranulation, in turn, prevents the urticarial reaction. However, while the efficacy of omalizumab is high in several studies, the overall literature on using omalizumab to treat CU remains mixed: omalizumab has also been documented to be ineffective in some patients who were H1-antagonist resistant (Sabroe, 2010). Thus, omalizumab is a potentially effective treatment for CuSH, but further research must be conducted to identify which populations are most likely to respond positively to it.

d. Idiopathic Cholinergic Urticaria (iCu)

Just as the diagnosis of iCu remains one of exclusion, the treatment options for iCu remain generic. As with the previous subtypes of CU, the primary pharmacological treatment is antihistamine drugs, with H1-antagonists as the first option (Fukunaga et al., 2018).


Alongside antihistamine drugs, other common treatments for CU in general include leukotriene inhibitors (e.g., montelukast) and immunosuppressive drugs (e.g., prednisone). Overall, no single treatment is universally effective in all cases of CU (Alsamarai et al., 2012). Similarly, extensive study of the different combinations of antihistamines, anticholinergic drugs, and other agents has revealed that no one combination is effective for all patients. Thus, the most comprehensive treatment plan lies in personalization and a trial-and-error process with existing treatments that are effective for a good portion of the population.

Conclusion

Image 7: A breakdown of how omalizumab, or any anti-IgE, reduces the severity of the allergic immune response. Omalizumab/anti-IgE (antibodies with the dark green heavy chains) binds to free-floating IgE, preventing the IgE from binding to the FcεRI receptors found on mast cells, basophils, and antigen-presenting dendritic cells. This prevents the degranulation of mast cells and basophils, which in turn reduces the urticarial reaction. Additionally, the reduction of IgE bound to FcεRI receptors leads to decreased expression of FcεRI receptors on the surface of mast cells, basophils, and antigen-presenting dendritic cells. IgE levels are already the lowest of all the immunoglobulins, so reducing the expression of FcεRI receptors also decreases the chance of capturing free-floating IgE, lowering both the strength and the likelihood of mast cell and basophil degranulation (Rios & Kalesnikoff, 2015). Lower FcεRI receptor expression could also decrease the quantity of allergen (the sweat antigen in the case of CuSH) presented on the surface of antigen-presenting dendritic cells to T cells, which can reduce tissue infiltration by T and B cells, eosinophils, mast cells, and basophils (Segal et al., 2008). Image Source: Segal et al., 2008, Creative Commons License

The current literature surrounding CU is relatively sparse given its sizable prevalence across the population. As such, the present understanding of the disease's underlying pathophysiology remains uncertain. What is currently understood is that CU typically presents in one of four variants: Cholinergic Urticaria with Poral Occlusion (CuPO), Cholinergic Urticaria with Acquired, Generalized Hypohidrosis (CuAGH), Cholinergic Urticaria Associated with Sweat Hypersensitivity (CuSH), and Idiopathic Cholinergic Urticaria (iCu). Treatments for each of these types of CU remain generalized. Antihistamine drugs are the most common pharmacological treatments for all subtypes, while systemic steroid therapy and autologous sweat desensitization are prominent treatments for CuAGH and CuSH, respectively. Overall, further investigation into CU and the efficacy of different drugs against it can help resolve some of the uncertainty surrounding treatment plans, improve the current understanding of the first three subtypes of CU, and provide insight into the pathophysiology of iCu.



References


Adachi, J., Aoki, T., & Yamatodani, A. (1994). Demonstration of sweat allergy in cholinergic urticaria. Journal of Dermatological Science, 7(2), 142–149. https://doi.org/10.1016/0923-1811(94)90088-4

Duke, W. W. (1924). Urticaria caused specifically by the action of physical agents (light, cold, heat, freezing, burns, mechanical irritation, and physical and mental exertion). Journal of the American Medical Association, 83(1), 3–9. https://doi.org/10.1001/jama.1924.02660010007002

Albanesi, Pastore, Fanales-Belasio, & Girolomoni. (1998). Cetirizine and hydrocortisone differentially regulate ICAM-1 expression and chemokine release in cultured human keratinocytes. Clinical & Experimental Allergy, 28(1), 101–109. https://doi.org/10.1046/j.1365-2222.1998.00206.x

Alsamarai, A. M., Hasan, A. A., & Alobaidi, A. H. (2012). Evaluation of different combined regimens in the treatment of cholinergic urticaria. The World Allergy Organization Journal, 5(8), 88–93. https://doi.org/10.1097/wox.0b013e31825a72fc

Bito, T., Sawada, Y., & Tokura, Y. (2012). Pathogenesis of Cholinergic Urticaria in Relation to Sweating. Allergology International, 61. https://doi.org/10.2332/allergolint.12-RAI-0485

Boone, M., Lespagnard, L., Renard, N., Song, M., & Rihoux, J. (2000). Adhesion molecule profiles in atopic dermatitis vs. allergic contact dermatitis: Pharmacological modulation by cetirizine. Journal of the European Academy of Dermatology and Venereology, 14(4), 263–266. https://doi.org/10.1046/j.1468-3083.2000.00017.x

Chang, T. W., Wu, P. C., Hsu, C. L., & Hung, A. F. (2007). Anti-IgE Antibodies for the Treatment of IgE-Mediated Allergic Diseases. In Advances in Immunology (Vol. 93, pp. 63–119). Academic Press. https://doi.org/10.1016/S0065-2776(06)93002-8

Chia, K. Y., & Tey, H. L. (2013). Approach to hypohidrosis. Journal of the European Academy of Dermatology and Venereology, 27(7), 799–804. https://doi.org/10.1111/jdv.12014

Chinuki, Y., Tsumori, T., Yamamoto, O., & Morita, E. (2011). Cholinergic Urticaria Associated with Acquired Hypohidrosis: An Ultrastructural Study. Acta Dermato-Venereologica, 91(2), 197–198. https://doi.org/10.2340/00015555-1000

Commens, C. A., & Greaves, M. W. (1978). Tests to establish the diagnosis in cholinergic urticaria. British Journal of Dermatology, 98(1), 47–51. https://doi.org/10.1111/j.1365-2133.1978.tb07332.x


Feinberg, J. H., & Toner, C. B. (2008). Successful Treatment of Disabling Cholinergic Urticaria. Military Medicine, 173(2), 217–220. https://doi.org/10.7205/MILMED.173.2.217

Fukunaga, A., Bito, T., Tsuru, K., Oohashi, A., Yu, X., Ichihashi, M., Nishigori, C., & Horikawa, T. (2005). Responsiveness to autologous sweat and serum in cholinergic urticaria classifies its clinical subtypes. Journal of Allergy and Clinical Immunology, 116(2), 397–402. https://doi.org/10.1016/j.jaci.2005.05.024

Fukunaga, A., Shimoura, S., Fukunaga, M., Ueda, M., Nagai, H., Bito, T., Tsuru, K., Ichihashi, M., & Horikawa, T. (2002). Localized heat urticaria in a patient is associated with a wealing response to heated autologous serum. British Journal of Dermatology, 147(5), 994–997. https://doi.org/10.1046/j.1365-2133.2002.04952.x

Fukunaga, A., Washio, K., Hatakeyama, M., Oda, Y., Ogura, K., Horikawa, T., & Nishigori, C. (2018). Cholinergic urticaria: Epidemiology, physiopathology, new categorization, and management. Clinical Autonomic Research, 28(1), 103–113. https://doi.org/10.1007/s10286-017-0418-6

Godse, K., Farooqui, S., Nadkarni, N., & Patil, S. (2013). Prevalence of cholinergic urticaria in Indian adults. Indian Dermatology Online Journal, 4(1), 62–63. https://doi.org/10.4103/2229-5178.105493

Hatakeyama, M., Fukunaga, A., Washio, K., Ogura, K., Yamada, Y., Horikawa, T., & Nishigori, C. (2016). Addition of lafutidine can improve disease activity and lead to better quality of life in refractory cholinergic urticaria unresponsive to histamine H1 antagonists. Journal of Dermatological Science, 82(2), 137–139. https://doi.org/10.1016/j.jdermsci.2016.02.001

Hu, Y., Converse, C., Lyons, M. C., & Hsu, W. H. (2018). Neural control of sweat secretion: A review. British Journal of Dermatology, 178(6), 1246–1256. https://doi.org/10.1111/bjd.15808



Kalsheker, N. A. (1996). α1-Antichymotrypsin. The International Journal of Biochemistry & Cell Biology, 28(9), 961–964. https://doi.org/10.1016/1357-2725(96)00032-5

Kim, J. E., Eun, Y. S., Park, Y. M., Park, H. J., Yu, D. S., Kang, H., Cho, S. H., Park, C. J., Kim, S. Y., & Lee, J. Y. (2014). Clinical Characteristics of Cholinergic Urticaria in Korea. Annals of Dermatology, 26(2), 189–194. https://doi.org/10.5021/ad.2014.26.2.189

Kobayashi, H., Aiba, S., Yamagishi, T., Tanita, M., Hara, M., Saito, H., & Tagami, H. (2002). Cholinergic Urticaria, a New Pathogenic Concept: Hypohidrosis due to Interference with the Delivery of Sweat to the Skin Surface. Dermatology, 204(3), 173–178.

Kozaru, T., Fukunaga, A., Taguchi, K., Ogura, K., Nagano, T., Oka, M., Horikawa, T., & Nishigori, C. (2011). Rapid Desensitization with Autologous Sweat in Cholinergic Urticaria. Allergology International, 60(3), 277–281. https://doi.org/10.2332/allergolint.10-OA-0269

Metz, M., Bergmann, P., Zuberbier, T., & Maurer, M. (2008). Successful treatment of cholinergic urticaria with anti-immunoglobulin E therapy. Allergy, 63(2), 247–249. https://doi.org/10.1111/j.1398-9995.2007.01591.x

Nakamizo, S., Egawa, G., Miyachi, Y., & Kabashima, K. (2012). Cholinergic urticaria: Pathogenesis-based categorization and its treatment options. Journal of the European Academy of Dermatology and Venereology, 26(1), 114–116. https://doi.org/10.1111/j.1468-3083.2011.04017.x

Portnoy, J. M., & Dinakar, C. (2004). Review of cetirizine hydrochloride for the treatment of allergic disorders. Expert Opinion on Pharmacotherapy, 5(1), 125–135. https://doi.org/10.1517/14656566.5.1.125

Rho, N. (2006). Cholinergic Urticaria and Hypohidrosis: A Clinical Reappraisal. Dermatology, 213(4), 357–358.

Rios, E. J., & Kalesnikoff, J. (2015). FcεRI Expression and Dynamics on Mast Cells. In M. R. Hughes & K. M. McNagny (Eds.), Mast Cells: Methods and Protocols (pp. 239–255). Springer. https://doi.org/10.1007/978-1-4939-1568-2_15


Sabroe, R. A. (2010). Failure of omalizumab in cholinergic urticaria. Clinical and Experimental Dermatology, 35(4), e127–e129. https://doi.org/10.1111/j.1365-2230.2009.03748.x

Sawada, Y., Nakamura, M., Bito, T., Sakabe, J.-I., Kabashima-Kubo, R., Hino, R., Kobayashi, M., & Tokura, Y. (2014). Decreased Expression of Acetylcholine Esterase in Cholinergic Urticaria with Hypohidrosis or Anhidrosis. Journal of Investigative Dermatology, 134(1), 276–279. https://doi.org/10.1038/jid.2013.244

Schwartz, R. A. (2021). Cholinergic Urticaria: Background, Pathophysiology, Etiology. https://emedicine.medscape.com/article/1049978-overview

Segal, M., Stokes, J. R., & Casale, T. B. (2008). Anti-Immunoglobulin E Therapy. The World Allergy Organization Journal, 1(10), 174–183. https://doi.org/10.1097/WOX.0b013e318187a310

Sheraz, A., & Halpern, S. (2013). Cholinergic urticaria responding to botulinum toxin injection for axillary hyperhidrosis. British Journal of Dermatology, 168(6), 1369–1370. https://doi.org/10.1111/bjd.12200

Siraganian, R. P. (2004). Mast cell signal transduction from the high-affinity IgE receptor. https://doi.org/10.1016/j.coi.2003.09.010

Takahagi, S., Tanaka, T., Ishii, K., Suzuki, H., Kameyoshi, Y., Shindo, H., & Hide, M. (2009). Sweat antigen induces histamine release from basophils of patients with cholinergic urticaria associated with atopic diathesis. British Journal of Dermatology, 160(2), 426–428. https://doi.org/10.1111/j.1365-2133.2008.08862.x

Takahashi, K., Soda, R., Kishimoto, T., Matsuoka, T., Maeda, M., Araki, M., Tanimoto, Y., Kawada, N., Kimura, I., & Komagoe, H. (1992). [The reactivity of dispersed human lung mast cells and peripheral blood basophils to acetylcholine]. Arerugi [Allergy], 41(6), 686–692.

Tanaka, A., Tanaka, T., Suzuki, H., Ishii, K., Kameyoshi, Y., & Hide, M. (2006). Semi-purification of the immunoglobulin E-sweat antigen acting on mast cells and basophils in atopic dermatitis. Experimental Dermatology, 15(4), 283–290. https://doi.org/10.1111/j.0906-6705.2006.00404.x

Tsunemi, Y., Ihn, H., Saeki, H., & Tamaki, K. (2003). Cholinergic urticaria successfully treated with scopolamine butylbromide. International Journal of Dermatology, 42(10), 850. https://doi.org/10.1046/j.1365-4362.2003.02010.x



Walsh, G. M. (1994). The anti-inflammatory effects of cetirizine. Clinical & Experimental Allergy, 24(1), 81–85. https://doi.org/10.1111/j.1365-2222.1994.tb00921.x

Wong, E., Eftekhari, N., Greaves, M. W., & Ward, A. M. (1987). Beneficial effects of danazol on symptoms and laboratory changes in cholinergic urticaria. British Journal of Dermatology, 116(4), 553–556. https://doi.org/10.1111/j.1365-2133.1987.tb05877.x

Zuberbier, T., Aberer, W., Burtin, B., Rihoux, J. P., & Czarnetzki, B. M. (1995). Efficacy of cetirizine in cholinergic urticaria. Acta Dermato-Venereologica, 75(2), 147–149. https://doi.org/10.2340/0001555575147149

Zuberbier, T., Althaus, C., Chantraine-Hess, S., & Czarnetzki, B. M. (1994). Prevalence of cholinergic urticaria in young adults. Journal of the American Academy of Dermatology, 31(6), 978–981. https://doi.org/10.1016/S0190-9622(94)70267-5




Lactose Intolerance as the "Norm"

BY UJVALA JUPALLI '25

Cover Image: Those with lactose intolerance have to watch their consumption of dairy products and switch out milk, cheese, ice cream, yogurt, and other dairy-containing products. These lifestyle changes may require some extra effort but, overall, lead to better health outcomes for those without the ability to digest lactose. Image Source: Wikimedia Commons

What is Lactose Intolerance?

Lactose intolerance affects 65% to 70% of the world's adult population and is more common in some countries than in others (Bayless et al., 2017). The disorder results from an inability to digest lactose, a disaccharide found in most dairy products. Lactose is made up of two monosaccharides: glucose and galactose. Monosaccharides, also known as simple sugars, can be directly absorbed through the wall of the small intestine into the hepatic portal system. Lactose-intolerant individuals lack the ability to break these two molecules apart, which causes the physical symptoms of the syndrome. Individuals with lactose intolerance face many consequences after consuming lactose, including stomach cramps, stomachaches, nausea, bloating, high levels of gas, and diarrhea. These individuals may try to avoid consuming dairy products by switching out cow milk for substitutes like oat milk, almond milk, or soy milk. Interestingly, lactose-intolerant individuals are not intolerant throughout their entire lifetime. Humans, like most mammals, are born with the ability to digest the lactose in breastmilk (the main source of nutrients and calories for a newborn).


However, most humans lose the ability to break down lactose after weaning; this phenomenon is referred to as lactase non-persistence, or LNP. Individuals who retain the ability to digest lactose after weaning carry an autosomal dominant mutation that allows them to continue doing so. There are also some individuals who are unable to break down lactose from birth; this is known as congenital lactase deficiency, or CLD (Diekmann, 2015). These infants are at elevated risk of weight loss and severe dehydration if they are not given lactose-free milk or formula. Infants with this condition in the developing world are at great risk of developing life-threatening illness due to the lack of access to adequate healthcare and nutrition. Lastly, there is a group of individuals who are lactose tolerant but lose their ability to digest lactose during childhood or adulthood. This is called secondary lactose intolerance and is due to a decrease in lactase production caused by an injury, illness, or surgery involving the small intestine. For instance, intestinal infections, bacterial overgrowth in the intestines, and inflammation of the digestive tract (as in Crohn's disease) can all play major roles in the development of secondary lactose intolerance (Luthy et al., 2017). Lactose intolerance can be diagnosed using either a hydrogen breath test or a lactose tolerance test. The hydrogen breath test measures the amount of hydrogen in an individual's breath after they consume lactose.


Image 1: There are many popular substitutes for cow’s milk including but not limited to coconut milk, almond milk, cashew milk, soy milk, and hemp milk. Image Source: Wikimedia Commons

Breathing out large amounts of hydrogen means that the lactose was not digested in the small intestine; instead, it was fermented by bacteria in the colon, producing hydrogen and other gases (Catanzaro et al., 2021). The lactose tolerance test, by contrast, measures glucose levels in the bloodstream at various intervals after the consumption of lactose. If the level of glucose in the blood does not rise, the lactose was not broken down by the lactase enzyme in the small intestine and absorbed into the bloodstream. Although this test does not require expensive equipment, it is more invasive and therefore not as widely used as the hydrogen breath test (Misselwitz et al., 2019).
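To make the logic of the two tests concrete, the sketch below encodes one plausible way of interpreting their readouts. The cutoffs used here – a rise of roughly 20 ppm of breath hydrogen over baseline, and a blood glucose rise of less than about 20 mg/dL – are commonly cited clinical thresholds, but they are assumptions for illustration rather than values taken from this article:

```python
# Illustrative interpretation of the two diagnostic tests described above.
# The 20 ppm and 20 mg/dL cutoffs are assumed, commonly cited thresholds,
# not values specified in this article; real protocols vary.

def hydrogen_breath_test_suggests_intolerance(baseline_ppm: float,
                                              post_lactose_ppm: list[float],
                                              rise_cutoff_ppm: float = 20.0) -> bool:
    """True if breath hydrogen rises well above baseline after a lactose dose,
    implying colonic fermentation of undigested lactose."""
    return max(post_lactose_ppm) - baseline_ppm >= rise_cutoff_ppm


def tolerance_test_suggests_intolerance(fasting_glucose_mg_dl: float,
                                        post_lactose_glucose_mg_dl: list[float],
                                        rise_cutoff_mg_dl: float = 20.0) -> bool:
    """True if blood glucose fails to rise after a lactose dose, implying the
    lactose was never split into absorbable glucose and galactose."""
    return max(post_lactose_glucose_mg_dl) - fasting_glucose_mg_dl < rise_cutoff_mg_dl


if __name__ == "__main__":
    # Breath hydrogen climbs from 5 ppm to a peak of 42 ppm: suggests intolerance.
    print(hydrogen_breath_test_suggests_intolerance(5.0, [8.0, 25.0, 42.0]))  # True
    # Blood glucose climbs from 85 to 120 mg/dL: lactose was digested normally.
    print(tolerance_test_suggests_intolerance(85.0, [100.0, 118.0, 120.0]))   # False
```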

The Importance of Lactase

The breakdown of lactose is an enzymatic hydrolysis process. Using the enzyme lactase-phlorizin hydrolase (LPH), also known as lactase, the body breaks the disaccharide down into the digestible molecules galactose and glucose. Glucose and galactose are monosaccharides that can be directly absorbed by the enterocytes (single-layered columnar epithelial cells) through active transport in the lumen of the small intestine (Tümer et al., 2013). These molecules can either be used as fuel by cells in the central nervous system and the periphery or be converted into glycogen and stored for later use in the liver or muscle cells.
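Written out as a reaction (a standard textbook equation, included here for reference rather than taken from the article), the hydrolysis splits one lactose molecule with one water molecule:

```latex
\underbrace{\mathrm{C_{12}H_{22}O_{11}}}_{\text{lactose}} + \mathrm{H_2O}
\;\xrightarrow{\text{lactase}}\;
\underbrace{\mathrm{C_6H_{12}O_6}}_{\text{glucose}} + \underbrace{\mathrm{C_6H_{12}O_6}}_{\text{galactose}}
```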

Lactase is located on the brush border of the small intestine, which has three sections: the duodenum (first), the jejunum (middle), and the ileum (last). The enzyme is most concentrated in the jejunum and least concentrated in the ileum. The brush border is the inner lining of the small intestine; its surface contains enzymes, plicae circulares (folds in the mucous membrane), and villi with capillaries. This inner lining is responsible for much of the breakdown and absorption of complex sugars, amino acids, fatty acids, and other nutrients – including lactose – that were not already absorbed in the duodenum (Collins et al., 2021). The main function of the lactase enzyme is to break lactose molecules down into their monosaccharide components so they can be better absorbed by the bowels.

"Lactose intolerance affects up to 65% to 70% of the world’s adult population and is more common in certain countries than in others."

The lactase gene is located on chromosome two, and the regulatory mutation that controls whether this gene is on or off actually lies fourteen thousand base pairs upstream, in non-coding DNA that was once commonly dismissed as "junk DNA." Today, however, it is known that this so-called junk DNA is far from useless, because it can control whether a gene is turned on. In essence, the mutation that controls whether the lactase gene stays on after the weaning period – and whether the body continues to produce lactase – is located several thousand nucleotide bases away from the actual lactase gene in question (Anguita-Ruiz et al., 2020).

Image 2: Lactose can be broken down by lactase after its reaction with water. This is the process of enzymatic hydrolysis, and it is what allows those with the lactase enzyme to digest dairy products. Image Source: Wikimedia Commons





Convergent Evolution

"The most common way many individuals avoid the symptoms of lactose intolerance is by restricting their intake of lactose."

An interesting fact about the lactose tolerance mutation is that it arose in several distinct populations around the world during a similar time period. The domestication of cows about ten thousand years ago greatly contributed to this evolution, since milk can be an extra source of calories and water in times of famine, malnourishment, and drought. The groups of individuals who were able to digest lactose and use it as a source of energy after weaning were therefore the ones who survived during tough times, reproduced, and passed the trait on to their offspring. Over many generations, large percentages of these populations came to carry the same trait. The independent development of the same trait across distinct populations is known as convergent evolution. Another important factor to note is that this selection pressure on lactase gene expression is present only in humans; the lactase gene in other mammals is naturally turned off, since they do not require lactase after the weaning period (Anguita-Ruiz et al., 2020). Today, many studies have shown clear differences among populations in the percentage of individuals able to digest lactose (Anguita-Ruiz et al., 2020; Bayless et al., 2017; Ségurel & Bon, 2017). Populations that have practiced cattle breeding and dairy farming show the greatest percentages of lactose-tolerant individuals. The highest rates of lactase persistence are found in individuals in Northern Europe and the Middle East, or those who have migrated from there. The lowest rates are in East Asian countries, with China having 15% of individuals with lactase persistence and South Korea, Vietnam, and Cambodia having 0% to 5%. This may be because many tribes or early populations in East Asia were non-pastoralist communities and therefore did not rely heavily on dairy products for calories or nutrition. As for the United States, lactase persistence is present in 83% to 93% of White Americans with origins in Europe or Scandinavia. 12% to 40% of African Americans and about 30% of Mexican Americans from more rural areas also have the ability to digest lactose. Additionally, relatively low levels of lactase persistence have been found in South America, with 6% of individuals in Peru, 30% in Uruguay, and 20% in Colombia able to digest dairy (Anguita-Ruiz et al., 2020). In populations with higher lactase persistence, individuals who do eventually lose the ability to digest lactose tend to do so many years after the weaning period ends. For example, studies in Finland (a northern European country) have shown that a majority of Finnish individuals do not lose their lactase persistence until the age of 10, and some are able to digest lactose until the age of 20. In Thailand (a Southeast Asian country), on the other hand, many children lose lactase activity in their intestines by the age of two (Kuchay, 2020).

Absence of Lactase and its Treatments

Another term for the absence or low levels of lactase in the small intestine is hypolactasia. Without lactase in the small intestine, the body is unable to break down and absorb lactose. The undigested lactose is instead fermented by the colonic microbiota (bacteria in the colon). This increases the solute concentration of the digestive fluids, which causes a counterbalancing influx of water into the lumen and thereby contributes to the unfavorable symptoms of lactose intolerance (Kuchay, 2020). This form of digestion also creates gases like hydrogen, carbon dioxide, and methane in the intestine (Misselwitz et al., 2019). Gas in the intestine leads to additional symptoms, including stomach cramps, stomach pain, and diarrhea.

Image 3: The domestication of cows played a major role in the evolution of lactase gene expression; there is a high positive correlation between populations with early cow domestication and populations with lactose tolerance. Image Source: Wikimedia Commons




Image 4: This figure demonstrates the distribution of lactose intolerance around the world. It can be seen that Northern Europe has the highest lactase persistence (LP) and East Asia has the lowest. Image Source: Wikimedia Commons

The most common way individuals avoid the symptoms of lactose intolerance is by restricting their intake of lactose. However, these individuals are often encouraged to include some form of dairy in their diet, rather than avoiding lactose entirely, in order to build some tolerance (Szilagyi & Ishayek, 2018). Non-dairy substitutes are mostly derived from plants and do not offer nearly as many nutritional benefits as dairy products themselves. Thus, it is essential for those who are lactose intolerant to make sure they are getting enough calcium, vitamin D, and vitamin A from the other foods in their diet. Additionally, the lactase enzyme can be added to dairy products in the form of liquids, capsules, or tablets prior to consumption. The enzyme begins to break down the lactose before it even enters the gastrointestinal tract, allowing individuals with lactose intolerance to consume dairy (Szilagyi & Ishayek, 2018).

Conclusion

Although lactose intolerance is present in a majority of adults, the symptoms and their severity differ based on an individual's age, gender, amount of lactose ingested, bowel motor abnormalities, and visceral sensitivity (He et al., 2008). Lactose malabsorption can be affected by environmental factors as well and can change throughout an individual's lifetime. Nevertheless, there are numerous substitutes today for dairy products that are just as flavorsome and accessible as the dairy products themselves. All things considered, lactose intolerance is a common biological pattern because it is natural for human beings to be unable to digest lactose

after the weaning period (Swagerty et al., 2002). In fact, since lactose tolerance is caused by a mutation, one could argue that it is the "real" disorder, despite the many benefits it confers. All in all, it is important to recognize that lactose intolerance is a naturally occurring phenomenon that affects a huge portion of the human population.

References

Anguita-Ruiz, A., Aguilera, C. M., & Gil, Á. (2020). Genetics of Lactose Intolerance: An Updated Review and Online Interactive World Maps of Phenotype and Genotype Frequencies. Nutrients, 12(9), 2689. https://doi.org/10.3390/nu12092689

Bayless, T. M., Brown, E., & Paige, D. M. (2017). Lactase Non-persistence and Lactose Intolerance. Current Gastroenterology Reports, 19(5). https://doi.org/10.1007/s11894-017-0558-9

Catanzaro, R., Sciuto, M., & Marotta, F. (2021). Lactose intolerance: An update on its pathogenesis, diagnosis, and treatment. Nutrition Research, 89, 23–34. https://doi.org/10.1016/j.nutres.2021.02.003

Collins, J. T., Nguyen, A., & Badireddy, M. (2021, August 11). Anatomy, Abdomen and Pelvis, Small Intestine. In StatPearls [Internet]. StatPearls Publishing. https://www.ncbi.nlm.nih.gov/books/NBK459366

Diekmann, L., Pfeiffer, K., & Naim, H. Y. (2015). Congenital lactose intolerance is triggered by severe mutations on both alleles of the lactase gene. BMC Gastroenterology, 15(1). https://doi.org/10.1186/s12876-015-0261-y

Forsgård, R. A. (2019). Lactose digestion in humans: intestinal lactase appears to be constitutive whereas the colonic microbiome is adaptable. The American Journal of Clinical Nutrition, 110(2), 273–279. https://doi.org/10.1093/ajcn/nqz104

He, T., Venema, K., Priebe, M. G., Welling, G. W., Brummer, R.-J. M., & Vonk, R. J. (2008). The role of colonic metabolism in lactose intolerance. European Journal of Clinical Investigation, 38(8), 541–547. https://doi.org/10.1111/j.1365-2362.2008.01966.x

Hu, P., Niu, Q., Zhu, Y., Shi, C., Wang, J., & Zhu, W. (2020). Effects of early commercial milk supplement on the mucosal morphology, bacterial community and bacterial metabolites in jejunum of the pre- and post-weaning piglets. Asian-Australasian Journal of Animal Sciences, 33(3), 480–489. https://doi.org/10.5713/ajas.18.0941

Kuchay, R. (2020). New insights into the molecular basis of lactase non-persistence/persistence: a brief review. Drug Discoveries & Therapeutics, 14(1), 1–7. https://doi.org/10.5582/ddt.2019.01079

Luthy, K. E., Larimer, S. G., & Freeborn, D. S. (2017). Differentiating Between Lactose Intolerance, Celiac Disease, and Irritable Bowel Syndrome-Diarrhea. The Journal for Nurse Practitioners, 13(5), 348–353. https://doi.org/10.1016/j.nurpra.2017.01.018

Misselwitz, B., Butter, M., Verbeke, K., & Fox, M. R. (2019). Update on lactose malabsorption and intolerance: pathogenesis, diagnosis and clinical management. Gut, 68(11), 2080–2091. https://doi.org/10.1136/gutjnl-2019-318404

Ségurel, L., & Bon, C. (2017). On the Evolution of Lactase Persistence in Humans. Annual Review of Genomics and Human Genetics, 18(1), 297–319. https://doi.org/10.1146/annurev-genom-091416-035340

Swagerty, D. L., Jr, Walling, A. D., & Klein, R. M. (2002). Lactose intolerance. American Family Physician, 65(9), 1845–1850.

Szilagyi, A., & Ishayek, N. (2018). Lactose Intolerance, Dairy Avoidance, and Treatment Options. Nutrients, 10(12), 1994. https://doi.org/10.3390/nu10121994

Tümer, E., Bröer, A., Balkrishna, S., Jülich, T., & Bröer, S. (2013). Enterocyte-specific regulation of the apical nutrient transporter SLC6A19 (B(0)AT1) by transcriptional and epigenetic networks. The Journal of Biological Chemistry, 288(47), 33813–33823. https://doi.org/10.1074/jbc.M113.482760




Understanding The Connection Between Diabetes and Kidney Disease: Are SGLT-2 Inhibitors the “Magic Bullet”?

BY VALENTINA FERNANDEZ '24

Cover Image: 1916 schematic of a longitudinal section of a kidney. One of the hallmark symptoms of diabetes mellitus is polyuria, or excessive urination, which results from abnormally high levels of sugar in the blood causing the kidneys to excrete more water. Image Source: Wikimedia Commons


Introduction

Diabetes has been called the epidemic of the century, while kidney disease has been called the under-recognized public health crisis (Kharroubi & Darwish, 2015; NVS 2021 report of 2018 data). Kidney disease causes more deaths than breast cancer or prostate cancer, and diabetes affects over 460 million people worldwide (Kharroubi & Darwish, 2015; NVS 2021 report of 2018 data). Yet the connection between the two is seldom considered in depth. Diabetes is the leading cause of kidney disease, accounting for nearly half of all cases of kidney failure that result in a kidney transplant (Tuttle et al., 2021). In the United States alone, 34.2 million adults (~10.5% of the population) are thought to have diabetes, with 90–95% of cases being type 2 diabetes. It is estimated that the economic burden of diabetes costs the United States about $327 billion per year, with reduced productivity accounting for $90 billion of the total and the rest due to direct medical costs (American Diabetes Association, 2018). Diabetic kidney disease (DKD), which is chronic kidney disease in people with diabetes, is estimated to occur in ~30% of people with type 1 diabetes and in ~40% of people with type 2 diabetes (Tuttle et al., 2021). Even though the interconnectedness of diabetes and kidney disease has long been known, the emergence of a new class of drugs called SGLT-2 inhibitors has reignited interest in addressing DKD. SGLT-2 inhibitors were originally developed for people with type 2 diabetes to control their blood glucose levels; however, they were found to have positive cardiovascular and renal effects as well. The development of these drugs provides a promising future for people living with cardiometabolic and renal diseases, such as diabetes, hypertension, and chronic kidney disease. This review article will focus on the connection between diabetes and kidney disease. The paper will begin by providing background on the molecular mechanisms at play in diabetes mellitus and nephropathy (kidney disease). Next, it will turn to the postulated reasons why diabetes causes kidney failure. It will then review the current interventions in place to mitigate the burden of these diseases, with a specific focus on the emergence of SGLT-2 inhibitors. To conclude, the paper will explore the current limits of SGLT-2 therapy and what we can expect from this class of drugs going forward.



Molecular Mechanisms of Diabetes: The Difference Between Type 1 and Type 2 Diabetes

Diabetes mellitus is defined as a group of metabolic diseases characterized by defective insulin secretion, insulin action, or both, which results in chronic hyperglycemia, an abnormally high blood sugar level (Kharroubi & Darwish, 2015). There are various forms of diabetes mellitus, including gestational diabetes and rare diabetes disorders such as maturity-onset diabetes of the young (MODY) and latent autoimmune diabetes in adults (LADA). Classifying diabetes remains a controversial issue, with various scientists, physicians, and regulatory bodies disagreeing on the distinct categories for grouping the disease. For the purposes of this paper, I will follow the American Diabetes Association's traditional classification, which lists four forms of diabetes: gestational diabetes mellitus (GDM), other types (including, but not limited to, LADA and MODY), type 1, and type 2 diabetes (Kharroubi & Darwish, 2015). Type 1 diabetes, or insulin-dependent diabetes, is an autoimmune disorder in which the body attacks its own pancreatic beta cells, which are responsible for endogenous insulin production (Kaufman, 2006). Type 1 diabetes constitutes about 5-10% of all diabetes cases and 80-90% of all cases in children (Maahs et al., 2010). Due to its historical prevalence in children, teens, and young adults, type 1 diabetes was previously called juvenile diabetes, even though diagnosis can occur at any age (CDC, 2021). Although a genetic basis predisposes certain populations to type 1 diabetes, the specific causes and mechanisms of its inheritance patterns remain unknown. Several risk factors have been identified; for example, having certain variants of the HLA-DQA1, HLA-DQB1, and HLA-DRB1 genes may increase the likelihood of developing type 1 diabetes (Type 1 Diabetes, n.d.). These genes belong to a family of genes known as the human leukocyte antigen (HLA) complex, which plays a role in the immune system's ability to distinguish self from non-self (Overview of the Immune System - Immune Disorders, n.d.). Particular combinations of HLA protein variants (haplotypes) can increase the risk of an autoimmune response, resulting in malfunction of insulin-producing beta cells and leading to the type 1 diabetes phenotype (Rosen & Ingelfinger, 2019). Specifically, the HLA DR3-DQ2 haplotype has been linked to almost half of all patients diagnosed with type 1 diabetes. As such, therapies focused on targeting this haplotype have emerged as candidates for a type 1 diabetes cure; to give

just one example, Swedish biopharmaceutical company Diamyd Medical is undergoing a phase 3 clinical trial to develop a vaccine against type 1 diabetes (Hannelius, 2021). To date, there is no cure for type 1 diabetes, and all people with this disease must administer insulin every day, multiple times a day, to maintain their blood glucose levels, as was demonstrated by the Diabetes Control and Complications Trial (DCCT) (Nathan et al., 2005; Wilson, 2011). Type 2 diabetes is the most common form of diabetes, and it is characterized by insulin resistance: pancreatic beta cells can produce insulin, but the body is unresponsive to it. In the United States alone, more than 34 million people (about 1 in 10 Americans) have type 2 diabetes, and the number of people with type 2 diabetes is expected to continue growing (CDC, 2019). Not all patients with type 2 diabetes will become insulin dependent, and most are treated with a combination of therapies to promote weight loss and normoglycemia, which aims to bring blood sugar levels down to a normal range (these specific therapies will be discussed in detail below). Although type 2 diabetes is largely due to environmental and lifestyle factors, it also has a significant genetic basis. Genome-wide association studies (GWAS) have been pivotal in identifying about 70 loci associated with type 2 diabetes in various populations (Kharroubi & Darwish, 2015). Several loci positioned in and around the CDKAL1, CDKN2A/B, HHEX/IDE, and SLC30A8 genes are responsible for increasing type 2 diabetes risk (Zeggini et al., 2007). HHEX/IDE specifically is related to the insulin-degrading enzyme, while CDKAL1 and CDKN2A/B regulate cell expression and are therefore linked to beta cell dysfunction (Zeggini et al., 2007). SLC30A8, on the other hand, encodes a protein involved in the intracellular accumulation of zinc, specifically in the pancreas, meaning that it colocalizes with insulin in various insulin secretory pathways. The specific mechanisms involved in these pathways remain elusive, and further analysis is necessary to fully understand the signaling at play. Recently, a variant of the HNF1A gene was shown to increase the risk of developing type 2 diabetes among the Latino population, suggesting it may serve as a screening tool in the future (SIGMA Type 2 Diabetes Consortium et al., 2014). All in all, a variety of genes play roles in increasing type 2 diabetes risk, some specific to certain populations or subgroups (Kharroubi & Darwish, 2015). Since most (but not all) patients with type 2 diabetes have excess weight or obesity, the available therapies target weight loss and blood sugar management and promote healthy eating

"Even though the interconnectedness of diabetes and kidney disease has long been known, the emergence of a new class of drugs called SGLT-2 inhibitors has reignited interest in addressing DKD."



and regular exercise (Vargatu, 2016).

"All forms of diabetes are characterized by deficiencies relating to the insulin hormone."

Table 1: Blood glucose levels corresponding to each condition, according to guidelines from the American Diabetes Association's 2021 Standards of Care. Data Source: ADA. Table Source: Created by author.

Screening for diabetes is crucial for early identification of the disease, and given the increasing incidence and prevalence of diabetes, screening is now more important than ever (Lynam et al., 2019). For type 1 diabetes, one screening method involves testing for the presence of a few islet autoantibodies that have been identified as risk factors for the disease (American Diabetes Association, 2021). Islet cells are found in clusters throughout the pancreas; alpha cells (which produce glucagon, the hormone that raises blood sugar) and beta cells (which produce insulin) are two subtypes of islet cells (American Diabetes Association, 2021). Currently, the American Diabetes Association's 2021 Standards of Care does not recommend this type of clinical testing for low-risk populations, citing insufficient evidence of its clinical significance and validity (American Diabetes Association, 2021). However, it does recognize the validity of measuring islet antibodies in individuals at risk for type 1 diabetes (e.g., relatives of those with type 1 diabetes or individuals from the general population with type 1 diabetes-associated genetic factors), pointing to a few European and American studies that reported a 70% likelihood of developing type 1 diabetes after testing positive for two or more autoantibodies (Ziegler et al., 2013). For type 2 diabetes, screening measures involve informal assessment of risk factors, such as obesity or hypertension, and are recommended for all individuals beginning at 45 years of age (American Diabetes Association, 2021). Since all types of diabetes are ultimately diagnosed based on glycemic levels, there are three main exams that confirm the diagnosis: (i) a fasting plasma glucose test (FPG), (ii) an oral glucose tolerance test (OGTT), and (iii) an A1c glycosylated hemoglobin test (Mayo Clinic, 2021). An FPG test measures blood sugar after an overnight fast, and results are usually corroborated through an OGTT or an A1c test (American Diabetes Association, 2021). An OGTT, also known as the "glucose challenge," gives the test-taker an oral glucose dose (typically 75 g of glucose in solution) and then tracks the test-taker's glucose metabolism for two hours to achieve a final

blood glucose reading (American Diabetes Association, 2021). An A1c test, on the other hand, does not measure glucose levels directly and instead relies on hemoglobin, the protein in red blood cells that carries oxygen throughout the body. Glucose glycates (sticks) to hemoglobin, so by measuring the amount of glycosylated hemoglobin in the blood, one can measure blood sugar over an extended period. An A1c result is given as the percentage of glucose-bound hemoglobin molecules in the body, and it is directly proportional to average glucose levels over a 3-month period, making it a significant marker for diabetes diagnosis (Sun et al., 2014). In fact, A1c is such a potent indicator that most clinical trials testing therapies for diabetes use changes in A1c levels as a marker of a drug's efficacy. For all exams, blood glucose levels below 100 mg/dL (5.6 mmol/L) are normal; levels ranging from 100-125 mg/dL (5.6-6.9 mmol/L) indicate prediabetes, and levels of 126 mg/dL (7.0 mmol/L) or above on two separate tests confirm the presence of diabetes (American Diabetes Association, 2021).
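Because these cutoffs are simple numeric thresholds, they can be captured in a few lines of code. The sketch below encodes the diagnostic ranges exactly as stated above; the A1c-to-average-glucose conversion uses the widely cited ADAG regression (estimated average glucose in mg/dL is roughly 28.7 × A1c − 46.7), which is a standard clinical convention rather than a formula from the sources cited in this article, and the function names are invented for illustration.

```python
def classify_fpg(glucose_mg_dl: float) -> str:
    """Classify a fasting plasma glucose reading using the ADA cutoffs
    described in the text (a diabetes diagnosis still requires
    confirmation on two separate tests)."""
    if glucose_mg_dl < 100:
        return "normal"
    if glucose_mg_dl <= 125:
        return "prediabetes"
    return "diabetes (if confirmed on a second test)"

def estimated_average_glucose(a1c_percent: float) -> float:
    """Convert an A1c percentage to estimated average glucose (mg/dL)
    via the ADAG linear regression, eAG = 28.7 * A1c - 46.7; this is a
    common clinical convention, assumed here for illustration."""
    return 28.7 * a1c_percent - 46.7

print(classify_fpg(92))                            # normal
print(classify_fpg(110))                           # prediabetes
print(round(estimated_average_glucose(7.0), 1))    # ~154.2 mg/dL
```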

Defects in Insulin: The Culprit for all Types of Diabetes

All forms of diabetes are characterized by deficiencies relating to the insulin hormone. Insulin is an endocrine peptide hormone that binds to its receptors in the plasma membrane and triggers a signaling pathway to move sugar from the blood to the inside of cells, where it can be used for metabolic duties, such as glucose regulation or suppression of triglyceride production, among many others (Petersen & Shulman, 2018). In other words, higher circulating insulin levels are followed by a decrease in blood glucose, meaning that insulin is necessary to achieve and maintain normoglycemia for all populations. Insulin or insulin-like peptides (ILPs) have been identified in all animals, even invertebrates, indicating their evolutionary significance in harnessing the energy from glucose for use by the cell (Petersen & Shulman, 2018). Insulin exerts its effects by binding to the insulin receptor (INSR) on the plasma membrane of its target cells, triggering several signaling cascade





Image 1: Schematic representation of the insulin receptor depicting the extracellular and intracellular subunits of the molecule. Image Source: Hubbard, 2013

pathways reliant on phosphorylation (Haeusler et al., 2018). The INSR belongs to the receptor tyrosine kinase (RTK) superfamily, whose members are characterized by their phosphorylation activity (De Meyts, 2000). The INSR is a heterotetrameric RTK (a dimer of heterodimers), composed of two extracellular α subunits (which bind insulin) and two membrane-spanning β subunits, each of which contains a tyrosine kinase domain (Hubbard, 2013). The tyrosine kinase domain of the INSR is inactive in the absence of a ligand, but once insulin binds to two distinct sites on each α subunit of its receptor, this RTK autophosphorylates to become an activated dimer (De Meyts, 2000). Unlike most RTKs, the INSR does not bind signaling proteins directly upon activation but instead binds docking proteins called IRS (Insulin Receptor Substrate) 1-6, which then recruit various other proteins to trigger intracellular signaling cascades (De Meyts, 2000). Phosphorylation of the receptor causes a conformational change that triggers a signal transduction cascade inside the target cell, beginning with the recruitment of insulin receptor substrates (IRS) (De Meyts, 2000). The IRS proteins were identified and sequenced in the 1980s, following the development of molecular cloning technologies (Guo, 2014). One of the main pathways involving the IRS proteins is the PI3K/AKT pathway, which mediates the metabolic effects of insulin. When defective, it is linked to the diabetes phenotype (De Meyts, 2000; Guo, 2014). Activation of this pathway is triggered by the binding of p85 or p55 (two of the regulatory subunits of PI3K) to the IRS-1 and

IRS-2 proteins (which are bound to the activated INSR dimer) (De Meyts, 2000). Following this initial binding, PI3K is activated through phosphorylation, allowing it to ultimately generate PIP3, a secondary messenger that then activates PDK1 and PDK2 (3-phosphoinositide-dependent protein kinases), two kinases that mediate the effect of insulin on metabolism (Guo, 2014). PDK1 then activates AKT, which has four critical substrates that serve as downstream targets: (i) mTOR (important for protein synthesis), (ii) GSK3 (glycogen synthase kinase 3, important for the regulation of glycogen synthesis), (iii) FoxO transcription factors (important for the regulation of gluconeogenic and adipogenic genes), and (iv) AS160 (involved in glucose transport) (De Meyts, 2000). Glucose transport from the bloodstream into its target cell (whether a skeletal muscle cell or a fat cell) is precisely what people with diabetes have difficulty with. Therefore, deficiencies in the part of the pathway involving AS160 and its downstream targets result in postprandial glycemic excursions and dysregulation of glycemia. In normal (non-diabetic) circumstances, AKT activates AS160. Activation of AS160 is required for GLUT4 translocation, in which GLUT4 vesicles localize glucose transporters to the plasma membrane, enabling the cell to remove glucose from the bloodstream (Guo, 2014). In its basal state, the GAP activity of AS160 maintains its target Rab in an inactive, GDP-bound form, which retains GLUT4 in intracellular compartments (Thong et al., 2007). After insulin stimulation, AKT phosphorylates AS160, deactivating its GAP activity, which then allows the target Rab to shift to its active GTP-bound form, thereby relieving an inhibitory effect on GLUT4 traffic and thus


allowing the GLUT4 vesicles to translocate to the plasma membrane (Thong et al., 2007).
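To keep the logic of this switch straight, the toy walk-through below encodes the two states described above (basal versus insulin-stimulated) as a few boolean steps. It is a deliberately simplified sketch of the mechanism reported by Thong et al. (2007), not a quantitative model, and the function and variable names are invented for illustration.

```python
def glut4_translocation(insulin_present: bool) -> str:
    """Toy state walk-through of the AS160/Rab/GLUT4 switch.

    Basal: AS160's GAP activity keeps Rab GDP-bound (inactive), so
    GLUT4 stays in intracellular vesicles. Insulin-stimulated: AKT
    phosphorylates AS160, its GAP activity switches off, Rab shifts to
    its GTP-bound (active) form, and GLUT4 vesicles translocate to the
    plasma membrane.
    """
    akt_active = insulin_present          # INSR -> IRS -> PI3K -> PDK1 -> AKT
    as160_gap_active = not akt_active     # AKT phosphorylation disables the GAP activity
    rab_gtp_bound = not as160_gap_active  # without GAP activity, Rab stays GTP-loaded
    if rab_gtp_bound:
        return "GLUT4 at plasma membrane: glucose uptake enabled"
    return "GLUT4 retained in intracellular vesicles: no uptake"

print(glut4_translocation(insulin_present=False))  # basal state
print(glut4_translocation(insulin_present=True))   # insulin-stimulated state
```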

The Kidneys

The kidneys, along with the ureters, urinary bladder, and urethra, are part of the body's excretory system and are responsible for disposing of metabolic wastes and regulating the osmotic balance of blood. Each kidney, about 10 cm in length, has an outer renal cortex and an inner renal medulla, which contain tightly packed excretory tubules and their associated blood vessels. As blood enters the kidneys, the excretory tubules form and process the filtrate, composed of salts, sugars, amino acids, and nitrogenous waste. The kidney's functional units, the nephrons, weave back and forth across the renal cortex and medulla and play a pivotal role in producing urine hyperosmotic to body fluids. Each nephron is composed of a single long tubule; a ball of capillaries, referred to as the glomerulus; a cup-shaped swelling in the tubule surrounding the glomerulus, called Bowman's capsule; and the macula densa, a set of salt sensors that generates paracrine chemical signals, such as changes to renal blood flow, glomerular filtration, and renin release, to control kidney function (Urry et al., 2017). The kidney has roughly 1 million nephrons, each supplied with blood by an afferent arteriole. As the filtrate passes down the nephron, a series of absorption and reabsorption events filter the blood and expel its waste through urine.

One of the earliest reported studies linking diabetes with kidney disease was published in 1981, when R. A. DeFronzo and colleagues observed insulin resistance in patients with uremia, a condition in which defective kidney filtering allows wastes to accumulate in the blood. They noted that the basal insulin concentration of the uremic subjects (125.01 ± 1 pmol/L) was higher than that of the control subjects (97.23 ± 1 pmol/L; p < 0.01). Additionally, they reported that the average rate of glucose utilization from minute 20 to minute 120 of the study period in the 17 uremic subjects (3.71 ± 0.20 mg/kg·min) was 50% lower than in the 36 controls (7.38 ± 0.26 mg/kg·min; p < 0.001). The study concluded that while insulin secretion was not impaired in most patients with renal failure, patients exhibited hyperglycemia due to decreased tissue sensitivity to insulin, thus confirming the link between kidney disease and insulin resistance (DeFronzo et al., 1981). What remained elusive after this study, though, was how exactly hyperglycemia damages the kidney's function. The processes of absorption (from bloodstream to the nephron's tubule) and reabsorption (back to the bloodstream) in the kidneys are determined by ion concentrations; for people with diabetes, excess glucose in the blood (hyperglycemia) offsets these balances and damages the blood vessels' linings, thus preventing the kidney's nephrons from properly filtering waste. The tubular hypothesis of nephron filtration and diabetic kidney disease postulates that elevated glucose levels in the glomerular filtrate drive an

Image 2: Schematic depicting the process of blood filtration and urine formation via the kidney’s nephrons. Specifically, the kidneys rely on ion concentration gradients to properly filter the blood’s waste, mostly nitrogenous compounds. Image Source: Wikimedia Commons




increased reabsorption of glucose and sodium by the sodium-glucose cotransporters, SGLT2 and SGLT1, in the proximal tubule, thus resulting in elevated blood glucose levels (Vallon & Thomson, 2020). SGLT2 and SGLT1 proteins are expressed in the apical membrane of the early and late proximal tubule, respectively. As symporters, these membrane transporters contribute to the renal absorption and reabsorption of glucose (Pittampalli et al., 2018). They mediate active transport of glucose against its concentration gradient by means of cotransport with sodium (sodium's concentration gradient is created by the Na-K-ATPase pump, which forces intracellular Na to exit the cell across its basolateral membrane) (Vallon & Thomson, 2020). Specifically, SGLT2 uses one sodium ion to transport one glucose molecule and is responsible for 90% of the glucose reuptake in the first segment of the proximal tubule, while SGLT1 uses two sodium ions to transport one glucose molecule and is responsible for the remaining 10% (Pittampalli et al., 2018). Because people with diabetes have elevated glucose levels in their blood, they experience what is called a "hyper-reabsorption" of glucose and sodium in the proximal tubule (Vallon & Thomson, 2020). The increase in absorption of glucose and sodium enhances the glomerular filtration rate (GFR), since the nephron and its SGLT transporters are overworked moving higher-than-normal levels of glucose back into the bloodstream; this results in a state of "hyperfiltration" for the kidney. The increase in tubular glucose reabsorption capacity helps to conserve the filtration function of the kidney, but it is also maladaptive because it sustains hyperglycemia (Hostetter, 1992). Eventually, the kidneys respond to the chronic increase in tubular reabsorption by overexpressing SGLT2, an adaptation that further promotes chronic hyperglycemia. In mouse models of T1DM and T2DM, SGLT2 expression has been found to increase by 40-80% during the early stages of hyperglycemia (Kajiwara & Sawa, 2021; Vallon et al., 2013, 2014). These findings supported the tubular hypothesis of nephron filtration and diabetic kidney disease, suggesting that diabetes increases renal SGLT2 expression and glucose reabsorption (from nephron to bloodstream).
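A back-of-the-envelope calculation makes the scale of this reabsorption concrete. The sketch below multiplies GFR by plasma glucose concentration to get the daily filtered glucose load and then applies the 90/10 SGLT2/SGLT1 split described above; the 180 L/day GFR used in the example is a standard textbook figure for a healthy adult, not a value from the cited studies, and the function name is invented for illustration.

```python
def filtered_glucose_load(gfr_l_per_day: float, plasma_glucose_mg_dl: float):
    """Estimate the daily filtered glucose load and its reabsorption split.

    Filtered load = GFR x plasma glucose concentration. Per the text,
    SGLT2 in the early proximal tubule reclaims ~90% of the load and
    SGLT1 the remaining ~10%.
    """
    grams_per_liter = plasma_glucose_mg_dl / 100.0  # 100 mg/dL = 1 g/L
    filtered = gfr_l_per_day * grams_per_liter
    return {"filtered_g_per_day": filtered,
            "via_SGLT2_g": 0.9 * filtered,
            "via_SGLT1_g": 0.1 * filtered}

# Normoglycemia (~100 mg/dL): roughly 180 g of glucose filtered and
# reclaimed each day (162 g via SGLT2, 18 g via SGLT1).
print(filtered_glucose_load(180, 100))
# Hyperglycemia (~200 mg/dL): the filtered load doubles, illustrating
# the "hyper-reabsorption" burden placed on the proximal tubule.
print(filtered_glucose_load(180, 200))
```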


Over time, the kidney's hyperfiltration behavior becomes dangerous; even though overexpression of SGLT2 initially helps the kidney adapt to increased sugar levels in the blood, it inadvertently promotes hyperglycemia in doing so. In short, a seemingly beneficial adaptation becomes maladaptive. Chronic hyperglycemia promoted by overexpression of SGLT2 has a variety of adverse effects on the body. For the kidney specifically, studies have shown that the prevalence of hyperfiltration is proportional not only to blood glucose levels but also to blood pressure, suggesting that hyperfiltration (and the hyperglycemia it sustains) may cause or exacerbate hypertension. Over time, hypertension constricts the afferent arterioles in the kidneys, decreasing filtration by the glomerulus; what begins as hyperfiltration eventually becomes hypofiltration, characterized by a decreased eGFR indicative of a failing kidney (Palatini, 2012). The adverse effects associated with kidney failure, which are exacerbated by diabetes, have made diabetic kidney disease a priority to address. Currently, the US Food and Drug Administration (FDA) has approved four SGLT-2 inhibitors for the treatment of hyperglycemia in type 2 diabetes: canagliflozin, dapagliflozin, empagliflozin, and ertugliflozin (Pittampalli et al., 2018; Tuttle et al., 2021). These agents all belong to the same class (sodium-glucose cotransporter 2 inhibitors), so their molecular structures and mechanisms of action are very similar (Hsia et al., 2017). Their main differences lie in which biopharmaceutical company produces the drug, which results in slight differences in how sensitive or selective the agents are (Hsia et al., 2017). The drugs also differ in the doses available on the market and slightly in their administration patterns. For example, canagliflozin is available only in 100 mg or 300 mg doses, while dapagliflozin comes in much smaller doses of 5 mg or 10 mg (Hsia et al., 2017).

"The kidneys ... are part of the body’s excretory system and are responsible for disposing metabolic wastes and regulating the osmotic balance of blood."

SGLT-2 inhibitors have demonstrated tremendous success in combating hyperglycemia, and this class of drugs has been found to confer cardiorenal protection as well, based on a variety of cardiovascular disease (CVD) outcome trials (CVOTs) carried out in people with type 2 diabetes and established ASCVD (Merck Sharp & Dohme Corp., 2021; Neal et al., 2017; Wiviott et al., 2019; Zinman et al., 2015). For instance, in the Johnson & Johnson-sponsored CANVAS trial, canagliflozin demonstrated an 18% reduction in urinary albumin-to-creatinine ratio (ACR) and a 27% risk reduction for progression of albuminuria (95% CI), two important markers of kidney disease (Neal et al., 2017). Previously, canagliflozin had demonstrated its safety and efficacy in controlling hyperglycemia for people with type 2 diabetes; specifically, the 300 mg dose of canagliflozin conferred a 1.06% reduction in



"Diabetes is ubiquitous, not only in the world as a global health problem, but also in the body— its comorbidities travel in conjunction with the disease."

Table 2: The risk-benefit profile of the SGLT-2 inhibitor class, based on results from the DAPA-CKD, CREDENCE, and EMPA-REG trials testing dapagliflozin, canagliflozin, and empagliflozin, respectively. Data Source: Pittampalli et al., 2018. Table Source: Created by author.

A1c at 26 weeks compared to placebo, and these reductions were sustained at week 52. In a meta-analysis and systematic review by Zelniker et al. (2019), all the results from the CVOTs through 2018 were combined, and the cardiorenal benefits of SGLT-2 inhibitors for people with diabetes were once again validated. Specifically, this meta-analysis pooled data from 34,322 patients and reported that SGLT-2 inhibitors reduced major adverse cardiovascular events (MACE) by 11% and reduced the risk of progression of renal disease by 45% (p < 0.0001) (Zelniker et al., 2019). Notably, the extent to which SGLT-2 inhibitors conferred renal protection varied with baseline renal function, with lesser reductions in the progression of renal disease in patients who had more severe kidney disease at baseline. This suggested that, while the drugs are efficacious, early intervention with SGLT-2 inhibitors will maximize their ability to confer protective benefits. The future of SGLT-2 inhibitors will likely depend on the risk-benefit ratio of the drug class. In other words, for SGLT-2 inhibitors to solidify their dominance of the market as the most efficient therapy for type 2 diabetes and kidney disease, the benefits they confer to the patient will have to strongly outweigh any risks associated with their use. Fortunately, because sodium-glucose cotransporters 1 and 2 function independently of insulin action, their inhibitors have demonstrated little to no adverse events like hypoglycemia or weight gain. For example, in the DAPA-CKD trial testing dapagliflozin in patients with chronic kidney disease, the incidence of adverse events was similar in the experimental and control groups (dapagliflozin vs. placebo) and low overall (Heerspink et al., 2020). In fact, the trial was stopped early due to "overwhelming efficacy," and topline results earned the drug Fast Track designation from the FDA in August 2019 (FDA Grants Fast Track Designation for Farxiga in Chronic Kidney Disease, 2019). Likewise, in the CREDENCE trial, canagliflozin demonstrated similar rates of adverse events compared to the placebo group, with a rate of 12.3 versus 11.2 per 1,000 patient-years for risk of lower-limb

amputation (95% CI, 0.79 to 1.56). Despite these initial positive results, all SGLT-2 inhibitors carry a red flag for people with diabetes: a risk of diabetic ketoacidosis (DKA) (Zelniker et al., 2019). DKA is a serious complication of diabetes caused by an excessive production of blood acids (ketones) due to a lack of insulin circulating in the body. Although the rates of DKA in these trials have been very low overall, they are still present and certainly a cause for concern. The risk-to-benefit ratio of SGLT-2 inhibitors is summarized in Table 2. The era of SGLT-2 inhibitors looks promising; after all, this class of drugs is rather novel in this space, and in just a decade or so it has managed to establish itself as a first-line therapy for hyperglycemia, and now for CVD and renal disease. As we've learned, maintaining glucose homeostasis is vital to preserving a reliable and consistent source of glucose for all the body's organs. Diabetes is ubiquitous, not only in the world as a global health problem, but also in the body: its comorbidities travel in conjunction with the disease. SGLT-2 inhibitors provide an avenue for better care and, along with lifestyle changes and other self-care measures (i.e., physical activity and nutritional care), will continue to facilitate normoglycemia and minimize adverse cardiorenal outcomes. Whether SGLT-2 inhibitors are the "magic bullet" for diabetes, CVD, and renal disease remains unknown, but hopefully, over time, we will accumulate more real-world evidence validating their efficacy and allowing for wider adoption of these agents.

References

American Diabetes Association. (2018). Economic Costs of Diabetes in the U.S. in 2017. Diabetes Care, 41(5), 917–928. https://doi.org/10.2337/dci18-0007

American Diabetes Association. (2021). 2. Classification and Diagnosis of Diabetes: Standards of Medical Care in Diabetes—2021. Diabetes Care, 44(Supplement 1), S15–S33. https://doi.org/10.2337/dc21-S002

CDC. (2019, May 30). Type 2 Diabetes. Centers for Disease Control and Prevention. https://www.cdc.gov/diabetes/basics/type2.html

FDA grants Fast Track designation for Farxiga in chronic kidney disease. (2019, August 27). https://www.astrazeneca.com/media-centre/press-releases/2019/fda-grants-fast-track-designation-for-farxiga-in-chronic-kidney-disease-27082019.html

Guo, S. (2014). Insulin signaling, resistance, and metabolic syndrome: Insights from mouse models into disease mechanisms. Journal of Endocrinology, 220(2), T1–T23. https://doi.org/10.1530/JOE-13-0327

Haeusler, R. A., McGraw, T. E., & Accili, D. (2018). Biochemical and cellular properties of insulin receptor signalling. Nature Reviews Molecular Cell Biology, 19(1), 31–44. https://doi.org/10.1038/nrm.2017.89

Hannelius, U. (2021, September 27). Diamyd Medical receives first regulatory approval to start the Phase III trial DIAGNODE-3 with the diabetes vaccine Diamyd®. https://www.diamyd.com/docs/pressClips.aspx?ClipID=4070572

Heerspink, H. J. L., Stefánsson, B. V., Correa-Rotter, R., Chertow, G. M., Greene, T., Hou, F.-F., Mann, J. F. E., McMurray, J. J. V., Lindberg, M., Rossing, P., Sjöström, C. D., Toto, R. D., Langkilde, A.-M., & Wheeler, D. C. (2020). Dapagliflozin in Patients with Chronic Kidney Disease. New England Journal of Medicine, 383(15), 1436–1446. https://doi.org/10.1056/NEJMoa2024816

Hsia, D. S., Grove, O., & Cefalu, W. T. (2017). An Update on SGLT2 Inhibitors for the Treatment of Diabetes Mellitus. Current Opinion in Endocrinology, Diabetes, and Obesity, 24(1), 73–79. https://doi.org/10.1097/MED.0000000000000311

Hubbard, S. R. (2013). The Insulin Receptor: Both a Prototypical and Atypical Receptor Tyrosine Kinase. Cold Spring Harbor Perspectives in Biology, 5(3), a008946. https://doi.org/10.1101/cshperspect.a008946

Kajiwara, K., & Sawa, Y. (2021). Overexpression of SGLT2 in the kidney of a P. gingivalis LPS-induced diabetic nephropathy mouse model. BMC Nephrology, 22(1), 287. https://doi.org/10.1186/s12882-021-02506-8

Kaufman, F. R. (2006). Diabesity: A doctor and her patients on the front lines of the obesity-diabetes epidemic. Bantam Books.

Kharroubi, A. T., & Darwish, H. M. (2015). Diabetes mellitus: The epidemic of the century. World Journal of Diabetes, 6(6), 850–867. https://doi.org/10.4239/wjd.v6.i6.850

Lynam, A., McDonald, T., Hill, A., Dennis, J., Oram, R., Pearson, E., Weedon, M., Hattersley, A., Owen, K., Shields, B., & Jones, A. (2019). Development and validation of multivariable clinical diagnostic models to identify type 1 diabetes requiring rapid insulin therapy in adults aged 18–50 years. BMJ Open, 9(9), e031586. https://doi.org/10.1136/bmjopen-2019-031586

Maahs, D. M., West, N. A., Lawrence, J. M., & Mayer-Davis, E. J. (2010). Epidemiology of type 1 diabetes. Endocrinology and Metabolism Clinics of North America, 39(3), 481–497. https://doi.org/10.1016/j.ecl.2010.05.011

Mayo Clinic. (2021, January 20). Type 1 diabetes—Diagnosis and treatment. https://www.mayoclinic.org/diseases-conditions/type-1-diabetes/diagnosis-treatment/drc-20353017

Merck Sharp & Dohme Corp. (2021). Randomized, Double-Blind, Placebo-Controlled, Parallel-Group Study to Assess Cardiovascular Outcomes Following Treatment With Ertugliflozin (MK-8835/PF-04971729) in Subjects With Type 2 Diabetes Mellitus and Established Vascular Disease, The VERTIS CV Study (Clinical Trial Registration study/NCT01986881). clinicaltrials.gov. https://clinicaltrials.gov/ct2/show/study/NCT01986881

Neal, B., Perkovic, V., Mahaffey, K. W., de Zeeuw, D., Fulcher, G., Erondu, N., Shaw, W., Law, G., Desai, M., & Matthews, D. R. (2017). Canagliflozin and Cardiovascular and Renal Events in Type 2 Diabetes. New England Journal of Medicine, 377(7), 644–657. https://doi.org/10.1056/NEJMoa1611925

Overview of the Immune System—Immune Disorders. (n.d.). Merck Manuals Consumer Version. Retrieved October 30, 2021, from https://www.merckmanuals.com/home/immune-disorders/biology-of-the-immune-system/overview-of-the-immune-system

Pittampalli, S., Upadyayula, S., Mekala, H. M., & Lippmann, S. (2018). Risks vs Benefits for SGLT2 Inhibitor Medications. Federal Practitioner, 35(7), 45–48.

Rosen, C. J., & Ingelfinger, J. R. (2019). Traveling down the Long Road to Type 1 Diabetes Mellitus Prevention. New England Journal of Medicine, 381(7), 666–667. https://doi.org/10.1056/NEJMe1907458

SIGMA Type 2 Diabetes Consortium, Estrada, K., Aukrust, I., Bjørkhaug, L., Burtt, N. P., Mercader, J. M., García-Ortiz, H., Huerta-Chagoya, A., Moreno-Macías, H., Walford, G., Flannick, J., Williams, A. L., Gómez-Vázquez, M. J., Fernandez-Lopez, J. C., Martínez-Hernández, A., Jiménez-Morales, S., Centeno-Cruz, F., Mendoza-Caamal, E., Revilla-Monsalve, C., … MacArthur, D. G. (2014). Association of a low-frequency variant in HNF1A with type 2 diabetes in a Latino population. JAMA, 311(22), 2305–2314. https://doi.org/10.1001/jama.2014.6511

Sun, X., Du, T., Huo, R., & Xu, L. (2014). Hemoglobin A1c as a marker for identifying diabetes and cardiovascular risk factors: The China Health and Nutrition Survey 2009. Acta Diabetologica, 51(3), 353–360. https://doi.org/10.1007/s00592-013-0515-5

Thong, F. S. L., Bilan, P. J., & Klip, A. (2007). The Rab GTPase-Activating Protein AS160 Integrates Akt, Protein Kinase C, and AMP-Activated Protein Kinase Signals Regulating GLUT4 Traffic. Diabetes, 56(2), 414–423. https://doi.org/10.2337/db06-0900

Tuttle, K. R., Brosius, F. C., Cavender, M. A., Fioretto, P., Fowler, K. J., Heerspink, H. J. L., Manley, T., McGuire, D. K., Molitch, M. E., Mottl, A. K., Perreault, L., Rosas, S. E., Rossing, P., Sola, L., Vallon, V., Wanner, C., & Perkovic, V. (2021). SGLT2 Inhibition for CKD and Cardiovascular Disease in Type 2 Diabetes: Report of a Scientific Workshop Sponsored by the National Kidney Foundation. American Journal of Kidney Diseases, 77(1), 94–109. https://doi.org/10.1053/j.ajkd.2020.08.003

Type 1 diabetes: MedlinePlus Genetics. (n.d.). Retrieved October 30, 2021, from https://medlineplus.gov/genetics/condition/type-1-diabetes/

Vallon, V., Gerasimova, M., Rose, M. A., Masuda, T., Satriano, J., Mayoux, E., Koepsell, H., Thomson, S. C., & Rieg, T. (2014). SGLT2 inhibitor empagliflozin reduces renal growth and albuminuria in proportion to hyperglycemia and prevents glomerular hyperfiltration in diabetic Akita mice. American Journal of Physiology. Renal Physiology, 306(2), F194–F204. https://doi.org/10.1152/ajprenal.00520.2013

Vallon, V., Rose, M., Gerasimova, M., Satriano, J., Platt, K. A., Koepsell, H., Cunard, R., Sharma, K., Thomson, S. C., & Rieg, T. (2013). Knockout of Na-glucose transporter SGLT2 attenuates hyperglycemia and glomerular hyperfiltration but not kidney growth or injury in diabetes mellitus. American Journal of Physiology. Renal Physiology, 304(2), F156–F167. https://doi.org/10.1152/ajprenal.00409.2012

Vargatu, I. (2016). Williams Textbook of Endocrinology. Acta Endocrinologica (Bucharest), 12(1), 113. https://doi.org/10.4183/aeb.2016.113

Wiviott, S. D., Raz, I., Bonaca, M. P., Mosenzon, O., Kato, E. T., Cahn, A., Silverman, M. G., Zelniker, T. A., Kuder, J. F., Murphy, S. A., Bhatt, D. L., Leiter, L. A., McGuire, D. K., Wilding, J. P. H., Ruff, C. T., Gause-Nilsson, I. A. M., Fredriksson, M., Johansson, P. A., Langkilde, A.-M., & Sabatine, M. S. (2019). Dapagliflozin and Cardiovascular Outcomes in Type 2 Diabetes. New England Journal of Medicine, 380(4), 347–357. https://doi.org/10.1056/NEJMoa1812389

Zeggini, E., Weedon, M. N., Lindgren, C. M., Frayling, T. M., Elliott, K. S., Lango, H., Timpson, N. J., Perry, J. R. B., Rayner, N. W., Freathy, R. M., Barrett, J. C., Shields, B., Morris, A. P., Ellard, S., Groves, C. J., Harries, L. W., Marchini, J. L., Owen, K. R., Knight, B., … Hattersley, A. T. (2007). Replication of genome-wide association signals in UK samples reveals risk loci for type 2 diabetes. Science, 316(5829), 1336–1341. https://doi.org/10.1126/science.1142364

Zelniker, T. A., Wiviott, S. D., Raz, I., Im, K., Goodrich, E. L., Bonaca, M. P., Mosenzon, O., Kato, E. T., Cahn, A., Furtado, R. H. M., Bhatt, D. L., Leiter, L. A., McGuire, D. K., Wilding, J. P. H., & Sabatine, M. S. (2019). SGLT2 inhibitors for primary and secondary prevention of cardiovascular and renal outcomes in type 2 diabetes: A systematic review and meta-analysis of cardiovascular outcome trials. The Lancet, 393(10166), 31–39. https://doi.org/10.1016/S0140-6736(18)32590-X

Ziegler, A. G., Rewers, M., Simell, O., Simell, T., Lempainen, J., Steck, A., Winkler, C., Ilonen, J., Veijola, R., Knip, M., Bonifacio, E., & Eisenbarth, G. S. (2013). Seroconversion to multiple islet autoantibodies and risk of progression to diabetes in children. JAMA, 309(23), 2473–2479. https://doi.org/10.1001/jama.2013.6285

Zinman, B., Wanner, C., Lachin, J., & Fitchett, D. (2015). Empagliflozin, Cardiovascular Outcomes, and Mortality in Type 2 Diabetes. New England Journal of Medicine. https://www.nejm.org/doi/full/10.1056/NEJMoa1504720




Climate Change and its Implications for Human Health

STAFF WRITERS: LAUREN FERRIDGE '23, KEVIN STAUNTON '24, VAISHNAVI KATRAGADDA '24, ROHAN MENEZES '23, TANYAWAN WONGSRI '25, SABRINA BARTON '24, SOYEON (SOPHIE) CHO '24, EMILY BAROSIN '25
TEAM LEADS: ANAHITA KODALI '23, DINA RABADI '22

Cover Image: Global warming and climate change will significantly change the Earth in coming years; these changes may have significant implications for human health in a variety of ways. Image Source: Wikimedia Commons

Introduction

Climate change, the gradual yet potentially catastrophic shift in earth's climate due to global warming, is an undeniable reality. For decades, the consensus in the global scientific community has been that the primary cause of this warming is anthropogenic, driven by the accumulation of heat-trapping greenhouse gasses in the atmosphere from human activities. According to a recent survey, this consensus now exceeds 99% (Lynas et al., 2021). While greenhouse gasses have been emitted by natural sources like volcanoes and forest fires for millions of years, the rate of emissions never exceeded earth's natural capacity to absorb them through greenhouse gas sinks - such as the ocean and terrestrial ecosystems (Yue and Gao, 2018). The addition of anthropogenic sources has surpassed this capacity almost twofold, overriding our planet's natural ability to regulate global temperatures (Yue and Gao, 2018). Even now, rapid warming is increasing the number and severity of extreme weather events and droughts


and causing rising seas that drive out coastal communities, all of which widen global socioeconomic divides, particularly for global health. The most culpable source of emissions is the consumption of fossil fuels such as coal, oil, and natural gas, all of which emit greenhouse gasses - largely carbon dioxide and methane - when burnt (Perera, 2017). Human society currently relies on fossil fuels for most of our electricity and heat generation, as well as to power industrial activities, buildings, and transportation networks (Lamb et al., 2021). According to Lamb et al. (2021), these activities collectively accounted for 86% of all anthropogenic greenhouse gas emissions between 1990 and 2018. The remaining 14% came from the agriculture, forestry, and land-use sector, primarily in the form of methane - a byproduct of livestock digestive processes - as well as nitrous oxide from fertilizer application and carbon dioxide from the clearing of carbon-sequestering natural vegetation (Lamb et al., 2021; Lynch, 2019). These emissions and their effects are globally disproportionate. The countries with the least


emissions, largely poorer, developing countries with lower economic outputs, are the most vulnerable to climate change (Althor et al., 2016; Diffenbaugh and Burke, 2019; Ahmed et al., 2009). This vulnerability partly stems from the already-warm climatic regions in which these countries are located, such that increased warming may be beyond their capacity to endure, but the primary reason is that they lack the resources for climate adaptation (Diffenbaugh and Burke, 2019). These impacts are widening historic inequalities, as richer, developed countries have fewer incentives to reduce emissions and restrict their economic growth than their poorer, developing counterparts (Althor et al., 2016; Diffenbaugh and Burke, 2019). The inequality does not stop there. Even within countries, and particularly in the US, communities of socio-economic privilege consume fossil fuels at a much greater rate than their less privileged counterparts, who lack the resources to adapt to the consequences (Thomas et al., 2019; Nielsen et al., 2021). This raises the question: what will be the fallout of this ever-worsening unequal distribution of environmental harms? By all indicators, a crisis of public health inequity.

The Root Causes of Climate Change and Health Inequity

The upstream root causes of climate change and health inequity usually stem from socio-economic inequities. Sites of institutional power such as governments, corporations, and schools can directly impact the way social structures are created; in America, the predominant social structures are related to socioeconomic status and divide the nation into lower, middle, and upper classes. As will be discussed throughout this article, it is clear that climate change disproportionately affects those in lower economic classes. Often, institutional decisions are influenced by notions of race, class, and sex, which contribute to the disproportionate impacts of climate change not just on lower economic classes but also on people of color, women, and children (Rudolph & Gould, 2015). There are myriad concrete examples showing that the causes and drivers of climate change and health inequity are the same - these span energy infrastructure, transportation, housing, food, agriculture, and land use. For example, studies have shown that environmental health hazards are not distributed evenly across the American landscape. In California, the pollution burden (which can be worsened by climate change) of pesticides and toxic chemical contaminants was found to be unequally distributed throughout

the state, with just 10% of the state carrying a majority of the burden. The zip codes encapsulating these areas were found to contain higher proportions of people of color; Hispanics, African Americans, Native Americans, and Asian/Pacific Islanders were all more likely to live in a highly polluted zip code than their non-Hispanic white counterparts. As such, the burden of pollution is significantly higher for people of color (Cushing et al., 2015). Other issues include redlining in urban environments, a lack of public transportation from rural to urban environments, and food deserts, all of which prevent residents' access to clean food and water (Rudolph & Gould, 2015). Poorer living conditions due to increased exposure to air pollution, increased violence, or a lack of greenery can lead to disability, death, or chronic illness (Rudolph & Gould, 2015). Typically, those who experience health inequities are those who are discriminated against in some way, such as racial minorities, gender minorities, or those living in poverty. Unfortunately, these are also the very people who do not have the political, legal, or economic power to fight the larger institutions that play a role in controlling climate change policy and in decreasing health inequities (The Lancet, 2018). Thus, this lack of power directly leads to greater climate change effects as well as greater adverse health effects faced by minorities.

"The upstream root causes of climate change and health inequity usually stem from socio-economic inequities."

Inequities in Health: Disproportionate Impacts of Climate Change on Racial/Ethnic Groups
Studies have consistently shown that racial and ethnic minorities will experience the worst effects of climate change. A recent report by the Environmental Protection Agency (EPA) has shown how an individual's racial and ethnic identity directly relates to heat-associated morbidity and mortality in the United States. Compared to their white counterparts, racial minorities are 35% more likely to lose working hours to heat waves, which can make healthcare unaffordable (EPA, 2021). There is a strong association of heat-associated morbidity and mortality with racial and ethnic minorities, with Black and Hispanic individuals experiencing these adverse health effects at much greater rates than their White counterparts (Gronlund, 2014). In many cases, these disproportionate effects are due to an increase in heat vulnerability among minorities, driven by factors such as housing and redlining, neighborhood crime, heat perception, or cultural or linguistic isolation. These vulnerability factors then lead to poorer physical health and are
founded in systemic discrimination, as factors like lower incomes result in minorities having to find housing in areas with less vegetation, a greater presence of heat-absorbing surfaces, or less air conditioning (Gronlund, 2014). Hispanic and Latino individuals are at the highest risk of climate change-related health impacts, with a 43% higher chance of living in areas most affected by climate change. Black, African American, and Pacific Islander individuals are 10% more likely to live in areas of increasing temperatures (EPA, 2021). As the threat of global warming heightens, heat waves will increase in intensity, duration, and frequency, and the impact of the weather on minorities will also become more severe. Among Hispanic individuals, studies in New York City and Phoenix have shown that predominantly Hispanic neighborhoods face increased vulnerability and increased adverse health effects due to heat because linguistic isolation may make it difficult for some to understand heat warnings or health-education messages (Gronlund, 2014). While it is possible for individuals in these neighborhoods to travel to cooler places, there is a reluctance to do so due to cultural isolation, a lack of familiarity with activities in cooler places, and concerns about immigration or possible deportation. As ethnic and racial minorities are forced into these neighborhoods that face greater impacts of climate change-related crises, the health inequities faced by racial groups will continue to be perpetuated. A telling case study of disproportionate climate effects can be seen in the impact of Hurricane Harvey on Houston in 2017. After concerns about how Hurricane Katrina disproportionately impacted Black residents of New Orleans, researchers began to study whether Harvey had a similar impact. Using FEMA's Inundation Footprint and other data collected on flooding from the hurricane, it was found that even after controlling for explanatory variables, racial and ethnic minorities experienced significantly more flooding from the hurricane (Chakraborty et al., 2019). In areas where the population of Black residents was one standard deviation higher than the mean, there was a 4.5% increase in the mean proportion of the area that would be flooded. Increasing the number of Hispanic residents by one standard deviation similarly led to a 2.6% increase in land flooded. This suggests that flooding is localized to areas with higher proportions of minorities. Similar to the instance of increased
adverse effects due to heat, increased exposure to hazards like flooding leads to greater health effects, as flooding results in immediate threats such as hypothermia, long-term physical illnesses tied to poverty such as malnutrition, or mental health illnesses due to stress (Du et al., 2010). Flooding is expected to increase as climate change progresses in areas such as Houston, where storms that bring more than 20 inches of rain were already six times more likely to occur in 2019 than they were in 2000. Heavy storms are expected to increase twenty-fold by 2081 at the current rate (Chakraborty et al., 2019). This issue and others will then lead to an even greater increase in health disparities among racial groups, as they will disproportionately bear the burden of health consequences from climate change in addition to the systemic inequities that already exist.

Disproportionate Impacts of Climate Change on Lower Socioeconomic Classes
The various consequences of climate change often result in crises that greatly diminish the health of lower socioeconomic classes, building upon the existing health inequity in those communities (Friel et al., 2011). People in lower socioeconomic classes generally live in warmer urban areas, which become more susceptible to extreme temperatures due to the absence of natural greenery and shade, the use of heat-absorbing building materials, and the lack of air conditioning in many of the buildings. These environmental conditions can add to the typically hot and humid local climates in these cities, raising maximum daytime temperatures by 1 to 3°C. When this increase is added to the warming expected from climate change (roughly 2°C by 2050, and 1.8 to 4.0°C by 2100), these urban areas will become dangerous to live in. Climate change-related heat waves around the world increase morbidity and mortality due to heat-related diseases, and humid conditions often leave people exposed to communicable disease. These worsening climate conditions can easily become hazards for those working outside or in confined spaces in manufacturing and construction (Friel et al., 2011). Rising temperatures, along with insufficient sanitation, water treatment, and drainage, also increase the infectivity of diseases. Due to their living conditions, many low-income areas of the world are already extremely susceptible to malaria, dengue, diarrhea, and other diseases. Flooding and other extreme weather events, which are projected to become more frequent and severe as climate change worsens, strain

Image 1: This image shows the percent of the population below the poverty line in the US based on 2015-2019 data. The disparities in the distribution of impoverished people throughout the US are clear - certain states carry a higher burden of poverty than others. Image Source: Wikimedia Commons

already poor infrastructure and thus increase the risk of contracting these diseases. More than 25% of Latin America and the Caribbean and around 60% of slums in Bangladesh lack the drainage needed to combat flooding, and safe and secure water and sanitation is absent from around 50% of urban Asia and Africa. This lack of infrastructure increases the risk of disease in areas already very prone to it (Friel et al., 2011). Extreme weather events and poor infrastructure have even more direct health consequences than increased disease, given the extreme danger of these events. As climate change worsens the severity of these weather events, poorer neighborhoods of the world not only have inadequate infrastructure to protect them, but also lack the funding to support evacuation efforts or the rebuilding of said infrastructure. The storms and floods that ravaged Manila as recently as 2013 exhibit the direct effect of natural disasters on infrastructure and the health of people in those areas. Hurricane Katrina demonstrated how even living in a richer nation - like the US - does not protect those in lower economic classes from the devastating effects of climate change. Additionally, the impending sea-level rise raises the likelihood of these events for the 13% of the world's urban population who live in the low elevation coastal zone (Friel et al., 2011). Climate change is particularly devastating for the 100 million people, or 2% of the world's population, who are homeless. The lack of resources, and the associated stigma limiting support, leaves individuals experiencing homelessness unable to protect themselves from any sort of disaster. Consistently, individuals experiencing homelessness are victims of relocation and displacement in response to disasters, which can cause all sorts of health consequences, both
physical and mental. Additionally, as housing is destroyed or damaged by natural disasters, secure housing for homeless individuals becomes even more scarce. For example, even two years after Hurricane Katrina, over 30,000 people were still living in government-funded emergency housing (Gibson, 2019). With a limited number of organizations working to support these people, their health needs go unaddressed or become more severe because disaster efforts cannot reach them (Gibson, 2019). Clearly, displacement of people due to climate change disasters will result in the growth of homelessness and its subsequent health inequities.

Disproportionate Impacts of Climate Change on Women/Children

"Climate change is particularly devastating for the 100 million people, or 2% of the world’s population, who are homeless."

While climate change is a phenomenon that impacts everyone, women and children are disproportionately affected compared to men. A 2014 review spanning 141 countries found that more women were killed than men by the effects of natural disasters (WHO, 2014). The results were more disproportionate in countries where women have fewer rights than men and less disproportionate in countries where men and women are of comparable status. Hence, societal roles and responsibilities clearly place women at a health disadvantage. Women are traditionally seen as providers and caretakers of a household. As providers, especially in rural areas, they are tasked with securing food, water, and fuel for their families (UN Women Watch, 2009). These expectations can have perilous consequences, especially in the event of extreme weather and natural disasters. In Bangladesh, changes in the hydrologic cycle and groundwater recharge due to climate change result in women experiencing greater exposure to waterlogged areas. This leads to the women


"The consequences of extreme temperatures have also disproportionately impacted the vulnerable groups in urban areas heat."

being heavily exposed to unhygienic water and suffering from increasing gynecological issues (Neelormi, 2009). Additionally, as caretakers, women are responsible for tending to other members of the family and may, as a result, be forbidden from leaving the house unattended. When cyclones struck Bangladesh in 1991, many women died waiting for their family members to return home before departing together to a secure location, despite many cyclone warnings (Aguilar, 2004). This resulted in a death rate of 71 per 1000 women, but only 15 per 1000 men. These disproportionate effects are also seen in developed countries, as with Hurricane Katrina in Louisiana. The population most severely affected by flooding from the hurricane was women (Butterbaugh, 2005). In the chaos created by power outages and displacement, multiple women reported that they had been sexually assaulted while sheltering at the Superdome; their claims have yet to be taken seriously by authorities. Hurricane Katrina was also particularly devastating because single-mother households made up 56 percent of families in New Orleans. It is only logical that any harm endured by mothers would go on to affect their children. Multiple studies suggest that children who experience hurricanes are at higher risk for developing depression, anxiety, or post-traumatic stress disorder (Goenjian et al., 2001; Kessler et al., 2006). Looking at the more general effects of climate change on children's health, we can also deduce that the poor health of mothers will lead to poor health in infants. In fact, children are predicted to be the population most severely disadvantaged by climate change, as they are expected to bear 88 percent of the burden of disease (Zhang et al., 2007; Patz et al., 2007). According to a comprehensive review by the Harvard School of Public Health, global warming allows insects to migrate to new places, taking insect-borne diseases such as malaria, Zika, and dengue with them (Harvard T.H. Chan School of Public Health, 2021). Heavier rainfall that comes with climate change can increase floods, leading to diarrheal diseases, which are particularly harmful to young infants. Air pollution, which currently accounts for 20 percent of global infant deaths, will increase as a result of the root causes of climate change (State of Global Air, 2020). The current effects of climate change on children's physical and mental health are already devastating; if no action is taken to combat this issue, the scale of destruction caused by climate change will only increase.

Disproportionate Impacts of Climate Change based on Geographical Location
The consequences of climate change have disproportionately impacted populations in different geographical locations. For example, rural and urban populations have been impacted by climate change in different ways. Climate change has increased the severity and frequency of droughts around the world. In particular, according to a 1997 study by the World Resources Institute, 29% of the world population lives in dry land - arid or semi-arid zones that are more vulnerable to droughts and extreme temperatures. Additionally, the geographical distribution of dry land burdens some continents more than others: 45% of Africa's population, 43% of Asia's population, and 44% of developing regions' population live in dry land, compared to 17% of the population in the Americas and the Caribbean (World Resources Institute, 1997). These patterns are consistent with data showing that rural areas are more vulnerable to droughts: 43% of rural populations are exposed to droughts, compared to 32% of urban populations. Given that disadvantaged groups like cattle farmers, ethnic minorities, and populations under the poverty line are more likely to live in rural areas than urban areas, rural populations are disproportionately impacted by the increases in droughts due to climate change (Islam and Winkel, 2017). Furthermore, rural households with lower income are often limited to keeping livestock as their primary asset, and livestock are vulnerable to droughts due to their continued dependence on water and food (Nkedianye et al., 2011). Wealthier households can own different types of assets that are less affected by droughts or extreme climate hazards, adding to the unequal impact of climate change on the rural poor. The consequences of extreme temperatures have also disproportionately impacted vulnerable groups in urban areas. Disadvantaged groups like ethnic minorities, slum dwellers, and other groups with lower socioeconomic status are more likely to live in areas with poor ventilation and poor heat management infrastructure, meaning they are more affected by extreme heat in their workplaces and homes (Kovats and Akhtar, 2008). Furthermore, in these urban areas, maximum temperatures are 1 to 3°C higher than in cities with more parks, because they have higher population density and fewer trees to absorb the carbon dioxide produced by vehicles

Image 2: This graph shows various populations in the US that lack access to food and transport to grocery stores or other sources of food as of 2010. Image Source: Wikimedia Commons

and facilities (Ferguson et al., 2008). Extreme weather events also cause unequal consequences for at-risk populations. To focus on specific case studies, disadvantaged groups like slum dwellers live in lower-elevation areas in Dhaka, Bangladesh, which puts them at greater risk in floods (Braun and Aßheuer, 2011). In Latin American and Caribbean countries, the dwellings of disadvantaged groups are on hilly slopes that are much more vulnerable to mudslides, which are increasing in frequency with climate change, and more than 25% of them also lack appropriate drainage systems for floods (Painter, 2007; Bartlett et al., 2009). Lower-income households in urban areas are often limited to keeping assets in the form of housing, which is at risk of damage by floods (Moser, 2007; Islam and Winkel, 2017). Furthermore, these consequences of climate change impact urban food security because extreme temperatures and extreme weather events like storms or floods can damage crops, livestock, or infrastructure for the food industry, increasing food prices. Higher food prices disproportionately affect low-income urban populations, since they have limited access to non-market food sources and limited budgets for food (Cohen and Garrett, 2010). These groups within a city tend to rely on inexpensive foods that fit their budgets, which can cause malnutrition due to the foods' poor nutritional value (Brinkman et al., 2010).

Vulnerable groups like poorer urban and rural populations are not only affected by the consequences of climate change but also have limited options for migrating to safer regions. Case studies of Manila's floods and storms and the 2005 New Orleans floods have demonstrated that poorer urban households cannot easily migrate to higher-elevation regions or cities with safer buildings and infrastructure that prevent damage from extreme weather events (Costello et al., 2009). These cases demonstrate that vulnerable populations in rural and urban areas are disproportionately impacted by climate change in various respects.

Climate Change and Health: Extreme Weather Events
Climate change drives many changes that can affect the likelihood of extreme weather events. Based on the measurable human impact on the environment, especially the increase in greenhouse gasses, it has become likely that there will be a warming of temperatures across the board and a rise in sea level, with a slightly lower likelihood of an increase in the intensity of extreme precipitation (Sauerborn & Ebi, 2012). These changes are likely to cause increases in the occurrence or intensity of cyclones, droughts, floods, wildfires, and other events. Extreme weather events are natural occurrences that are generally unexpected or unpredictable, can cause a lot of destruction, and are known to become more common with the onset of climate
change. Due to their unpredictability, they can greatly damage populations, communities, and infrastructure. These extreme natural disaster events can cause death and severe injuries, but they can also increase the risk of communicable and noncommunicable disease, food scarcity, shelter loss, and forced migration (Sauerborn & Ebi, 2012). Additionally, the potential disruption of pregnancy and delivery, education, clean water, sanitation, and health infrastructure can have serious adverse health effects on all those affected by the event (Sauerborn & Ebi, 2012). In addition to the immediate injuries people sustain during natural disaster events, many health consequences can arise for individuals undergoing the stress associated with such a traumatic event, especially coronary ailments such as heart attacks. The stress from extreme weather events can also impair the ability to fight off disease or infection and can even flare up allergies and asthma. Psychologically, stress and trauma can also have extreme mental health consequences that may not appear for weeks or months, including PTSD, anxiety, depression, and others like them. All of these happening simultaneously can clog up medical centers in the areas around extreme weather events, which can prevent many, especially those who have preexisting conditions like diabetes, from receiving adequate care (Hidalgo & Baez, 2019). The stress of traumatic events can majorly affect the psychological well-being of victims. To this end, researchers conducted a study focused on mental health following extreme weather events in Australia, an area which is prone to wildfires, earthquakes, landslides, floods, cyclones, and more (Morrissey & Reser, 2007). They found that individuals with anxiety or previous traumatic experiences can be more seriously affected after a new traumatic occurrence, which aligns with the consensus that what makes traumatic events more impactful is vulnerability - in this case, psychological vulnerability. Just as more stable infrastructure can help minimize the destruction, increasing government funding for adequate mental health care and widening associated programs can help prevent worse mental health issues (Morrissey & Reser, 2007). The damage that comes from extreme weather events is closely related to the quality of the response, but with climate change making these events worse and worse, the response shouldn't be limited to any particular event. By combining disaster risk reduction with climate change adaptation, organizations (like NGOs) looking to provide support can help minimize the health damage among the population affected and limit vulnerability across the board. Limiting vulnerability could be accomplished through building and rebuilding infrastructure as well as through changing policy, strengthening and streamlining the healthcare system, teaching emergency preparedness and response, and locating other areas of risk. Since a disaster completely disrupts daily life, a proper and effective health response should be holistic and far-reaching (Banwell et al., 2018).

Rising Sea Levels
Another major consequence of climate change is posed by rising sea levels, primarily caused by the melting of large terrestrial ice sheets and the thermal expansion of the ocean as it absorbs atmospheric heat (Mimura, 2013). Levitus et

Image 3: Hurricanes are one type of extreme weather event that may become more common due to climate change. Image Source: Wikimedia Commons

al. (2012) found that sea levels rose by an average of 0.54 mm per year from 1955 to 2010, and this rate of sea level rise has been increasing over time (Mimura, 2013). While this may not seem significant, if serious mitigation measures are not taken, sea levels are predicted to rise 0.5 to 2 m by 2100 (Nicholls et al., 2011). This could force the displacement of as many as 187 million people, equivalent to 1 out of every 40 people in today's world (Nicholls et al., 2011). There is also a more immediate threat posed by rising sea levels: flood risk. Sea level rises of as little as 1-10 cm can exponentially increase rates of extreme flooding (Taherkhani et al., 2020). In the US, for example, rates of extreme flooding are expected to double every five years due to climate change-induced sea-level rise (Taherkhani et al., 2020). At that rate, within 50 years, risks of extreme flooding will increase by a factor of over a thousand (Taherkhani et al., 2020).

Sea level rise and the accompanying increased frequency of extreme flooding have major consequences for global health. Such effects include the direct threat higher sea levels pose to poor and vulnerable communities worldwide, particularly in developing countries in the global South which lack the resources or infrastructure to adequately adapt (Byravan, 2010). Such communities are also less likely to have disaster insurance or other means of recovering from losses due to sea-level rise and extreme weather events, rendering them even more vulnerable to displacement (Public Health Institute/Center for Climate Change and Health, 2016). However, there are other significant impacts to consider. These include the increase of saltwater contamination of freshwater sources, the consumption of which may lead to hypertension and other diseases like diarrhea (Wong et al., 2014). This is particularly a problem for the groundwater pools that island communities rely on, since those populations lack easy access to alternate sources of freshwater (Mimura, 2013). Sea level rise and the resulting saltwater intrusion also indirectly damage the health of communities by degrading mangrove populations - particularly in tropical and subtropical areas where they are common - as well as wetland ecosystems (Mimura, 2013; Barbier et al., 2011; Ward et al., 2016). Many mangrove species depend on aerial roots which extend above the water's surface for much of their respiration, which higher sea levels prevent (Mimura, 2013). Wetland ecosystems are also vulnerable, with many of their plant species unable to survive in highly saline environments (Ward et al., 2016). Among other ecosystem services, mangroves and wetlands act as a buffer to storms and floods, mitigating their effects on coastal communities, so their loss renders such communities more vulnerable to the damages of extreme weather (Barbier et al., 2011; Ward et al., 2016). Additional effects of sea-level rise on global health include soil inundation and salinization, which may turn agricultural land permanently infertile (Gornall et al., 2010). Temporary storm surges, whose level is increased by sea-level rise, may also temporarily degrade or inundate agricultural land (Gornall et al., 2010). Both of these effects impact food security for communities living along the coast or in river deltaic regions, which are particularly vulnerable to flooding (Gornall et al., 2010). As such, countries such as India, Pakistan, Myanmar, and Bangladesh are particularly vulnerable, as they have increasingly relied upon such flood-prone areas for food crop cultivation in an effort to feed their growing populations (Gornall et al., 2010; Webster, 2008). Thus, sea-level rise is detrimental to food security, thereby impacting global health as well.
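To make the compounding behind that thousand-fold figure concrete, the sketch below works through the arithmetic; the only input is the doubling period reported by Taherkhani et al. (2020), and the code itself is our illustration, not taken from the study.

```python
# Minimal sketch of the compounding arithmetic behind the flood-frequency claim.
# Assumption (from Taherkhani et al., 2020, as cited above): extreme-flood
# frequency doubles every five years under climate change-induced sea-level rise.

def flood_frequency_multiplier(years: float, doubling_period_years: float = 5.0) -> float:
    """Factor by which extreme-flood frequency grows after `years` of doubling."""
    return 2.0 ** (years / doubling_period_years)

if __name__ == "__main__":
    # Ten doublings in fifty years: 2**10 = 1024, i.e. "a factor of over a thousand."
    print(flood_frequency_multiplier(50))  # 1024.0
```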

"Additional effects of sea level rise on global health include soil inundation and salinization, which may turn agricultural land permanently infertile."

Ice Melts
Ice melt, as both a general scientific concept and a known consequence of climate change, has rapidly entered the public consciousness over the past few decades, largely as a result of public awareness campaigns centered on melting ice caps and rising sea levels. Ice melt can be defined generally as the ice mass lost from either the surface or base of an ice shelf or sheet, which almost always enters the surrounding ocean as meltwater (Mackie et al., 2020). Melting ice caps in turn contribute to rising sea levels. Ice melt is neither a negative nor a positive phenomenon in itself - it has always occurred, as it serves to regulate the size of ice sheets and plays an important role in the circulation of various Arctic water currents. Ice melt has raised concerns in recent history because of the rate at which it is occurring. Increasing levels of ice melt and Arctic warming have particularly devastating consequences for the residents of these areas; many Arctic residents rely on hunting, fishing, and gathering for food and on consistent climate conditions for safe food storage. As the climate warms, fish and meat may not be able to dry, and food stored below ground may go bad as permafrost melts. Additionally, outbreaks of food-related bacterial diseases and illnesses will likely increase as
temperatures warm, as bacteria like Clostridium botulinum (responsible for botulism) and Vibrio parahaemolyticus (responsible for gastrointestinal illness) are able to germinate at warm temperatures. Warmer temperatures may also change the migratory patterns of animals and allow animals to carry disease for longer, both of which ultimately affect the sources of food available to Arctic residents (Parkinson & Evengård, 2009). As it is predominantly Indigenous peoples who reside in the Arctic, Indigenous peoples - already underserved by healthcare systems - will disproportionately take on the brunt of the impact of ice melt and Arctic warming (Cochran & Geller, 2002).

"... ice melt may result in the release of microorganisms, many of which can be pathogenic in humans."

Beyond impacting food sources, ice melt may result in the release of microorganisms, many of which can be pathogenic in humans. Permafrost and glaciers, both of which are usually constantly frozen, contain a myriad of microorganisms, which are typically dormant; however, as global warming continues to increase levels of ice melt, these microorganisms are released into the natural ecosystems that surround these typically frozen structures. The massive release of microorganisms into the environment has already contributed to increases in disease - for example, the 2016 anthrax outbreaks in Siberia were associated with permafrost melt (Yarzábal et al., 2021; Stella et al., 2020). As more disease-causing bacteria, viruses, and other microbes are released, scientists hypothesize that new epidemics may appear in the coming years. Ice melt, in addition to directly contributing to human disease, can also have downstream effects that further impact health. For scientists to develop a complete and nuanced understanding of ice melt and its implications in the broader scheme of climate change, it has been crucial to create advanced methods of modeling ice melt and its effects. The algorithms used to model ice melt have rapidly advanced within the last few years. Ice melt models are often used in conjunction with either ocean forcing models (models that show how the ocean responds to various real-world forces) or earth system models (models that show how carbon moves through the Earth's atmosphere) to better capture the far-reaching effects of ice melt (Goldberg et al., 2018). Current models, while accurate, often fail to account for more nuanced factors that can alter ice melt: many models oversimplify ocean forcings, while others are too low-resolution to draw accurate conclusions concerning ice melt and potential positive feedback loops along glacial coasts (Moorman et al., 2020). Traditional
models that incorporated the effects of melt ponds (pools of open water that form on the surface of Arctic ice during the summers) utilized satellite imagery yet failed to take color saturation into account, so the variability of different melt ponds' effects on albedo and solar radiation went unacknowledged (Mingfeng et al., 2020). Mingfeng et al. were able to develop a method that takes color saturation into account when looking at melt pond data, underscoring the need for and urgency of obtaining data that is reliable and accurate. Despite their flaws, ice melt models have yielded valuable insights into how ice melt interacts with other aspects of the climate system, particularly with ocean circulation. Increasing ice melt causes a considerable influx of freshwater into the ocean, so it has been important to investigate how coastal freshening (the addition of freshwater to the coasts along glaciers) is affecting water circulation patterns. It has been found that Antarctic Bottom Water (AABW), a dense water mass, is negatively impacted by coastal freshening because the formation of its source, Dense Shelf Water (DSW), is sensitive to freshwater forcings (Moorman et al., 2020). Models have also revealed that ice shelves are most prone to melt near grounding lines, the delineation between where they are attached to bedrock and where they become free-standing ice sheets (Goldberg et al., 2018). It is thought that Circumpolar Deep Water (CDW) breaching ice shelves at overflow sites could be the driving force of basal glacial melt beneath ice shelves, particularly near already weakened grounding lines (Moorman et al., 2020). Studies completed in the past with less accurate ice melt models concluded that coastal freshening would invariably lead to warming trends in coastal waters - this has recently been disproven with the use of a more accurate model. The updated model showed that coastal freshening can lead to both warming and cooling trends, and that coastal meltwater can either accelerate or inhibit ice shelf melt (Moorman et al., 2020). Another study, which experimented with modeling by combining an ESM (earth system model) with a dynamic ice sheet model, set out to learn more about the climate response to increasing melt from the Antarctic shelf. It found that freshwater entering as meltwater tends to form a buoyant layer on top of the saline water below it, increasing the heat content of the mid-layer ocean water and preventing ventilation. This in turn increases stratification, which, as previously mentioned, can reduce AABW formation (Mackie

Image 4: Ice melts can be seen in many bodies of water, including Lake Baikal, located in Russia. Image Source: Wikimedia Commons

et al., 2020). In sum, increasing ice melt disrupts the established patterns of oceanic circulation. These changes, in turn, can result in chemical and bacterial contamination of waters that are used for fishing or other sources of food production, potentially contaminating many sources of food for marine animals and humans. Changes in oceanic circulation also result in changes in heat distribution throughout the ocean, which can cause harmful algal blooms that release extremely potent natural toxins into the water; these can be ingested by humans or by animals that humans go on to consume (Fleming et al., 2006).
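As a rough illustration of why melt-pond coverage matters in the models discussed above, the sketch below uses a simple area-weighted (linear-mixing) albedo. The mixing rule and the albedo values are illustrative assumptions of ours, not the saturation-aware method of Mingfeng et al. or parameters from the cited studies.

```python
# Illustrative sketch: how melt-pond coverage lowers effective surface albedo.
# The linear area-weighted mixing rule and the albedo values are assumptions
# for illustration only; they are not taken from the melt-pond studies above.

def effective_albedo(pond_fraction: float,
                     pond_albedo: float = 0.25,  # assumed: ponds absorb strongly
                     ice_albedo: float = 0.65) -> float:
    """Area-weighted albedo of an ice surface partly covered by melt ponds."""
    if not 0.0 <= pond_fraction <= 1.0:
        raise ValueError("pond_fraction must be between 0 and 1")
    return pond_fraction * pond_albedo + (1.0 - pond_fraction) * ice_albedo

# More pond coverage -> lower albedo -> more absorbed sunlight -> more melt,
# the positive feedback that makes accurate pond detection matter.
for f in (0.0, 0.2, 0.4):
    print(f"pond fraction {f:.0%}: effective albedo {effective_albedo(f):.2f}")
```

This is why mis-estimating pond fraction (for example, by ignoring color saturation in satellite imagery) propagates directly into errors in modeled solar absorption and melt.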

Overall, continuing to develop more accurate and nuanced models of ice melt - ideally used in conjunction with models of other parts of the climate system - is important to our growing understanding of ice melt and its implications for the climate. Ice melt contributes to rising sea levels and extreme weather events, and its effects fall disproportionately on marginalized populations.

Food and Waterborne Disease
Food- and water-borne diseases are significantly influenced by climate change. The WHO estimates an "additional 48,000 deaths in children under 15 years old mostly due to diarrheal diseases by 2030 and 33,000 deaths by 2050" (Cissé, 2019). On a broader scale, diarrheal diseases, the leading cause of food- and water-borne disease burden in human health, are projected to carry the greatest climate change-related mortality burden in sub-Saharan Africa by 2030, with the leading burden projected to shift to Southeast Asia by 2050 (Cissé, 2019). Before fully understanding the impact of climate change on food- and water-borne disease, it is helpful to first know the scope and severity of the current disease burden.

Unsafe water used for the cleaning and processing of food is a key risk factor contributing to foodborne diseases. While children under the age of 5 represent 9% of the global population, 43% of the food-borne disease burden falls within this group (Kirk et al., 2015). Salmonella, norovirus, cholera, and typhoid are the leading food-borne disease threats; however, it is important to recognize that different diseases present more of a threat in different areas. For example, foodborne cholera presents the biggest burden in Africa and Asia, whereas brucellosis and M. bovis have the highest burden in the Middle East and some African regions (Kirk et al., 2015). On a global scale, the food-borne disease burden is high,
generating over 600 million illnesses and 420,000 deaths worldwide (Cissé, 2019). Norovirus is the leading cause of food-borne illness, causing 125 million cases; of all food-borne diseases, diarrheal and invasive infections due to non-typhoidal Salmonella cause the highest burden, leading to 4.07 million disability-adjusted life years (DALYs). In the African region, there were 91 million food-borne disease cases and 137,000 deaths per year, and diarrheal diseases contributed to 70% of those deaths (Cissé, 2019). As climate change increasingly facilitates the contamination and transmission of food-borne viruses and pathogens, mortality and case incidence rates are expected to increase. To further understand the impact of climate change on food-borne diseases, it is also necessary to understand the relation of climate change to water-borne diseases, as water plays a major role in both and the separation of food and water exposure is difficult. Human exposure to water-borne infections occurs by contact with or ingestion of contaminated recreational and/or drinking water. Drinking water containing infectious pathogens is the main driver of the burden of water-borne diseases. The most burdensome water-borne diseases are diarrheal diseases, cholera, shigella, and typhoid. Just like food-borne diseases, low- and middle-income countries (LMICs) have the highest water-borne disease burden, which is estimated at 842,000 deaths a year, including 361,000 in children under the age of 5 (Cissé, 2019). Lack of basic hygiene and sanitation and failing health and water delivery infrastructure are the leading challenges in the fight against water-borne diseases. Climate change is projected to exacerbate the risk of diarrheal diseases and other water-borne diseases in LMICs as its repercussions intensify. Climate change has a wide variety of effects, including "rising temperature, soil degradation, loss of productivity or agricultural land, desertification, loss of biodiversity, degradation of ecosystems, reduced fresh-water resources, acidification of oceans, and the disruption and depletion of stratospheric ozone" (Rossati, 2016). All of these consequences impact human health, increasing the frequency, distribution, timing, and intensity of infectious and noncommunicable diseases, malnutrition in famine, and mortality from complications of heat waves (Rossati, 2016). All environmental effects disproportionately impact children. Diseases with the greatest environmental contribution in children under the age of 5 include lower respiratory infections (32%),
diarrheal diseases (22%), neonatal conditions (15%), and parasitic and vector-borne diseases (12%) (Cissé, 2019). The quality and quantity of water are important for gauging the burden of infectious diseases in LMICs, as their effects go beyond the food chain. Water-related infectious diseases are already a major cause of mortality and morbidity; these diseases are exacerbated by climate change, posing new challenges to the public and global health sector for food- and water-borne diseases. Trends in Africa and Asia reveal that extreme climate change-induced events such as floods will increase the risk of infectious diseases spreading through water systems, and, conversely, improvements related to drinking water, sanitation, and hygiene are effective methods to significantly reduce intestinal parasitic infections in school-aged children (Cissé, 2019). Food- and water-borne diseases are inextricably connected because of the influence of contaminated and unclean water on food supplies. In low- and middle-income countries (LMICs), the increased frequency of floods due to increasing global temperatures exacerbates challenges with water pollution, which subsequently escalates the risk for food- and water-borne diseases, disproportionately impacting people in low socioeconomic communities (Cissé, 2019). There are direct and indirect ways in which climate change affects food- and water-borne diseases. Direct impacts refer to extreme climate or environmental events such as flooding or sea-level rise, which lead to increased water contamination due to the presence of fecal-oral pathogens in the environment. Indirect impacts are mostly climatic factors, such as temperature and humidity, that influence processes of pathogen replication and survival, as well as the rising frequency and relevance of related conflicts over agriculture, water resource management, and population displacements (Walker, 2018). Water- and food-borne diseases are linked to the ingestion of pathogens via contaminated water or food. The diseases are further connected as contaminated water can contaminate food and increase the risk of transmission.

Vector Borne Disease
A vector-borne disease is a "disease that results from an infection transmitted to humans and other animals by blood-feeding arthropods, such as mosquitoes, ticks, and fleas" (Vector-Borne Diseases, n.d.). Specifically, a vector is "an organism that transmits an infectious pathogen from an infected human or animal host to an uninfected human" (Rocklöv & Dubrow, 2020).

The global disease burden of vector-borne diseases is led by malaria, dengue, schistosomiasis, leishmaniasis, Chagas disease, and African trypanosomiasis, which together infect more than one billion people and kill over a million people every year (Campbell-Lendrum et al., 2015). In tandem with a global temperature increase from 1.5°C to 2°C, the risk and incidence of malaria and dengue fever, two of the most pertinent vector-borne diseases, are projected to increase and to shift geographical regions of interest (Cissé, 2019). Additionally, if global temperature increases by 2 to 3°C as expected, the population at risk for malaria would increase by as much as 3 to 5% (Rossati, 2016). A key factor in understanding and responding to vector-borne diseases is the modes and intricacies of transmission. These are the multifactorial pathways through which climate change and climate variability affect human health, including social, environmental, ecological, and economic factors, as each of them impacts the survival and growth of human and pathogen populations (Cissé, 2019). Vector-borne diseases have one of the highest disease burdens globally but disproportionately impact the global South. This is because "vector-borne diseases have wider socioeconomic impacts, increasing health inequities, and acting as a brake on socioeconomic development" (Campbell-Lendrum et al., 2015). These burdens and inequities are exacerbated by climate change, as the mortality rate from vector-borne diseases is almost 300 times greater in developing countries than in developed countries (Campbell-Lendrum et al., 2015). The burden is exponentially larger due to the increased frequency of vector-borne diseases in the tropical climates common to many developing countries and because of concurrent low levels of socioeconomic development and health service coverage in these areas. On an individual level, the burdens are greater in impoverished populations because individuals are subject to poorer environmental and social conditions, such as poor-quality housing and increased proximity to vector breeding sites, and lack access to preventative health interventions and disease treatments (Campbell-Lendrum et al., 2015). Climate change presents a threat to global health and to an already severe global vector-borne disease burden. This risk for vector-borne disease is increasing, and "put simply, vectors, which are ectotherms (that is, cold-blooded animals), do better in a warmer world" (Rocklöv & Dubrow, 2020). A warmer climate is more favorable to the survival and completion of vector life cycles
and, in the case of mosquitos, is even capable of speeding them up. This has enabled the extension of areas of disease distribution in direct correlation with increasing temperatures (Rossati, 2016). The extension of vector regions, and subsequently of infections, can be observed in changing tick behaviors. Tick-borne diseases have increased over past years because rising temperatures in cold regions have "accelerated the cycle of development, the production of eggs, and the density and distribution of the tick population" (Rossati, 2016). Due to climate change, ticks have been found in more regions than ever before and at higher altitudes, presenting a new and increased climate change burden. Precipitation is another important environmental influence on vector transmission and breeding. It mostly affects vectors that have aquatic developmental stages, such as mosquitos. Humidity, which is related to precipitation, creates a better environment for diseases transmitted by vectors without aquatic developmental stages, such as ticks or sandflies (Campbell-Lendrum et al., 2015). As many impacts of climate change - including increased precipitation, humidity, flooding, and expanding areas with warmer climates - become more frequent, so do vector-borne disease transmissions and infections, and the associated burdens grow and expand. This is in part because of the socioeconomic factors related to disease burden, which cannot be excluded when considering remediation methods or global health strategies. As the threat of increased infections in more places rises, an epidemiological approach to vector-borne diseases must include "attention to ecology and behavior of the host, the ecology and behavior of the carrier, and the level of immunity of the population" (Rossati, 2016), as well as the historical and social relationship of impacted communities with such diseases.
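To illustrate in code why "vectors do better in a warmer world" translates into expanding disease range, the sketch below uses a generic hump-shaped thermal-suitability curve. The quadratic functional form and the 15-35°C bounds are illustrative assumptions of ours, not parameters from the studies cited above; real thermal-response curves are fit per vector species.

```python
# Toy model of vector transmission suitability versus temperature.
# The quadratic (hump-shaped) response and the 15-35 degree C bounds are
# illustrative assumptions; they are not drawn from the cited literature.

def suitability(temp_c: float, t_min: float = 15.0, t_max: float = 35.0) -> float:
    """Relative transmission suitability, peaking midway between t_min and t_max."""
    if temp_c <= t_min or temp_c >= t_max:
        return 0.0
    half_range = (t_max - t_min) / 2.0
    return (temp_c - t_min) * (t_max - temp_c) / half_range ** 2

# A cool region warming from 12 C to 16 C crosses into the transmission window,
# which is how warming can expand the geographic range of vector-borne disease.
for t in (12.0, 16.0, 25.0, 34.0):
    print(f"{t:.0f} C -> suitability {suitability(t):.2f}")
```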

"Climate change poses a risk to the occupational health and safety of workers."

Occupational Health and Productivity Risks
Climate change poses a risk to the occupational health and safety of workers. As a result of rising temperatures, many workers working outdoors or in hot indoor conditions will be at an increased risk of suffering from heat-related disorders. Due to these heat-related ailments, workers are more prone to lapses in judgment and reduced alertness, increasing their chances of suffering a workplace injury (EPA, 2016; Tawatsupa, 2016). Other effects of climate change on the health of outdoor workers could include poorer
air quality and diseases transmitted by ticks and mosquitoes (EPA, 2016). Depending on the type of job they hold, workers might also be more frequently exposed to occupational hazards. For example, as climate change causes warmer and drier conditions in forests, the intensity and frequency of wildfires will increase. Studies have correlated climate change with increasing instances of occupational health and safety hazards - such as burns and heat exhaustion - among firefighters (Adetona et al., 2016; Britton et al., 2013).

"These vaccines all share the ability to induce a natural immune response against the viral pathogen, which then grants immunity to the vaccinated individual."

Image 5: Mosquitos are a common vector for malaria across the world. Image Source: Wikimedia Commons

Looking beyond health effects, climate change will also negatively affect the economy as it decreases productivity and the supply of workers. Rising temperatures can cause workers to feel more fatigued and affect the supply of resources, such as crops, that can be cultivated. Additionally, if more workers suffer from workplace injuries caused by climate change, this will decrease the number of available workers and result in lost work capacity (Ebi et al., 2017). More indirectly, rising temperatures and heavier rainfalls will also lead to increased intergroup conflict in the workplace (National Bureau of Economic Research, 2015). This could also affect productivity, as workers become preoccupied with resolving conflict rather than focusing on their work. While discussing occupational health and productivity risks, we must also consider that workers are already a vulnerable population. Due to factors such as socioeconomic status, race, and immigration status, many who hold health-endangering jobs do so out of necessity (Levi & Patz, 2015). By evaluating the effects of climate change on this area, we can see how it affects those who are least fortunate.

Conclusion
Climate change has major implications for human health and will result in the exacerbation of health inequities across the globe. Events that will impact health include rising sea levels, ice melts, extreme weather events, airway diseases and the increased presence of allergens due to pollution, food- and water-borne diseases, vector-borne diseases, and occupational risks. These incidents will disproportionately affect vulnerable populations such as racial and ethnic minorities, those in lower socioeconomic classes, and women and children. Addressing climate change with a sense of urgency is key to adequately slowing the progression of such events and minimizing drastic health repercussions, particularly in vulnerable populations.

References
Ahmed, S. A., Diffenbaugh, N. S., & Hertel, T. W. (2009). Climate volatility deepens poverty vulnerability in developing countries. Environmental Research Letters, 4(3), 034004. https://doi.org/10.1088/1748-9326/4/3/034004
Althor, G., Watson, J. E. M., & Fuller, R. A. (2016). Global mismatch between greenhouse gas emissions and the burden of climate change.
Scientific Reports, 6(1), 20281. https://doi.org/10.1038/srep20281
Ambrey, C., Byrne, J., Matthews, T., Davison, A., Portanger, C., & Lo, A. (2017). Cultivating climate justice: Green infrastructure and suburban disadvantage in Australia. Applied Geography, 89, 52–60. https://doi.org/10.1016/j.apgeog.2017.10.002
Banwell, N., Rutherford, S., Mackey, B., & Chu, C. (2018). Towards Improved Linkage of Disaster Risk Reduction and Climate Change Adaptation in Health: A Review. International Journal of Environmental Research and Public Health, 15(4), 793. https://doi.org/10.3390/ijerph15040793
Barbier, E. B., Hacker, S. D., Kennedy, C., Koch, E. W., Stier, A. C., & Silliman, B. R. (2011). The value of estuarine and coastal ecosystem services. Ecological Monographs, 81(2), 169–193. https://doi.org/10.1890/10-1510.1
Bartlett, S., Dodman, D., Hardoy, J., Satterthwaite, D., & Tacoli, C. (2009). Social Aspects of Climate Change in Urban Areas in Low- and Middle-Income Nations.
Braun, B., & Aßheuer, T. (2011). Floods in megacity environments: Vulnerability and coping strategies of slum dwellers in Dhaka/Bangladesh. Natural Hazards, 58(2), 771–787. https://doi.org/10.1007/s11069-011-9752-5
Brinkman, H.-J., de Pee, S., Sanogo, I., Subran, L., & Bloem, M. W. (2010). High Food Prices and the Global Financial Crisis Have Reduced Access to Nutritious Food and Worsened Nutritional Status and Health. The Journal of Nutrition, 140(1), 153S-161S. https://doi.org/10.3945/jn.109.110767
Byravan, S., & Rajan, S. C. (2010). The Ethical Implications of Sea-Level Rise Due to Climate Change. Ethics & International Affairs, 24(3), 239–260. https://doi.org/10.1111/j.1747-7093.2010.00266.x
Campbell-Lendrum, D., & Corvalán, C. (2007). Climate Change and Developing-Country Cities: Implications For Environmental Health and Equity. Journal of Urban Health, 84(S1), 109–117. https://doi.org/10.1007/s11524-007-9170-x
Campbell-Lendrum, D., Manga, L., Bagayoko, M., & Sommerfeld, J. (2015). Climate change and vector-borne diseases: What are the implications
for public health research and policy? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1665), 20130552. https://doi.org/10.1098/rstb.2013.0552
Chakraborty, J., Collins, T. W., & Grineski, S. E. (2019). Exploring the Environmental Justice Implications of Hurricane Harvey Flooding in Greater Houston, Texas. American Journal of Public Health, 109(2), 244–250. https://doi.org/10.2105/AJPH.2018.304846
Cissé, G. (2019). Food-borne and water-borne diseases under climate change in low- and middle-income countries: Further efforts needed for reducing environmental health exposure risks. Acta Tropica, 194, 181–188. https://doi.org/10.1016/j.actatropica.2019.03.012
Climate Change and Social Vulnerability in the United States: A Focus on Six Impacts. (n.d.). 101.
Cochran, P. L., & Geller, A. L. (2002). The Melting Ice Cellar. American Journal of Public Health, 92(9), 1404–1409.
Cohen, M. J., & Garrett, J. L. (2010). The food price crisis and urban food (in)security. Environment and Urbanization, 22(2), 467–482. https://doi.org/10.1177/0956247810380375
Costello, A., Abbas, M., Allen, A., Ball, S., Bell, S., Bellamy, R., Friel, S., Groce, N., Johnson, A., Kett, M., Lee, M., Levy, C., Maslin, M., McCoy, D., McGuire, B., Montgomery, H., Napier, D., Pagel, C., Patel, J., … Patterson, C. (2009). Managing the health effects of climate change. The Lancet, 373(9676), 1693–1733. https://doi.org/10.1016/S0140-6736(09)60935-1
Cushing, L., Faust, J., August, L. M., Cendak, R., Wieland, W., & Alexeeff, G. (2015). Racial/Ethnic Disparities in Cumulative Environmental Health Impacts in California: Evidence From a Statewide Environmental Justice Screening Tool (CalEnviroScreen 1.1). American Journal of Public Health, 105(11), 2341–2348. https://doi.org/10.2105/AJPH.2015.302643
Diffenbaugh, N. S., & Burke, M. (2019). Global warming has increased global economic inequality. Proceedings of the National Academy of Sciences of the United States of America, 116(20), 9808–9813. https://doi.org/10.1073/pnas.1816020116
Dokken, D. (n.d.). 5—Coastal Systems and Low-Lying Areas. 49.
Du, W., FitzGerald, G. J., Clark, M., & Hou, X.-Y. (2010). Health Impacts of Floods. Prehospital and Disaster Medicine, 25(3), 265–272. https://doi.org/10.1017/S1049023X00008141
Ferguson, B., Fisher, K., & Golden, J. (2012). Reducing Urban Heat Islands: Compendium of Strategies—Cool Pavements. U.S. Environmental Protection Agency. https://www.epa.gov/sites/default/files/2017-05/documents/reducing_urban_heat_islands_ch_5.pdf
Ferring, D., & Hausermann, H. (n.d.). The Political Ecology of Landscape Change, Malaria, and Cumulative Vulnerability in Central Ghana's Gold Mining Country. Retrieved October 14, 2021, from https://www.tandfonline.com/doi/full/10.1080/24694452.2018.1535885
Fleming, L. E., Broad, K., Clement, A., Dewailly, E., Elmir, S., Knap, A., Pomponi, S. A., Smith, S., Gabriele, H. S., & Walsh, P. (2006). Oceans and human health: Emerging public health risks in the marine environment. Marine Pollution Bulletin, 53(10–12), 545–560. https://doi.org/10.1016/j.marpolbul.2006.08.012
Friel, S., Hancock, T., Kjellstrom, T., McGranahan, G., Monge, P., & Roy, J. (2011). Urban Health Inequities and the Added Pressure of Climate Change: An Action-Oriented Research Agenda. Journal of Urban Health, 88(5), 886–895. https://doi.org/10.1007/s11524-011-9607-0
Gibson, A. (2019). Climate Change for Individuals Experiencing Homelessness: Recommendations for Improving Policy, Research, and Services. Environmental Justice, 12(4), 159–163. https://doi.org/10.1089/env.2018.0032
Goldberg, D. N., Gourmelen, N., Kimura, S., Millan, R., & Snow, K. (2019). How Accurately Should We Model Ice Shelf Melt Rates? Geophysical Research Letters, 46(1), 189–199. https://doi.org/10.1029/2018GL080383
Gornall, J., Betts, R., Burke, E., Clark, R., Camp, J., Willett, K., & Wiltshire, A. (2010). Implications of climate change for agricultural productivity in the early twenty-first century. Philosophical Transactions of the Royal Society B: Biological Sciences, 365(1554), 2973–2989. https://doi.org/10.1098/rstb.2010.0158
Gronlund, C. J. (2014). Racial and Socioeconomic Disparities in Heat-Related Health Effects and Their Mechanisms: A Review. Current Epidemiology Reports, 1(3), 165–173. https://doi.org/10.1007/s40471-014-0014-4
Hidalgo, J., & Baez, A. A. (2019). Natural Disasters. Critical Care Clinics, 35(4), 591–607. https://doi.org/10.1016/j.ccc.2019.05.001
Kirk, M. D., Pires, S. M., Black, R. E., Caipo, M., Crump, J. A., Devleesschauwer, B., Döpfer, D., Fazil, A., Fischer-Walker, C. L., Hald, T., Hall, A. J., Keddy, K. H., Lake, R. J., Lanata, C. F., Torgerson, P. R., Havelaar, A. H., & Angulo, F. J. (2015). World Health Organization Estimates of the Global and Regional Disease Burden of 22 Foodborne Bacterial, Protozoal, and Viral Diseases, 2010: A Data Synthesis. PLoS Medicine, 12(12), e1001921. https://doi.org/10.1371/journal.pmed.1001921
Kovats, S., & Akhtar, R. (2008). Climate, climate change and human health in Asian cities. Environment and Urbanization, 20(1), 165–175. https://doi.org/10.1177/0956247808089154
Lamb, W. F., Wiedmann, T., Pongratz, J., Andrew, R., Crippa, M., Olivier, J. G. J., Wiedenhofer, D., Mattioli, G., Khourdajie, A. A., House, J., Pachauri, S., Figueroa, M., Saheb, Y., Slade, R., Hubacek, K., Sun, L., Ribeiro, S. K., Khennas, S., Can, S. de la R. du, … Minx, J. (2021). A review of trends and drivers of greenhouse gas emissions by sector from 1990 to 2018. Environmental Research Letters, 16(7), 073005. https://doi.org/10.1088/1748-9326/abee4e
Levitus, S., Antonov, J. I., Boyer, T. P., Baranova, O. K., Garcia, H. E., Locarnini, R. A., Mishonov, A. V., Reagan, J. R., Seidov, D., Yarosh, E. S., & Zweng, M. M. (2012). World ocean heat content and thermosteric sea level change (0–2000 m), 1955–2010. Geophysical Research Letters, 39(10). https://doi.org/10.1029/2012GL051106
Lynas, M., Houlton, B. Z., & Perry, S. (2021). Greater than 99% consensus on human caused climate change in the peer-reviewed scientific literature. Environmental Research Letters, 16(11), 114005. https://doi.org/10.1088/1748-9326/ac2966
Lynch, J. (2019). Availability of disaggregated greenhouse gas emissions from beef cattle production: A systematic review. Environmental Impact Assessment Review, 76, 69–78. https://doi.org/10.1016/j.eiar.2019.02.003
Mackie, S., Smith, I. J., Ridley, J. K., Stevens, D. P., & Langhorne, P. J. (2020). Climate Response to Increasing Antarctic Iceberg and Ice Shelf Melt.
Journal of Climate, 33(20), 8917–8938. https:// doi.org/10.1175/JCLI-D-19-0881.1 MIMURA, N. (2013). Sea-level rise caused by climate change and its implications for society. Proceedings of the Japan Academy. Series B, Physical and Biological Sciences, 89(7), 281–301. https://doi.org/10.2183/pjab.89.281

Combustion is the Leading Environmental Threat to Global Pediatric Health and Equity: Solutions Exist. International Journal of Environmental Research and Public Health, 15(1), 16. https:// doi.org/10.3390/ijerph15010016

Moorman, R., Morrison, A. K., & Hogg, A. M. (2020). Thermal Responses to Antarctic Ice Shelf Melt in an Eddy-Rich Global Ocean–Sea Ice Model. Journal of Climate, 33(15), 6599–6620. https://doi.org/10.1175/JCLI-D-19-0846.1

Rignot, E., Xu, Y., Menemenlis, D., Mouginot, J., Scheuchl, B., Li, X., Morlighem, M., Seroussi, H., van den Broeke, M., Fenty, I., Cai, C., An, L., & de Fleurian, B. (2016). Modeling of oceaninduced ice melt rates of five west Greenland glaciers over the past two decades. Geophysical Research Letters, 43(12), 6374–6382. https://doi. org/10.1002/2016GL068784

Morrissey, S. A., & Reser, J. P. (2007). Natural disasters, climate change and mental health considerations for rural Australia. Australian Journal of Rural Health, 15(2), 120–125. https:// doi.org/10.1111/j.1440-1584.2007.00865.x

Rocklöv, J., & Dubrow, R. (2020). Climate change: An enduring challenge for vector-borne disease prevention and control. Nature Immunology, 21(5), 479–483. https://doi.org/10.1038/s41590020-0648-y

Moser, C. O. N. (Ed.). (2007). Reducing global poverty: The case for asset accumulation. Brookings Institution Press.

Rossati, A. (2016). Global Warming and Its Health Impact. The International Journal of Occupational and Environmental Medicine, 8(1), 7–20. https://doi.org/10.15171/ijoem.2017.963

Nicholls, R. J., Marinova, N., Lowe, J. A., Brown, S., Vellinga, P., de Gusmão, D., Hinkel, J., & Tol, R. S. J. (2011). Sea-level rise and its possible impacts given a ‘beyond 4°C world’ in the twenty-first century. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369(1934), 161–181. https://doi.org/10.1098/rsta.2010.0291 Nielsen, K.S., Nicholas, K.A., Creutzig, F. et al. The role of high-socioeconomic-status people in locking in or rapidly reducing energy-driven greenhouse gas emissions. Nat Energy 6, 1011– 1016 (2021). https://doi.org/10.1038/s41560021-00900-y Nkedianye, D., de Leeuw, J., Ogutu, J. O., Said, M. Y., Saidimu, T. L., Kifugo, S. C., Kaelo, D. S., & Reid, R. S. (2011). Mobility and livestock mortality in communally used pastoral areas: The impact of the 2005-2006 drought on livestock mortality in Maasailand. Pastoralism: Research, Policy and Practice, 1(1), 17. https://doi.org/10.1186/20417136-1-17 Parkinson, A. J., & Evengård, B. (2009). Climate change, its impact on human health in the Arctic and the public health response to threats of emerging infectious diseases. Global Health Action, 2(1), 2075. https://doi.org/10.3402/gha. v2i0.2075 Perera, F. (2018). Pollution from Fossil-Fuel

FALL 2021

Rudolph, L., & Gould, S. (2015). Climate Change and Health Inequities: A Framework for Action. Annals of Global Health, 81(3), 432. https://doi. org/10.1016/j.aogh.2015.06.003 S. Nazrul Islam, & Winkel, J. (2017). Climate Change and Social Inequality. United Nations Department of Economic & Social Affairs. https:// www.un.org/development/desa/publications/ working-paper/wp152 Sauerborn, R., & Ebi, K. (2012). Climate change and natural disasters – integrating science and practice to protect health. Global Health Action, 5(1), 19295. https://doi.org/10.3402/gha. v5i0.19295 SeaLevelRise.pdf. (n.d.). Retrieved November 29, 2021, from https://climatehealthconnect.org/wpcontent/uploads/2016/09/SeaLevelRise.pdf Stella, E., Mari, L., Gabrieli, J., Barbante, C., & Bertuzzo, E. (2020). Permafrost dynamics and the risk of anthrax transmission: A modelling study. Scientific Reports, 10(1), 16460. https:// doi.org/10.1038/s41598-020-72440-6 Taherkhani, M., Vitousek, S., Barnard, P. L., Frazer, N., Anderson, T. R., & Fletcher, C. H. (2020). Sea-level rise exponentially increases coastal flood frequency. Scientific Reports, 10(1), 6466. https://doi.org/10.1038/s41598-020-

96


62188-4 The Lancet Planetary Health. (2018). Environmental racism: Time to tackle social injustice. The Lancet Planetary Health, 2(11), e462. https://doi.org/10.1016/S25425196(18)30219-5

digitallibrary.un.org/record/432312?ln=en Yarzábal, L. A., Salazar, L. M. B., & Batista-García, R. A. (2021). Climate change, melting cryosphere and frozen pathogens: Should we worry…? Environmental Sustainability, 4(3), 489–501. https://doi.org/10.1007/s42398-021-00184-8

Thomas, K., Hardy, R. D., Lazrus, H., Mendez, M., Orlove, B., Rivera-Collazo, I., Roberts, J. T., Rockman, M., Warner, B. P., & Winthrop, R. (2019). Explaining differential vulnerability to climate change: A social science review. Wiley Interdisciplinary Reviews. Climate Change, 10(2), e565. https://doi.org/10.1002/wcc.565

Yue, X.-L., & Gao, Q.-X. (2018). Contributions of natural systems and human activity to greenhouse gas emissions. Advances in Climate Change Research, 9(4), 243–252. https://doi. org/10.1016/j.accre.2018.12.003

United Nations Development Programme. (2007). Deglaciation in the Andean Region— Fighting climate change: Human solidarity in a divided world. In United Nations Development Programme, Human Development Report 2007/2008 (pp. 1–18). Palgrave Macmillan UK. https://doi.org/10.1057/9780230598508_1 Vector-borne diseases. (n.d.). Retrieved October 17, 2021, from https://www.who.int/news-room/ fact-sheets/detail/vector-borne-diseases Walker, J. (2018a). The influence of climate change on waterborne disease and Legionella: A review. Perspectives in Public Health, 138(5), 282–286. https://doi.org/10.1177/1757913918791198 Walker, J. (2018b). The influence of climate change on waterborne disease and Legionella: A review. Perspectives in Public Health, 138(5), 282–286. https://doi.org/10.1177/1757913918791198 Wang, M., Su, J., Landy, J., Leppäranta, M., & Guan, L. (2020). A New Algorithm for Sea Ice Melt Pond Fraction Estimation From High-Resolution Optical Satellite Imagery. Journal of Geophysical Research: Oceans, 125(10), e2019JC015716. https://doi.org/10.1029/2019JC015716 Ward, R. D., Friess, D. A., Day, R. H., & MacKenzie, R. A. (2016). Impacts of climate change on mangrove ecosystems: A region by region overview. Ecosystem Health and Sustainability, 2(4), e01211. https://doi.org/10.1002/ehs2.1211 Webster, P. J. (2008). Myanmar’s deadly daffodil. Nature Geoscience, 1(8), 488–490. https://doi. org/10.1038/ngeo257 World Resources Institute. (1997). Aridity zones and dryland populations: An assessment of population levels in the world’s drylands. United Nations Development Programme. https://

97

DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE


History and Current State of Virology

STAFF WRITERS: LORD CHARITE IGIRIMBABAZI '24, CAMERON SABET '24, VAISHNAVI KATRAGADDA '24, MATTHEW LUTCHKO '23, DECLAN O'SCANNLAIN '24, JUSTIN CHONG '24, CAMILLA LEE '22, CAROLINE CONWAY '24, SOPHIE (SOYEON) CHO '24, FRANK CARR '22, JULIAN FRANCO JR. '24, DANIELA ARMELLA '24, MIRANDA YU '24, BROOKLYN SCHROEDER '22, ABIGAIL FISCHER '23, VALENTINA FERNANDEZ '24, CALLIE MOODY '24, ANYOKO SEWAVI '23, LAUREN FERRIDGE '23, VAANI GUPTA '24

TEAM LEADS: NISHI JAIN '21, CAROLINA GUERRERO '23

Cover Image: COVID-19 is potentially the most significant viral disease of the 21st century. Image Source: Pixabay

Introduction

Virus Overview

As the world becomes increasingly connected, the threat of infectious diseases, especially those caused by viruses, becomes more prevalent. Notably, the COVID-19 pandemic has shed light on the importance of studying both how viruses work to cause disease and how researchers can devise therapeutic solutions. However, viruses are not only agents of disease; they can also be used as therapeutics themselves, as will be explored later in this paper. As a whole, devoting research toward viral mechanisms of action and understanding past pandemics is critical to future prevention.

Viruses are microscopic parasites that cannot survive or reproduce without a host cell. They are far smaller than bacteria: the measles virus is around ⅛ the size of an Escherichia coli (E. coli) bacterium, and Dr. David R. Wessner (2010) of Davidson College notes that the polio virus (~30 nm across) is around 10,000 times smaller than a single grain of salt, demonstrating how incredibly small viruses can be.

Experts debate whether viruses are alive. On one hand, viruses possess nucleic acids, deoxyribonucleic acid (typically double-stranded) or ribonucleic acid (typically single-stranded), just like living cells. Both deoxyribonucleic acid (DNA) and ribonucleic acid (RNA) are found in living organisms, like humans. On the other hand, viruses cannot read this information on their own and require a host organism to replicate. Since viruses require a host, these parasitic biochemical machines might appear to be just that: machines. Additionally, viral genomes are incredibly small. They encode only the proteins required for entry and replication within host cells, plus a capsid, the outer protein shell protecting the virus (Villarreal, 2008). By comparison, the human genome comprises about 3.2 billion nucleotides and contains all of the information that the body needs to sustain itself (Brown, 2002).

Figure 1: The structure of a virion. Image Source: Wikimedia Commons

Mechanisms of Entry and Replication

Viruses may enter the body via numerous pathways, including respiratory passages or open wounds. Some viruses even stay dormant in an insect's saliva and infect immediately after the insect bites an animal. Then, like puzzle pieces, viruses bind to the host cell's surface receptors (proteins unique to a certain cell). After this, there are a few different ways the virus can enter, often determined by whether the virus has an envelope. Viral envelopes consist of a phospholipid bilayer and membrane proteins derived from the host cell, and they serve to promote fusion with the host cell. For instance, HIV has an envelope, so it can simply fuse with the cell membrane and gain entry. The influenza virus also has an envelope, and the cell engulfs it. However, when a given virus is nonenveloped, like the polio virus, it can cut through by creating a porous channel in the exposed membrane. Once inside, viruses release their genetic material and disrupt cellular processes to produce viral proteins. This often tampers with the host cell's ability to produce its own proteins or RNA, frequently leading to host cell death. Meanwhile, the virus makes cellular conditions favorable for further infection and reproduction. For example, it has been estimated that a single sneeze from someone infected with the novel coronavirus contains ~20,000 droplets of viral particles. In many cases, merely breathing in such particles is enough to prompt further infection, demonstrating viruses' ability not only to acclimate to different hosts, but also to capitalize upon certain behaviors to continue reproducing (Villarreal, 2008).
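The scale comparisons above are easy to sanity-check with quick arithmetic. A minimal Python sketch follows; the salt-grain edge length and the coronavirus genome size are outside assumptions for illustration, not figures from this article:

    # Back-of-the-envelope scale checks (approximate values; the ~0.3 mm salt
    # grain and ~30 kb coronavirus genome are assumed typical figures).
    polio_virus_nm = 30                 # polio virion diameter, ~30 nm
    salt_grain_nm = 300_000             # ~0.3 mm grain of salt, in nanometers
    print(salt_grain_nm / polio_virus_nm)           # -> 10000.0, i.e. ~10,000x smaller

    human_genome_nt = 3_200_000_000     # ~3.2 billion nucleotides (Brown, 2002)
    coronavirus_genome_nt = 30_000      # assumed ~30 kb RNA genome
    print(human_genome_nt / coronavirus_genome_nt)  # -> ~106,667, i.e. the human genome is ~100,000x larger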

Mechanisms of Treatment and Prevention of Viral Infections

Existing Antiviral Drugs and How They Work

Some viral infections can be dangerous, so devising treatments is of great significance to the medical research community. Beyond simply treating the symptoms of infection, there are two broad classes of drugs used to combat viruses, differing in whether they are immunologically derived or chemically synthesized. First, there are treatments involving the infusion of monoclonal antibodies (i.e., many antibodies with the same specificity and structure). These often bind to free virions, or virus particles that have yet to infect a host cell, preventing them from entering cells and targeting them for destruction by the immune system. Before these treatments were developed, animal plasma and antibodies were used similarly, but the foreign nature of those antibodies meant an inevitable immune response against the treatment itself. So began the quest to develop "humanized" antibodies that would not elicit an immune response (Riechmann et al., 1988). The first such antibodies were produced in 1973 by creating hybrid cells of cancerous antibody-producing cells from mice and human immune cells (Schwaber & Cohen, 1973); these cells are biologically immortal and, once they develop their antibody specificity, can produce seemingly endless amounts of the proteins. Since this breakthrough, new treatments have applied this technology to cancers, autoimmune disorders, and viral infections.

One of the most recent and well-documented uses of this kind of therapy is REGN-COV2 (also called "Regeneron," after the company that developed it), which consists of a solution of two different antibodies, each with specificity for a different part of the SARS-CoV-2 spike protein (Weinreich et al., 2020). Other monoclonal antibodies used to treat viral infections include Palivizumab and, more recently, Suptavumab (The Impact-RSV Study Group, 1998; Simões et al., 2020); both are used to treat and prevent infections caused by Respiratory Syncytial Virus (RSV) by binding to its F-protein, a viral membrane protein used for fusion with target host cells. Bavituximab targets the membrane phospholipid phosphatidylserine, which is present on the external face of human cell membranes only if the cell is cancerous and/or infected by Hepatitis C. While this antibody does not directly bind to free Hepatitis C virions, it causes the phagocytosis and immunological destruction of virus-infected cells (Ahn & Flamm, 2011).

Alternatively, antiviral medicines are often taken as pills or oral solutions and are generally small molecules developed in a lab setting. These medicines have diverse mechanisms, depending on both the drug and its target. Like the monoclonal antibodies discussed above, many antiviral medicines are specific to a single virus or a family of viruses. A few drugs, however, are capable of targeting multiple types of viruses by attacking general structures; these are called broad-spectrum antivirals (Vardanyan & Hruby, 2016). One example of an antiviral drug class is the influenza neuraminidase inhibitors, such as Tamiflu, which block the neuraminidase enzyme and thereby prevent the release of progeny virions. Another class of drugs used to combat the influenza virus is the adamantanes, which inhibit an ion channel in the Influenza A viral membrane that is necessary for the release of the virus's genetic payload after it has entered a cell (Ison, 2011). An example of a broad-spectrum antiviral is Remdesivir. Initially developed to treat Hepatitis C infections, it has proven to be an effective treatment for many viruses, including the Ebola virus, coronaviruses (including SARS-CoV-2), paramyxoviruses (such as the measles and mumps viruses), and RSV. Notably, it ended up not being effective in treating Hepatitis C. It acts by inhibiting the viral RNA polymerase, a structure shared by many viruses, preventing them from reproducing their genomes (Aleem & Kothadia, 2021).

These examples showcase the overall trend in antiviral medicine: therapies are designed to target a specific part of a virus (and sometimes that part is shared by many viruses, resulting in broad-spectrum drugs). Thus, as further research is done on viruses and new viruses are discovered, new therapies, be they antibodies or small molecules, can be developed, and their usefulness against viruses beyond their initial targets can be determined.


Existing Vaccines and How They Work

Unlike antiviral therapies that treat viral infections after they occur, vaccines work by preventing infections from taking place. Vaccines can prevent a range of the most damaging viral infections, including smallpox, influenza, measles, Hepatitis B, and HPV (Graham, 2013). The first vaccine developed was Edward Jenner's 1796 smallpox vaccine. His vaccine and later ones have played a crucial role in containing and preventing outbreaks. For instance, before the measles vaccine was licensed in 1963, almost all children were infected by the age of 15; measles infections have dropped over 98% since then (Ravanfar et al., 2009). These vaccines all share the ability to induce a natural immune response against the viral pathogen, which then grants immunity to the vaccinated individual (Ellebedy & Ahmed, 2016).

Figure 2: A digital representation of a generic influenza virus. Image Source: Flickr

The earliest antiviral vaccines were derived from live animals or eggs, while recent ones have been created through cell culturing and other advanced molecular biology techniques (Graham, 2013). After identifying the genetic sequence of a virus and the structure of its surface proteins, researchers develop a vaccine that mimics it. Historically, vaccines have come in five main types. Live viral vaccines contain an attenuated (weakened) version of a given virus, while inactivated whole viral vaccines are treated with heat or UV light to damage the virus. Subunit vaccines (containing the surface glycoproteins, the proteins at the surface of viruses that help them enter cells), recombinant viral proteins (manufactured to contain the main surface glycoproteins), and virus-like particles (assembled from viral structural proteins) all contain just parts of the virus. Each method has its advantages and disadvantages. For example, inactivated whole viral vaccines are among the most effective but involve safety concerns if not completely inactivated (Ellebedy & Ahmed, 2016).

Though vaccines are one of the most efficient ways to prevent viral infections, there remain viruses for which no vaccines exist. In 2019, COVID-19 was among them. The process of developing and implementing a vaccine typically takes place over many years, involving vaccine design, testing, and manufacturing on a large scale (Zhou et al., 2020). This lengthy process often makes it difficult for vaccines to be created in time to stop an epidemic. In the case of COVID-19, especially because of the devastating effects of the pandemic, scientists were successful in speeding up the steps to generate vaccines quickly (Zhou et al., 2020). Because of the significant benefits conferred by antiviral vaccines, researchers continue to work on new vaccines while improving existing ones.

Viruses, Human Behavior, and the Environment

How do diverse zoonotic viruses develop, and how do they jump to humans?

Zoonotic viruses are viruses that can jump from animals to humans via a number of routes, including direct and indirect contact (e.g., bodily fluids and contact with an infected animal's habitat), vectors (e.g., ticks), food, and water (Zoonoses, 2020). Many recent pandemics have originated in animals, and zoonotic diseases have a high chance of causing the next pandemics due to the capacity of zoonoses, or animal-borne diseases, for interspecies transmission (Zoonoses, 2020). Of all emerging infectious diseases (EIDs), 60.3% have been caused by zoonoses (Zoonoses, 2020). The origins of EIDs are correlated with socioeconomic, environmental, and ecological factors that predict where they are likely to originate, which explains why regions with low disease-reporting efforts and resources face a greater risk of zoonotic and vector-borne viruses (Zoonoses, 2020). From an analysis of viral discovery data, Carroll et al. (2018) estimate that about 1.67 million undiscovered viruses with zoonotic origins are present in bird and mammal hosts. Additionally, Carroll et al. (2018) expect that 631,000 to 827,000 of these not-yet-discovered viruses have the potential for zoonotic transmission, based on analysis of viral-host relationships, the history of zoonotic viruses, and patterns of viral emergence.

Figure 3: Wuhan, China, is the starting point of the COVID-19 pandemic. Image Source: Wikimedia Commons

The COVID-19 pandemic has stressed the urgency of better understanding zoonotic spillover, the process by which viruses jump from animals to humans. There are many obstacles relating to ecology, virology, evolution, and human immunity that prevent zoonotic spillover, making the mechanism an enigma to scientists. For instance, the virus must unlock receptors on the cell surfaces of the target hosts and must replicate itself without alerting the host's immune system (Singer et al., 2021). Viruses that manage to make the jump are rare, and when they succeed, most spillover events do not trigger large-scale outbreaks (Singer et al., 2021). To make the jump, the virus must somehow be equipped to infect the new host before even coming into contact with said host. Coronavirus research indicates that the current host pressures the virus into mutations that allow it to infect other hosts it has not yet contacted (Singer et al., 2021). In the recent SARS-CoV-1 and SARS-CoV-2 outbreaks, spike proteins unlock ACE2 receptors (which are expressed in various human cells) of new host cells, explaining how bat coronaviruses can infect human cells (Petrosillo et al., 2020).

Plowright et al. (2017) propose a conceptual and quantitative framework integrating data to address gaps in research on the barriers and determinants of zoonotic spillover. This framework can be grouped into three functional phases that describe the major routes of transmission. In phase one, pathogen pressure (the amount of pathogen available to the human host at any point in time and space) depends on interactions among reservoir host distribution, pathogen prevalence, and release from the reservoir host, followed by pathogen survival, development, and dissemination after leaving the reservoir host. The second phase is determined by how human and vector behavior affect pathogen exposure. In the third phase, the probability and severity of infection are determined by the genetic, physiological, and immunological attributes of the human host, as well as the dose and route of exposure to the pathogen (Plowright et al., 2017).
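To make the three-phase structure concrete, here is a deliberately simplified toy calculation, not the authors' actual model: if each phase is summarized as a single probability and the phases are treated as independent, overall spillover risk is their product. All names and numbers below are illustrative assumptions.

    # Toy illustration of a phased spillover model (invented values).
    # Phase 1: pathogen pressure reaching humans; phase 2: human/vector
    # exposure given that pressure; phase 3: infection given exposure.
    def spillover_probability(p_pressure, p_exposure, p_infection):
        # Assumes independence between phases, which real systems violate.
        return p_pressure * p_exposure * p_infection

    # Modest pressure, rare exposure, low per-exposure infectivity:
    print(spillover_probability(0.10, 0.01, 0.05))  # -> ~5e-05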

Epidemics and Pandemics

Overview of Epidemics and Pandemics and How They Start

Before COVID-19, a public understanding of pandemics and epidemics existed but was general and incomplete. Most knew them to be the spread of a devastating natural phenomenon that could wipe out entire populations due to a lack of immunity. Influenza, cholera, and malaria, for example, struck us as examples of epidemics and pandemics that took place decades ago but would never strike us again. But today, as we find ourselves amidst one of the worst pandemics that mankind has ever seen, we have been forced to understand how epidemics begin and work, and consequently, how they can escalate into a pandemic.

Epidemics occur at a local level; they are the widespread infection of a disease within a particular community at a particular time, and they continue to spread as the disease travels into a larger geographical scope (Miller, 2014). For example, think of Wuhan, China back in December 2019. SARS-CoV-2, the virus that causes COVID-19, spread in the city of Wuhan and its communities at a rapid rate, culminating in the infection of thousands of people within a matter of weeks. An epidemic consists of a local outbreak, does not have a predictable occurrence rate, and spreads to larger geographical areas (Miller, 2014). Pandemics are a consequence of epidemics; they are the global spread of a disease that begins locally and, in turn, affects a very large percentage of the global population (What is a Pandemic?, 2021). Taking COVID-19 as a prime example, SARS-CoV-2 began locally in Wuhan, China and spread across Asia and Europe as a result of travel and mobilization, eventually reaching North and South America. The spread of an epidemic depends on 1) how easily the disease is transmitted from one individual to another and 2) the movement of those who carry it (What is a Pandemic?, 2021). For COVID-19, travel via airplane made transmission of the virus exponentially faster; within a matter of hours, the virus could reach another continent and infect hundreds of people along the way. Pandemics are further characterized by 1) the age groups the disease most readily targets, 2) whether illness is self-limited and infected individuals recover, 3) fatality and death counts, and 4) seasonality, as with influenza. The 1918 Spanish influenza and the bubonic plague of the 14th century serve as examples of worldwide pandemics in human history (What is a Pandemic?, 2021).

COVID-19/SARS/MERS

Coronaviruses (CoV) have been around for the past few decades, often causing respiratory tract infections; the viruses were first discovered in the 1960s. Especially in the last two decades, these viruses have grown in threat and prevalence, with the Severe Acute Respiratory Syndrome (SARS-CoV) epidemic in 2002-2003 and the Middle East Respiratory Syndrome (MERS-CoV) epidemic in 2012. In 2019, a novel coronavirus emerged in Wuhan, China; it was named SARS-CoV-2 due to its similarity to SARS, and its clinical presentation is known as COVID-19. These RNA viruses have some of the largest genomes and are often passed to humans through animal intermediates. Once in humans, the disease spreads rapidly through contact and through airborne droplets from sneezing, coughing, or breathing.

CoV belong to the order Nidovirales and the subfamily Coronavirinae, with a few major characteristics: large genomes, high replication rates, unique enzymatic activity, and extensive ribosomal frameshifting (a process used by viruses to encode many proteins on one piece of mRNA) due to a variety of nonstructural genes encoded in the RNA (Umakanthan et al., 2019). CoV are typically enveloped and contain positive single-stranded RNAs of about 8.4-12 kDa in size. The 5' end of the genome contains the majority of the information necessary for viral replication, while the 3' end encodes five important structural proteins: the spike protein, membrane protein, nucleocapsid protein, envelope protein, and haemagglutinin-esterase protein. Each of these proteins plays an integral role in the CoV life cycle. The spike protein is necessary for attaching and fusing the envelope to the host cell, the membrane protein defines the envelope shape, the nucleocapsid contains RNA complexes important for RNA transcription and assembly, the envelope protein constructs the envelope, and the haemagglutinin-esterase is important for receptor binding on host cells (Umakanthan et al., 2019). The spike glycoproteins present on CoV are especially essential in promoting entry of the virus into host target cells (Samudrala et al., 2020).

While SARS, MERS, and SARS-CoV-2 all fall under the category of CoV, each virus presents with slightly different characteristics. SARS was first recognized in Guangdong, China, eventually spreading across 30 countries and affecting about 8,000 people with a 9.5% fatality rate. The virus was traced back to Himalayan palm civets from a livestock market, confirming that CoV are indeed zoonotic in origin. MERS presents with pneumonia-like features as well as renal failure. While this epidemic was much smaller, the disease was much more fatal, at 34% fatality. This virus was identified in bats and Arabian dromedary camels, as well as in goats, sheep, and cows, which acted as intermediate host disease reservoirs before passing the virus on to humans (Umakanthan et al., 2019).

"While SARS, MERS, and SARS-CoV-2 all fall under the category of CoV, each virus presents with slightly different characteristics."

SARS-CoV-2 presents with many similar clinical features but a much smaller fatality rate of 2.3%, as well as a less severe clinical presentation. MERS was much more fatal, with more patients developing acute respiratory distress syndrome (ARDS), potentially because MERS binds to a different receptor than SARS and SARS-CoV-2. While MERS binds to dipeptidyl peptidase 4 (DPP4) receptors, SARS and the novel CoV bind to angiotensin-converting enzyme 2 (ACE2) receptors. Other clinical features are very similar, such as low platelet counts and decreased albumin levels. However, the disease reproductive number (which estimates the number of new cases that can be directly attributed to one original case) was estimated at 2.0-2.5 for SARS-CoV-2, much higher than that of MERS (<1) and slightly higher than that of SARS (1.7-1.9). SARS-CoV-2 and SARS-CoV are more closely related than SARS-CoV-2 and MERS-CoV. While the route of transmission has been assumed to be through airborne droplets or contact, it is also possible that there is a gastrointestinal route of transmission, based on the hypothesis that other CoV like MERS could have spread through drinking the milk of infected camels (Petrosillo et al., 2020).

Figure 4: A transmission electron image of the novel SARS-CoV-2 virus. Image Source: Wikimedia Commons

Studies specific to COVID-19 have found that the median age of infected patients is around 56 years, with males more affected than females due to a higher concentration of angiotensin-converting enzyme 2 (Umakanthan et al., 2019). In many cases, the disease correlates highly with preexisting factors of susceptibility, such as smoking, hypertension, diabetes, or similar health conditions. Symptoms are often mild and nonspecific, such as fever, cough, myalgia, sore throat, or nausea. However, the more dangerous pneumonia presentation of the disease is severe, with dyspnoea and dangerously low blood-oxygen saturation levels (Umakanthan et al., 2019).

Despite the similarities to SARS-CoV and MERS-CoV, there are several differences in the novel SARS-CoV-2 (Umakanthan et al., 2019). Preliminary studies of SARS-CoV-2 suggest that significant mutations in the virus's membrane proteins and receptor binding sites lead to its extreme transmissibility and pathogenicity. The receptor-binding domain (RBD) contains many of these mutations. Based on genomic understanding of SARS-CoV, a series of six amino acids on the spike protein has been identified as crucial for binding to the ACE2 receptor. However, SARS-CoV-2 presents with five unique amino acids in those positions, causing an abnormally high affinity for ACE2 receptors. While the pathogenesis of SARS-CoV-2 is not entirely understood yet, the many similarities between the virus and SARS-CoV as well as MERS-CoV provide the basis for a likely hypothesis. As ACE2 receptors are typically present in the lungs (specifically in type-2 pneumocytes), the virus binds to many of these cells, causing the subsequent downregulation of ACE2 receptors. That downregulation leads to an increase in angiotensin-2 through ACE1 (an enzyme), which could then lead to pulmonary vascular permeability and lung injury. As the body attempts to combat this through an immune response, inflammatory cytokines and chemokines are released, which often cause more damage (Samudrala et al., 2020).
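To build intuition for the reproductive numbers quoted above: early in an outbreak, while nearly everyone is susceptible, R0 acts roughly as a per-generation multiplier on case counts. A minimal sketch, with R0 values picked from the ranges in the text and a generation count chosen purely for illustration:

    # Early-outbreak intuition: each infection generation multiplies cases by
    # ~R0 (ignores susceptible depletion, interventions, and superspreading).
    def cases_after_generations(r0, generations, initial_cases=1):
        return initial_cases * r0 ** generations

    for label, r0 in [("MERS", 0.9), ("SARS", 1.8), ("SARS-CoV-2", 2.3)]:
        print(label, round(cases_after_generations(r0, 10)))
    # MERS (R0 < 1) fizzles out; SARS grows; SARS-CoV-2 grows far faster.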



Figure 5: A model representation of the chemical structure of Remdesivir. Image Source: Wikimedia Commons

While the zoonotic origin of SARS-CoV-2 is expected to be through bats, past experience with CoV has shown that these viruses often jump to an animal intermediate before moving to humans. Based on genomic and evolutionary evidence of SARS-CoV-2-like CoV in pangolins (a type of scaly mammal), the virus is hypothesized to have been transmitted from bats to pangolins and then to humans. The Pangolin-CoV found is 91.02% identical to SARS-CoV-2 and 90.55% identical to the bat coronavirus RaTG13, suggesting that this is a possibility. The spike protein encoded by Pangolin-CoV is very closely related to that of SARS-CoV-2 as well. Additionally, five key amino acids necessary for the novel virus's binding to human ACE2 receptors are consistent with Pangolin-CoV, whereas the bat CoV shares only four of those residues (Zhang et al., 2020).

Prior to the development of the vaccine, no appropriate medicine or treatment had been FDA-approved specifically for COVID-19. While the race for a vaccine continued in 2020, the efficacy of five drugs already FDA-approved for other uses was studied. Cytotoxicity and drug effects were studied in vitro, with some treatments eventually progressing to phase III clinical trials. Remdesivir is one such treatment; it was found to significantly reduce the mortality rate of COVID-19 after 14 days of treatment and showed improvement in 64% of cases. Other treatments, such as chloroquine, showed promise but could lead to toxicity with overdose, posing complications. Protease inhibitors were studied as well, but unsuccessfully, as their efficacy was inconclusive (Samudrala et al., 2020).
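As an aside, percent-identity figures like those cited above for Pangolin-CoV come from comparing aligned genome sequences position by position. A minimal sketch of the idea; real pipelines first compute an alignment (handling gaps and indels), and the sequences below are made up:

    # Percent identity between two pre-aligned, equal-length sequences.
    # (Real comparisons align first, e.g. with a tool such as MAFFT.)
    def percent_identity(seq_a, seq_b):
        matches = sum(a == b for a, b in zip(seq_a, seq_b))
        return 100.0 * matches / len(seq_a)

    print(percent_identity("ATGGCTAACGTT", "ATGACTAACGTA"))  # -> 83.3...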


However, late in 2020, the first vaccines were announced and approved by the FDA for emergency use. The efficacy of mRNA vaccines has been an increasingly valuable area of research in the past few decades, especially in combating cancer. Even so, the emphasis in vaccine development was mostly on DNA-based approaches up until the late 2000s due to the many difficulties in working with RNA (Pardi et al., 2020). Designed to be a temporary molecule, RNA is very unstable; furthermore, it can elicit excessive immune responses, making it difficult to deliver safely in patients. However, recent advancements in optimizing RNA stability, and in influencing the rate of translation and the half-life of the transcript by modifying the 5' and 3' untranslated regions, have improved the safety and stability of RNA. Additionally, large-scale RNA purification methods have been developed, allowing vaccines to be cheaper and more resource-effective. Efficient delivery molecules have also been recently designed, using substances that protect both the mRNA and the patient. Polymers, peptides, and lipid nanoparticles are being studied further as potential delivery agents for more effective mRNA vaccines. Since many of these technologies are relatively novel, as is the use of mRNA vaccines, much more research needs to be done (Pardi et al., 2020).

1918 Flu Pandemic

The 1918 Spanish flu pandemic is one of the most infamous pandemics in history. It is estimated to have killed 50-100 million people in total, including approximately 675,000 individuals in the United States. In the absence of the technology necessary to identify viruses, some scientific communities thought the bacterium Haemophilus influenzae was responsible for the disease referred to as the "Spanish flu." However, the true culprit was a virus belonging to the influenza A (H1N1) subtype (Wever & van Bergen, 2014).

Though society has now identified the virus responsible, the origin of the 1918 influenza virus remains something of a mystery; there is no consensus on where the virus first appeared in a human population. However, the sequencing of the virus's eight-segment genome has provided more insight into its development. Some scientists initially assumed that the 1918 influenza virus resulted from gene reassortment between human and animal viruses, but more recent evidence casts doubt on this hypothesis. Because all eight of the sequenced gene segments are avian-like and waterfowl enteric tracts are known reservoirs of influenza A viruses, an updated hypothesis posits that the 1918 influenza virus resulted from the adaptation of an existing avian virus to a human host (Morens et al., 2007).

It is impossible to fully discuss the 1918 influenza pandemic without acknowledging the context of the First World War. In Europe and in American training camps, the 1918 pandemic killed an estimated 45,000 American soldiers in total. According to some sources, the Spanish flu led to more American burials in France than the war itself. The disease infected over one million men in the United States Army, 700,000 men in the German Army, and 313,000 men in the British Expeditionary Forces. The first substantial outbreak of the 1918 influenza pandemic has been traced to a Kansas military camp called Camp Funston (Wever & van Bergen, 2014). The First World War likely contributed to the spread and severity of the Spanish flu, as overcrowding has been associated with a ten times higher risk of the flu and increased severity of the disease. Overcrowding was such a concern in the U.S. Army that in January 1918 (before the pandemic), Army Surgeon General William Gorgas testified before the Senate that U.S. troops required more floor space; the crowding and movement of soldiers during the war likely amplified the pandemic (Aligne, 2016).

Overall, the 1918 pandemic had three distinct waves and stood out from other influenza outbreaks in its mortality patterns. While influenza strains tend to be most dangerous for the very young and very old individuals within a population, the 1918 flu also hit those of middle age (20-50 years) hard, resulting in an atypical W-shaped mortality curve. The pandemic's mortality peaked in October 1918 in the U.S. and Italy, and most victims died of pneumonia or other respiratory complications (Gavrilova & Gavrilov, 2020). Though relegated to the pages of history textbooks today, the infamous Spanish flu lives on, as five of the genes in the common (H3N2) influenza virus originated from the 1918 pandemic (Belshe, 2009).

Ebola

Ebola virus disease (EVD) is a severe, often lethal infection caused by a zoonotic virus that is a member of the filoviruses and causes an acute hemorrhagic fever (Jacob et al., 2020). EVD epidemics typically start with a single case of probable zoonotic transmission (wildlife to human) followed by human-to-human transmission (Groseth et al., 2007). The first recorded human Ebola outbreak took place in 1976 in Sudan, when an individual came into contact with the blood of a guinea pig infected by the Ebola virus (EBOV) (Emond RT et al., 1977). The virus simultaneously appeared in Zaire, now called the Democratic Republic of the Congo. During the outbreaks, 284 cases and 318 cases were confirmed in Sudan and Zaire, respectively. The EBOV Sudan strain (SEBOV) had a case fatality rate (CFR) of 53%, and the Zaire strain (ZEBOV) had a high CFR of 89% (Groseth et al., 2007). Subsequent outbreaks of EBOV revealed the presence of other EBOV strains. The Reston Ebola virus was genetically identified in 1989 in the United States from macaques imported from the Philippines; however, it was proven non-pathogenic to humans (Miranda et al., 1999). In 1994, scientists discovered the Ivory Coast Ebola virus (ICEBOV), and another strain, found in Bundibugyo, Uganda (BEBOV) in 2007, has a CFR of 26% (Muyembe-Tamfum et al., 2012). Of all the EBOV species, ZEBOV has been the most common and highly pathogenic, with 30,706 confirmed cases as of 2021 and an average lethality rate between 25% and 90% (Ebola Virus Disease, 2021). All known sources of EBOV human infection have involved contact with dead or butchered wildlife such as apes and chimpanzees, or the exploration of natural sites that house bats (Groseth et al., 2007; Emond RT et al., 1977).
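Case fatality rate is just deaths divided by confirmed cases, so the strain-level CFRs above can be turned back into approximate death tolls for the 1976 outbreaks. A quick sketch; the outputs are estimates implied by the rounded CFRs, not the officially recorded death counts:

    # CFR = deaths / confirmed cases, so deaths ~= CFR * cases.
    outbreaks_1976 = {
        "SEBOV (Sudan)": (284, 0.53),   # confirmed cases, CFR from the text
        "ZEBOV (Zaire)": (318, 0.89),
    }
    for strain, (cases, cfr) in outbreaks_1976.items():
        print(strain, "->", round(cases * cfr), "estimated deaths")
    # SEBOV -> 151, ZEBOV -> 283 (approximate)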



Ebola virus disease, like other filovirus infections, is contracted through direct contact with the bodily fluids of infected humans or animals (dead or alive) and makes its way into the body through the skin and mucous membranes (Beeching et al., 2014). It takes about 2-21 days, with an average of 7 days, from infection to the onset of first symptoms. Early stages of the disease are non-specific; first symptoms may include acute fever, headache, fatigue, vomiting, and muscle pain (Beeching et al., 2014). The late stages of the disease often include multi-organ dysfunction, which is the major cause of death in patients infected by EBOV. The virus has a CFR ranging between 30% and 90% of all confirmed cases (Ebola Virus Disease, 2021). Given the high mortality rate of patients with EVD, the World Health Organization declared the Ebola epidemics Public Health Emergencies to prevent the virus from disseminating across the globe and to help scientists develop an understanding of the mechanisms of the Ebola virus (which could be used as a potential biological weapon).

Researchers have attempted to identify the reservoir species, or natural hosts, of EBOV: animals that can carry the virus without exhibiting clinical symptoms. Different studies suggest that bats are putative reservoir hosts of EBOV, because the virus has been successfully isolated from multiple bat species with no clinical signs of the illness (Pourrut et al., 2005). Despite this major discovery, researchers have not yet confirmed the natural reservoir host of the virus; further research needs to be conducted to confirm the theory. Studies to determine the virus's reservoirs are important for understanding how the virus emerged and for developing risk-reduction measures to prevent future EBOV outbreaks.

Another scientific quest has been to uncover the viral genetic footprints that might help explain the virulence of ZEBOV. The severity of the virus is largely understood through its remarkable ability to interfere with the immune response of the host (Wang et al., 2019). It inhibits the expression of genes involved in the innate immune response against viral infections, such as the IRF3 (Interferon Regulatory Factor 3) gene, which encodes an important transcription factor for the induction of early antiviral immunity (Wawina-Bokalanga et al., 2019). EBOV is a negative-strand (antisense strand) RNA virus that encodes eight viral proteins, including four structural proteins: VP35 (viral protein 35), VP40, VP24, and NP (nucleoprotein) (Wawina-Bokalanga et al., 2019).


These four viral proteins are produced by the virus as it hijacks the cellular machinery of the host cell, and each plays an important role in the pathogenic mechanisms of the virus. VP35 is crucial for EBOV's interferon antagonism (interferons are signaling proteins produced and released by host cells in response to viral infections) (Hartman et al., 2008). VP35 inhibits the activation of IRF3 by impairing or blocking IRF3 phosphorylation, an important biochemical process that regulates protein function and signal transmission throughout the cell (Hartman et al., 2008). Without IRF3 and its dependent genes to initiate innate immunity against the virus, the Ebola virus replicates drastically (Hartman et al., 2008). This uncontrollable viral proliferation produces an overload of virions in the host, causing multi-organ failure or dysfunction and cellular death (apoptosis), which lead to clinical complications or death in EVD patients. Scientists have found that the carboxyl-terminal region of the VP35 protein is responsible for its immunosuppressive capacity, as this region of the protein can physically inhibit IRF3 activation (Hartman et al., 2008). Therefore, environmental or human-induced mutations at a specific position in the IRF3 inhibitory domain could considerably decrease the ability of VP35 to act as an interferon antagonist or to silence the IRF3 gene (Hartman et al., 2008).

Scientists have also been looking for environmental and biological patterns that could explain the resurgence of the Ebola virus. Researchers have demonstrated that environmental changes and seasonal patterns contribute to the preservation of EBOV in nature (Groseth et al., 2007). One scenario that explains the resurgence of EBOV is that the virus is asymptomatically harbored by reservoir species and arises seasonally in nature under fitting environmental conditions. Using geographical modeling and bioinformatics, Groseth et al. (2007) found that ZEBOV, ICEBOV, and SEBOV occupy different geographical areas: ZEBOV and ICEBOV outbreaks happened in dry seasons, whereas SEBOV outbreaks occurred during seasonal periods of wetness. This observation coincides with data from the ZEBOV outbreaks of 1996 and 1997 in Gabon, whereby a high number of dead great apes was recorded as a result of ZEBOV infection between November and February, a period that marks the dry season in Gabon (Groseth et al., 2007). Another scenario that might explain the recurrence of EBOV in human populations is the persistence of the virus in some bodily fluids even after complete clinical recovery and clearance in the blood (Dukobu et al., 2018). However, there is not yet evidence that could establish the risk of transmission during convalescence.

Figure 6: The Ebola virus. Image Source: Flickr

Clinical Advancements of EBOV

"Sabin’s method, known as the Oral Polio Vaccine (OPV), replaced Stalk’s vaccination in 1963. This had been deemed advantageous over the IPV due to its cheaper cost, easier administration, and capability of causing an active infection of the oropharynx in addition to the intestinal endothelium, inducing a greater immune response."

Many treatments and vaccines against the Ebola virus are in development to reduce the severity of the disease and prevent future outbreaks. Certainly, the most remarkable advancement in EBOV research in 2019 was the development of a vaccine against the ZEBOV species, the Recombinant Vesicular Stomatitis Virus-Zaire Ebola Virus (rVSV-ZEBOV) vaccine (Ehrhardt et al., 2019; Metzger et al., 2018). However, the duration of the vaccine's protective efficacy is highly disputed.

Polio

Poliomyelitis, commonly referred to as polio and caused by the poliovirus, is historically known for its long-lasting presence and effect on populations worldwide. While there is literary and artistic evidence of polio's presence as early as 1403 BC, the first clinical description of polio did not occur until the late 1700s (Mehndiratta et al., 2014). It was in the 1900s that polio became prevalent in the United States, and it was acknowledged for the first time as an existing epidemic in 1916 by U.S. public health authorities (Mehndiratta et al., 2014). Due to the extremely early presence of polio in human history, there is still no distinct origin to pinpoint how its transmission transpired (Lacey, 1949). However, what differentiates the historical presence of polio is the recurrence of an epidemic each summer (Mehndiratta et al., 2014).

The near eradication of polio was due to the rapid study and vaccine development performed by both Jonas Salk, who produced the first polio vaccination in 1955, and Albert Sabin, who developed another form of vaccination shortly after, in 1963 (De Jesus, 2007). The Salk method, recognized as the Inactivated Polio Vaccine (IPV), was given by injection and worked to stimulate serum IgM, IgG, and IgA, but not secretory IgA; in this case, immunity had been induced by antibody transduction into the oropharynx (Howard, 2005). Sabin's method, known as the Oral Polio Vaccine (OPV), replaced Salk's vaccination in 1963. It had been deemed advantageous over the IPV due to its cheaper cost, easier administration, and capability of causing an active infection of the oropharynx in addition to the intestinal endothelium, inducing a greater immune response (Howard, 2005). In the United States, fewer than 10 polio cases now occur annually, all of which are a result of back mutations (Melnick, 1996). However, despite successes in eradication in parts of the world, polio is still endemic in six countries: Nigeria, India, Pakistan, Niger, Afghanistan, and Egypt (Howard, 2005). This presence is partially attributed to the difficulty of providing a heat-stable oral vaccine that ensures sufficient seroconversion in these tropical locations (Howard, 2005).

Figure 7: The Poliovirus. Image Source: Wikimedia Commons

Smallpox

Smallpox is a highly contagious disease caused by the variola virus. The variola virus comes in two forms: the more common, life-threatening variola major, with a mortality rate of 30%, and the milder version, variola minor or alastrim (Geddes, 2006). Its origins are obscured by its prevalence throughout early world history, but the World Health Organization (WHO) reported findings of smallpox-related skin rashes on Egyptian mummies from around 3,000 years ago, suggesting that ancient Egypt could have been the site of the earliest instance of this disease. This report attempted to find the "original home" of smallpox, but in practicality, smallpox could have developed anywhere with an irrigated agricultural society and larger populations (Geddes, 2006). By the end of the 1500s, smallpox's notoriety rose as the virus became a significant cause of death in Europe, southwestern Asia, India, and China, and it went global during the age of exploration and colonization, reaching the Caribbean and the Americas.

It has been speculated that the variola major and minor viruses evolved from a version of the virus that affected animals. Variola viruses are part of the Poxviridae group of viruses, which includes the vaccinia virus and animal poxviruses such as monkeypox (Geddes, 2006). Phylogenetic evidence further solidifies this hypothesis: variola viruses were found to be descendants of an ancestral African rodent-borne variola-like virus, as the variola virus and this rodent-affecting virus form a monophyletic group with one another. The time frame for this divergence is estimated to be between 16,000 and 68,000 years ago (Li et al., 2007).

Around 1000 A.D. in China, individuals placed smallpox pus or scabs into the nasal passages or into cuts so that the body would be exposed to a weakened form of smallpox. This process, called variolation, was the earliest preventive measure against smallpox and foreshadowed the first vaccine invented in history. Edward Jenner, an English physician, noticed that milkmaids who caught cowpox were resistant to smallpox. Jenner removed material from a cowpox lesion on the hand of a milkmaid and inoculated it into a young boy; after a few days of mild symptoms, the boy became immune to smallpox (Parrino & Graham, 2006). Over time, and with the eventual replacement of cowpox with the vaccinia vaccine, Jenner's method became widely accepted. This vaccination, although effective, required booster shots, as its immunity wore off after 3 years. After further outbreaks across the globe, smallpox cases dwindled, and the disease eventually ceased to exist following worldwide vaccination campaigns; the last cases of variola major and variola minor occurred in Bangladesh in 1975 and Somalia in 1977, respectively (Parrino & Graham, 2006). Today, few people are vaccinated for smallpox, but two countries (the USA and Russia) officially hold stocks of the virus in laboratories. These stocks are kept for study in case a similar virus ever breaks out, though many have questioned why they still have not been destroyed.

Figure 8: An artistic rendering of a nanoparticle. Image Source: Flickr

Viruses as Therapeutics

Viruses as Delivery Mechanisms

Gene therapy is the introduction of genetic material into a human's cells to replace a malfunctioning gene or to make a protein that can compensate for the effects of a certain disease or condition (Sung & Kim, 2019). One of the mechanisms by which this genetic material is inserted into cells is viruses. Researchers around the globe have honed and refined the use of viral vectors, such that current treatments mitigate the systemic inflammation and organ failure that marked earlier attempts (Lundstrom, 2018). The benefit of this mechanism is that it provides continued, long-term expression of the corrected gene at physiologically effective levels.


The most commonly used viruses for viral delivery are adenoviruses (AAV), which provide advantages such as infection of a vast range of host cells, including dividing and non-dividing cells, and their maintenance as an episome, meaning that the inserted genetic material behaves as an extrachromosomal element in the targeted cell’s nucleus, reducing the risk of mutagenesis (McCaffrey et al., 2008). Adenoviruses can either be replication-deficient (RD) or replicationcompetent (RC). RD adenovirus vectors have certain genes deleted to prevent replication of the virus, which would exponentially increase the lethal immunogenic response of the host, and to prevent transduced cells from undergoing apoptosis. RD adenoviruses are especially useful for gene therapy to promote continued expression of the foreign transgene. RC adenoviruses, on the other hand, replicate more efficiently since they have the necessary genes coding for replicative proteins and are important factors for lysing and destroying cancer cells. Retroviruses, unlike adenoviruses, are considered the optimal standard for long-term gene therapy as they can carry up to eight kilobases of foreign inserts and can replicate their single-stranded RNA into double-stranded DNA, which is then permanently inserted into the human genome (Anson, 2004). One of the major downsides of retroviruses is their inability to infect nondividing cells, but one class of retroviruses called lentiviruses circumvents this problem and has

DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE


provided a quantum leap in mediating high levels of gene transfer and cell transduction. Bacteriophages, viruses that infect bacteria, are another promising viral class, offering the advantage of easy expression of foreign molecules on the outer surface of the phage, which allows for highly specific targeting (Hosseinidoust, 2017). Downsides of using bacteriophages include undesirable immune responses and lower gene delivery efficiency. Viruses are an excellent vehicle for targeted gene therapy, immunotherapy, cancer therapy, and even treatment of infectious diseases. Their innovative transformation into therapeutic drugs has revolutionized the fields of cell biology, genetics, and clinical treatment (Hosseinidoust, 2017).

Oncolytic Virus Therapy

Oncolytic viruses infect and kill cancer cells either naturally or after modification in a lab (Using Oncolytic Viruses to Treat Cancer - National Cancer Institute, 2018). As early as the late 1800s, patients who developed naturally acquired viral infections were observed to have tumor regression either during or after the infection (Using Oncolytic Viruses to Treat Cancer - National Cancer Institute, 2018). This observation led to multiple clinical trials between 1950 and 1980 to test the efficacy of naturally occurring viruses in treating cancer, but these efforts were largely unsuccessful (Fukuhara et al., 2016). The challenge currently facing researchers is finding a way to edit virus genomes such that the virus reproduces in cancer cells without also infecting healthy cells. Viruses can also be modified to trigger a systemic tumor-specific immune response (Fukuhara et al., 2016). Only one oncolytic virus therapy, talimogene laherparepvec (T-VEC), is currently approved by the FDA for clinical use (Using Oncolytic Viruses to Treat Cancer - National Cancer Institute, 2018). T-VEC treats metastatic melanoma by both selectively targeting cancerous cells and promoting a regional and systemic immune response to suppress tumors (Andtbacka et al., 2015). T-VEC was developed from a modified herpes simplex virus (HSV) type 1 (Andtbacka et al., 2015). Two genes were deleted: the herpes virus neurovirulence factor gene, ICP34.5, and the ICP47 gene. Additionally, the gene encoding human granulocyte-macrophage colony-stimulating factor (GM-CSF, a cytokine that helps produce white blood cells) was inserted into the viral genome to help produce a tumor-specific immune response (Andtbacka et al.,
2015). Currently, T-VEC is given via injection to patients with advanced melanoma—Stage IIIB, IIIC, or IV—that cannot be surgically removed (Andtbacka et al., 2015).

Viruses and Nanoparticles

Effective drug delivery and targeting are essential for effective medical treatment. However, because many drugs are poorly soluble in water, much of an administered dose is lost in the body and only a small portion reaches the targeted site (van Kan-Davelaar et al., 2014). Nanomedicine has made tremendous leaps in compensating for these deficiencies, as the use of semisynthetic carriers like quantum dots, liposomes, and vesicles has increased the capacity for cellular uptake, intracellular accumulation, and physiological retention. Viruses have also emerged as an optimal nanoparticle biodelivery system, primarily because of their high biocompatibility and biodegradability. As nanocarriers, viruses can target proteins, polymers, and enzymes to specific sites and cells in the body. A viral nanocarrier is made by removing the viral genome and reconstructing the outer capsid proteins into a shell that can disassemble and release its cargo in response to pH, chemical stimuli, or temperature (van Kan-Davelaar et al., 2014). Viral nanoparticles therefore allow site-specific delivery of desired products in a context-dependent manner. Viral nanoparticles can also be used in the treatment of metastatic cancer. In comparison to traditional treatments like radiation and chemotherapy, which are broadly cytotoxic, viral nanotechnology can mediate efficient molecular trafficking of proteins, antibodies, fluorescent dyes, and drugs to tumor cells (Grasso & Santi, 2010). When these viral nanoparticles are labeled with a fluorescent tag, some classes of these viruses, like cowpea mosaic virus (CPMV), can interact with the intermediate filament protein vimentin, which is overexpressed in cancer cells (Steinmetz et al., 2011). Through these particles, the localization of tumors and metastatic cancer cells can be detected more readily in vitro and in vivo. In addition to imaging, viruses can be used to deliver anticancer agents to cancer cells alone while sparing healthy cells. Their highly symmetrical structure allows researchers to conjugate and present multiple targeting molecules on their surfaces to mediate highly specific cell targeting with high payload capacities (Grasso & Santi,
2010). The most effective types of viruses for nanoparticle targeting are plant-based viruses and bacteriophages. Mammalian viruses are not optimal vehicles because they proliferate in humans, which can trigger downstream negative effects (Steinmetz et al., 2011). While viral nanoparticles offer exciting potential for drug delivery and disease treatment, many of these technologies are still in their nascent stages, and only a few have reached preclinical trials. Yet the groundbreaking work with viral nanoparticles thus far has paved a new avenue of research that could transform the fields of pharmacokinetics and drug delivery.

Viruses for Imaging
The application of recent advancements in viral nanotechnology and functional viruses as therapeutics relies on sophisticated imaging techniques. Imaging viruses has a wide range of uses. It enables scientists to use viruses as calibration tools, exploiting their regular, identical structures as control specimens when testing instrument parameters (Goldsmith & Miller, 2009). Primarily, though, imaging is used to study virus structure and function, including the assembly and infection processes. Virus imaging has also recently been used to study viruses as functional nanoparticles in medicine and nanotechnology, with applications ranging from materials science to biophysics and electrochemistry (Gulati et al., 2019). One type of microscopy used to study viruses is atomic force microscopy (AFM), which has resolution on the nanometer scale and can observe samples in both liquid and air (Goldsmith & Miller, 2009). AFM works by shining a laser on a cantilever (a beam that is fixed at one end) bearing a fine tip. The tip is pressed into the sample, the structure of the sample bends the cantilever, and the bends are measured and recorded via the laser focused on the cantilever. One mode, called contact mode, ultimately destroys the sample; another, tapping mode, is gentler and results in less specimen deformation. Another type of microscopy frequently used to study viruses is electron microscopy, which forms an image of the specimen using electrons instead of light. One common electron microscopy technique is negative staining, which produces an image with a dark background and a white specimen (Goldsmith & Miller, 2009). These images come
from supernatant, so the specimens are in fluid. The specimen is stabilized using a support film that holds the particles, and a thin carbon coat is then evaporated over the film so the specimen does not melt under the electron beam. This technique is quick and can take as few as fifteen minutes (Goldsmith & Miller, 2009). Thin sectioning is another technique used with electron microscopy, applied to thin sheets of cells or tissues. As in the previously discussed types of microscopy, these samples are fixed, so living tissues and cells cannot be viewed. Though it is a powerful way to visualize tissue samples, the main limitation of this approach is that the section may miss the portion of the sample that contains the virus (Gulati et al., 2019). Another type of electron microscopy that allows scientists to understand viruses, specifically virus localization, is immunogold labeling. This technique uses primary antibodies that bind to viruses and secondary gold-labeled antibodies that bind to the primary antibodies (Gulati et al., 2019). The areas that contain the virus show up easily under electron microscopy because the electron-dense gold appears dark against the white portions of the cell. This is useful for qualitative observations about the localization of viruses, virus parts, or virus-like particles (Gulati et al., 2019). Lastly, cryo-electron microscopy is another useful tool for visualizing viruses. The method works by rapidly freezing samples with liquid nitrogen, viewing them with a special electron microscope equipped with a cryo stage (a specimen stage that cools the sample using liquid nitrogen or liquid helium), and reconstructing the samples: many different angles are imaged, and the sample is reconstructed in 3D via computer (Gulati et al., 2019). This powerful tool allows exact three-dimensional models of samples to be made.

Figure 9: An electron microscope. Image Source: Wikimedia Commons

Antibiotic Resistance/Viruses

Antibiotic resistance refers to infectious microorganisms evolutionarily gaining the ability to circumvent antibiotics, the medicines commonly used to kill them. The use of viruses to combat such microbes is an area of great interest, especially as antibiotic resistance increases at an alarming rate and some infections (e.g., pneumonia and tuberculosis, among others) become very difficult to treat (Zaman et al., 2017). For context, current threats include multidrug-resistant (MDR) bacteria—which are resistant to a few of the most powerful antibiotics and kill 25,000 patients in European hospitals annually—and extensively drug-resistant (XDR) bacteria, which are resistant to several of the most effective drugs and result in mortality for more than half of those infected (Moghadam et al., 2020).

The predominant mechanism for using viruses against antibiotic-resistant microbes is phage therapy. Bacteriophages are viruses able to infect and kill bacteria, notably without negative implications for the human or animal host (Principi et al., 2019). Bacteriophage therapy involves the infection of specific bacterial hosts by phages, which essentially hijack the cellular machinery and induce lysis (cell rupturing) of the host bacteria (Nikolich & Filippov, 2020). Although phage biologists have recognized that phage life cycles exist on a spectrum and span many classifications, the two conventionally known categories are lysogenic phages and lytic phages. Lysogenic phages integrate their genetic material into the bacterial host chromosomes as prophages, thus replicating with each cell division; certain environmental stimuli can trigger a transition to the lytic cycle, with excision from the chromosome and subsequent release of phage offspring. Phage therapy, however, relies on lytic phages. These inject their genetic material into bacterial cells and take control of the replication machinery. This way, the phages can replicate—viruses are unable to do so without a host—and produce increasing quantities of progeny. Eventually, a critical mass of progeny is reached and certain lytic proteins are activated, which hydrolyze the cell wall, resulting in rupture of the bacterial cell. When the cell ruptures, the phage progeny are freed to infect other bacterial cells and repeat the cycle. In this way, the bacteria are ultimately cleared from the system (Lin et al., 2017).

Among the benefits of bacteriophage therapy is the fact that phages can be engineered. For instance, the OMKO1 phage infects only bacterial cells presenting a certain cell surface protein that is central to the bacteria's system for evading antibiotics. Selective pressure from the phage therefore favors bacterial cells that become phage-resistant by mutating that protein; in doing so, those cells lose their antibiotic resistance. Interestingly, and somewhat ironically, phage therapy can be more efficacious in preventing antibiotic resistance when delivered in combination with antibiotics: since phages and antibiotics exert different selective pressures, the evolutionary tradeoff yields minimal resistance (Torres-Barceló, 2018). There are several other advantages to phage therapy, including the fact that it operates via mechanisms different from those of antibiotics, so antibiotic-resistant bacteria do not start out phage-resistant (Loc-Carrillo & Abedon, 2011). Unfortunately, there are also drawbacks to consider in the use of phage therapy against antibiotic-resistant microbes. Notably, for a phage to be effective and safe, it must have a range of characteristics (i.e., be constitutively lytic, survive in the host and reach its target, be able to clear its target, and not induce a harmful response in the host organism) that may be difficult to achieve in combination. Other problems include phages' narrow host range, which limits them from eliminating all targets, as well as the possibility that they may be more prone to negative consequences, like those of other pharmaceuticals, than scientists currently think (Loc-Carrillo & Abedon, 2011).
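The lytic cycle described above also lends itself to a simple quantitative sketch. The short Python snippet below integrates a generic predator-prey-style model of lytic infection, with susceptible bacteria S, infected bacteria I, and free phage P. The model form and every parameter value are illustrative assumptions chosen for intuition, not values drawn from the studies cited in this section.

    # Minimal sketch of lytic phage-bacteria dynamics (a generic
    # textbook-style model; all parameters are illustrative).
    import numpy as np
    from scipy.integrate import odeint

    def lytic_model(y, t, r, a, d, b, m):
        S, I, P = y
        dS = r * S - a * S * P              # bacterial growth minus adsorption
        dI = a * S * P - d * I              # infected cells accumulate, then lyse
        dP = b * d * I - a * S * P - m * P  # each lysis releases a burst of b phage
        return [dS, dI, dP]

    # growth rate, adsorption rate, lysis rate, burst size, phage decay
    params = (0.7, 1e-8, 1.0, 100, 0.1)
    t = np.linspace(0, 24, 241)  # hours
    S, I, P = odeint(lytic_model, [1e6, 0.0, 1e4], t, args=params)[-1]
    print(f"after 24 h: bacteria ~{S + I:.3g}, free phage ~{P:.3g}")

Because phage numbers grow with each round of lysis, even a small initial dose can expand against a much larger bacterial population, which is one reason phage therapy is often described as self-amplifying.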

Applications in the Gut Microbiome

Our gastrointestinal tract is inhabited by a plethora of different viruses. To put things into perspective, the gut microbiome contains DNA and RNA viruses that collectively outnumber bacterial cells by as much as 10 to 1 (Mukhopadhya et al.,
2019). Each gram of human gut content is estimated to contain at least 10⁸ to 10⁹ virus-like particles (VLPs), the majority belonging to the family Podoviridae (Mukhopadhya et al., 2019). The human gut microbiome is thus a very complex ecosystem, inhabited by organisms ranging from bacteria to yeasts, fungi, and even viruses. Thanks to recent technological advances such as high-throughput and next-generation sequencing, entire viral genomes have been sequenced and analyses of whole microbial communities (metagenomics) completed, collectively revealing new insights into the composition and function of the human gut virome, as well as its potential clinical applications. The viruses that inhabit the gut microbiome have been separated into five virotypes: eukaryotic viruses, plant-derived viruses, giant viruses (larger than 300 kb), prophages, and small viruses (smaller than 145 kb) (Scarpellini et al., 2015). In contrast to the bacterial gut microbiome, the gut virome is more stable and fluctuates less in response to environmental factors (Mukhopadhya et al., 2019); the bacterial microbiome can be influenced by a variety of factors, including diet, smoking, and antibiotics, whereas the gut virome is not. While much remains unknown, there is hope that the gut virome plays an as-yet-uncharacterized role in inflammatory processes. Though under-researched, the gut virome holds significant promise for future therapeutic applications (Mukhopadhya et al., 2019).

Figure 10: An artistic representation of the gut microbiome. Image Source: Flickr
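For a rough sense of scale, the per-gram densities quoted above translate into very large absolute numbers, as the back-of-the-envelope calculation below shows. The assumed total mass of gut contents is an illustrative round figure, not a value from the cited work.

    # Back-of-the-envelope estimate of total gut virus-like particles (VLPs).
    # The 1e8-1e9 VLPs-per-gram range comes from the text; the mass of gut
    # contents is an assumed round number, used purely for illustration.
    low_density, high_density = 1e8, 1e9   # VLPs per gram
    gut_contents_grams = 400               # illustrative assumption
    print(f"total VLPs: {low_density * gut_contents_grams:.0e}"
          f" to {high_density * gut_contents_grams:.0e}")
    # prints: total VLPs: 4e+10 to 4e+11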

Viruses and Machine Learning

The diverse applications of viruses in various fields have prompted researchers to experiment with novel methods of data processing: machine learning and artificial intelligence. Machine learning is a type of artificial intelligence that trains a "machine," or a set of algorithms, to learn from existing data sets and find patterns (Dhall et al., 2019). Because of the large scale of data sets about viral genomes, machine learning methods such as support vector machines (SVMs), deep neural networks, and random forests have been applied in the field of virology. A major application of machine learning to virus research is the recovery of viral genomes from existing metagenomic data sets, which contain both host and viral genetic sequences. For example, researchers used a machine learning method to recover genomes of the Inoviridae virus family, which infects bacteria like Vibrio cholerae and intensifies diseases like cholera (Roux et al., 2019). They trained the algorithm on sequences from known Inoviridae and on sequences from other bacteria or viruses; the trained model later recovered more than 10,000 Inoviridae genomes from metagenomic data sets (Roux et al., 2019). Another example is MARVEL, a random forest machine learning method that makes predictions using many individual decision trees. Amgarten et al. (2018) trained this system using three features: the density of genes relative to the length of the genome, strand shifts between neighboring
genes, and the fraction of significant hits against the Prokaryotic Virus Orthologous Groups (pVOGs) database. Afterwards, MARVEL identified 58 new viral genomes from metagenomic datasets of compost samples, only one of which contained bacterial marker genes, demonstrating high accuracy. VirSorter2 is another machine learning system that uses multiple automated classifiers to identify viral genomes in metagenomic datasets (Guo et al., 2021). Machine learning can extend from identifying genomes to associating particular viral sequences with specific diseases. Unlike MARVEL, which relies on gene-based features such as gene density, VirFinder (Ren et al., 2017) identifies k-mers, combinations of DNA letters ("DNA words") of length k, within sequences rather than specific genes. Ren et al. (2017) tested VirFinder on human gut metagenomic data, identifying viral sequences in people with and without liver cirrhosis. The study identified types of viruses that were more prevalent in healthy or in diseased people (Ren et al., 2017), supporting the previously suggested correlation between changes in the human gut microbiome and liver cirrhosis (Qin et al., 2014). Machine learning has also been applied to viruses as therapeutics. Multiple machine learning methods, including the random forest classifier, have been used to predict the human adaptation of swine and avian influenza A viruses (IAVs) from large data sets (Li et al., 2019). Additionally, machine learning systems like neural networks and SVMs have been used to predict which adeno-associated virus (AAV) capsids, or viral protein shells, would form viable viral structures and could serve as AAV vectors for gene therapy (Marques et al., 2020). In these ways, the role of machine learning in virology continues to evolve and grow, and it is likely to contribute to the expansion of virology in many different settings.
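To make the classification idea concrete, the sketch below trains a random forest to separate two classes of toy sequences, in miniature imitation of tools like MARVEL and VirFinder. The features used here (GC content plus 4-mer frequencies) and the randomly generated training sequences are illustrative stand-ins; the real tools rely on curated genomes and richer features such as gene density and pVOG hits.

    # Toy viral-vs-bacterial sequence classifier (illustrative only).
    import random
    from itertools import product
    from sklearn.ensemble import RandomForestClassifier

    KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

    def features(seq):
        # GC content plus normalized 4-mer counts as stand-in features.
        counts = dict.fromkeys(KMERS, 0)
        for i in range(len(seq) - 3):
            counts[seq[i:i + 4]] += 1
        total = max(len(seq) - 3, 1)
        gc = (seq.count("G") + seq.count("C")) / len(seq)
        return [gc] + [counts[k] / total for k in KMERS]

    def toy_sequence(gc_bias, n=500):
        # Random toy "genome" whose GC content is governed by gc_bias.
        return "".join(
            random.choice("GC") if random.random() < gc_bias else random.choice("AT")
            for _ in range(n)
        )

    random.seed(0)
    viral = [toy_sequence(0.40) for _ in range(100)]      # pretend class 1
    bacterial = [toy_sequence(0.60) for _ in range(100)]  # pretend class 0
    X = [features(s) for s in viral + bacterial]
    y = [1] * len(viral) + [0] * len(bacterial)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict([features(toy_sequence(0.40))]))    # expected: [1]

The same pattern, numeric features in and a class label out, underlies the published tools; what differs is the biological sophistication of the features and the scale and curation of the training data.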

Conclusions

The COVID-19 pandemic has brought the field of virology to the public eye, but viral diseases have shaped human history for far longer, from smallpox to the Spanish Flu of 1918. Looking back at the history of virology helps us appreciate the advances in treating viruses that have been made in recent years. Hopefully, with further study, we can develop more treatments against viral infections and begin to harness viral vectors ourselves to tackle more
challenges in human medicine.

References

Ahn, J., & Flamm, S. L. (2011). Hepatitis C therapy: Other players in the game. Clinics in Liver Disease, 15(3), 641–656. https://doi.org/10.1016/j.cld.2011.05.008

Aleem, A., & Kothadia, J. (2021). Remdesivir. StatPearls. https://www.statpearls.com/ArticleLibrary/viewarticle/122861

Aligne, C. A. (2016). Overcrowding and Mortality During the Influenza Pandemic of 1918. American Journal of Public Health, 106(4), 642–644. https://doi.org/10.2105/AJPH.2015.303018

Amgarten, D., Braga, L. P. P., da Silva, A. M., & Setubal, J. C. (2018). MARVEL, a Tool for Prediction of Bacteriophage Sequences in Metagenomic Bins. Frontiers in Genetics, 9, 304. https://doi.org/10.3389/fgene.2018.00304

Andtbacka, R. H. I., Kaufman, H. L., Collichio, F., Amatruda, T., Senzer, N., Chesney, J., Delman, K. A., Spitler, L. E., Puzanov, I., Agarwala, S. S., Milhem, M., Cranmer, L., Curti, B., Lewis, K., Ross, M., Guthrie, T., Linette, G. P., Daniels, G. A., Harrington, K., … Coffin, R. S. (2015). Talimogene Laherparepvec Improves Durable Response Rate in Patients With Advanced Melanoma. Journal of Clinical Oncology: Official Journal of the American Society of Clinical Oncology, 33(25), 2780–2788. https://doi.org/10.1200/JCO.2014.58.3377

Anson, D. S. (2004). The use of retroviral vectors for gene therapy-what are the risks? A review of retroviral pathogenesis and its relevance to retroviral vector-mediated gene delivery. Genetic Vaccines and Therapy, 2(1), 9. https://doi.org/10.1186/1479-0556-2-9

Beeching, N. J., Fenech, M., & Houlihan, C. F. (2014). Ebola virus disease. BMJ (Clinical Research Ed.), 349, g7348. https://doi.org/10.1136/bmj.g7348

Belshe, R. B. (2009). Implications of the Emergence of a Novel H1 Influenza Virus. NEJM. Retrieved September 8, 2021, from https://www.nejm.org/doi/full/10.1056/NEJMe0903995

Brown, T. A. (2002). The Human Genome. In Genomes (2nd edition). Wiley-Liss. https://www.ncbi.nlm.nih.gov/books/NBK21134/
Carroll, D., Watson, B., Togami, E., Daszak, P., Mazet, J. A., Chrisman, C. J., Rubin, E. M., Wolfe, N., Morel, C. M., Gao, G. F., Burci, G. L., Fukuda, K., Auewarakul, P., & Tomori, O. (2018). Building a global atlas of zoonotic viruses. Bulletin of the World Health Organization, 96(4), 292–294. https://doi.org/10.2471/BLT.17.205005

De Jesus, N. H. (2007). Epidemics to eradication: The modern history of poliomyelitis. Virology Journal, 4(1), 70. https://doi.org/10.1186/1743-422X-4-70

Dhall, A., Dai, D., & Van Gool, L. (2019). Real-time 3D Traffic Cone Detection for Autonomous Driving. 494–501. https://doi.org/10.1109/IVS.2019.8814089

Dokubo, E. K., Wendland, A., Mate, S. E., Ladner, J. T., Hamblion, E. L., Raftery, P., Blackley, D. J., Laney, A. S., Mahmoud, N., Wayne-Davies, G., Hensley, L., Stavale, E., Fakoli, L., Gregory, C., Chen, T.-H., Koryon, A., Allen, D. R., Mann, J., Hickey, A., … Fallah, M. P. (2018). Persistence of Ebola virus after the end of widespread transmission in Liberia: An outbreak report. The Lancet Infectious Diseases, 18(9), 1015–1024. https://doi.org/10.1016/S1473-3099(18)30417-1

Ebola Virus Disease—WHO. (2021). Retrieved September 8, 2021, from https://www.who.int/westernpacific/health-topics/ebola

Ehrhardt, S. A., Zehner, M., Krähling, V., Cohen-Dvashi, H., Kreer, C., Elad, N., Gruell, H., Ercanoglu, M. S., Schommers, P., Gieselmann, L., Eggeling, R., Dahlke, C., Wolf, T., Pfeifer, N., Addo, M. M., Diskin, R., Becker, S., & Klein, F. (2019). Polyclonal and convergent antibody response to Ebola virus vaccine rVSV-ZEBOV. Nature Medicine, 25(10), 1589–1600. https://doi.org/10.1038/s41591-019-0602-4

Ellebedy, A. H., & Ahmed, R. (2016). Chapter 15 - Antiviral Vaccines: Challenges and Advances. In B. R. Bloom & P.-H. Lambert (Eds.), The Vaccine Book (Second Edition) (pp. 283–310). Academic Press. https://doi.org/10.1016/B978-0-12-802174-3.00015-1

Emond, R. T., Evans, B., Bowen, E. T., & Lloyd, G. (1977). A case of Ebola virus infection. British Medical Journal, 2(6086), 541–544.

Fukuhara, H., Ino, Y., & Todo, T. (2016). Oncolytic virus therapy: A new era of cancer treatment at dawn. Cancer Science, 107(10), 1373–1379. https://doi.org/10.1111/cas.13027
Gavrilov, L. A., & Gavrilova, N. S. (2020). What Can We Learn about Aging and COVID-19 by Studying Mortality? Biochemistry. Biokhimiia, 85(12), 1499–1504. https://doi.org/10.1134/S0006297920120032

Geddes, A. M. (2006). The history of smallpox. Clinics in Dermatology, 24(3), 152–157. https://doi.org/10.1016/j.clindermatol.2005.11.009

Goldsmith, C. S., & Miller, S. E. (2009). Modern Uses of Electron Microscopy for Detection of Viruses. Clinical Microbiology Reviews, 22(4), 552–563. https://doi.org/10.1128/CMR.00027-09

Graham, B. S. (2013). Advances in antiviral vaccine development. Immunological Reviews, 255(1), 230–242. https://doi.org/10.1111/imr.12098

Grasso, S., & Santi, L. (2010). Viral nanoparticles as macromolecular devices for new therapeutic and pharmaceutical approaches. International Journal of Physiology, Pathophysiology and Pharmacology, 2(2), 161–178.

Groseth, A., Feldmann, H., & Strong, J. E. (2007). The ecology of Ebola virus. Trends in Microbiology, 15(9), 408–416. https://doi.org/10.1016/j.tim.2007.08.001

Gulati, N. M., Torian, U., Gallagher, J. R., & Harris, A. K. (2019). Immunoelectron Microscopy of Viral Antigens. Current Protocols in Microbiology, 53(1), e86. https://doi.org/10.1002/cpmc.86

Guo, J., Bolduc, B., Zayed, A. A., Varsani, A., Dominguez-Huerta, G., Delmont, T. O., Pratama, A. A., Gazitúa, M. C., Vik, D., Sullivan, M. B., & Roux, S. (2021). VirSorter2: A multi-classifier, expert-guided approach to detect diverse DNA and RNA viruses. Microbiome, 9(1), 37. https://doi.org/10.1186/s40168-020-00990-y

Hartman, A. L., Bird, B. H., Towner, J. S., Antoniadou, Z.-A., Zaki, S. R., & Nichol, S. T. (2008). Inhibition of IRF-3 Activation by VP35 Is Critical for the High Level of Virulence of Ebola Virus. Journal of Virology, 82(6), 2699–2704. https://doi.org/10.1128/JVI.02344-07

Hosseinidoust, Z. (2017). Phage-Mediated Gene Therapy. Current Gene Therapy, 17(2), 120–126. https://doi.org/10.2174/1566523217666170510151940

Howard, R. S. (2005). Poliomyelitis and the post-polio syndrome. BMJ: British Medical Journal, 330(7503), 1314–1318.
Ison, M. G. (2011). Antivirals and resistance: Influenza virus. Current Opinion in Virology, 1(6), 563–573. https://doi.org/10.1016/j.coviro.2011.09.002

Jacob, S. T., Crozier, I., Fischer, W. A., Hewlett, A., Kraft, C. S., Vega, M.-A. de L., Soka, M. J., Wahl, V., Griffiths, A., Bollinger, L., & Kuhn, J. H. (2020). Ebola virus disease. Nature Reviews Disease Primers, 6(1), 1–31. https://doi.org/10.1038/s41572-020-0147-3

Lacey, B. W. (1949). The natural history of poliomyelitis. The Lancet, 253(6560), 849–859. https://doi.org/10.1016/S0140-6736(49)92350-8

Li, Y., Carroll, D. S., Gardner, S. N., Walsh, M. C., Vitalis, E. A., & Damon, I. K. (2007). On the origin of smallpox: Correlating variola phylogenics with historical smallpox records. Proceedings of the National Academy of Sciences, 104(40), 15787–15792. https://doi.org/10.1073/pnas.0609268104

Li, J., Zhang, S., Li, B., Hu, Y., Kang, X.-P., Wu, X.-Y., Huang, M.-T., Li, Y.-C., Zhao, Z.-P., Qin, C.-F., & Jiang, T. (2019). Machine Learning Methods for Predicting Human-Adaptive Influenza A Viruses Based on Viral Nucleotide Compositions. Molecular Biology and Evolution, 37(4), 1224–1236. https://doi.org/10.1093/molbev/msz276

Lin, D. M., Koskella, B., & Lin, H. C. (2017). Phage therapy: An alternative to antibiotics in the age of multi-drug resistance. World Journal of Gastrointestinal Pharmacology and Therapeutics, 8(3), 162–173. https://doi.org/10.4292/wjgpt.v8.i3.162

Loc-Carrillo, C., & Abedon, S. T. (2011). Pros and cons of phage therapy. Bacteriophage, 1(2), 111–114. https://doi.org/10.4161/bact.1.2.14590

Lundstrom, K. (2018). Viral Vectors in Gene Therapy. Diseases (Basel, Switzerland), 6(2), E42. https://doi.org/10.3390/diseases6020042

Marques, J., Dillingham, M., Beckett, P., Cherradi, Y., Paun, A., Boumlic, A., & Carter, P. (2020). Optimizing Viral Vector Manufacturing for Gene Therapy. PharmTech, 2020 eBook(2), 24–29.

McCaffrey, A. P., Fawcett, P., Nakai, H., McCaffrey, R. L., Ehrhardt, A., Pham, T.-T. T., Pandey, K., Xu, H., Feuss, S., Storm, T. A., & Kay, M. A. (2008). The host response to adenovirus, helper-dependent adenovirus, and adeno-associated virus in mouse liver. Molecular Therapy: The Journal of the American Society of Gene Therapy, 16(5), 931–941. https://doi.org/10.1038/mt.2008.37

Mehndiratta, M. M., Mehndiratta, P., & Pande, R. (2014). Poliomyelitis: Historical Facts, Epidemiology, and Current Challenges in Eradication. The Neurohospitalist, 4(4), 223–229. https://doi.org/10.1177/1941874414533352

Melnick, J. L. (1996). Current status of poliovirus infections. Clinical Microbiology Reviews, 9(3), 293–300. https://doi.org/10.1128/CMR.9.3.293

Metzger, W. G., & Vivas-Martínez, S. (2018). Questionable efficacy of the rVSV-ZEBOV Ebola vaccine. The Lancet, 391(10125), 1021. https://doi.org/10.1016/S0140-6736(18)30560-9

Miller, J. C. (2014). Epidemics on Networks with Large Initial Conditions or Changing Structure. PLOS ONE, 9(7), e101421. https://doi.org/10.1371/journal.pone.0101421
Miranda, M. E., Ksiazek, T. G., Retuya, T. J., Khan, A. S., Sanchez, A., Fulhorst, C. F., Rollin, P. E., Calaor, A. B., Manalo, D. L., Roces, M. C., Dayrit, M. M., & Peters, C. J. (1999). Epidemiology of Ebola (Subtype Reston) Virus in the Philippines, 1996. The Journal of Infectious Diseases, 179(Supplement_1), S115–S119. https://doi.org/10.1086/514314

Moghadam, M., Amirmozafari, N., Shariati, A., Hallajzadeh, M., Mirkalantari, S., Khoshbayan, A., & Masjedian Jazi, F. (2020). How Phages Overcome the Challenges of Drug Resistant Bacteria in Clinical Infections. Infection and Drug Resistance, 13, 45–61. https://doi.org/10.2147/IDR.S234353

Morens, D. M., Folkers, G. K., & Fauci, A. S. (2009). What Is a Pandemic? The Journal of Infectious Diseases, 200(7), 1018–1021. https://doi.org/10.1086/644537

Mukhopadhya, I., Segal, J. P., Carding, S. R., Hart, A. L., & Hold, G. L. (2019). The gut virome: The 'missing link' between gut bacteria and host immunity? https://journals.sagepub.com/doi/10.1177/1756284819836620

Muyembe-Tamfum, J. J., Mulangu, S., Masumu, J., Kayembe, J. M., Kemp, A., & Paweska, J. T. (2012). Ebola virus outbreaks in Africa: Past and present. Onderstepoort Journal of Veterinary Research, 79(2), 06–13.

Nikolich, M. P., & Filippov, A. A. (2020).
Bacteriophage Therapy: Developments and Directions. Antibiotics (Basel, Switzerland), 9(3), E135. https://doi.org/10.3390/antibiotics9030135

Pardi, N., Hogan, M. J., & Weissman, D. (2020). Recent advances in mRNA vaccine technology. Current Opinion in Immunology, 65, 14–20. https://doi.org/10.1016/j.coi.2020.01.008

Parrino, J., & Graham, B. S. (2006). Smallpox vaccines: Past, present, and future. The Journal of Allergy and Clinical Immunology, 118(6), 1320–1326. https://doi.org/10.1016/j.jaci.2006.09.037

Petrosillo, N., Viceconte, G., Ergonul, O., Ippolito, G., & Petersen, E. (2020). COVID-19, SARS and MERS: Are they closely related? Clinical Microbiology and Infection: The Official Publication of the European Society of Clinical Microbiology and Infectious Diseases, 26(6), 729–734. https://doi.org/10.1016/j.cmi.2020.03.026

Plowright, R. K., Parrish, C. R., McCallum, H., Hudson, P. J., Ko, A. I., Graham, A. L., & Lloyd-Smith, J. O. (2017). Pathways to zoonotic spillover. Nature Reviews Microbiology, 15(8), 502–510. https://doi.org/10.1038/nrmicro.2017.45

Pourrut, X., Kumulungui, B., Wittmann, T., Moussavou, G., Délicat, A., Yaba, P., Nkoghe, D., Gonzalez, J.-P., & Leroy, E. M. (2005). The natural history of Ebola virus in Africa. Microbes and Infection, 7(7), 1005–1014. https://doi.org/10.1016/j.micinf.2005.04.006

Principi, N., Silvestri, E., & Esposito, S. (2019). Advantages and Limitations of Bacteriophages for the Treatment of Bacterial Infections. Frontiers in Pharmacology, 10, 513. https://doi.org/10.3389/fphar.2019.00513

Qin, N., Yang, F., Li, A., Prifti, E., Chen, Y., Shao, L., Guo, J., Le Chatelier, E., Yao, J., Wu, L., Zhou, J., Ni, S., Liu, L., Pons, N., Batto, J. M., Kennedy, S. P., Leonard, P., Yuan, C., Ding, W., … Li, L. (2014). Alterations of the human gut microbiome in liver cirrhosis. Nature, 513(7516), 59–64. https://doi.org/10.1038/nature13568

Ravanfar, P., Satyaprakash, A., Creed, R., & Mendoza, N. (2009). Existing antiviral vaccines. Dermatologic Therapy, 22(2), 110–128. https://doi.org/10.1111/j.1529-8019.2009.01224.x

Ren, J., Ahlgren, N. A., Lu, Y. Y., Fuhrman, J. A., & Sun, F. (2017). VirFinder: A novel k-mer based tool for identifying viral sequences from assembled metagenomic data. Microbiome, 5(1), 69. https://doi.org/10.1186/s40168-017-0283-5
Riechmann, L., Clark, M., Waldmann, H., & Winter, G. (1988). Reshaping human antibodies for therapy. Nature, 332(6162), 323–327. https://doi.org/10.1038/332323a0

Roux, S., Adriaenssens, E. M., Dutilh, B. E., Koonin, E. V., Kropinski, A. M., Krupovic, M., Kuhn, J. H., Lavigne, R., Brister, J. R., Varsani, A., Amid, C., Aziz, R. K., Bordenstein, S. R., Bork, P., Breitbart, M., Cochrane, G. R., Daly, R. A., Desnues, C., Duhaime, M. B., … Eloe-Fadrosh, E. A. (2019). Minimum Information about an Uncultivated Virus Genome (MIUViG). Nature Biotechnology, 37(1), 29–37. https://doi.org/10.1038/nbt.4306

Samudrala, P. K., Kumar, P., Choudhary, K., Thakur, N., Wadekar, G. S., Dayaramani, R., Agrawal, M., & Alexander, A. (2020). Virology, pathogenesis, diagnosis and in-line treatment of COVID-19. European Journal of Pharmacology, 883, 173375. https://doi.org/10.1016/j.ejphar.2020.173375

Scarpellini, E., Ianiro, G., Attili, F., Bassanelli, C., Santis, A. D., & Gasbarrini, A. (2015). The human gut microbiota and virome: Potential therapeutic implications. Digestive and Liver Disease, 47(12), 1007–1012. https://doi.org/10.1016/j.dld.2015.07.008

Schwaber, J., & Cohen, E. P. (1973). Human × Mouse Somatic Cell Hybrid Clone secreting Immunoglobulins of both Parental Types. Nature, 244(5416), 444–447. https://doi.org/10.1038/244444a0

Simões, E. A. F., Forleo-Neto, E., Geba, G. P., Kamal, M., Yang, F., Cicirello, H., Houghton, M. R., Rideman, R., Zhao, Q., Benvin, S. L., Hawes, A., Fuller, E. D., Wloga, E., Pizarro, J. M. N., Munoz, F. M., Rush, S. A., McLellan, J. S., Lipsich, L., Stahl, N., … Sivapalasingam, S. (2020). Suptavumab for the Prevention of Medically Attended Respiratory Syncytial Virus Infection in Preterm Infants. Clinical Infectious Diseases, ciaa951. https://doi.org/10.1093/cid/ciaa951

Singer, B. J., Thompson, R. N., & Bonsall, M. B. (2021). The effect of the definition of 'pandemic' on quantitative assessments of infectious disease outbreak risk. Scientific Reports, 11(1), 2547. https://doi.org/10.1038/s41598-021-81814-3

Sung, Y., & Kim, S. (2019). Recent advances in the development of gene delivery systems. Biomaterials Research, 23(1), 8. https://doi.org/10.1186/s40824-019-0156-z
The IMpact-RSV Study Group. (1998). Palivizumab, a Humanized Respiratory Syncytial Virus Monoclonal Antibody, Reduces Hospitalization From Respiratory Syncytial Virus Infection in High-risk Infants. Pediatrics, 102(3), 531–537. https://doi.org/10.1542/peds.102.3.531

Torres-Barceló, C. (2018). The disparate effects of bacteriophages on antibiotic-resistant bacteria. Emerging Microbes & Infections, 7(1), 168. https://doi.org/10.1038/s41426-018-0169-z

Umakanthan, S., Sahu, P., Ranade, A. V., Bukelo, M. M., Rao, J. S., Abrahao-Machado, L. F., Dahal, S., Kumar, H., & Kv, D. (2020). Origin, transmission, diagnosis and management of coronavirus disease 2019 (COVID-19). Postgraduate Medical Journal, 96(1142), 753–758. https://doi.org/10.1136/postgradmedj-2020-138234

Using Oncolytic Viruses to Treat Cancer—National Cancer Institute. (2018, February 9). [CgvBlogPost]. https://www.cancer.gov/news-events/cancer-currents-blog/2018/oncolytic-viruses-to-treat-cancer

van Kan-Davelaar, H. E., van Hest, J. C. M., Cornelissen, J. J. L. M., & Koay, M. S. T. (2014). Using viruses as nanomedicines. British Journal of Pharmacology, 171(17), 4001–4009. https://doi.org/10.1111/bph.12662

Vardanyan, R., & Hruby, V. (2016). Chapter 34—Antiviral Drugs. In R. Vardanyan & V. Hruby (Eds.), Synthesis of Best-Seller Drugs (pp. 687–736). Academic Press. https://doi.org/10.1016/B978-0-12-411492-0.00034-1

Villarreal, L. P. (2008). Are Viruses Alive? Scientific American. https://www.scientificamerican.com/article/are-viruses-alive-2004/

Wang, W., Wu, C., Amarasinghe, G. K., & Leung, D. W. (2019). Ebola Virus Replication Stands Out. Trends in Microbiology, 27(7), 565–566. https://doi.org/10.1016/j.tim.2019.05.004

Weinreich, D. M., Sivapalasingam, S., Norton, T., Ali, S., Gao, H., Bhore, R., Musser, B. J., Soo, Y., Rofail, D., Im, J., Perry, C., Pan, C., Hosain, R., Mahmood, A., Davis, J. D., Turner, K. C., Hooper, A. T., Hamilton, J. D., Baum, A., … Yancopoulos, G. D. (2020). REGN-COV2, a Neutralizing Antibody Cocktail, in Outpatients with Covid-19. New England Journal of Medicine. https://doi.org/10.1056/NEJMoa2035002

Wessner, D. R. (2010). Discovery of the Giant Mimivirus. Nature Education: Scitable. https://www.nature.com/scitable/topicpage/discovery-of-the-giant-mimivirus-14402410/

Wever, P. C., & van Bergen, L. (2014). Death from 1918 pandemic influenza during the First World War: A perspective from personal and anecdotal evidence. Influenza and Other Respiratory Viruses, 8(5), 538–546. https://doi.org/10.1111/irv.12267

What is a Pandemic? (2021). http://www.who.int/csr/disease/swineflu/frequently_asked_questions/pandemic/en/

Zaman, S. B., Hussain, M. A., Nye, R., Mehta, V., Mamun, K. T., & Hossain, N. (2017). A Review on Antibiotic Resistance: Alarm Bells are Ringing. Cureus, 9(6), e1403. https://doi.org/10.7759/cureus.1403

Zhang, T., Wu, Q., & Zhang, Z. (2020). Probable Pangolin Origin of SARS-CoV-2 Associated with the COVID-19 Outbreak. Current Biology, 30(7), 1346–1351.e2. https://doi.org/10.1016/j.cub.2020.03.022

Zhou, X., Jiang, X., Qu, M., Aninwene, G. E., Jucaud, V., Moon, J. J., Gu, Z., Sun, W., & Khademhosseini, A. (2020). Engineering Antiviral Vaccines. ACS Nano, 14(10), 12370–12389. https://doi.org/10.1021/acsnano.0c06109

Zoonoses. (2020, July 29). https://www.who.int/news-room/fact-sheets/detail/zoonoses



HIV/AIDS and Their Treatments

STAFF WRITERS: CAROLINE CONWAY '24, SREEKAR KASTURI '24, FRANK CARR '22, LAUREN FERRIDGE '23, SOYEON (SOPHIE) CHO '24, ZOE CHAFOULEAS '24, JUSTIN CHONG '24, SARAH LAMSON '24, JOHN ZAVRAS '24

TEAM LEADS: ANAHITA KODALI '23, MATTHEW LUTCHKO '23

Cover Image: Scanning electron image of HIV. Image Source: Wikimedia Commons

Introduction to HIV/AIDS

Though the most commonly discussed viral disease today is COVID-19, many other viral diseases have forever changed the fields of virology, immunology, and clinical medicine. Perhaps the most well-known of these is the HIV/AIDS epidemic.

HIV, or human immunodeficiency virus, originated from chimpanzees in Central Africa. It is likely that the chimpanzee version of the virus – simian immunodeficiency virus (SIV) – was passed to humans who hunted chimpanzees and were exposed to contaminated chimpanzee blood. This viral crossover event led to the introduction of HIV to the human world and to one of the deadliest pandemics in history (About HIV/AIDS CDC, 2021). Since emerging on a large scale in 1981, HIV has proven to be a deadly virus that has baffled scientists and required concerted efforts from doctors, epidemiologists, and lawmakers to understand and control. HIV attacks the human immune system and, if left untreated, leads to AIDS (acquired immunodeficiency syndrome). If HIV progresses to AIDS, there is no cure, and patients become prone to multiple infections as a result of a severely weakened immune system.

Since HIV's emergence in the early 1980s, over 60 million people have been infected and 25 million people have died from the virus or related illnesses. HIV disproportionately impacts developing countries and minority communities, which often lack educational resources, screening tools, and access to necessary medical treatment. HIV was 'discovered' when gay men started becoming unusually susceptible to illnesses their immune systems would normally be able to fight off. Afflicted persons also noticed dark purple lesions on their arms and faces, which signaled the rare and aggressive Kaposi's sarcoma. These symptoms became the trademark of the "gay plague" in HIV hotspots such as New York and San Francisco. An increase in the number of HIV-positive individuals marked an increase in social stigma toward gay communities and amplified social paranoia about this new, unknown virus. Likewise, doctors did not know how to treat this mystery disease, and all they could do at the time was treat the various "opportunistic infections" (Greene, 2007). It took at least a year for the medical field to learn more about the virus and its transmission. Epidemiological evidence concluded that HIV and AIDS were transmitted through contaminated bodily fluids and blood. This meant HIV was a sexually transmitted disease and that people sharing drug needles were also at elevated risk of infection. In addition to these two modes of transmission, pre- and post-natal routes of exposure were found to be possible, meaning an infected mother could transmit the virus to her infant during pregnancy or through breastfeeding. Infection rates rose through the 80s, 90s, and 2000s: from 683,000 people living with HIV at the end of the 80s to an estimated 1,172,700 people in 2010 (About HIV/AIDS CDC, 2021). Case counts peaked in 1992, when AIDS became the leading cause of death for men aged 25–44, and no significant decrease in cases was observed until 1997. This success in halting the increase in cases was credited to education efforts on modes of transmission and to increased screening efforts in all communities.

HIV is a retrovirus characterized by a chronic course of disease, a long latency period, and persistent viral replication. It seeks out and destroys cells that orchestrate the immune response, specifically CD4 T lymphocytes (Greene, 2007). There are two types of HIV: HIV-1 and HIV-2. HIV-1 is spread throughout the world, while HIV-2 is largely contained to Western and Central Africa. Although largely similar and sharing the three basic structural genes, HIV-1 and HIV-2 differ in their internal organization. Furthermore, HIV-1 infection takes longer to progress, while HIV-2 appears less virulent and is more commonly associated with nervous system diseases in addition to AIDS (Emanuele, 2010). Both types of HIV have an extremely high mutation rate, which makes them even more virulent. This stems from a high error rate in reverse transcription and from recombination events, which together make it possible for patients to be infected with several different forms of HIV at the same time (Greene, 2007).

HIV infection is separated into three stages. In the first stage, acute HIV infection, patients have high amounts of HIV in their blood and are very contagious; they may experience flu-like symptoms or no symptoms at all. Next, chronic HIV infection (also called asymptomatic HIV infection or clinical latency) is when HIV is still active in the patient but reproduces at lower levels. It is possible to transmit HIV at this stage, but it is also a crucial window for medication to prevent progression to stage three; without medication, one could remain in this stage for up to a decade. The transition from stage 2 to stage 3 occurs when HIV in the blood increases and the CD4 cell count falls below 200 cells/mm³. At this point, stage 3 – acquired immunodeficiency syndrome (AIDS) – is experienced. This is a critical stage in which the immune system is at its most damaged and opportunistic infections multiply. The development of AIDS implies both high viral load and high virulence. Without treatment at this stage, patients typically survive for only about 3 years (About HIV/AIDS CDC, 2021). The emergence of HIV has defined an era of epidemiological trial and error. Although HIV is still a threat and a cure is yet to be discovered, increased education and screening, which are already being implemented, are the first steps to defeating this virus.

Disparities in HIV/AIDS

Racial Disparities

There are many disparities in the distribution of HIV/AIDS across the US. For one, there are clear racial divides in the prevalence of HIV/AIDS. Black Americans bear the largest burden of HIV infection in the US: though they account for only 12% of the total American population, they account for 46% of the HIV-infected people in the country. 1,715 Black people per 100,000 have HIV, a figure almost 8 times higher than that of White people. In particular, Black women are disproportionately affected by HIV infection; the HIV diagnosis rate for Black women is over 14 times higher than that for White women (Moore, 2011). The Latinx community is also disproportionately affected by HIV/AIDS, as the AIDS case rate in the Latinx community is 3 times higher than that of White Americans (Gonzalez et al., 2009). For Indigenous populations, a host of HIV-related risk factors – including high levels of domestic violence, discrimination from other racial/ethnic groups, and high levels of intravenous substance abuse – coupled with mistrust of health services have produced poor healthcare and outcomes for Indigenous peoples with HIV (Negin et al., 2015). Finally, though Asian Americans are often considered a "model" minority group that typically does not engage in risky sexual or drug-related behaviors, they are the only ethnic group in the US with a continued increase in HIV infection rates across the 2010–2020 decade (Kim & Aronowitz, 2019).

Socioeconomic Disparities

HIV is clearly connected to both social and economic disparities. For urban communities, there is a clear inverse relationship between
socioeconomic status and rate of HIV infection – in other words, the lower a given urban community's socioeconomic status, the higher its rate of HIV infection ("Economically Disadvantaged," 2019). There are many reasons for this: a lack of socioeconomic resources, including education and financial security, is linked to HIV-risk behaviors like unsafe sex and intravenous drug use ("HIV/AIDS and Socioeconomic Status," n.d.). Critically, though the overall rate of mortality due to HIV/AIDS has dropped in recent years, the decline in mortality for lower socioeconomic groups has been significantly slower than for more privileged groups (Singh et al., 2013). There are several potential explanations; one is that although poorer individuals have been shown to start HIV/AIDS treatment earlier than richer individuals, they comply less consistently with treatment regimens, and strict compliance with the recommended drug treatments is essential to their success (Tran et al., 2016).

Disparities in the LGBT Community

Historically, HIV/AIDS has disproportionately affected the LGBT community. In fact, the first American officially diagnosed with AIDS was Ken Horne, a gay sex worker (Ayala & Spieldenner, 2021). Even though significant progress has been made in treating and preventing the spread of HIV/AIDS since Horne's diagnosis in 1980, the progress has been uneven, particularly for queer men. As of 2019, gay and bisexual men accounted for about 55% of individuals with HIV/AIDS in the US, though they make up only about 2% of the entire US population ("HIV in the United States and Dependent Areas," 2021). In part, these disparities are due to healthcare policies that overlook the LGBT community. Studies have shown that the topic of safe sex practices is uncomfortable for healthcare workers to discuss with their patients – one study found that only 68% of medical residents felt comfortable discussing inclusive sexual history, and only 26% of that 68% felt comfortable discussing the topic with LGBT patients (Frasca et al., 2019). Reasons behind these issues include the fact that policies may overlook the existence of LGBT people when developing questionnaires or screenings and that members of the community may be frightened to share their sexual orientation for fear of discrimination (Wheeler et al., 2011). Social and structural conditions in the US perpetuate the epidemic, too. LGBT individuals are more likely to drink and use illicit substances, more likely to delay receiving healthcare, more likely to report poor-quality care by healthcare providers, and less likely to have adequate healthcare than their heterosexual cisgender peers ("LGBTQIA+ Youth and Mental Health," 2021); each of these factors can increase the risk of contracting HIV.

Disparities Associated with Substance Abuse

Substance abuse disorders, such as addictions to alcohol, crack cocaine, and heroin, are associated with sexually transmitted diseases like HIV in part because HIV can also be transmitted through contaminated syringes or needles, such as those used to inject drugs ("Substance Use," 2021). In 2016, nearly 20 percent of HIV diagnoses among men and 21 percent of diagnoses among women were attributed to drug use ("Drug Use and Viral Infections," 2020). For drugs that are not injected, the impaired judgment caused by drug use may lead people to engage in risky sexual behavior, making the transmission of HIV more likely ("Substance Use," 2021). Not only can drugs contribute to the spread of HIV, but they can also worsen the symptoms and progression of the virus. For example, use of drugs like cocaine increases the permeability of the blood-brain barrier to viruses, making it easier for HIV to enter the brain, where it causes increased nerve cell injury and affects thinking, learning, and memory (Norman et al., 2009; "Drug Use and Viral Infections," 2020). Specific populations are affected more severely by substance abuse and HIV. For example, the incarcerated population is disproportionately affected: the criminal justice system's population of HIV-infected individuals is 2 to 5 times larger than that of the outside community, and an estimated 1 in 7 HIV-positive people in the US are in the prison system. Additionally, almost half of federal and state prisoners are reported to meet the criteria for drug abuse or dependence, yet few of these prisoners are screened for HIV or receive treatment for substance abuse. Ethnic minorities also appear to be more affected by both substance abuse and HIV infection. For example, in 2009, the rate of infection within the American Hispanic community was three times that of white communities ("Who Is at Risk," 2012).

Virology Overview

HIV is a lentivirus within the larger family of retroviruses, which carry their single-stranded RNA (ssRNA) genome within a viral envelope and a capsid (Chinen & Shearer, 2002). Two major types of HIV have been discovered: HIV-1 and HIV-2. The three broad
strains of the HIV-1 virus are M (Main), which is globally prevalent; O (Outlier), found mostly in Africa; and N (non-M, non-O), found only in the west-central African country of Cameroon (Simon et al., 1998). Like other retroviruses, HIV synthesizes DNA copies of its RNA genome and inserts its viral DNA into the host cell's DNA, after which the host cell produces viral RNA that forms retroviral virions (Nisol & Saib, 2004). The HIV RNA genome encodes 15 proteins—which are divided into regulatory, accessory, and structural proteins—that contribute to the replication process. The regulatory proteins include reverse transcriptase, integrase, and protease, which contribute, respectively, to the reverse transcription of viral RNA into DNA, the integration of viral DNA into the host cell's genome, and the cleavage of precursor proteins like gp160 (Gelderblom et al., 1989). The accessory proteins increase the infectivity of the released virions and help with transport at different stages (Emerman & Malim, 1998). The structural proteins are produced by the env, gag, and pol genes and are common to both HIV-1 and HIV-2. The env gene produces two envelope glycoproteins, gp120 and gp41, that result from the cleavage of a larger precursor protein, gp160 (Fanales-Belasio et al., 2010; Chinen & Shearer, 2002). gp120 is attached to gp41, which is embedded in the lipid membrane of the viral envelope. The envelope glycoproteins, especially gp120, contain more variable regions than other parts of an HIV virion; these variable regions allow HIV to bypass the immune system (Wyatt et al., 1998). Beneath the viral envelope are core proteins produced by the gag gene. One core protein is p17, which comprises the matrix that surrounds core components like the capsid and the RNA genome. Core proteins called p24 and p6 form the capsid that encloses the RNA, and the nucleocapsid protein p7 is attached to the ssRNA itself. The pol gene encodes the replication enzymes that are used to produce copies of HIV virions (Fanales-Belasio et al., 2010; Chinen & Shearer, 2002). An HIV virion enters a human cell as the gp120 and gp41 glycoproteins bind to CD4 receptors on the human cell membrane. Through one of the variable regions on gp120, called V3, gp120 undergoes a conformational change upon binding to CD4 receptors (Kwong et al., 1998), allowing either of the chemokine coreceptors called CCR5 and CXCR4 to bind and
fuse with the host cell (Livingston et al., 2017). After an HIV virion binds to the host cell, it enters the cytoplasm, where its capsid unpacks the viral RNA and proteins. Here, the viral reverse transcriptase transcribes the single-stranded RNA into double-stranded DNA (dsDNA) through reverse transcription. In this process, cellular lysine tRNA is used as a primer for the viral RNA (Cen et al., 2001). The viral reverse transcriptase is error-prone, misincorporating roughly one nucleotide per 1,500 to 4,000 bases; because of these errors, replication gives rise to many mutants, which in turn cause drug-resistance problems (Tantillo et al., 1994). The viral dsDNA is transported from the cytoplasm to the nucleus, possibly by HIV's Vpr (Zhang et al., 2001) and Vif proteins (Miller & Sarver, 1997). In the nucleus, the viral integrase integrates the dsDNA into a random location in the human genome, after which the host is likely infected for the rest of their life. Following integration, the integrated viral DNA is transcribed by human RNA polymerase II to produce three regulatory proteins: Tat, Rev, and Nef. In particular, Tat facilitates faster transcription through transactivation, and Rev regulates the export of RNA transcripts of different lengths, with partial transcripts encoding structural proteins and full-length transcripts serving as the viral RNA genomes of future virions (Parada & Roeder, 1996; Emerman & Malim, 1998). Nef sequesters a variety of cell surface proteins by engaging with host trafficking proteins; this disrupts host immune responses and promotes the replication cycle of the virus (Buffalo et al., 2019). As the different RNA transcripts are translated, viral proteases cleave precursor proteins, such as Gag-Pol, to produce reverse transcriptase, integrase, and protease (Weller & Williams, 2001). Cellular proteases also cleave precursor proteins like gp160, producing gp120 and gp41, the two envelope glycoproteins, along with other regulatory proteins like Vpr, Vpu, and Vif (Chinen & Shearer, 2002). More research is needed on the late stage of HIV replication and on the assembly of these proteins, which seems to involve cellular components and to require energy (Tritel & Resh, 2001). A viral envelope originating from the host cell's membrane forms around these proteins, producing newly generated HIV virions that are released and undergo maturation.
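The error rate quoted above implies that essentially every round of reverse transcription introduces new mutations somewhere in the genome. The short worked calculation below makes this concrete; the roughly 9,700-nucleotide genome length is a standard HIV-1 figure assumed here for illustration rather than taken from this article.

    # Expected mutations per reverse-transcription event, combining the
    # error rate quoted in the text (one error per 1,500-4,000 bases)
    # with an assumed ~9,700-nucleotide HIV-1 genome length.
    genome_length = 9_700
    for bases_per_error in (1_500, 4_000):
        expected = genome_length / bases_per_error
        print(f"1 error per {bases_per_error} bases ->"
              f" ~{expected:.1f} mutations per copied genome")
    # roughly 2.4 to 6.5 mutations each time the genome is copied

At several mutations per replication cycle, a single patient can quickly accumulate a swarm of viral variants, which is consistent with the drug-resistance problems described above.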

Immunology Overview

The human immune system is responsible for defending the body against outside invaders and harmful molecules and thus serves to combat
pathogens and toxins. It consists of two branches: the innate immune system and the adaptive immune system. The innate immune system consists of cells and responses that are quick and non-specific. First, there are the barriers that prevent the entry of pathogens into the body: the skin, the digestive system, and the mucosal barriers that guard openings into the body such as the mouth, nostrils, and the open portions of the genitalia. Through physical and chemical means (such as the enzymes secreted by mucosa and skin, and the gastric acid in the digestive system), many pathogens are prevented from ever posing a threat to the body (Abbas et al., 2020; Agerberth & Guðmundsson, 2006; Boyton & Openshaw, 2002; Smith, 2003). If a threat such as a bacterium or virus slips past these barriers, it then faces the cells of the innate immune system. These include the important phagocytic cells (macrophages, dendritic cells, and neutrophils), which recognize, engulf, and digest many types of pathogens. Once broken into fragments called antigens, the destroyed pathogens are presented on cell surface proteins called MHC molecules to the T cells of the adaptive immune system located throughout the blood and lymphatic systems. Other innate immune cells include the mast cells, basophils, and eosinophils, which are responsible for fighting off larger pathogens, especially parasitic worms. In modern times, they are often the cause of allergies, which are simply cases of the immune
system responding to a non-dangerous pathogen as if it were dangerous. Pathogens are recognized by these cells using sets of proteins on their external and internal membranes called Pattern Recognition Receptors (PRRs). Different PRRs are specific for different kinds of pathogen markers, such as double-stranded RNA present in viruses and the flagella of bacteria (Abbas et al., 2020; Agerberth & Guðmundsson, 2006). The adaptive immune system consists of cells that are highly specific. Each line of cells is specific for only a single antigen and takes time to fully launch a response. It consists of two different classes of cells: B and T cells, named based on their site of final maturation; B cells emerge complete from the bone marrow while T cells emerge complete from the thymus (Abbas et al., 2020; Janeway, Shlomchik, et al., 2001; Janeway, Travers, et al., 2001). T cells bind to antigens that are presented to them on the MHC molecules of various cells using their T cell Receptors (TCRs). T cells are further divided into classes based on the different cell surface proteins they display. CD8+ T cells are also known as Cytotoxic T cells (CTLs) and are responsible for detecting and destroying cells displaying antigens on MHC Class I molecules. These often include cells that are cancerous, infected with a virus, or infested with some sort of intracellular bacteria. CD4+ T cells are also known as Helper T cells (Ths) and have a variety of functions, including recognizing

Image 1: Image of the HIV capsid. Image Source: Wikimedia Commons




Image 2: HIV-1 approaching and interacting with the membrane of a T cell. Image Source: Wikimedia Commons

Th1 cells are responsible for activating and supporting CTLs, while Th2 cells are responsible for activating and supporting B cells. There is one other lineage of CD4+ T cells that are not Ths; these cells also display CD25 and are known as Regulatory T cells (Tregs). Tregs are involved in preventing the actions of adaptive immune cells with specificity for self-antigens (and thus preventing autoimmune disorders). They also ramp down the immune response after the antigen has been cleared. Because of these various supportive and regulatory functions, CD4+ T cells are some of the most integral cells in the body's arsenal (Abbas et al., 2020; Janeway, Shlomchik, et al., 2001).


B cells recognize their specific antigens by binding to them with their B cell Receptors (BCRs). These antigens can either be 1) presented to them by Th2s or cells of the innate immune system or 2) free-floating antigens in the blood or lymph systems. This recognition results in the activation of the B cell and the initiation of rapid division, producing many progeny B cells. During replication, these progeny undergo a process known as somatic hypermutation, by which they produce slight variations in their BCRs until they produce one with incredibly high specificity. Then, the daughter cells develop into plasma cells and memory B cells. Plasma cells secrete free-floating versions of their BCRs into the blood and lymph known as antibodies. These antibodies can be of different "classes" based on the type of plasma cell they come from, with each class having a slightly different function. Antibodies bind to antigens present on infected cells or on the pathogens themselves. Once bound, the antibodies may signal the antigens for phagocytosis, group the antigens together and precipitate them out of solution, prevent them from undergoing their mechanism of action, or conduct various other functions that result in the destruction of the antigen (Abbas et al., 2020; Janeway, Travers, et al., 2001).

Upon activation, both B and T cells also produce progeny known as "memory cells." These cells reside within the circulatory and lymphatic systems for weeks or even years, waiting for the antigen they possess a specificity for to reappear in the body. When that happens, the memory cell binds to the antigen and activates to produce more cells with its specificity, generating a more rapid version of the adaptive response produced during the first activation of the memory cell's ancestor (Abbas et al., 2020; Janeway, Travers, et al., 2001).

The Biology of HIV/AIDS

Structural Biology of HIV

HIV is composed of a lipid bilayer membrane, two strands of RNA, and diverse types of viral proteins that allow the virus to invade host cells and grow (RCSB, 2000). The structural biology of any virus is crucial to study as information on its structure and function allows targeted drug development to slow or eliminate the virus in infected patients. Additionally, understanding the way HIV operates may provide headway in the development of a vaccine (Engelman & Cherepanov, 2012). Within the viral proteins of HIV, there are subcategories of structural proteins, viral enzymes, and accessory proteins (RCSB, 2000). Each group has a specific function that could be targeted in the future of HIV treatment.

"The structural biology of any virus is crucial to study as information on its structure and function allows targeted drug development to slow or eliminate the virus in infected patients."

The structural proteins of HIV include the surface protein, transmembrane protein, matrix protein, capsid protein, and nucleocapsid protein. The surface protein and transmembrane protein are found on the outer layer of the membrane and are coated in carbohydrates, allowing the virus to invade unnoticed by posing as non-threatening molecules (Engelman & Cherepanov, 2012). The proteins have jagged structures to penetrate the surface of potential host cells (RCSB, 2000). The matrix protein is located on the inner surface of the lipid bilayer membrane and assists new viruses in budding from the surface of infected host cells (RCSB, 2000). The capsid protein forms a cone shape around the viral RNA and transports the genetic material during infection. Finally, the nucleocapsid protein forms a complex with the RNA to stabilize and protect it during transport (RCSB, 2000).

"Chronic HIV infection involves a gradual deterioration of the humoral immune response."

In addition to the proteins mentioned, HIV has three viral enzymes that are essential for its spread and maturation. First, reverse transcriptase builds DNA using the viral RNA genome (Engelman & Cherepanov, 2012). This enzyme is vital in replicating genetic material to create new viruses. Additionally, the enzyme integrase combines the genetic material of HIV with that of its host cell, which allows HIV to remain dormant in cells for long periods of time. Lastly, protease cleaves large proteins into smaller, functional pieces so that HIV may mature (RCSB, 2000). Drugs have been developed to block the function of each of these enzymes in an attempt to slow and eliminate HIV (Engelman & Cherepanov, 2012). In addition to structural proteins and viral enzymes, HIV has accessory proteins that allow the virus to thrive and drive infection progression. The negative regulatory factor protein (Nef) attaches to proteins in the infected cell to stop the production of proteins important to the host cell's defense (RCSB, 2000). This is a crucial factor in how an HIV infection progresses to AIDS (acquired immune deficiency syndrome). To assist the spread of HIV, the viral protein "u" (Vpu) weakens the interaction between envelope proteins and their cell receptors to allow viruses to escape in the cell budding process. Additionally, the viral infectivity factor protein (Vif) attaches to the host cell's defense proteins, causing the cell to destroy its own defense proteins in the process of destroying the virus. Lastly, there are accessory proteins that protect HIV's genetic material: the trans-activator of transcription (Tat) accelerates viral protein production, while the regulator of the virion (Rev) binds to viral RNA to regulate the splicing and transport of genetic material (RCSB, 2000). Clearly, each component of HIV is crucial to its ability to spread quickly, maintain infection progression, or even hide dormant in the body for long periods of time (Engelman & Cherepanov, 2012). The future of HIV medicine lies in understanding its structural biology.


Acute HIV Infection

Acute HIV infection (AHI), also known as primary HIV infection, occurs when an individual first becomes infected with HIV. After the virus enters the human body, there is a period of viral replication that causes the level of HIV in the blood to rise, which greatly increases the risk of future HIV transmission. HIV has two main assets. The first is the mutability of the virus. Reverse transcriptase, the enzyme through which HIV replicates itself, sometimes makes mistakes during reverse transcription. Numerous mistakes accumulate and allow the virus to rapidly mutate over time in response to immunological pressure. Studies have shown that when an immune response is targeted toward a particular HIV amino acid sequence, the virus has the ability to change the targeted sequence and become invisible to immune cells. Secondly, HIV can cause the death of CD4+ (helper) T cells, resulting in progressive dysregulation of the immune system. HIV attacks and infects CD4+ cells, which triggers an immune response that causes a rise in the number of killer T cells. These killer T cells kill the infected CD4+ T cells. Consequently, the HIV viral load begins to decline and the number of CD4 cells begins to recover. However, the virus is not completely eliminated from the body and still remains (Cohen et al., 2011). The detection of AHI is very important for HIV prevention and treatment implementation. However, clinical diagnosis of AHI is extremely difficult because the symptoms that occur during the transition from seronegative to seropositive are often not recognized as an indicator of AHI (Hoenigl et al., 2016). Previous screening programs relied on point-of-care (POC) HIV antibody testing, a technology that allows patients to get tested for HIV and learn their status in one visit in under an hour. Though they give rapid results, POC HIV antibody tests often do not indicate AHI. The Centers for Disease Control and Prevention began addressing this problem by including fourth-generation HIV-1 p24 antigen-based immunoassays in laboratory diagnosis of HIV (Hoenigl et al., 2016). Of the current diagnostic tests for AHI, HIV RNA viral load testing seems to be the most useful, because HIV antibody testing results are generally negative or indeterminate during AHI (Chu & Selwyn, 2010). Many people with acute HIV infection commonly experience a fever, swollen lymph nodes, and joint and muscle aches.



These symptoms can begin just a few days after an individual has been exposed to HIV and can usually last anywhere from two weeks to several months (Gay et al., 2011). However, the signs and symptoms of the infection can feel like those of any other common viral infection, leaving individuals unaware that their illness is actually an acute HIV infection.

Chronic HIV Infection

Chronic HIV infection involves a gradual deterioration of the humoral immune response. More specifically, HIV attacks and depletes antigen-specific CD4 T cells, which are necessary for lasting B cell memory; this depletion results in B cell dysfunction (Lindqvist et al., 2012). The exact molecular mechanisms leading to this dysfunction remain largely unknown. Lindqvist et al. (2012) proposed that B cell differentiation is disrupted by HIV-driven increases in populations of TFH (CD4 T follicular helper) cells, which interact with antigen-specific B cells to instigate the processes of antibody affinity maturation and B cell memory development. Nakayama-Hosoya et al. (2015), on the other hand, found that broad B cell dysfunction during chronic HIV infection could be due to DNA methylation at the interleukin 2 (IL-2) locus in CD4 T cells. This theory is compatible with that of van Grevenynghe et al. (2011), who found that memory B cells survived at lower rates in cases of disrupted IL-2 signaling. They implicated IL-2 signaling disruption because IL-2 typically phosphorylates Foxo3a, a transcription factor associated with proapoptotic genes. This phosphorylation results in the degradation of Foxo3a in the cytoplasm, preventing the transcription of the proapoptotic genes. In cases of methylation at the IL-2 locus, as Nakayama-Hosoya et al. (2015) described, IL-2 is ultimately unable to phosphorylate Foxo3a, allowing the proapoptotic genes to undergo transcription. This, in turn, reduces the survival of memory B cells during chronic HIV infection by increasing rates of apoptosis (van Grevenynghe et al., 2011). Another molecular factor to consider in chronic HIV infection is the SLAMF7 immune cell receptor, which is generally upregulated in cases of chronic immune activation. Unsurprisingly, HIV-infected individuals show elevated levels of SLAMF7+ peripheral blood mononuclear cells (specialized immune cells), and SLAMF7 appears to play a role in their immune activity (O'Connell et al., 2019). Chronic immune activation itself may in part drive mortality in patients living with chronic HIV and could be one reason modern HIV treatments like antiretroviral therapy, while able to significantly extend the lifespan of HIV-positive patients, have failed to bring patient life expectancy fully up to the average for non-infected individuals (Rajasuriar et al., 2013).


Aside from the pathology and symptoms of HIV itself, patients living with chronic HIV face additional challenges. For example, individuals with chronic HIV are especially susceptible to chronic obstructive pulmonary disease (COPD), due in part to severe depletion of lung mucosal CD4 T cells (Popescu et al., 2014). Chronic pain is also incredibly prevalent among people living with HIV, with studies estimating that up to 85% of HIV-infected patients experience chronic pain. While this pain can result from inflammation-caused tissue damage or infection, almost half of reported chronic pain is neuropathic, meaning it results from nervous system dysfunction. This dysfunction might be caused by any number of factors, including the HIV infection itself, secondary pathogens, and medications. Generally, disadvantaged groups such as women, people of low socioeconomic status, and drug users do not receive adequate treatment for chronic pain (Bruce et al., 2017). These treatment disparities are especially concerning since individuals with both chronic HIV and chronic pain tend to report more severe depressive symptoms than individuals coping with either condition alone, possibly due to the internalization of stigmas associated with chronic pain and HIV (Goodin et al., 2018). Various factors influence what a patient with chronic HIV considers most crucial for their well-being. In one study, homeless substance users emphasized the need for a 24-hour hotline to provide motivation and support in the face of extreme isolation and the fear of dying. For homeless ex-offenders, housing concerns took precedence over the health concerns associated with chronic HIV itself. In the same study, women stressed the necessity of women-only support groups in order to avoid unwelcome advances from men. Many interviewees referenced feeling shame due to the stigma surrounding HIV and shared that they had faced rejection from family, friends, and spiritual communities (Sankar & Luborsky, 2003). Overall, disparities between demographic groups can affect how equipped patients are to cope with chronic HIV. In addition to being a matter of physical health, living with HIV can profoundly influence mental, social, and spiritual wellness.

Comorbidities of HIV/AIDS



The comorbidities of HIV can be considered diseases outside the scope of AIDS and its associated illnesses. The average number of HIV-associated comorbidities among HIV patients is 1.1 (Lorenc et al., 2014), and the most common include cardiovascular disease, respiratory diseases, and hepatic diseases. Moreover, psychiatric disorders are extremely prevalent among HIV-positive individuals. Studies show that mortality among HIV-infected individuals is primarily due to liver disease (including hepatitis B and C and antiretroviral toxicity), vascular disease, lung disease, cancer, and violence. Even though these comorbidities can occur by chance, most are due to the infection itself and its risk factors. As HIV severity increases, comorbidity increases as well. Overlapping risk factors can also lead to potential coinfection, causing comorbidity. Because of the effects HIV infection has on the immune system, older HIV-positive individuals have a higher disease burden than those who do not have the infection; this burden is compounded by the natural decline the immune system experiences as one ages. Negative lifestyle factors, such as alcoholism, can also increase the risk for overlapping diseases, and some associated disorders are linked with ethnicity, gender, and socioeconomic status. Recent literature suggests that patients living with HIV should be assessed independently for common medical conditions. The presence of comorbidities necessitates attention to these associated diseases within HIV care, so that healthcare services can respond adequately to patients who have both HIV and a comorbidity. In the event that patients express symptoms of various diseases associated with HIV, it may be possible to diagnose the virus early, which can lead to earlier antiretroviral therapy.

Treatment of HIV/AIDS

Current Treatments for HIV/AIDS

Though not perfectly effective, thanks to antiretroviral therapy, HIV-positive patients can lead relatively ordinary lives with life expectancies near (though still below) average. Antiretroviral therapy can help patients continue working; highly active antiretroviral therapy increases the probability of remaining employed from 58% to 94% (Goldman & Bao, 2004). However, HIV treatment should not be confused with HIV elimination. HIV can remain dormant in cells, forming viral reservoirs that can reactivate should daily treatment cease at any point. Due to this fact, antiretroviral therapy is considered a treatment, not a cure.


Despite this shortcoming, antiretroviral therapy is highly effective and can reduce the amount of HIV RNA in the blood plasma to an undetectable level, which, when maintained for at least six months, practically eliminates the risk of sexually transmitting HIV (National Institute of Allergy and Infectious Diseases [NIAID], 2018). For most individuals, this undetectable level of HIV RNA can be reached within six months of starting antiretroviral therapy. "Blips," in which HIV RNA levels temporarily rise above the undetectable threshold, are common even with daily antiretroviral therapy, and their cause remains unknown. An HIV-negative individual can also take antiretroviral medication as a preventative measure; this is known as pre-exposure prophylaxis (NIAID, 2020). As for the details of antiretroviral therapy itself, there are five approved medication classes. The first class consists of HIV entry inhibitors, which prevent HIV from entering cells by impeding conformational changes in receptors, acting as antagonists by binding to the HIV receptors, or inhibiting fusion between HIV and host membranes. The second and third classes are nucleoside and non-nucleoside reverse transcriptase inhibitors, which both prevent the enzyme reverse transcriptase from successfully completing DNA polymerization, thus stopping the reverse transcription process that would ordinarily convert the HIV single-stranded RNA into double-stranded DNA. The fourth class contains integrase strand transfer inhibitors, which prevent the HIV enzyme integrase from joining the HIV DNA resulting from reverse transcription with the host DNA. The final class is HIV protease inhibitors. These medications limit HIV infection by inhibiting the HIV protease enzyme, which is essential to HIV replication (Spach, 2021). Demographic disparities are reflected in the distribution of antiretroviral therapy. For instance, women tend to receive later diagnoses despite having more clinical symptoms than men with comparable levels of HIV RNA in their plasma. Generally, HIV-positive women access antiretroviral therapy later and die sooner than male HIV patients. Furthermore, HIV-positive African Americans are less likely to access antiretroviral therapy than white patients (Sankar & Luborsky, 2003). Because chronic HIV is often accompanied by conditions of chronic pain, treating HIV can also involve pain management strategies.
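The five medication classes described above lend themselves to a compact lookup table. The sketch below is purely illustrative and not drawn from any cited source; ritonavir, cabotegravir, and rilpivirine appear elsewhere in this article, while maraviroc and tenofovir are well-known class members added here only as examples:

```python
# Illustrative summary of the five approved antiretroviral classes described
# above. Example drugs marked "from article" appear elsewhere in this piece;
# the others are commonly cited class members added here for illustration.

ARV_CLASSES = {
    "entry inhibitor": {
        "target": "viral attachment and fusion with the host cell",
        "example": "maraviroc (added for illustration)",
    },
    "nucleoside reverse transcriptase inhibitor (NRTI)": {
        "target": "DNA polymerization by reverse transcriptase",
        "example": "tenofovir (added for illustration)",
    },
    "non-nucleoside reverse transcriptase inhibitor (NNRTI)": {
        "target": "reverse transcriptase activity",
        "example": "rilpivirine (from article)",
    },
    "integrase strand transfer inhibitor (INSTI)": {
        "target": "integration of viral DNA into the host genome",
        "example": "cabotegravir (from article)",
    },
    "protease inhibitor (PI)": {
        "target": "cleavage of viral precursor proteins",
        "example": "ritonavir (from article)",
    },
}

for drug_class, info in ARV_CLASSES.items():
    print(f"{drug_class}: blocks {info['target']} (e.g., {info['example']})")
```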



Image 3: HIV Protease bound with ritonavir. Image Source: Wikimedia Commons

Depending on the patient and their pain severity, chronic pain might be treated using cognitive behavioral therapy, yoga, physical or occupational therapy, hypnosis, acupuncture, medical cannabis, or medications (Bruce et al., 2017). However, HIV treatment can also extend beyond physical needs. For example, antiretroviral therapy in Mombasa, Kenya and in other areas of sub-Saharan Africa emphasizes self-care in addition to the use of medications, particularly highlighting the importance of managing stress (de Klerk & Moyer, 2017). Treatment for HIV/AIDS must incorporate options to address related mental health conditions such as depression in order to truly promote overall patient wellness.

Long-Acting Drugs

As of now, almost all antiretroviral treatments for HIV involve taking medication at least once a day. This is not only exhausting for people living with HIV, but it can also be difficult to maintain: people living with the virus must remain extremely disciplined. Studies have shown that up to 55% of people would prefer not to take medication every day if they had the option. These studies also found that 58% of people view taking daily medications as a constant reminder of the virus, and many also revealed anxiety about the fact that taking the medication could reveal to others that they are infected with HIV ("HIV treatment: Are long-acting therapies the future?", 2021). Thus, requiring people living with HIV/AIDS to take medication every day for the rest of their lives is both risky and burdensome. There is hope, however, for a future without daily medication. Pharmaceutical companies have shifted their attention to creating long-acting therapies that would eliminate the need for daily medications.


Long-acting antiretroviral (LA ART) therapies are injections that scientists hope can be given to HIV/AIDS patients every one to two months. The hope is that these therapies will reduce daily pill burdens for patients, which could increase compliance with treatment regimens. Recently, the FDA approved the first LA ART: cabotegravir and rilpivirine combination therapy. This treatment consists of six intramuscular injections per year (Thoueille et al., 2021). Cabotegravir is an HIV integrase inhibitor; this class of drugs works by blocking the activity of HIV integrase, which effectively stops HIV from multiplying in the bloodstream. Rilpivirine is a non-nucleoside reverse transcriptase inhibitor (NNRTI); NNRTIs bind to the HIV-1 reverse transcriptase and block its activity, which prevents HIV from replicating. Together, the drugs work to decrease the amount of HIV in a patient's blood (Cabotegravir and Rilpivirine Injections, n.d.). Other companies are looking into developing more LA ARTs in the future.

"Though not perfectly effective, thanks to antiretroviral therapy, HIV-positive patients can lead relatively ordinary lives with life expectancies near (though still below) average."

Broadly Neutralizing Antibodies

Broadly neutralizing HIV-1 antibodies (bNAbs) are a nascent approach to treating individuals infected with HIV, as conventional antiretroviral therapy only slows down replication of the virus in the body and is not curative (Liu et al., 2020). bNAbs have shown potential in treating and even eliminating HIV infection, with the additional advantages of a strong safety profile and activation of the host immune response.



bNAbs function by binding to specific sequences, called epitopes, of the viral surface envelope proteins that HIV uses to bind to host cell receptors and gain entry into host cells (Kumar et al., 2018). By binding to these proteins that facilitate cell entry, bNAbs can stop infection at the very beginning, protecting healthy patients from potential HIV infection and stopping viral spread in infected patients. bNAbs are advantageous because, unlike traditional neutralizing antibodies, they bind to epitopes conserved among a diverse collection of genotypically different HIV viruses, ensuring that the mutagenic nature of the virus does not affect the long-term efficacy of the treatment (Rusert et al., 2016).

"With better understanding of its structural biology and impacts on the immune system, in combination with advances made in modern day medicine, new treatments may prove to be effective in treating the disease."

While bNAbs are generally ineffective at curing existing HIV infection, recent studies have demonstrated that they can destroy latent reservoirs of HIV by recruiting effector cells. For example, next-generation bNAbs like VRC01 and PGT121 can recruit immune cells to block HIV replication in infected cells in the viral reservoir (Halper-Stromberg & Nussenzweig, 2016). Others, like 3BNC117, can kill latently infected cells because they bind to epitopes of viral envelope proteins expressed on the surface of the cell membrane of infected host cells, triggering the recruitment of natural killer cells. By neutralizing these viral reservoirs, bNAb treatment in combination with other drugs can significantly reduce viral load and the frequency and probability of viral rebound (Chun et al., 2014). While cell entry is an important facet to address, HIV also spreads robustly through direct cell-to-cell transmission. This perhaps explains why some bNAbs tested in vitro neutralized the majority of free viruses but, in vivo, had only a small or moderate effect in suppressing viremia (Malbec et al., 2013). bNAbs also hold much potential to be modified and improved in terms of efficacy, longevity, and safety. For example, modifying the Fc region of bNAbs (the portion of the antibody that engages immune effector cells) has been shown to prolong the antibody's persistence in human serum. Bispecific and trispecific bNAbs have recently been at the forefront of research for their ability to significantly arrest viral transfer and infection, because they can bind both the HIV envelope protein and surface molecules of highly permissive and vulnerable cellular targets of HIV (Montefiori, 2016). Overall, these broadly neutralizing antibodies are paving the way for a new avenue of cutting-edge research as more potent and broad monoclonal antibodies are harvested from infected individuals.


These antibodies have the potential to both halt HIV spread in the body and destroy active reservoirs of HIV, serving as the basis of an expansive new therapeutic field of virological and epidemiological research.

Conclusion

HIV/AIDS has been a devastating American epidemic for the past several decades. With better understanding of its structural biology and impacts on the immune system, in combination with advances made in modern day medicine, new treatments may prove to be effective in treating the disease. Despite the progress made, there are still limitations in all of the treatments available today that prevent patients with HIV/AIDS from living their lives completely normally. Hopefully, with more work done in the future, a true cure for HIV/AIDS will be found.

References

10 Things to Know About HIV Suppression | NIH: National Institute of Allergy and Infectious Diseases. (n.d.). Retrieved July 17, 2021, from https://www.niaid.nih.gov/diseases-conditions/10-things-know-about-hiv-suppression

Abbas, A. K., Lichtman, A. H., Pillai, S., Baker, D. L., & Baker, A. (2020a). Antigen Capture and Presentation to Lymphocytes. In Basic immunology: Functions and disorders of the immune system (6th edition, pp. 51–73). Elsevier.

Abbas, A. K., Lichtman, A. H., Pillai, S., Baker, D. L., & Baker, A. (2020b). Effector Mechanisms of Humoral Immunity. In Basic immunology: Functions and disorders of the immune system (6th edition, pp. 158–177). Elsevier.

Abbas, A. K., Lichtman, A. H., Pillai, S., Baker, D. L., & Baker, A. (2020c). Effector Mechanisms of T-Cell Mediated Immunity. In Basic immunology: Functions and disorders of the immune system (6th edition, pp. 119–137). Elsevier.



Abbas, A. K., Lichtman, A. H., Pillai, S., Baker, D. L., & Baker, A. (2020d). Humoral Immune Responses. In Basic immunology: Functions and disorders of the immune system (6th edition, pp. 137–158). Elsevier.

Abbas, A. K., Lichtman, A. H., Pillai, S., Baker, D. L., & Baker, A. (2020e). Innate Immunity. In Basic immunology: Functions and disorders of the immune system (6th edition, pp. 23–51). Elsevier.

Abbas, A. K., Lichtman, A. H., Pillai, S., Baker, D. L., & Baker, A. (2020f). T Cell-Mediated Immunity. In Basic immunology: Functions and disorders of the immune system (6th edition, pp. 96–119). Elsevier.

About HIV/AIDS | HIV Basics | HIV/AIDS | CDC. (2021, June 1). https://www.cdc.gov/hiv/basics/whatishiv.html

Abuse, N. I. on D. (2020, July 30). Drug Use and Viral Infections (HIV, Hepatitis) DrugFacts. National Institute on Drug Abuse. https://www.drugabuse.gov/publications/drugfacts/drug-use-viral-infections-hiv-hepatitis

Abuse, N. I. on D. (n.d.). Who Is at Risk for HIV Infection and Which Populations Are Most Affected? National Institute on Drug Abuse. https://www.drugabuse.gov/publications/research-reports/hivaids/who-risk-hiv-infection-which-populations-are-most-affected

Agerberth, B., & Guðmundsson, G. H. (2006). Host Antimicrobial Defence Peptides in Human Disease. In W. M. Shafer (Ed.), Antimicrobial Peptides and Human Disease (pp. 67–90). Springer. https://doi.org/10.1007/3-540-29916-5_3

Ayala, G., & Spieldenner, A. (2021). HIV Is a Story First Written on the Bodies of Gay and Bisexual Men. American Journal of Public Health, 111(7), 1240–1242. https://doi.org/10.2105/AJPH.2021.306348

Boyton, R. J., & Openshaw, P. J. (2002). Pulmonary defences to acute respiratory infection. British Medical Bulletin, 61(1), 1–12. https://doi.org/10.1093/bmb/61.1.1

Broadly neutralizing antibodies that inhibit HIV-1 cell to cell transmission | Journal of Experimental Medicine | Rockefeller University Press. (n.d.). Retrieved July 12, 2021, from https://rupress.org/jem/article/210/13/2813/41501/Broadly-neutralizing-antibodies-that-inhibit-HIV-1


Bruce, R. D., Merlin, J., Lum, P. J., Ahmed, E., Alexander, C., Corbett, A. H., Foley, K., Leonard, K., Treisman, G. J., & Selwyn, P. (2017). 2017 HIVMA of IDSA Clinical Practice Guideline for the Management of Chronic Pain in Patients Living With HIV. Clinical Infectious Diseases, 65(10), e1–e37. https://doi.org/10.1093/cid/cix636

Buffalo, C. Z., Iwamoto, Y., Hurley, J. H., & Ren, X. (2019). How HIV Nef Proteins Hijack Membrane Traffic To Promote Infection. Journal of Virology, 93(24), e01322-19. https://doi.org/10.1128/JVI.01322-19

Cabotegravir and Rilpivirine Injections: MedlinePlus Drug Information. (n.d.). Retrieved November 14, 2021, from https://medlineplus.gov/druginfo/meds/a621009.html

Cen, S., Khorchid, A., Javanbakht, H., Gabor, J., Stello, T., Shiba, K., Musier-Forsyth, K., & Kleiman, L. (2001). Incorporation of lysyl-tRNA synthetase into human immunodeficiency virus type 1. Journal of Virology, 75(11), 5043–5048. https://doi.org/10.1128/JVI.75.11.5043-5048.2001

Chinen, J., & Shearer, W. T. (2002). Molecular virology and immunology of HIV infection. Journal of Allergy and Clinical Immunology, 110(2), 189–198. https://doi.org/10.1067/mai.2002.126226

Chu, C., & Selwyn, P. A. (2010). Diagnosis and initial management of acute HIV infection. American Family Physician, 81(10), 1239–1244.

Chun, T.-W., Murray, D., Justement, J. S., Blazkova, J., Hallahan, C. W., Fankuchen, O., Gittens, K., Benko, E., Kovacs, C., Moir, S., & Fauci, A. S. (2014). Broadly neutralizing antibodies suppress HIV in the persistent viral reservoir. Proceedings of the National Academy of Sciences, 111(36), 13151–13156.

Cohen, M. S., Shaw, G. M., McMichael, A. J., & Haynes, B. F. (2011). Acute HIV-1 Infection. The New England Journal of Medicine, 364(20), 1943–1954. https://doi.org/10.1056/NEJMra1011874



Core Concepts—Antiretroviral Medications and Initial Therapy—Antiretroviral Therapy—National HIV Curriculum. (n.d.). Retrieved July 17, 2021, from https://www.hiv.uw.edu/go/antiretroviral-therapy/general-information/core-concept/all

Economically Disadvantaged | HIV by Group | HIV/AIDS | CDC. (2019, December 11). https://www.cdc.gov/hiv/group/poverty.html

Emerman, M., & Malim, M. H. (1998). HIV-1 Regulatory/Accessory Genes: Keys to Unraveling Viral and Host Cell Biology. Science, 280(5371), 1880–1884. https://doi.org/10.1126/science.280.5371.1880

Engelman, A., & Cherepanov, P. (2012). The structural biology of HIV-1: Mechanistic and therapeutic insights. Nature Reviews Microbiology, 10(4), 279–290. https://doi.org/10.1038/nrmicro2747

Goodin, B. R., Owens, M. A., White, D. M., Strath, L. J., Gonzalez, C., Rainey, R. L., Okunbor, J. I., Heath, S. L., Turan, J. M., & Merlin, J. S. (2018). Intersectional health-related stigma in persons living with HIV and chronic pain: Implications for depressive symptoms. AIDS Care, 30(sup2), 66–73. https://doi.org/10.1080/09540121.2018.1468012

Greene, W. C. (2007). A history of AIDS: Looking back to see ahead. European Journal of Immunology, 37(S1), S94–S102. https://doi.org/10.1002/eji.200737441

HIV and Substance Use | HIV Transmission | HIV Basics | HIV/AIDS | CDC. (2021, April 21). https://www.cdc.gov/hiv/basics/hiv-transmission/substance-use.html

HIV in the United States and Dependent Areas | Statistics Overview | Statistics Center | HIV/AIDS | CDC. (2021, August 9). https://www.cdc.gov/hiv/statistics/overview/ataglance.html

Fanales-Belasio, E., Raimondo, M., Suligoi, B., & Buttò, S. (2010). HIV virology and pathogenetic mechanisms of infection: A brief overview. Annali Dell'Istituto Superiore Di Sanita, 46(1), 5–14. https://doi.org/10.4415/ANN_10_01_02

HIV treatment: Are long-acting therapies the future? (n.d.). Retrieved December 6, 2021, from https://www.pharmaceutical-technology.com/features/hiv-treatment-long-acting-therapies-future/

Frasca, K., Castillo-Mancilla, J., McNulty, M. C., Connors, S., Sweitzer, E., Zimmer, S., & Madinger, N. (2019). A Mixed Methods Evaluation of an Inclusive Sexual History Taking and HIV Prevention Curriculum for Trainees. Journal of General Internal Medicine, 34(7), 1279–1288. https://doi.org/10.1007/s11606-019-04958-z

HIV Treatment, the Viral Reservoir, and HIV DNA | NIH: National Institute of Allergy and Infectious Diseases. (n.d.). Retrieved July 17, 2021, from https://www.niaid.nih.gov/diseases-conditions/hiv-treatment-viral-reservoir-hiv-dna

Gelderblom, H. R., Özel, M., & Pauli, G. (1989). Morphogenesis and morphology of HIV structure-function relations. Archives of Virology, 106(1–2), 1–13. https://doi.org/10.1007/BF01311033

Goldman, D. P., & Bao, Y. (2004). Effective HIV Treatment and the Employment of HIV+ Adults. Health Services Research, 39(6p1), 1691–1712. https://doi.org/10.1111/j.1475-6773.2004.00313.x

Gonzalez, J. S., Hendriksen, E. S., Collins, E. M., Durán, R. E., & Safren, S. A. (2009). Latinos and HIV/AIDS: Examining Factors Related to Disparity and Identifying Opportunities for Psychosocial Intervention Research. AIDS and Behavior, 13(3), 582–602. https://doi.org/10.1007/s10461-008-9402-4

HIV/AIDS and Socioeconomic Status. (n.d.). American Psychological Association. Retrieved September 1, 2021, from https://www.apa.org/pi/ses/resources/publications/hiv-aids

Hoenigl, M., Green, N., Camacho, M., Gianella, S., Mehta, S. R., Smith, D. M., & Little, S. J. (2016). Signs or Symptoms of Acute HIV Infection in a Cohort Undergoing Community-Based Screening. Emerging Infectious Diseases, 22(3), 532–534. https://doi.org/10.3201/eid2203.151607

Human Immunodeficiency Virus (HIV). (2016). Transfusion Medicine and Hemotherapy, 43(3), 203–222. https://doi.org/10.1159/000445852

Janeway, C. A. Jr., Shlomchik, M. J., Walport, M., & Travers, P. (2001). T Cell Mediated Immunity. In C. Janeway (Ed.), Immunobiology: The immune system in health and disease (5th ed., pp. 295–341). Garland Publishing.

Janeway, C. A. Jr., Travers, P., Walport, M., & Shlomchik, M. J. (2001a). Adaptive Immunity to Infection. In C. Janeway (Ed.), Immunobiology: The immune system in health and disease (5th ed., pp. 381–425). Garland Publishing.

Janeway, C. A. Jr., Travers, P., Walport, M., & Shlomchik, M. J. (2001b). Innate Immunity. In C. Janeway (Ed.), Immunobiology (5th ed., pp. 35–93). Garland Science.

Janeway, C. A. Jr., Travers, P., Walport, M., & Shlomchik, M. J. (2001c). The Humoral Immune Response. In C. Janeway (Ed.), Immunobiology: The immune system in health and disease (5th ed., pp. 341–381). Garland Publishing.

JCI - Towards HIV-1 remission: Potential roles for broadly neutralizing antibodies. (n.d.). Retrieved July 12, 2021, from https://www.jci.org/articles/view/80561

Kim, B., & Aronowitz, T. (2019). Invisible Minority: HIV Prevention Health Policy for the Asian American Population. Policy, Politics, & Nursing Practice, 20(1), 41–49. https://doi.org/10.1177/1527154419828843

Klerk, J. de, & Moyer, E. (2017). "A Body Like a Baby": Social Self-Care among Older People with Chronic HIV in Mombasa. Medical Anthropology, 36(4), 305–318. https://doi.org/10.1080/01459740.2016.1235573

Kumar, R., Qureshi, H., Deshpande, S., & Bhattacharya, J. (2018). Broadly neutralizing antibodies in HIV-1 treatment and prevention. Therapeutic Advances in Vaccines and Immunotherapy, 6(4), 61–68. https://doi.org/10.1177/2515135518800689

Kwong, P. D., Wyatt, R., Robinson, J., Sweet, R. W., Sodroski, J., & Hendrickson, W. A. (1998). Structure of an HIV gp120 envelope glycoprotein in complex with the CD4 receptor and a neutralizing human antibody. Nature, 393(6686), 648–659. https://doi.org/10.1038/31405

Lindqvist, M., van Lunzen, J., Soghoian, D. Z., Kuhl, B. D., Ranasinghe, S., Kranias, G., Flanders, M. D., Cutler, S., Yudanin, N., Muller, M. I., Davis, I., Farber, D., Hartjen, P., Haag, F., Alter, G., zur Wiesch, J. S., & Streeck, H. (2012). Expansion of HIV-specific T follicular helper cells in chronic HIV infection. Journal of Clinical Investigation, 122(9), 3271–3280.


Liu, Y., Cao, W., Sun, M., & Li, T. (2020). Broadly neutralizing antibodies for HIV-1: Efficacies, challenges and opportunities. Emerging Microbes & Infections, 9(1), 194–206. https://doi.org/10.1080/22221751.2020.1713707

Lorenc, A., Ananthavarathan, P., Lorigan, J., Banarsee, R., Jowata, M., & Brook, G. (2014). The prevalence of comorbidities among people living with HIV in Brent: A diverse London Borough. London Journal of Primary Care, 6(4), 84–90. https://doi.org/10.1080/17571472.2014.11493422

Miller, R. H., & Sarver, N. (1997). HIV accessory proteins as therapeutic targets. Nature Medicine, 3(4), 389–394. https://doi.org/10.1038/nm0497-389

Montefiori, D. C. (2016). Bispecific Antibodies Against HIV. Cell, 165(7), 1563–1564. https://doi.org/10.1016/j.cell.2016.06.004

Moore, R. D. (2011). Epidemiology of HIV Infection in the United States: Implications for Linkage to Care. Clinical Infectious Diseases: An Official Publication of the Infectious Diseases Society of America, 52(Suppl 2), S208–S213. https://doi.org/10.1093/cid/ciq044

Nakayama-Hosoya, K., Ishida, T., Youngblood, B., Nakamura, H., Hosoya, N., Koga, M., Koibuchi, T., Iwamoto, A., & Kawana-Tachikawa, A. (2015). Epigenetic Repression of Interleukin 2 Expression in Senescent CD4+ T Cells During Chronic HIV Type 1 Infection. The Journal of Infectious Diseases, 211(1), 28–39. https://doi.org/10.1093/infdis/jiu376

Negin, J., Aspin, C., Gadsden, T., & Reading, C. (2015). HIV Among Indigenous peoples: A Review of the Literature on HIV-Related Behaviour Since the Beginning of the Epidemic. AIDS and Behavior, 19(9), 1720–1734. https://doi.org/10.1007/s10461-015-1023-0

Nisole, S., & Saïb, A. (2004). Early steps of retrovirus replicative cycle. Retrovirology, 1, 9. https://doi.org/10.1186/1742-4690-1-9

Norman, L. R., Basso, M., Kumar, A., & Malow, R. (2009). Neuropsychological consequences of HIV and substance abuse: A literature review and implications for treatment and future research. Current Drug Abuse Reviews, 2(2), 143–156. https://doi.org/10.2174/1874473710902020143



O'Connell, P., Pepelyayeva, Y., Blake, M. K., Hyslop, S., Crawford, R. B., Rizzo, M. D., Pereira-Hicks, C., Godbehere, S., Dale, L., Gulick, P., Kaminski, N. E., Amalfitano, A., & Aldhamen, Y. A. (2019). SLAMF7 Is a Critical Negative Regulator of IFN-α–Mediated CXCL10 Production in Chronic HIV Infection. The Journal of Immunology, 202(1), 228–238. https://doi.org/10.4049/jimmunol.1800847

Parada, C. A., & Roeder, R. G. (1996). Enhanced processivity of RNA polymerase II triggered by Tat-induced phosphorylation of its carboxy-terminal domain. Nature, 384(6607), 375–378. https://doi.org/10.1038/384375a0

Popescu, I., Drummond, M. B., Gama, L., Coon, T., Merlo, C. A., Wise, R. A., Clements, J. E., Kirk, G. D., & McDyer, J. F. (2014). Activation-induced Cell Death Drives Profound Lung CD4+ T-Cell Depletion in HIV-associated Chronic Obstructive Pulmonary Disease. American Journal of Respiratory and Critical Care Medicine, 190(7), 744–755.

Rajasuriar, R., Khoury, G., Kamarulzaman, A., French, M. A., Cameron, P. U., & Lewin, S. R. (2013). Persistent immune activation in chronic HIV infection: Do any interventions work? AIDS, 27(8), 1199–1208. https://doi.org/10.1097/QAD.0b013e32835ecb8b

Rusert, P., Kouyos, R. D., Kadelka, C., Ebner, H., Schanz, M., Huber, M., Braun, D. L., Hozé, N., Scherrer, A., Magnus, C., Weber, J., Uhr, T., Cippa, V., Thorball, C. W., Kuster, H., Cavassini, M., Bernasconi, E., Hoffmann, M., Calmy, A., … Trkola, A. (2016). Determinants of HIV-1 broadly neutralizing antibody induction. Nature Medicine, 22(11), 1260–1267. https://doi.org/10.1038/nm.4187

Sankar, A., & Luborsky, M. (2003). Developing a community-based definition of needs for persons living with chronic HIV. Human Organization, 62(2), 153–165. https://doi.org/10.17730/humo.62.2.695j11t5pmpmljr2

Simon, F., Mauclère, P., Roques, P., Loussert-Ajaka, I., Müller-Trutwin, M. C., Saragosti, S., Georges-Courbot, M. C., Barré-Sinoussi, F., & Brun-Vézinet, F. (1998). Identification of a new human immunodeficiency virus type 1 distinct from group M and group O. Nature Medicine, 4(9), 1032–1037. https://doi.org/10.1038/2017


Singh, G. K., Azuine, R. E., & Siahpush, M. (2013). Widening Socioeconomic, Racial, and Geographic Disparities in HIV/AIDS Mortality in the United States, 1987–2011. Advances in Preventive Medicine, 2013, e657961. https://doi.org/10.1155/2013/657961

Smith, J. L. (2003). The Role of Gastric Acid in Preventing Foodborne Disease and How Bacteria Overcome Acid Conditions. Journal of Food Protection, 66(7), 1292–1303. https://doi.org/10.4315/0362-028X-66.7.1292

Tantillo, C., Ding, J., Jacobo-Molina, A., Nanni, R. G., Boyer, P. L., Hughes, S. H., Pauwels, R., Andries, K., Janssen, P. A. J., & Arnold, E. (1994). Locations of Anti-AIDS Drug Binding Sites and Resistance Mutations in the Three-dimensional Structure of HIV-1 Reverse Transcriptase. Journal of Molecular Biology, 243(3), 369–387. https://doi.org/10.1006/jmbi.1994.1665

The Structural Biology of HIV. (n.d.). Retrieved November 14, 2021, from https://cdn.rcsb.org/pdb101/learn/resources/structural-biology-of-hiv/index.html

Thoueille, P., Choong, E., Cavassini, M., Buclin, T., & Decosterd, L. A. (2021). Long-acting antiretrovirals: A new era for the management and prevention of HIV infection. Journal of Antimicrobial Chemotherapy, dkab324. https://doi.org/10.1093/jac/dkab324

Tran, B. X., Hwang, J., Nguyen, L. H., Nguyen, A. T., Latkin, N. R. K., Tran, N. K., Thuc, V. T. M., Nguyen, H. L. T., Phan, H. T. T., Le, H. T., Tran, T. D., & Latkin, C. A. (2016). Impact of Socioeconomic Inequality on Access, Adherence, and Outcomes of Antiretroviral Treatment Services for People Living with HIV/AIDS in Vietnam. PLOS ONE, 11(12), e0168687. https://doi.org/10.1371/journal.pone.0168687

Tritel, M., & Resh, M. D. (2001). The late stage of human immunodeficiency virus type 1 assembly is an energy-dependent process. Journal of Virology, 75(12), 5473–5481. https://doi.org/10.1128/JVI.75.12.5473-5481.2001

van Grevenynghe, J., Cubas, R. A., Noto, A., DaFonseca, S., He, Z., Peretz, Y., Filali-Mouhim, A., Dupuy, F. P., Procopio, F. A., Chomont, N., Balderas, R. S., Said, E. A., Boulassel, M.-R., Tremblay, C. L., Routy, J.-P., Sékaly, R.-P., & Haddad, E. K. (2011). Loss of memory B cells during chronic HIV infection is driven by Foxo3a- and TRAIL-mediated apoptosis. Journal of Clinical Investigation, 121(10), 3877–3888.



Weller, I. V., & Williams, I. G. (2001). ABC of AIDS: Antiretroviral drugs. BMJ (Clinical Research Ed.), 322(7299), 1410–1412. https://doi.org/10.1136/bmj.322.7299.1410

Wheeler, D. P., & Dodd, S.-J. (2011). LGBTQ Capacity Building in Health Care Systems: A Social Work Imperative. Health & Social Work, 36(4), 307–309.

Wilson, C., & Cariola, L. A. (2020). LGBTQI+ Youth and Mental Health: A Systematic Review of Qualitative Research. Adolescent Research Review, 5(2), 187–211. https://doi.org/10.1007/s40894-019-00118-w

Wyatt, R., Kwong, P. D., Desjardins, E., Sweet, R. W., Robinson, J., Hendrickson, W. A., & Sodroski, J. G. (1998). The antigenic structure of the HIV gp120 envelope glycoprotein. Nature, 393(6686), 705–711. https://doi.org/10.1038/31514

Zhang, S., Feng, Y., Narayan, O., & Zhao, L. J. (2001). Cytoplasmic retention of HIV-1 regulatory protein Vpr by protein-protein interaction with a novel human cytoplasmic protein VprBP. Gene, 263(1–2), 131–140. https://doi.org/10.1016/s0378-1119(00)00583-7




The Psychedelic Renaissance

STAFF WRITERS: CAROLINE CONWAY '24, BENJAMIN BARRIS '25, ZACHARY OJAKLI '25, ANDREA CAVANAGH '24, EVAN BLOCH '24, ETHAN LITMANS '24, SHAWN YOON '25, NATHAN THOMPSON '25, SHUXUAN (ELIZABETH) LI '25, ELAINE PU '25, DAVID VARGAS '23, BROOKLYN SCHROEDER '22

TEAM LEADS: DEV KAPADIA '23, DANIEL CHO '22

Cover Image: An image of psilocybin mushrooms, also known as magic mushrooms; these are known to have hallucinogenic effects on those who ingest them and have been observed to have clinical benefits. Image Source: Wikimedia Commons

Introduction

The usage of psychedelics has garnered wider acceptance in the eyes of the public recently, especially in the aftermath of the COVID-19 pandemic. With the increase in mental-health diagnoses during quarantine and growing positive support behind psychedelics, clinicians have looked to psychedelics as a potentially effective means of treating patients (Nochaiwong et al., 2021). Physicians have also recently found promising effects of psilocybin (the compound in psychedelic mushrooms, which have become the face of psychedelic culture) and other psychedelic drugs as a means of treating diabetes and heart disease. Psychedelics are a class of drug that have psychoactive effects on users, including changes in perception, mood, and cognitive processes. Some psychedelics can be found in nature, like psilocybin or ayahuasca, while others are made synthetically in labs, like lysergic acid diethylamide (LSD). In the US, the height of psychedelics came in the 1960s during the counterculture movement, though this period was also the reason many of these drugs were pushed to the top of regulators' hit lists at the onset of the war on drugs. For many non-Western cultures, psychedelic plants have been used as sacramental tools for thousands of years, shaping the course of many established religions.


However, these drugs have demonstrated clinical benefits and a much-improved safety profile relative to other illicit drugs. Some psychedelics are now considered only Schedule III drugs, meaning they carry moderate to low risk of physical or psychological dependence. As such, biotech companies have started to experiment not only with psychedelics themselves but also with many of the pathways that the drugs target in the body, fueling what is becoming a Psychedelic Renaissance (Cohen, 2021). For example, in mental health treatment, recent studies from Alan Davis et al. of Johns Hopkins University found that psilocybin-assisted therapy was up to two times more effective than psychotherapy and four times more effective than traditionally prescribed antidepressants. In a randomized clinical trial of 24 participants, over 71% showed improvement, with clinically significant decreases in mental health symptoms as evaluated by blinded clinician-rated depression severity and self-reported secondary outcomes (Davis et al., 2020). However, the benefits of psychedelics may extend further than just mental health.


To test the effects of psychedelics on disease reduction, Otto Simonsson and his team at the University of Oxford studied the effects of psychedelics on cardiometabolic health, analyzing data from 375,000 participants in the National Survey on Drug Use and Health. They found that users of psychedelics showed a 2.2% reduced chance of heart disease and a 3.75% reduced chance of diabetes, indicating that psychedelics could potentially serve as an effective means of treatment for patients suffering from cardiometabolic diseases as well (Simonsson, 2021). Several legal obstacles, however, stand in the way of psychedelics becoming a mainstream medicinal practice. Despite the FDA listing psilocybin as a "breakthrough therapy" in 2019, as of 2021 the U.S. Congress has not yet made funds available for research dedicated to psychedelic usage. Currently, only Oregon is pursuing the administration of psychedelic services, with licensing by the Oregon Health Authority guided by the Oregon Psilocybin Advisory Board (Marks & Cohen, 2021). With this in mind, we aim to analyze the history, science, cultural context, and ethics surrounding psychedelics.

History of Psychedelics

Today, many people might recoil at the idea of using psychedelics medically. In the words of Sessa (2007), psychedelics have been "demonised in the West" since the 1960s. However, the use of psychedelics is nothing new. Psychedelics have an especially well-documented history in religious contexts.

For instance, early Sanskrit texts reference an Indian fermented psychoactive juice called "soma," and records indicate that the Athenian philosopher Plato consumed hallucinogenic fungus during the Eleusinian ceremonies, secret religious rites tied to the cult of Demeter and Persephone, in the 5th century BCE (Sessa, 2007).

Image 1: Molecular structure of lysergic acid diethylamide (LSD), the drug that thrust psychedelics into the spotlight of public opinion. This drug was one of the most frequently used recreational drugs among the youth of the 1960s counterculture movement. Image Source: Wikimedia Commons

Psychedelics were thrust into the Western medicinal spotlight when the psychoactive effects of lysergic acid diethylamide (LSD) were discovered in 1943. Psychologists and psychiatrists began to investigate LSD's potential therapeutic effects in the treatment of mood disorders and alcoholism. Due to this surge in psychedelic-driven research throughout the 1950s, estimates suggest that tens of thousands of people were treated with psychedelics over a 15-year period (Gardner et al., 2019). This was evidenced in the Canadian province of Saskatchewan, where LSD came to be considered the default treatment for alcoholism thanks to the research of Humphry Osmond and Abram Hoffer, among others, during the 1950s. In attempting a biochemical treatment for alcoholism, Osmond hoped to combat some of its associated stigmas by providing evidence that it was a disease rather than a matter of individual character. Interestingly, Osmond and Hoffer were more interested in the subjective experience induced by LSD than in the chemical effects of the drug itself. Specifically, they were fascinated by how similar reports of LSD experiences sounded to descriptions of delirium tremens: alcoholic experiences of "hitting bottom" that were fatal 10% of the time but could often otherwise lead to a turning point in an individual's struggle with alcoholism. Osmond and Hoffer hypothesized that by mimicking the experience of delirium tremens with LSD, they might be able to help patients reach turning points in their disease without risking their lives. Overall, they found that one-time treatment with LSD aided in recovery from alcoholism roughly 50% of the time (Dyck, 2006). More recent meta-analyses reviewing studies done in the 1950s and 1960s on LSD as an alcoholism treatment support the claim that LSD can have a positive impact on patients. Meta-analyses covering research from 1949 to 1973 on psychedelic treatments of mood disorders have likewise found that, of 423 individuals across 19 studies, 79% of patients showed clinical improvement. However, many such psychedelic studies were broadly criticized for failing to adhere to conventional empirical standards, like the inclusion of control conditions (Carhart-Harris & Goodwin, 2017).


"Psychedelics are a class of drug that have psychoactive effects on users, including changes in perception, mood, and cognitive processes"

It is worth noting that many researchers investigated the effects of psychedelic substances not only to test treatments for various clinical conditions but also to study the general limits of the human mind, since drugs like LSD were thought to remove mental barriers and common predispositions, enabling people to think in new ways (Elcock, 2021). Perhaps this curiosity was partially responsible for the surge of recreational LSD use in the 1960s. Regardless of the origins of recreational use, it had dire consequences for the position of psychedelic treatments in the medical community. By 1965, psychedelics were prohibited within the United States, and the 1970 Controlled Substances Act placed both LSD and psilocybin in the most restrictive category of drugs. Similar changes followed abroad, and psychedelic-driven studies ceased (Gardner et al., 2019).

"Psychedelics are a class of drug that have psychoactive effects on users, including changes in perception, mood, and cognitive processes."

However, the psychedelic story in medicine is far from over. The 1990s and 2000s witnessed something of a renaissance in psychedelic research on healthy individuals and those with mood disorders or addictions. With shifts in modern attitudes about other controlled substances, like cannabis, and about the United States' war on drugs, medicinal psychedelics are receiving renewed attention (Gardner et al., 2019). Sessa (2007) notes that this is especially true considering the development of the psychedelic substance MDMA (3,4-methylenedioxymethamphetamine). While the legal status of psychedelics remains a barrier to widespread psychedelic-based treatment, the tide seems to be turning in favor of psychedelic-driven medical research.

Cultural Context of Psychedelics

Before psychedelics were adopted into modern Western societies, Indigenous cultures used psychedelics for both spiritual and health reasons, as the plants from which certain psychedelics are derived were thought to possess sacred healing properties. Many of these cultures continue to use psychedelics today. As aforementioned, South American Indigenous societies use ayahuasca for various spiritual ceremonies and rituals, and Mexico has bolstered its psychedelic tourism sector as more tourists arrive to experience psychedelic plants and substances firsthand. Aside from Mexican and South American Indigenous societies, other civilizations worldwide have utilized psychedelics for religious purposes. In ancient Indian religion, psychedelics played a crucial role, as individuals consumed soma, a psychedelic plant, as a ritualistic practice (Williams et al., 2020). In addition, historians believe that various religious figures found in the Old Testament of the Bible used psychedelics as a means of divine communication.


African cultures have also been heavily influenced by psychedelic use: in Western African and Ethiopian societies, plants like Çaate (Catha edulis) were a central fixture in healing practices and were believed to ward off evil. Ubulawu, a healing foam created from grinding several plant roots together, has widespread usage in southern Africa for traditional medicine, ancestral communication, and shamanic practices (Williams et al., 2020). In the United States, the use of psychedelics was woven into a more politically motivated narrative in the late 60s and early 70s. Nixon's presidential term saw the criminalization of Black individuals through the War on Drugs, even though this group used drugs at a lower rate than other racial groups. Groups that opposed the ongoing Vietnam War, like hippies, had their opinions dismissed as the product of psychedelics like LSD. Psychedelics also had a profound influence on the arts, with music, literature, and visual art featuring new genres and styles inspired by certain drugs. Since the 1960s, the stigma around psychedelics has significantly decreased, paving the way for their emerging usage in medicine (Williams, 2018).

During the 1950s and 1960s, psychedelics were popular and widely accepted for their promising effects. More than 40,000 patients were administered LSD before it became illegal due to the misuse of psychedelics by high-profile researchers, a lack of knowledge, and a rise of conservatism. However, there has been a resurgence of interest in the field of psychedelics for legally sanctioned research in the twenty-first century. There are several reasons this has happened. Over the past couple of decades, there has been little improvement in treatments for depression and anxiety, and psychedelics offer a potential route to explore (dos Santos et al., 2021). Widespread media coverage on this topic has been generally positive, which has fostered political and public acceptance. This has been coupled with the profits and recent large-scale acceptance of cannabis legalization (Petranker et al., 2020). However, it is important to note that any emergence of clinically adverse effects of psychedelics could quickly turn mainstream media against them again.

There are various other cultural factors to consider. Apart from the social stigma of psychedelics, another large obstacle is their legal status.



DMT, LSD, and MDMA are classified as Schedule I under the United Nations Convention on Psychotropic Substances. This decision was originally made in 1971, when the convention described these psychedelics as having no therapeutic benefit and a high abuse and dependency potential (dos Santos et al., 2021). One notable finding, however, is that psychedelics do not appear to have a high abuse potential, and only small doses are used in current studies, so participants are unlikely to become addicted. In fact, one study involving ritual ayahuasca users reported significant reductions in previous drug use rather than any social or psychological measures associated with drug dependence (dos Santos et al., 2021). While psychedelic drugs can generally be used in trials, such trials are difficult to conduct due to a lack of funding. Since psychedelics are given in small doses, many pharmaceutical companies are wary of investing in and supporting these studies. However, this perception has changed in recent years, and many pharmaceutical companies have emerged to develop psychedelics as prescription medicines (dos Santos et al., 2021). For example, the US FDA has granted breakthrough therapy designation to psilocybin for major depression, with funding from the Usona Institute and Compass Pathways/Atai (dos Santos et al., 2021). One common misconception is that psychedelics would become universally accessible to anyone needing them, giving patients full discretion over when to use them. Instead, the discussion in the medical community is about whether doctors should have the capacity to prescribe psychedelics to patients without those patients first having to fail all previous treatments. More research is needed to examine purified and synthetic compounds to accurately establish dosing (Allen, 2021).

The recent surge of acceptance has increased support for legally sanctioned research. Psychedelics within medicine will likely become more widely accepted with positive results for treating resistant depression, anxiety, and substance use disorders. This is a delicate topic, though, as most psychedelics (e.g., ecstasy and psilocybin) are considered Schedule I, meaning that they are deemed to have a high potential for abuse and a lack of accepted safety for use under medical supervision, and they have developed a negative connotation due to their recreational misuse.

Science of Psychedelics

Psilocybin


Psilocybin, more commonly known as "shrooms," is one of the better-known psychedelic drugs, especially considering its current relevance in the medical field and its potential as a therapeutic agent. Mushrooms containing psilocybin fall under a group of psychoactive fungi (Dasgupta & Wahed, 2014). Chemically, psilocybin is a tryptamine alkaloid that possesses an extra phosphoryloxy group located at the fourth carbon (ChEBI, 2013). This chemical composition is central to its functionality as a psychedelic: once ingested, psilocybin is rapidly dephosphorylated in the body to psilocin, which serves as an agonist for several serotonin receptors.

Image 2: Molecular structure of psilocybin, a psychoactive compound found in certain fungi and one of the more well-known psychedelic drugs. This drug has been implicated in potential therapeutic effects by the medical community. Image Source: Wikimedia Commons
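The activation step described above can be sketched as a simple reaction scheme. This is a minimal illustration based only on the conversion the text describes:

\[ \text{psilocybin (4-phosphoryloxy-DMT)} \;\xrightarrow{\text{dephosphorylation}}\; \text{psilocin (4-hydroxy-DMT)} + \text{phosphate} \]

In effect, psilocybin behaves as a prodrug, and psilocin is the species that actually engages the serotonin receptors discussed below.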

Psilocybin alters the functioning of the human brain by activating serotonin 5-HT2A receptors. This then triggers increased striatal dopamine concentrations, which could explain the correlated increases in euphoria and depersonalization (Vollenweider & Kometer, 2010). The heightened attention toward psilocybin can be attributed to its potential as a therapeutic agent for conditions such as anxiety and depression. Through activation of the serotonin 5-HT2A receptors, there is subsequent modulation of multiple brain regions, including the amygdala and the prefrontal and limbic regions (Lowe et al., 2021).

MDMA

MDMA is a psychoactive drug whose properties do not fall neatly into a predefined category, as it has mixed effects of both stimulants and hallucinogens. MDMA's formal name is 3,4-methylenedioxymethamphetamine, but its common street names are ecstasy and molly. Ecstasy is often taken for the sudden and intense high that it yields: a seemingly perfect "euphoria." But many criticisms concern the recreational use of ecstasy, such as its serotonergic neurotoxicity, a phenomenon that leads to a reduction in markers for serotonin across higher brain regions. It has been shown to damage the prefrontal cortex and hippocampus, impairing higher-level functioning, explicit memory, and implicit memory (Blagrove et al., 2011). Animals undergoing MDMA withdrawal show lower serotonin transporter (SERT) binding, meaning that serotonin activity decreases in the hippocampus (Parrott, 2013a). In humans, MDMA also leads to a cascade of physiological and psychological problems, including tremors, abnormal hormone activity, altered pain perception, and fluctuating mood (Parrott, 2013a). In addition, prenatal exposure to MDMA causes developmental problems which may lead to early death of the fetus. However, MDMA also has a history of medical uses, beginning with its use in psychotherapy in the twentieth century. In addition, it has been investigated as a potential aid in cancer therapy, as long-term exposure to MDMA causes apoptosis in cultured human cells (Parrott, 2013b). Specifically, it causes lymphoma cells in vitro to lyse, but only at a concentration so large that it induces even more harmful side effects; thus, further research is needed to separate its potential uses in cancer treatment from its negative effects (Wasik et al., 2012).

DMT

DMT (N,N-dimethyltryptamine) is a hallucinogen that is naturally found in plants and animals. One of the most distinctive characteristics of DMT is that its effects can be felt within minutes and can dissipate in less than an hour (Strassman & Qualls, 1994). Biologically, DMT is formed from tryptophan, an amino acid found in food. First, the enzyme AADC catalyzes the decarboxylation of tryptophan; INMT-facilitated methylation of the resulting product (tryptamine) then produces NMT and finally DMT (Barker, 2018). DMT can also be synthesized chemically (Cameron & Olson, 2018). A small molecule, DMT can affect the brain directly by crossing the blood-brain barrier that protects the brain from pathogens and toxic chemicals in the blood (Cameron & Olson, 2018; Pajouhesh & Lenz, 2005). However, the effects of this psychedelic, when administered into the veins or muscles, are diminished because monoamine oxidase A (MAO-A) readily metabolizes
DMT into indoleacetic acid, dropping it to undetectable concentrations within one hour of administration (Barker, 2018). Therefore, DMT is often administered with a monoamine oxidase inhibitor or in a way that avoids metabolism and allows for penetration of the blood-brain barrier (Riba et al., 2015). Once in the brain, DMT acts as an agonist at 5-HT2A receptors, which play a role in causing hallucinogenic effects (Barker, 2018). 5-HT2A receptors are the main excitatory receptor subtype for serotonin, though they may also have an inhibitory effect on certain areas of the visual and orbitofrontal cortices. DMT also binds to Sigma-1 receptors, which is speculated to increase production of antistress and antioxidant proteins by aiding in protein folding and defending against hypoxia (a state in which oxygen is not available at the tissue level to maintain homeostasis) and oxidative stress (a state caused by an imbalance between the production and accumulation of reactive oxygen species), both of which can damage protein structure (Carbonaro & Gatch, 2016; Szabo et al., 2016). In addition, animal studies suggest that DMT may protect astrocytes, the glial cells that hold neurons together and facilitate nutrient exchange in the brain (Szabó et al., 2021).

DMT can give rise to a range of sensations in humans, including out-of-body experiences, perception of visual or auditory changes in the environment, thoughts about death and the afterlife, and communication with otherworldly beings. These symptoms may be signs that DMT induces near-death experiences, which are associated with an increased valuing of self and others and reduced anxiety about death (Groth-Marnat & Summers, 1998; Timmermann et al., 2018). DMT may also have adverse effects, including nausea, increased heart rate, and increased blood pressure (Cameron & Olson, 2018; Strassman & Qualls, 1994). It is difficult to isolate the effects of DMT because recreational DMT users often use other substances such as narcotics, depressants, and alcohol (Cakic et al., 2010; Cameron & Olson, 2018). However, studies on religious users of this psychedelic through the consumption of ayahuasca show that controlled use of DMT is non-addictive, relatively safe, and beneficial to mental and physical health, improving mood and cognition (Cameron & Olson, 2018).

Due to its positive effects on mental wellbeing, DMT is widely researched for its therapeutic potential in depression, though such studies often focus on DMT analogs or ayahuasca (Barker, 2018). For example, clinical testing of psilocybin, a compound with a similar structure
to DMT, showed potential for the treatment of depression, anxiety, and addiction (Cameron & Olson, 2018). In addition, compared to current antidepressants, which have slower onset and are often less effective, one study showed that even a single dose of ayahuasca produced long-term improvements in depressive symptoms (Osório et al., 2015). Despite these promising results, clinical testing of DMT alone was not approved until 2020. Current clinical trials are focusing on DMT as an intervention for depression and major depressive disorder by changing neuroplasticity (D'Souza and Flynn, 2021). Researchers envision the applications of DMT extending beyond treatment for mood disorders. As a Sigma-1 agonist, DMT may have the potential to treat neurodegenerative disorders such as Alzheimer's disease by reducing neuroinflammation and suppressing ER stress-related apoptosis (Ruscher & Wieloch, 2015; Szabó et al., 2021). The effectiveness of triptan drugs, which have a structure similar to that of DMT, in treating migraines also presents the possibility of using non-psychedelic modifications of DMT for therapeutic purposes (Cameron & Olson, 2018).

Ayahuasca

Ayahuasca is a psychoactive substance commonly used by Indigenous populations throughout Amazonian regions. Although Brazil signed the Convention on Psychotropic Substances of 1971 in Vienna, Austria, which restricts the usage of DMT and puts the substance under federal control, the Brazilian government has sanctioned the use of ayahuasca for religious purposes. Its application in shamanic ceremonies and rituals is integral to the experience of the ceremonies themselves; considered a sacrament, ayahuasca is widely used as both a mestizo folk curative and a spiritual tool. Ayahuasca is commonly ingested as a tea made by boiling the leaves of the Psychotria viridis shrub along with the stalks of the Banisteriopsis caapi vine in water (Labate & Goldstein, 2009). Ayahuasca is extremely complex pharmacologically compared to most psychedelics. Not only does it contain the psychoactive compound DMT, but it also contains other chemicals known as monoamine oxidase inhibitors, or MAOIs, which block the enzymes in the human body that are responsible for breaking down DMT. In other words, the brew's β-carbolines act as MAOIs, interrupting the deamination of DMT by inhibiting gut monoamine oxidase enzymes, thus making the substance orally active and allowing it to
reach the brain (Aronson, 2014). After oral administration, hallucinogenic effects first occur 30 to 60 minutes after intake; they peak around 60 to 120 minutes after ingestion and resolve after around 250 minutes (Aronson, 2014). While there are occasional dysphoric reactions, such as nausea, anxiety, disorientation, diarrhea, and vomiting, these are largely uncommon, especially when the substance is administered by practiced shamans. The effects of ayahuasca have been described as a transcendental circle, a cycle of experiences consistent across instances of use. Subjects first feel extremely vulnerable, with feelings of confusion, paranoia, and possibly fear. While this can be overwhelming, this agitating and fear-inducing state is interrupted by a shift of mood and feeling akin to a spiritual experience, as participants feel as though they have connected with the universe and a higher power (Kjellgren et al., 2009). Throughout the entire experience, time feels altered, although subjects are aware of their surroundings and can speak.

As aforementioned, DMT was classified as a Schedule I substance in the United States following the Convention on Psychotropic Substances of 1971. This classification arguably fits poorly with the substance's contemporary medicinal use and low potential for abuse. For instance, even though ayahuasca has intense effects on thought processes, emotion, and perception, the drug does not cause addiction or dependence in the manner of commonly abused drugs. As a result, it has been described in a court setting as having the properties and uses of a "nondrug." For example, when the DEA seized ayahuasca from the União Do Vegetal (UDV), a church from Brazil with about 130 members in the United States, the UDV petitioned the federal government for religious use of the substance, citing the Religious Freedom Restoration Act of 1993. When the Supreme Court asked for a compelling interest demonstrating dangers and negative effects of ayahuasca that would justify the obstruction of the free exercise of religion, the government was unable to produce sufficient evidence. As a result, the Court upheld the sacramental use of ayahuasca under the First Amendment's free exercise clause (Rainey, 2009).

"Due to its positive effects on mental wellbeing, DMT is widely researched for its potential as therapeutics in depression..."

In South America, ayahuasca has been used in rituals and has served as a form of traditional medicine and psychiatry. It has been tested clinically as a form of therapy due to its strong serotonergic effects (Frecska et al., 2016). Other
progressive efforts in the use of ayahuasca include its application to the treatment of addictions, for which some researchers suggest breaking down taboos against the drug and mimicking the practices of Indigenous peoples (Mabit, 2007). Ayahuasca has strong biological effects on the user, but with the right supervision and state of mind, a substance addict may see meaningful progress in becoming less dependent on other addictive drugs, owing to the drug's ability to calm the nervous system by reducing excitotoxicity, inflammation, and oxidative stress, factors that are correlated with neurodegeneration (Dos Santos & Hallak, 2017). Ayahuasca has also been shown to have unique neurological benefits, including supporting the generation of neurons by reducing oxidative stress and inflammation (Dos Santos & Hallak, 2017). Further, some people struggling with depression who were resistant to customary forms of treatment made remarkable strides in their recovery after taking just a single dose of ayahuasca (Soler et al., 2016). It has produced similarly striking results in the recovery of people struggling with anxiety, mood disorders, and PTSD (Dos Santos et al., 2018; Inserra, 2018). Ayahuasca's applications seem remarkably broad. One study found that sustained use of the drug improved mindfulness substantially: a weekly dose for four weeks proved as effective as an eight-week mindfulness-based stress reduction course (Soler et al., 2018). Other research on ayahuasca's effects on personality has found advantageous effects on the user's mental health, confidence, and optimism (Bouso, 2012). Finally, a remarkable study displayed the strong benefits ayahuasca can have on self-identity. In interviews with gay and lesbian people who had been socialized in their communities to view their sexual orientation as unacceptable, researchers found that ayahuasca had markedly positive effects on their perceptions of themselves as well as their affirmations of their sexuality (Cavnar, 2014). While these findings appear to portray the drug in a strictly advantageous light, it is important to note that methodological bias may contribute to scientific studies reporting the benefits of ayahuasca far more often than its adverse effects.

Image 3: Ayahuasca, one of the oldest psychedelics still in use and part of the culture of many Indigenous populations throughout South America, is commonly ingested as a tea made by boiling the plant's leaves. Image Source: Wikimedia Commons

Ketamine

Ketamine is a chemically synthesized anesthetic drug with mild hallucinogenic effects when taken in sub-sedative doses. The drug was originally discovered in the 1960s and FDA-approved for medical use in 1970 (Pribish et al., 2020). The synthesis process begins with cyclohexanone treated with a 2-chlorophenylmagnesium bromide reagent at high temperatures in a hydrocarbon solvent, ultimately producing ketamine (Pribish et al., 2020). Ketamine functions as a non-competitive antagonist of the NMDA glutamate receptor in the brain, which plays a key role in central sensitization and the transmission of pain signals; by blocking the receptor's calcium channel, ketamine dampens glutamatergic signaling in postsynaptic neurons, producing the drug's dissociative and hallucinogenic effects (Pribish et al., 2020). Firms like Algernon Pharmaceuticals are investigating drugs targeting this receptor for disease treatment, suggesting that ketamine might have clinical benefits that are currently underutilized due to its Schedule III status in the United States, which casts the drug in a negative public light (Pribish et al., 2020).

Despite ketamine being a Schedule III drug, recent research has discovered that it has numerous medical benefits. For the past 70 years, ketamine
has been used primarily as an anesthetic drug. In particular, it can induce dissociation, analgesia (the inability to feel pain), sedation, catalepsy (a trance-like state in which consciousness and feeling are lost), and bronchodilation (the dilation of airways). More recently, though, it has been found that ketamine can be used to treat depression, acute and chronic pain, seizures, headaches, and substance use disorders. Furthermore, ketamine has been shown to act as a neuroprotector, which can have a wide variety of positive effects: neuroprotectors help preserve neuronal structure and/or function against neuronal injury.

Numerous studies have found that ketamine is a promising alternative to traditional antidepressants due to its rapid onset, efficacy, and lasting impact. A single dose of ketamine has been shown to have an antidepressant effect that begins as early as 2 hours after administration, peaks at 24 hours, and lasts for up to 7–14 days (Shiroma et al., 2020). This has been found to be especially useful for specific groups, such as soldiers, who are especially subject to increased mental and physical trauma and feelings of hopelessness; these conditions significantly decreased after a single dose, most likely due to ketamine's ability to enhance the brain's neuroplasticity. Scientists have also found ketamine promising for the treatment of other psychological disorders, including anxiety and obsessive-compulsive disorder (Kryst et al., 2020). Ketamine has also been shown to have analgesic effects similar to opiates but without the serious risk of addiction that plagues traditional opioids and that has led to the opioid problem in the United States (Abdollahpour et al., 2020). Ketamine also has well-documented anti-inflammatory properties, interacting with inflammatory cell recruitment and cytokine production, which further contributes to its efficacy. This opens a significant opportunity to pivot much opiate use toward drugs like ketamine that are less likely to be abused (Abdollahpour et al., 2020).


However, treatment applications are not the only area where ketamine has been observed to be useful. Research has shown that the psychedelic has utility in the surgical setting; unlike most anesthetics typically used in surgery, ketamine does not shut down breathing reflexes, allowing patients to maintain normal breathing patterns. This lets patients anesthetized with ketamine receive operations without the need for intubation, which often causes throat problems and pain post-op (Sassano-Higgins et al., 2016). However, ketamine does not produce as uniform an anesthetic response as other, more common anesthetics. This means patients anesthetized with ketamine may stay under for longer or shorter than expected, especially at larger doses, making it difficult to use ketamine for long, intense operations (Sassano-Higgins et al., 2016).

Image 4: Ketamine, as opposed to psychedelics like ayahuasca and psilocybin, is not derived from a hallucinogenic plant but is a synthesized chemical with hallucinogenic effects. Despite its synthetic origin, it has important clinical benefits, though that same origin has led many to treat the drug like other synthetic street drugs, contributing to ketamine's Schedule III designation. Image Source: Wikimedia Commons

While ketamine has been shown to have a variety of benefits, its drawbacks must be considered. Side effects include cognitive impairments, abdominal pain, liver injury, and dose-dependent urogenital pathology. After repeated doses, risks include neurotoxicity and long-term episodic and semantic memory impairment. More immediate risks include tachyarrhythmia (a rapid, abnormal heart rate), hallucinations, and flashbacks. Finally, ketamine has been shown to cause apoptotic cell death in neurons of the cerebral cortex and hippocampal region, leading to long-term deficits in cognitive processing. Future research therefore aims to catalogue more accurately both the benefits and the drawbacks of the drug to find areas where it can be efficacious and safe; in time, the drug may be delisted from Schedule III and gain more widespread acceptance in the medical community (Sassano-Higgins et al., 2016).

Ethics of Psychedelics

Through the years, society has held an ever-evolving view of controlled substances. As the conversation has shifted to allow for greater use of some substances, as with the grassroots movement for the legalization of marijuana, psychedelics have become the next topic of debate. The premise of this debate is centered on
healthcare applications. Some recent examples of legislation legalizing psychedelics come from the west coast of the United States. Oregon was the first state to take this step: on November 4th, 2020, voters approved Measure 109, permitting the medically supervised use of psilocybin. The measure was supported by a clear majority of citizens (nearly 60%) and began what has been termed a psychedelic renaissance (Romero, 2021). A very recent example of the shifting views on these chemicals is California's advancement toward decriminalizing psychedelics for those 21 and older. As a forerunner in marijuana legalization, California may be attempting to take the next steps in the debate over drug criminality. While the United States is beginning to discuss the merits of decriminalization, many other countries lack laws that prohibit such activities. Brazil, for example, has no laws or restrictions on psilocybin, and the substance does not carry the same taboo there that it does in the States. Jamaica, the Netherlands, and Portugal are just a few of the many countries without strict laws against the use of psychedelics, providing a variety of templates and examples for future lawmaking (Feuer, 2020).

Conclusion

Admittedly, psychedelics are a complicated topic both socially and medically. However, there is a rich history surrounding their advent and usage in many cultures throughout the world. While the status of psychedelics for medicinal and therapeutic uses remains hazy given their classification as Schedule I substances under the Controlled Substances Act, clear mental and physiological benefits have been documented in studies that use psychedelics as a treatment option. With the current paradigm shift occurring around drugs such as marijuana, and with efforts from citizens and lawmakers to reschedule such drugs, a clear path is being paved toward a similar rescheduling of psychedelics. While psychedelics remain highly stigmatized, it will be interesting to watch the research that emerges and to see their status in society in the coming decades.


References

Abdollahpour, A., Saffarieh, E., & Zoroufchi, B. H. (2020). A review on the recent application of ketamine in management of anesthesia, pain, and health care. Journal of Family Medicine and Primary Care, 9(3), 1317–1324. https://doi.org/10.4103/jfmpc.jfmpc_875_19

Allen, T. (2021, October 8). Psychedelic Science Holds Promise for Mainstream Medicine. University of Nevada, Las Vegas. http://www.unlv.edu/news/release/psychedelic-science-holds-promise-mainstream-medicine

Aronson, J. K. (2014). Plant poisons and traditional medicines. In J. Farrar, P. J. Hotez, T. Junghanss, G. Kang, D. Lalloo, & N. J. White (Eds.), Manson's Tropical Infectious Diseases (Twenty-third Edition) (pp. 1128–1150.e6). W.B. Saunders. https://doi.org/10.1016/B978-0-7020-5101-2.00077-7

Barbosa, P. C. R., Mizumoto, S., Bogenschutz, M. P., & Strassman, R. J. (2012). Health status of ayahuasca users. Drug Testing and Analysis, 4(7–8), 601–609. https://doi.org/10.1002/dta.1383

Barker, S. A. (2018). N,N-Dimethyltryptamine (DMT), an endogenous hallucinogen: Past, present, and future research to determine its role and function. Frontiers in Neuroscience, 12, 536. https://doi.org/10.3389/fnins.2018.00536

Blagrove, M., Seddon, J., George, S., Parrott, A. C., Stickgold, R., Walker, M. P., Jones, K. A., & Morgan, M. J. (2011). Procedural and declarative memory task performance, and the memory consolidation function of sleep, in recent and abstinent ecstasy/MDMA users. Journal of Psychopharmacology, 25(4), 465–477. https://doi.org/10.1177/0269881110372545

Bouso, J. C., González, D., Fondevila, S., Cutchet, M., Fernández, X., Barbosa, P. C. R., Alcázar-Córcoles, M. Á., Araújo, W. S., Barbanoj, M. J., Fábregas, J. M., & Riba, J. (2012). Personality, psychopathology, life attitudes and neuropsychological performance among ritual users of ayahuasca: A longitudinal study. PLOS ONE, 7(8), e42421. https://doi.org/10.1371/journal.pone.0042421

Cakic, V., Potkonyak, J., & Marshall, A. (2010). Dimethyltryptamine (DMT): Subjective effects and patterns of use among Australian recreational users. Drug and Alcohol Dependence, 111(1), 30–37. https://doi.org/10.1016/j.drugalcdep.2010.03.015

Cameron, L. P., & Olson, D. E. (2018). Dark classics in chemical neuroscience: N,N-Dimethyltryptamine (DMT). ACS Chemical Neuroscience, 9(10), 2344–2357. https://doi.org/10.1021/acschemneuro.8b00101

Carbonaro, T. M., & Gatch, M. B. (2016). Neuropharmacology of N,N-dimethyltryptamine. Brain Research Bulletin, 126, 74–88. https://doi.org/10.1016/j.brainresbull.2016.04.016

Carhart-Harris, R. L., & Goodwin, G. M. (2017). The therapeutic potential of psychedelic drugs: Past, present, and future. Neuropsychopharmacology, 42(11), 2105–2113. https://doi.org/10.1038/npp.2017.84

Cavnar, C. (2014). The effects of ayahuasca ritual participation on gay and lesbian identity. Journal of Psychoactive Drugs, 46(3), 252–260. https://doi.org/10.1080/02791072.2014.920117

Cohen, J. (2021). Psychedelic drugs are moving from the fringes of medicine to the mainstream. Forbes. https://www.forbes.com/sites/joshuacohen/2021/07/05/psychedelic-drugs-are-moving-from-the-fringes-of-medicine-to-the-mainstream/

Dasgupta, A., & Wahed, A. (2014). Challenges in drugs of abuse testing: Magic mushrooms, peyote cactus, and designer drugs. In Clinical Chemistry, Immunology and Laboratory Quality Control (pp. 307–316). Elsevier. https://doi.org/10.1016/B978-0-12-407821-5.00017-6

Davis, A. K., Barrett, F. S., May, D. G., Cosimano, M. P., Sepeda, N. D., Johnson, M. W., Finan, P. H., & Griffiths, R. R. (2021). Effects of psilocybin-assisted therapy on major depressive disorder: A randomized clinical trial. JAMA Psychiatry, 78(5), 481–489. https://doi.org/10.1001/jamapsychiatry.2020.3285

dos Santos, R. G., Bouso, J. C., Rocha, J. M., Rossi, G. N., & Hallak, J. E. (2021). The use of classic hallucinogens/psychedelics in a therapeutic context: Healthcare policy opportunities and challenges. Risk Management and Healthcare Policy, 14, 901–910. https://doi.org/10.2147/RMHP.S300656

Dos Santos, R. G., & Hallak, J. E. C. (2017). Effects of the natural β-carboline alkaloid harmine, a main constituent of ayahuasca, in memory and in the hippocampus: A systematic literature review of preclinical studies. Journal of Psychoactive Drugs, 49(1), 1–10. https://doi.org/10.1080/02791072.2016.1260189

Dos Santos, R. G., Osório, F. L., Crippa, J. A. S., Riba, J., Zuardi, A. W., & Hallak, J. E. C. (2016). Antidepressive, anxiolytic, and antiaddictive effects of ayahuasca, psilocybin and lysergic acid diethylamide (LSD): A systematic review of clinical trials published in the last 25 years. Therapeutic Advances in Psychopharmacology, 6(3), 193–213. https://doi.org/10.1177/2045125316638008

D'Souza, D. C., & Flynn, L. T. (2021). Fixed order, open-label, dose-escalation study of DMT in humans. ClinicalTrials.gov. https://clinicaltrials.gov/ct2/show/record/NCT04711915

Dyck, E. (2006). 'Hitting highs at rock bottom': LSD treatment for alcoholism, 1950–1970. Social History of Medicine, 19(2), 313–329. https://doi.org/10.1093/shm/hkl039

Feuer, W. (2020, November 4). Oregon becomes first state to legalize magic mushrooms as more states ease drug laws in "psychedelic renaissance." CNBC. https://www.cnbc.com/2020/11/04/oregon-becomes-first-state-to-legalize-magic-mushrooms-as-more-states-ease-drug-laws.html

Frecska, E., Bokor, P., & Winkelman, M. (2016). The therapeutic potentials of ayahuasca: Possible effects against various diseases of civilization. Frontiers in Pharmacology, 7, 35. https://doi.org/10.3389/fphar.2016.00035

Gardner, J., Carter, A., O'Brien, K., & Seear, K. (2019). Psychedelic-assisted therapies: The past, and the need to move forward responsibly. International Journal of Drug Policy, 70, 94–98. https://doi.org/10.1016/j.drugpo.2019.05.019

Groth-Marnat, G., & Summers, R. (1998). Altered beliefs, attitudes, and behaviors following near-death experiences. Journal of Humanistic Psychology, 38(3), 110–125. https://doi.org/10.1177/00221678980383005

Inserra, A. (2018). Hypothesis: The psychedelic ayahuasca heals traumatic memories via a sigma 1 receptor-mediated epigenetic-mnemonic process. Frontiers in Pharmacology, 9, 330. https://doi.org/10.3389/fphar.2018.00330

Kjellgren, A., Eriksson, A., & Norlander, T. (2009). Experiences of encounters with ayahuasca: "the vine of the soul". Journal of Psychoactive Drugs, 41(4), 309–315. https://doi.org/10.1080/02791072.2009.10399767

Kryst, J., Kawalec, P., Mitoraj, A. M., Pilc, A., Lasoń, W., & Brzostek, T. (2020). Efficacy of single and repeated administration of ketamine in unipolar and bipolar depression: A meta-analysis of randomized clinical trials. Pharmacological Reports, 72(3), 543–562. https://doi.org/10.1007/s43440-020-00097-z

Labate, B. C., & Goldstein, I. (2009). Ayahuasca: From dangerous drug to national heritage: An interview with Antonio A. Arantes. International Journal of Transpersonal Studies, 28(1), 53–64. https://doi.org/10.24972/ijts.2009.28.1.53

Leonard, J. (2020, January 31). Ayahuasca: What it is, effects, and usage. Medical News Today. https://www.medicalnewstoday.com/articles/ayahuasca

Lowe, H., Toyang, N., Steele, B., Valentine, H., Grant, J., Ali, A., Ngwa, W., & Gordon, L. (2021). The therapeutic potential of psilocybin. Molecules, 26(10), 2948. https://doi.org/10.3390/molecules26102948

Mabit, J. (2007). In Psychedelic Medicine (Vol. 2): New Evidence for Hallucinogenic Substances as Treatments.

Malcom, K. (2019). 'Mystical' psychedelic compound found in normal brains. University of Michigan. https://labblog.uofmhealth.org/lab-report/mystical-psychedelic-compound-found-normal-brains

Marks, M., & Cohen, I. G. (2021). Psychedelic therapy: A roadmap for wider acceptance and utilization. Nature Medicine, 27(10), 1669–1671. https://doi.org/10.1038/s41591-021-01530-3

Nochaiwong, S., Ruengorn, C., Thavorn, K., et al. (2021). Global prevalence of mental health issues among the general population during the coronavirus disease-2019 pandemic: A systematic review and meta-analysis. Scientific Reports, 11, 10173. https://doi.org/10.1038/s41598-021-89700-8

Osório, F. de L., Sanches, R. F., Macedo, L. R., dos Santos, R. G., Maia-de-Oliveira, J. P., Wichert-Ana, L., de Araujo, D. B., Riba, J., Crippa, J. A., & Hallak, J. E. (2015). Antidepressant effects of a single dose of ayahuasca in patients with recurrent depression: A preliminary report. Brazilian Journal of Psychiatry, 37, 13–20. https://doi.org/10.1590/1516-4446-2014-1496

Pajouhesh, H., & Lenz, G. R. (2005). Medicinal chemical properties of successful central nervous system drugs. NeuroRx, 2(4), 541–553. https://doi.org/10.1602/neurorx.2.4.541

Parrott, A. C. (2013a). Human psychobiology of MDMA or 'Ecstasy': An overview of 25 years of empirical research. Human Psychopharmacology, 28(4), 289–307. https://doi.org/10.1002/hup.2318

Parrott, A. C. (2013b). MDMA, serotonergic neurotoxicity, and the diverse functional deficits of recreational 'Ecstasy' users. Neuroscience & Biobehavioral Reviews, 37(8), 1466–1484. https://doi.org/10.1016/j.neubiorev.2013.04.016

Passie, T., Seifert, J., Schneider, U., & Emrich, H. M. (2002). The pharmacology of psilocybin. Addiction Biology, 7(4), 357–364. https://doi.org/10.1080/1355621021000005937

Petranker, R., Anderson, T., & Farb, N. (2020). Psychedelic research and the need for transparency: Polishing Alice's looking glass. Frontiers in Psychology, 11, 1681. https://doi.org/10.3389/fpsyg.2020.01681

Pribish, A., Wood, N., & Kalava, A. (2020). A review of nonanesthetic uses of ketamine. Anesthesiology Research and Practice, 2020, 5798285. https://doi.org/10.1155/2020/5798285

Rainey, J. G. (2009). Gonzales v. O Centro Espirita Beneficente Uniao Do Vegetal. https://www.mtsu.edu/first-amendment/article/745/gonzales-v-o-centro-espirita-beneficente-uniao-do-vegetal

Riba, J., McIlhenny, E. H., Bouso, J. C., & Barker, S. A. (2015). Metabolism and urinary disposition of N,N-dimethyltryptamine after oral and smoked administration: A comparative study. Drug Testing and Analysis, 7(5), 401–406. https://doi.org/10.1002/dta.1685

Romero, D. (2021, September 18). California moves closer to decriminalizing psychedelic drugs. NBC News. https://www.nbcnews.com/news/us-news/california-moves-closer-decriminalizing-psychedelic-drugs-n1279509

Ruscher, K., & Wieloch, T. (2015). The involvement of the sigma-1 receptor in neurodegeneration and neurorestoration. Journal of Pharmacological Sciences, 127(1), 30–35. https://doi.org/10.1016/j.jphs.2014.11.011

Sassano-Higgins, S., Baron, D., Juarez, G., Esmaili, N., & Gold, M. (2016). A review of ketamine abuse and diversion. Depression and Anxiety, 33(8), 718–727. https://doi.org/10.1002/da.22536

Shiroma, P. R., Thuras, P., Wels, J., Albott, C. S., Erbes, C., Tye, S., & Lim, K. O. (2020). A randomized, double-blind, active placebo-controlled study of efficacy, safety, and durability of repeated vs single subanesthetic ketamine for treatment-resistant depression. Translational Psychiatry, 10(1), 1–9. https://doi.org/10.1038/s41398-020-00897-0

Simonsson, O., Osika, W., Carhart-Harris, R., & Hendricks, P. S. (2021). Associations between lifetime classic psychedelic use and cardiometabolic diseases. Scientific Reports, 11(1), 14427. https://doi.org/10.1038/s41598-021-93787-4

Soler, J., Elices, M., Dominguez-Clavé, E., Pascual, J. C., Feilding, A., Navarro-Gil, M., García-Campayo, J., & Riba, J. (2018). Four weekly ayahuasca sessions lead to increases in "acceptance" capacities: A comparison study with a standard 8-week mindfulness training program. Frontiers in Pharmacology, 9, 224. https://doi.org/10.3389/fphar.2018.00224

Soler, J., Elices, M., Franquesa, A., Barker, S., Friedlander, P., Feilding, A., Pascual, J. C., & Riba, J. (2016). Exploring the therapeutic potential of ayahuasca: Acute intake increases mindfulness-related capacities. Psychopharmacology, 233(5), 823–829. https://doi.org/10.1007/s00213-015-4162-0

Strassman, R. J., & Qualls, C. R. (1994). Dose-response study of N,N-dimethyltryptamine in humans: I. Neuroendocrine, autonomic, and cardiovascular effects. Archives of General Psychiatry, 51(2), 85–97. https://doi.org/10.1001/archpsyc.1994.03950020009001

Szabo, A., Kovacs, A., Riba, J., Djurovic, S., Rajnavolgyi, E., & Frecska, E. (2016). The endogenous hallucinogen and trace amine N,N-dimethyltryptamine (DMT) displays potent protective effects against hypoxia via sigma-1 receptor activation in human primary iPSC-derived cortical neurons and microglia-like immune cells. Frontiers in Neuroscience, 10, 423. https://doi.org/10.3389/fnins.2016.00423

Szabó, Í., Varga, V. É., Dvorácskó, S., Farkas, A. E., Körmöczi, T., Berkecz, R., Kecskés, S., Menyhárt, Á., Frank, R., Hantosi, D., Cozzi, N. V., Frecska, E., Tömböly, C., Krizbai, I. A., Bari, F., & Farkas, E. (2021). N,N-Dimethyltryptamine attenuates spreading depolarization and restrains neurodegeneration by sigma-1 receptor activation in the ischemic rat brain. Neuropharmacology, 192, 108612. https://doi.org/10.1016/j.neuropharm.2021.108612

Timmermann, C., Roseman, L., Williams, L., Erritzoe, D., Martial, C., Cassol, H., Laureys, S., Nutt, D., & Carhart-Harris, R. (2018). DMT models the near-death experience. Frontiers in Psychology, 9, 1424. https://doi.org/10.3389/fpsyg.2018.01424

Vollenweider, F. X., & Kometer, M. (2010). The neurobiology of psychedelic drugs: Implications for the treatment of mood disorders. Nature Reviews Neuroscience, 11(9), 642–651. https://doi.org/10.1038/nrn2884

Wasik, A. M., Gandy, M. N., McIldowie, M., Holder, M. J., Chamba, A., Challa, A., Lewis, K. D., Young, S. P., Scheel-Toellner, D., Dyer, M. J., Barnes, N. M., Piggott, M. J., & Gordon, J. (2012). Enhancing the anti-lymphoma potential of 3,4-methylenedioxymethamphetamine ('ecstasy') through iterative chemical redesign: Mechanisms and pathways to cell death. Investigational New Drugs, 30(4), 1471–1483. https://doi.org/10.1007/s10637-011-9730-5

Williams, H. (2018, October 17). How LSD influenced Western culture. BBC Culture. https://www.bbc.com/culture/article/20181016-how-lsd-influenced-western-culture

Williams, M. T., Reed, S., & George, J. (2020). Culture and psychedelic psychotherapy: Ethnic and racial themes from three Black women therapists. Journal of Psychedelic Studies, 4(3), 125–138. https://doi.org/10.1556/2054.2020.00137


Vegetarianism Debate

STAFF WRITERS: OWEN SEINER '24, LILY DING '24, CARSON PECK '22, JULIETTE COURTINE '24, ETHAN WEBER '24, CALLIE MOODY '24
TEAM LEAD: KRISTAL WONG '22

Cover Image: Amongst all the diet trends and fads, one of the most prominent is the vegetarian diet, in which individuals abstain from the consumption of meat. This article will explore possible benefits and consequences of following such a diet. Image Source: Wikimedia Commons

Introduction

Omnivore. Carnivore. Pescatarian. Vegetarian. Modern diet culture is surrounded by various dietary practices that can embrace or restrict the variety of food available. Vegetarianism is a plant-based dietary practice that abstains from meat and meat by-products. Individuals decide to follow vegetarian lifestyles for a multitude of reasons, including health benefits, environmental concerns, moral convictions, and religion.

This article will explore the origins of vegetarianism and analyze its current trends, follow the climate and environmental impacts of vegetarian diets, and examine the health impacts of pursuing vegetarian lifestyles.

In the US, about 4-5% of individuals follow a vegetarian diet; this figure includes vegans, a subset of vegetarians who don't eat any animal byproducts (e.g., eggs and cheese) (Hrynowski, 2021; Staher, 2020). Interestingly, a 2016 survey found that 37% of the US population sometimes or always orders vegetarian options when eating out (Hrynowski, 2021). In comparison, a 2014 study found that 31% of the global population follows a vegetarian diet, with the largest share of the world's vegetarians living in India (Figus, 2014). What does this say about the US? One thing these statistics signify is that Americans eat fewer fruits and vegetables: according to the CDC, only 1 in 10 Americans get enough fruits and vegetables (CDC, 2021).

Background

Anthropological origins of carnivorous consumption begin as early as 2 million years ago. Early hominids practiced scavenging (finding an already killed carcass) and hunting (killing the prey) as fit their ecological environment, but evolutionary advantages caused meat consumption to gain popularity. Carnivorous consumption of meat and bone marrow proved to be an efficient source of caloric resources and nutrients, and these means of consumption allowed hominins to increase body size without sacrificing mobility or dexterity (Pobliner, 2013).

The origin of vegetarianism traces back to as early as 500 BC (Spencer, 1993). The beginning of the movement is evident from the days of the Buddha in ancient India. The vegetarian philosophy was not originally created for the same reasons that inspire contemporary vegetarian ideals, such as personal health concerns or the environmental impact of meat consumption, but rather out of a spiritual need coupled with unique societal foundations. In ancient India, the Hindu belief in reincarnation supported the emergence of the vegetarian philosophy. The Hindus believed that the pain and terror of the animal that was killed lived on in one's reincarnated soul; their belief that consciousness lives on after death therefore discouraged members of the Hindu religion from meat diets (Rosen, 2006). Vegetarianism was not enforced in their scriptures, yet the language was unambiguous in stating that true peace could not be obtained if one killed animals and consumed meat. In addition to spiritual reasons for being vegetarian, India's relative wealth played a unique role in supporting widespread vegetarianism. The concept of a "taboo," especially one concerning food, could only be present in a civilization where food was not scarce; basic needs must have been met for the consideration of limiting one's food source to be entertained. Ancient India relied on cows for resources more valuable than meat, which made them too valuable to eat and enabled a taboo against killing cows. The milk and urine of cows were important as dairy and cleanser, and cow dung in agriculture allowed for vegetation growth even with little rainfall (Rosen, 2006). These societal circumstances and beliefs allowed a philosophy similar to modern vegetarianism to take root.

The term "vegetarianism" has only been in use since the 1840s, but the moral creed not to kill animals for food was formed in this earlier period (Spencer, 1993). Much has changed since then, perhaps most notably the traction and credibility of the vegetarian movement itself. Research on vegetarianism did not begin until the 1800s, and by this point strict spiritual or religious reasoning for the diet had grown uncommon. Morality continued to be an important motivator, but because of the more abstract nature of this incentive, as well as a decrease in the acceptance of the authority of religion, vegetarianism remained a fringe movement. However, the rising influence of science throughout the late 1800s into the 20th century created a new wave of vegetarians with a desire for improved bodily health (Whorton, 1994). Research on nutrition also began to appear at this time, which supported the diet with empirical evidence and allowed it to be recognized as a healthy alternative. Beginning in the early 21st century, the vegetarian diet became a larger minority in western countries (reaching up to 8% in Canada and 3% in the United States), and therefore a need for the formalization of the study of vegetarianism was recognized (Ruby, 2012).

Historically, the vegetarian diet has been more popular in less developed countries, where reductions in meat consumption help alleviate poverty or are more fitting for those in poverty. More affluent countries, such as the US and those in Europe, incorporate more meat into their diets. And although greater consumption of meat and animal products is correlated with greater wealth, it is important to note that these carnivorous dietary practices are not necessarily followed by better health outcomes (Godfray et al., 2010). In fact, more developed countries have higher rates of coronary heart disease, diabetes, and atherosclerosis.

In addition, the lifestyles of vegetarians have been a source of prejudice and stereotyping. The term "vegaphobia" is often associated with discrimination against individuals who adopt plant-based diets. While typically not thought of as a legitimate social issue, social stigma against vegetarians and vegans exists and is often perpetuated by the media. In a 2011 study at the London School of Economics, researchers analyzed mentions of veganism in UK newspapers throughout 2007 and found 397 mentions of veganism or vegetarianism, with 74.3% being negative, 20.2% neutral (mostly in food and travel reviews), and only 5.5% positive (Cole & Morgan, 2011). Of the negative mentions of veganism, 29.2% ridiculed veganism, 28.8% portrayed it as an overly ascetic lifestyle, 27.8% described it as overly difficult or as a fad, and 13.5% represented vegans as overly sensitive or hostile (Cole & Morgan, 2011). Additionally, complex social issues are intertwined with vegetarianism, with another study conducted in the UK finding that those who follow plant-based diets were consistently perceived as less masculine (Thomas, 2016).

"The origin of vegetarianism traces back to as early as 500 BC."

In further touching upon the social issues related to the adoption of vegetarian diets, the topic of accessibility must be addressed, especially given that vegetarian diets are often perceived as being more expensive than standard diets. In a study performed in the United States, women were sorted into two categories: plant-based (n=1109) or control (n=1145). Women in the plant-based group spent only an additional $1.22 per week compared to their counterparts who received no special dietary instructions (Hyder et al., 2009). While statistically significant, an additional $1.22 spent on groceries per week is likely not a large financial barrier for much of the developed world.

Climate/Environmental Effects

"Beyond basic livestock contributions to greenhouse gas emissions, the animal feed industry contributes greatly to climate change through its mass agricultural production of often non-organic crops."

Can vegetarian diets mitigate the ills of climate change? The answer is complex. One of the main causes of global climate change over the last 150 years, and especially over the last six decades, can be traced to agricultural activity, such as deforestation and crop irrigation (Wuebbles et al., 2017; Ruddiman, 2005). Evidence suggests that deforestation and irrigation dating back to the origins of agriculture 11,000 years ago have influenced warming patterns (Ruddiman, 2005). Agriculture not only changes land use and cover but also changes the rate at which greenhouse gases are released into the atmosphere (Smith et al., 2014). Though its emissions have decreased 24% since 2006, the agriculture, forestry, and other land use sector is still responsible for around 25% of anthropogenic greenhouse gas emissions, thus contributing significantly to global warming (Smith et al., 2014).

Animal agriculture contributes significantly to climate change due to high demands for both land and food. In fact, the livestock sector is one of the greatest contributors to global anthropogenic greenhouse gas emissions, composing almost 80% of the agricultural sector's emissions (McMichael et al., 2007). Animal husbandry, the management and production of domestic (farm) animals, has been an integral part of human history since long before the rise of the modern agricultural system; yet increases in global household income, along with unprecedented population growth, have created a commensurate increase in the demand for meat (Rojas-Downing et al., 2017). Farmers now carry larger herds, which require larger amounts of feedstock, grazing pasture, and water. While these effects increase climate impacts, farmers often adopt large herds as a buffer against the consequences of climate change, creating a cycle in which the impacts of climate change necessitate a greater herd size, which in turn engenders more climatic change (Næss and Bårdsen, 2013). Such practices continue to negatively impact the climate, and farmers respond by intensifying their already damaging practices. It is estimated that, collectively, livestock supply chains emitted around 8.1 gigatonnes of carbon dioxide equivalent in 2012 (FAO, 2021).

The two most significant greenhouse gases emitted by animal agriculture are methane and nitrous oxide. Mainly produced by enteric fermentation and manure storage, methane has a 28 times greater effect on global warming than carbon dioxide per unit. Nitrous oxide, which can also arise from manure storage and fertilizer use, has a global warming potential 265 times higher than that of carbon dioxide (Grossi et al., 2019). And although some may say that using manure as fertilizer offsets the greenhouse gas footprint, the offset is not substantial enough (Godfray et al., 2010). Among all domesticated animals, cattle are the greatest contributors to total emissions, with approximately 5.0 gigatonnes of carbon dioxide equivalent, which accounts for 62% of the animal husbandry sector's total emissions (FAO, 2021).
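These global warming potentials are what allow emissions of different gases to be folded into the single carbon dioxide equivalent (CO2e) figures quoted above. A minimal worked example, using only the multipliers from this passage and hypothetical tonnages:

\[ \mathrm{CO_2e} = m_{\mathrm{CO_2}} + 28\,m_{\mathrm{CH_4}} + 265\,m_{\mathrm{N_2O}} \]

so a herd emitting 1 tonne of methane and 0.1 tonne of nitrous oxide in a year would account for roughly \(28 \times 1 + 265 \times 0.1 = 54.5\) tonnes of CO2e before any carbon dioxide is counted.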

Image 1: Cultivated livestock such as the cattle pictured above are the largest producers of greenhouse gas (GHG) emissions amongst the global agricultural sector. Image Source: Wikimedia Commons




Pigs and poultry, the other most popular types of agricultural animals, emit around 7-11% of the sector's total emissions (FAO, 2021).

Another way the livestock sector contributes to greenhouse gas emissions is through feed production and water demands. Feed production includes greenhouse gas emissions not only from feed processing and transport but also from land use changes, manufacturing, use of fertilizers and pesticides, and manure excreted and applied to fields. Feed production and processing contribute around 45% of the sector's greenhouse gas emissions, which is greater than the emissions from enteric fermentation and over four times those of manure storage (Grossi et al., 2019). Beyond basic livestock contributions to greenhouse gas emissions, the animal feed industry contributes greatly to climate change through its mass agricultural production of often non-organic crops. Organic agriculture, far more commonly used when growing produce intended for humans than when growing livestock feed, is far more energy efficient and conservative than non-organic agriculture. Organic agriculture uses no artificial pesticides or fertilizers, two of the biggest agricultural contributors to climate change. For instance, the N2O released by mineral fertilizers alone accounts for almost 20% of global agricultural greenhouse gas emissions, and N2O emissions overall account for 38% of all agricultural greenhouse gas emissions. The necessary reduction of mineral fertilizer, however, requires other practices to optimize nutrients in the soil without artificially adding them using fertilizers. One of the best ways to do this is to diversify crops and employ crop rotation techniques. This is feasible when producing food for human consumption, because there are a variety of common crops produced for that purpose. However, very few crops are produced as animal feed, and thus crop rotation and diversification become far more difficult (Scialabba et al., 2010). Thus, production of organic crops intended for direct human consumption is far more efficient and eco-friendlier than mass-producing crops for animal consumption.

Animal husbandry's contribution to greenhouse gas emissions is further exacerbated when forests are converted into pasture or cropland. Deforestation is one of the main contributors to greenhouse gas emissions in the agricultural sector. Oftentimes, a slash-and-burn clearing technique is used, whereby vegetation is cut (slashed) and burned down to create agricultural lands for uses such as plantations and pastures (Tinker et al., 1996). These changes in land use and cover not only degrade the land but also have important impacts on climate change. Deforestation causes large releases of carbon dioxide from the soil and vegetation, which are carbon sinks that store carbon dioxide (Ekblad and Bastviken, 2019). Through natural growth processes, plants take up carbon dioxide from the atmosphere and nitrogen from the soil; however, the decomposition of dead plant biomass that often results from deforestation releases significant amounts of greenhouse gases (Smith et al., 2014). In addition, overgrazing is an important environmental issue tied to land use from omnivorous diets. Overgrazing results in decreased biodiversity and vegetation cover, leading to soil erosion, which prevents water from permeating the soil (Kosmas et al., 2015). Without moist soil, the land undergoes desertification and loses the ability to remain agriculturally productive.

Water consumption is another issue. Throughout the world, around one third of all water goes directly toward the production of animal products, mostly meat and dairy. While ovo-lacto vegetarians continue to consume dairy products, in choosing to avoid meat, vegetarians conserve large quantities of water. Generally, animal products require more water to produce the same caloric content compared to the crop-based foods that vegetarians consume more frequently. As a result of the excess water required to produce food for livestock, meat makes up 37% of the food-related water footprint of the average American. If all meat were to be replaced with crop food, the average water footprint would decrease by 30% (Mekonnen and Hoekstra, 2012).
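Those two figures imply a rough comparison worth spelling out. Writing the average American's food-related water footprint as W (a back-of-the-envelope sketch, not a calculation from the cited study):

\[ \underbrace{0.37\,W}_{\text{water for meat}} - \underbrace{0.30\,W}_{\text{net savings}} \approx 0.07\,W \]

that is, the crops replacing meat would require only about 0.07W, roughly one-fifth of the water footprint of the meat they replace.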

" When considering the viability of vegetarian or vegan diets, it is important to consider not only the common dietary deficiencies associated with these diets but also the typical nutritional value of plants as well."

Biological Effects

Are vegetarian diets biologically nutritious? Are they sustainable? When considering the viability of vegetarian or vegan diets, it is important to consider not only the common dietary deficiencies associated with these diets but also the typical nutritional value of plants. For digestion, plants are an essential source of fiber. Differences between the digestion of plants and animal products are mainly due to the differing chemical compositions of these two food categories. Most notably, the presence of fiber in plant material is of critical importance for a number of gastrointestinal (GI) conditions. According to the British Journal of Nutrition, dietary fiber is any indigestible carbohydrate-based plant



"... modern plantbased meat alternatives enable vegans and vegetarians to adhere to their diets without risking any nutritional deficiencies."

According to the British Journal of Nutrition, dietary fiber is any indigestible carbohydrate-based plant material and is known to influence nutrient bioaccessibility, microbial fermentation, GI hormone signaling, metabolizable energy, and postprandial metabolism (Grundy et al., 2016). Because fiber is indigestible by endogenous enzymes, humans rely on the diverse microbiota of the GI tract to ferment it into short-chain fatty acids such as acetate, propionate, and butyrate (Tomova et al., 2019). Fiber is also known to accelerate digestion, allowing significantly shorter digestion times for plant material compared to meats. Conversely, animal protein relies extensively on a cohort of digestive enzymes to be broken down into amino acids, a process slowed significantly by high fat content within the meat. A slower metabolism may be unfavorable because the slower expenditure of calories and energy allows excess calories to be converted into body fat (Harvard Health, 2021). Generally, plant-based diets tend to be much higher in vitamin C, magnesium, folate, manganese, thiamin, potassium, and vitamin E (van Vliet et al., 2020). A 2003 study of the diets of over 65,000 UK citizens revealed significantly higher intakes of these vitamins among vegans and vegetarians compared to meat-eaters (Davey et al., 2003). Each of these compounds is vital for human health and can be readily obtained from an exclusively plant-based diet. However, some vital nutrients are either much more readily or exclusively obtained from animal sources, including vitamins A (retinol), B12 (adenosyl- and hydroxocobalamin), D (cholecalciferol), and K2 (menaquinone-4); minerals such as iron and zinc; and long-chain polyunsaturated fatty acids. Precursors to these nutrients may be sourced from plants, but many people lack the metabolic efficiency to achieve sufficient rates of enzymatic conversion of these precursors to satisfy their own nutritional

demands (van Vliet et al., 2020). As such, dietary supplements are often advised as a complement to plant-based diets. A common source of controversy regarding the nutritional discrepancy between plant-based and omnivorous diets is the amount and quality of protein each diet offers. According to the National Academy of Medicine, the current recommended dietary allowance (RDA) of protein for an adult is 0.8 g per kilogram of body mass per day (Institute of Medicine, 2005). Animal proteins are most commonly recommended to meet this daily allowance because of their high quality and modest caloric load (van Vliet et al., 2020). Using only plants to meet the allowance poses two problems for vegetarians and vegans: first, the amount of easily digestible protein in plants is significantly lower than that in animal sources; second, the quality of raw plant protein is often lower due to reduced amounts of essential amino acids such as methionine and lysine (Sarwar et al., 2012; van Vliet et al., 2015). Despite the complications associated with obtaining a sufficiently large and diverse daily intake of protein from plants alone, modern plant-based meat alternatives enable vegans and vegetarians to adhere to their diets without risking nutritional deficiencies. Such alternatives include but are not limited to tofu, beans, nuts, seeds, and quinoa.
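As a quick worked example of that guideline, here is a minimal Python sketch; the 70 kg body mass is an arbitrary illustration, not a figure from the article:

# RDA for protein (Institute of Medicine, 2005): 0.8 g per kg of body mass per day.
def daily_protein_rda_g(body_mass_kg: float, rda_g_per_kg: float = 0.8) -> float:
    """Return the recommended daily protein intake in grams."""
    return body_mass_kg * rda_g_per_kg

# Example: a 70 kg adult (illustrative body mass only)
print(daily_protein_rda_g(70.0))  # -> 56.0 (grams of protein per day)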

Health Effects

Switching to a vegetarian diet is commonly cited as a way to improve one's general health. Some of the strongest evidence in support of this idea relates to reduced rates of diseases like heart disease, diabetes, and cancer. Multiple studies have shown that vegetarians are generally less likely to suffer from these ailments than meat-eaters. For example, one long-term study on heart disease followed 76,000 men and women

Image 2: Legumes, a healthy plant-based food, can be consumed in place of meat protein sources. Image Source: Wikimedia Commons




Image 3: Human-animal contact, which increases with livestock cultivation, raises the likelihood of the emergence of zoonotic diseases such as the SARS-CoV-2 virus. Image Source: Wikimedia Commons

in the UK, USA, and Germany for about 10 years. Adjusting for factors like age, sex, and smoking, researchers found that people who did not consume meat or fish were 24% less likely to die from ischemic heart disease than those who did (Key et al., 1998). The gap in heart-disease death rates between vegetarians and non-vegetarians was larger the younger the subjects were. Furthermore, Qian et al.'s study of over 300,000 people found links between plant-based diets and a lower risk of type 2 diabetes: people who ate a mostly vegetarian diet were 23% less likely to develop the disease (Qian et al., 2019). Finally, Lanou and Svenson suggested that vegetarian diets are modestly preventative of cancer. Although results regarding specific cancers were less clear, their report stated that most large observational studies show vegetarianism correlating with a 10-12% reduction in overall cancer risk (Lanou & Svenson, 2010).
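For readers who prefer risk ratios to percentage reductions, here is a minimal Python sketch converting the figures above; the pairing of numbers to outcomes follows the text, and the cancer entry uses the low end of the quoted 10-12% range:

# Converting quoted relative risk reductions into risk ratios
# (risk ratio = 1 - relative reduction); figures are from the studies cited above.
reductions = {
    "ischemic heart disease mortality (Key et al., 1998)": 0.24,
    "type 2 diabetes incidence (Qian et al., 2019)": 0.23,
    "overall cancer risk, low end (Lanou & Svenson, 2010)": 0.10,
}
for outcome, reduction in reductions.items():
    print(f"{outcome}: risk ratio ~{1.0 - reduction:.2f}")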

Scientific research also lends support to the idea that vegetarianism can aid weight loss. Researchers who analyzed a collection of clinical trials on this subject found that those on vegetarian diets lost an average of 4.4 pounds more than those on diets with meat, controlling for obvious factors like exercise and overall calorie intake (Huang et al., 2016). Other research has shown similar results: dieters who went vegetarian were shown to lose twice as much weight as dieters who continued to eat meat (Kahleova et al., 2017). The vegetarians also improved their metabolism more, by reducing subfascial and intramuscular fat more effectively. Tonstad et al. found that people who do not eat meat tend to have lower BMIs in general, regardless of whether they are trying to lose weight (2009).

Conclusion

"In making decisions on whether to follow such a diet, it is important to consult the science and personal values."

Evidently, there are many reasons both for and against following vegetarian diets. In making decisions on whether to follow such a diet, it is important to consult both the science and one's personal values. This article has briefly outlined arguments pertaining to the environment, biology, and health, but the list is not exhaustive. Other possible areas to consider are the water usage and droughts that come with livestock cultivation and the emergence of zoonotic diseases. It is also important to consider the barriers to following vegetarian diets: diets rich in fruits and vegetables have been shown to be less accessible to lower-income groups (CDC, 2021).

References

Ahammad, H., Clark, H., Dong, H., & Tubiello, F. (2014). Climate Change 2014 Mitigation of Climate Change: Working Group III Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. https://doi.org/10.1017/CBO9781107415416

CDC. (2021, February 16). Only 1 in 10 Adults Get Enough Fruits or Vegetables. Centers for Disease Control and Prevention. https://www.cdc.gov/nccdphp/dnpao/images/news/shopping-for-produce.jpg

Davey, G. K., Spencer, E. A., Appleby, P. N., Allen, N. E., Knox, K. H., & Key, T. J. (2003). EPIC–Oxford: lifestyle characteristics and nutrient intakes in a cohort of 33 883 meat-eaters and 31 546 non meat-eaters in the UK. Public Health Nutrition, 6(3), 259–268. https://doi.org/10.1079/PHN2002430

Diamond, J. (n.d.). The Worst Mistake in the History of the Human Race. Discover Magazine. Retrieved April 11, 2021, from https://www.discovermagazine.com/planet-earth/the-worst-mistake-in-the-history-of-the-human-race

Ekblad, A., & Bastviken, D. (2019). Deforestation releases old carbon. Nature Geoscience, 12(7), 499–500. https://doi.org/10.1038/s41561-019-0394-7

Figus, C. (n.d.). 375 million vegetarians worldwide. All the reasons for a green lifestyle. EXPONet. Retrieved September 13, 2021, from http://www.expo2015.org/magazine/en/lifestyle/375-million-vegetarians-worldwide.html

Food and Agriculture Organization of the United Nations (FAO). (2021). GLEAM 2.0—Assessment of greenhouse gas emissions and mitigation potential. Fao.org. https://www.fao.org/gleam/results/en/

Gardos, G., & Cole, J. O. (1976). Maintenance antipsychotic therapy: Is the cure worse than the disease? The American Journal of Psychiatry, 133(1), 32–36. https://doi.org/10.1176/ajp.133.1.32

Godfray, H. C. J., Beddington, J. R., Crute, I. R., Haddad, L., Lawrence, D., Muir, J. F., Pretty, J., Robinson, S., Thomas, S. M., & Toulmin, C. (2010). Food Security: The Challenge of Feeding 9 Billion People. Science, 327(5967), 812–818. https://doi.org/10.1126/science.1185383

Grossi, G., Goglio, P., Vitali, A., & Williams, A. G. (2019). Livestock and climate change: Impact of livestock on climate and mitigation strategies. Animal Frontiers, 9(1), 69–76. https://doi.org/10.1093/af/vfy034

Grundy, M. M.-L., Edwards, C. H., Mackie, A. R., Gidley, M. J., Butterworth, P. J., & Ellis, P. R. (2016). Re-evaluation of the mechanisms of dietary fibre and implications for macronutrient bioaccessibility, digestion and postprandial metabolism. British Journal of Nutrition, 116(5), 816–833. https://doi.org/10.1017/S0007114516002610

Grunert, K. G. (2006). Future trends and consumer lifestyles with regard to meat consumption. Meat Science, 74(1), 149–160. https://doi.org/10.1016/j.meatsci.2006.04.016

Harvard Health. (2018, April 1). The truth about metabolism. Harvard Health. https://www.health.harvard.edu/staying-healthy/the-truth-about-metabolism

Harvard Health. (2021, October 6). Does Metabolism Matter in Weight Loss? Harvard Health. https://www.health.harvard.edu/diet-and-weight-loss/does-metabolism-matter-in-weight-loss

Hrynowski, G. (2019, September 27). What Percentage of Americans Are Vegetarian? Gallup.com. https://news.gallup.com/poll/267074/percentage-americans-vegetarian.aspx

Huang, R.-Y., Huang, C.-C., Hu, F. B., & Chavarro, J. E. (2016). Vegetarian Diets and Weight Reduction: A Meta-Analysis of Randomized Controlled Trials. Journal of General Internal Medicine, 31(1), 109–116. https://doi.org/10.1007/s11606-015-3390-7

Kahleova, H., Klementova, M., Herynek, V., Skoch, A., Herynek, S., Hill, M., Mari, A., & Pelikanova, T. (2017). The Effect of a Vegetarian vs Conventional Hypocaloric Diabetic Diet on Thigh Adipose Tissue Distribution in Subjects with Type 2 Diabetes: A Randomized Study. Journal of the American College of Nutrition, 36(5), 364–369. https://doi.org/10.1080/07315724.2017.1302367

Key, T. J., Fraser, G. E., Thorogood, M., Appleby, P. N., Beral, V., Reeves, G., Burr, M. L., Chang-Claude, J., Frentzel-Beyme, R., Kuzma, J. W., Mann, J., & McPherson, K. (1998). Mortality in vegetarians and non-vegetarians: A collaborative analysis of 8300 deaths among 76,000 men and women in five prospective studies. Public Health Nutrition, 1(1), 33–41. https://doi.org/10.1079/PHN19980006

Lanou, A. J., & Svenson, B. (2010). Reduced cancer risk in vegetarians: An analysis of recent reports. Cancer Management and Research, 1. https://doi.org/10.2147/CMAR.S6910

McMichael, A. J., Powles, J. W., Butler, C. D., &



Uauy, R. (2007). Food, livestock production, energy, climate change, and health. The Lancet, 370(9594), 1253–1263. https://doi.org/10.1016/S0140-6736(07)61256-2

Næss, M. W., & Bårdsen, B.-J. (2013). Why Herd Size Matters – Mitigating the Effects of Livestock Crashes. PLoS ONE, 8(8), e70161. https://doi.org/10.1371/journal.pone.0070161

Panel on Macronutrients, Panel on the Definition of Dietary Fiber, Subcommittee on Upper Reference Levels of Nutrients, Subcommittee on Interpretation and Uses of Dietary Reference Intakes, Standing Committee on the Scientific Evaluation of Dietary Reference Intakes, Food and Nutrition Board, & Institute of Medicine. (2005). Dietary Reference Intakes for Energy, Carbohydrate, Fiber, Fat, Fatty Acids, Cholesterol, Protein, and Amino Acids (p. 10490). National Academies Press. https://doi.org/10.17226/10490

Pobiner, B. (2013). Evidence for Meat-Eating by Early Humans. Nature Education Knowledge, 4(6). https://www.nature.com/scitable/knowledge/library/evidence-for-meat-eating-by-early-humans-103874273/

Qian, F., Liu, G., Hu, F. B., Bhupathiraju, S. N., & Sun, Q. (2019). Association Between Plant-Based Dietary Patterns and Risk of Type 2 Diabetes: A Systematic Review and Meta-analysis. JAMA Internal Medicine, 179(10), 1335. https://doi.org/10.1001/jamainternmed.2019.2195

Quinton, A. (2019, June 27). Cows and climate change. UC Davis. https://www.ucdavis.edu/food/news/making-cattle-more-sustainable

Rojas-Downing, M. M., Nejadhashemi, A. P., Harrigan, T., & Woznicki, S. A. (2017). Climate change and livestock: Impacts, adaptation, and mitigation. Climate Risk Management, 16, 145–163. https://doi.org/10.1016/j.crm.2017.02.001

Rosen, S. (2006). Essential Hinduism. Greenwood Publishing Group.

Ruby, M. B. (2012). Vegetarianism. A blossoming field of study. Appetite, 58(1), 141–150. https://doi.org/10.1016/j.appet.2011.09.019

Ruddiman, W. F. (2005). How Did Humans First Alter Global Climate? Scientific American, 292(3), 46–53. https://doi.org/10.1038/scientificamerican0305-46

Sarwar Gilani, G., Wu Xiao, C., & Cockell, K.


A. (2012). Impact of Antinutritional Factors in Food Proteins on the Digestibility of Protein and the Bioavailability of Amino Acids and on Protein Quality. British Journal of Nutrition, 108(S2), S315–S332. https://doi.org/10.1017/S0007114512002371

Schahczenski, J., & Hill, H. (2009). Agriculture, Climate Change and Carbon Sequestration (No. 012309; pp. 1–16). National Center for Appropriate Technology (NCAT). https://www.canr.msu.edu/foodsystems/uploads/files/ag-climate-change.pdf

Scialabba, N. E.-H., & Müller-Lindenlauf, M. (2010). Organic agriculture and climate change. Renewable Agriculture and Food Systems, 25(2), 158–169. https://doi.org/10.1017/S1742170510000116

Spencer, C. (1996). The Heretic's Feast: A History of Vegetarianism. UPNE.

Stahler, C. (n.d.). How Many Adults in the U.S. are Vegetarian and Vegan. The Vegetarian Resource Group (VRG). Retrieved September 13, 2021, from https://www.vrg.org/nutshell/Polls/2019_adults_veg.htm

Tinker, P. B., Ingram, J. S. I., & Struwe, S. (1996). Effects of slash-and-burn agriculture and deforestation on climate change. Agriculture, Ecosystems & Environment, 58(1), 13–22. https://doi.org/10.1016/0167-8809(95)00651-6

Tomova, A., Bukovsky, I., Rembert, E., Yonas, W., Alwarith, J., Barnard, N. D., & Kahleova, H. (2019). The Effects of Vegetarian and Vegan Diets on Gut Microbiota. Frontiers in Nutrition, 6, 47. https://doi.org/10.3389/fnut.2019.00047

Tonstad, S., Butler, T., Yan, R., & Fraser, G. E. (2009). Type of Vegetarian Diet, Body Weight, and Prevalence of Type 2 Diabetes. Diabetes Care, 32(5), 791–796. https://doi.org/10.2337/dc08-1886

van Vliet, S., Burd, N. A., & van Loon, L. J. (2015). The Skeletal Muscle Anabolic Response to Plant- versus Animal-Based Protein Consumption. The Journal of Nutrition, 145(9), 1981–1991. https://doi.org/10.3945/jn.114.204305

van Vliet, S., Kronberg, S. L., & Provenza, F. D. (2020). Plant-Based Meats, Human Health, and Climate Change. Frontiers in Sustainable Food Systems, 4, 128. https://doi.org/10.3389/fsufs.2020.00128



Whorton, J. C. (1994). Historical development of vegetarianism. The American Journal of Clinical Nutrition, 59(5), 1103S–1109S. https://doi.org/10.1093/ajcn/59.5.1103S

Wuebbles, D. J., Easterling, D. R., Hayhoe, K., Knutson, T., Kopp, R. E., Kossin, J. P., Kunkel, K. E., LeGrande, A. N., Mears, C., Sweet, W. V., Taylor, P. C., Vose, R. S., Wehner, M. F., Wuebbles, D. J., Fahey, D. W., Hibbard, K. A., Dokken, D. J., Stewart, B. C., & Maycock, T. K. (2017). Ch. 1: Our Globally Changing Climate. Climate Science Special Report: Fourth National Climate Assessment, Volume I. U.S. Global Change Research Program. https://doi.org/10.7930/J08S4N35




DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Hinman Box 6225
Dartmouth College
Hanover, NH 03755 USA
http://dujs.dartmouth.edu
dujs@dartmouth.edu
