DUJS
Dartmouth Undergraduate Journal of Science | WINTER 2021 | VOL. XXIII | NO. 1
HOPE FOR THE TRIUMPH OF SCIENCE IN TROUBLED TIMES

Neuroscience, Narrative, and Never-Ending Stories (pg. 58)
Sublimable Adhesives: The New Way to Stick (pg. 188)
Healthcare: Roots of Inequality (pg. 302)
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Letter From the Editor-in-Chief

The interesting thing about doing college remotely, as many college students now know, is that we tackle the same college-level subjects but are transported to a setting that almost parallels the one we had in high school. Despite the intellectual jump we have made in the past few years, we are all physically right back where we started. Just last Thursday I was sitting outside the café that was a staple of my pre-pandemic life, ordering the same latte and sitting in the same spot next to the outlet that had seen the frayed end of my computer charger many times before, when I noticed a few students walk in wearing matching sweatshirts whose last letters were unmistakably "HS." "High school students," I sighed, rolling my eyes. As a college student who was obviously decades older, I savored my intellectual superiority for a moment and looked back down at the thermodynamics problem set before me.

A few minutes later, I overheard the high school students starting a group project on cancer research. After running through a few search results on NCBI, deterred by the complexity of the papers, they started searching for more specific keywords relevant to the rubric they had been given. Eventually, I noticed that they clicked into a page titled "Regulating Regulatory T Cells to Fight Cancer," an article written by Frankie Carr. Frankie is a '22 at Dartmouth College and one of our DUJS writers. It took me a few minutes to recognize the layout of our website and realize that the article was one of ours! The students read the article all the way through and poked around the website, checking out a few different articles before they resumed their search in the NCBI database, this time looking for topics they had encountered on the DUJS website. The café took on a new meaning for me that day.
Within the Hanover community, Dartmouth College sometimes seems to be nestled within a bubble. At DUJS we have certainly been affected by this mindset, as many of our pre-pandemic efforts focused on sharing science writing within our own college community. The pandemic has forced us to look up from this niche community into other pockets of the world, and we have made efforts to expand our readership to a population as diverse as the student body we celebrate. The journal this term reflects topics pertinent to our new pandemic surroundings and, as usual, our interest in topics that grow out of class discussions. The journal has once again been able to feature a record number of writers, many of whom are newly inducted members of the DUJS family just beginning their journey into scientific journalism.

While terribly exciting, this term was also bittersweet for me, Sam Neff, Anna Brinks, Megan Zhou, and Liam Locke; all of us have been intimately involved with DUJS since our freshman year. In the nearly four years since we all met on a Thursday night in Carson 061, we have been inspired by the wonderful DUJS leaders before us and have tried to fill their large shoes over the past year. It has been a great honor and privilege to work with all the amazing writers who have come through, many of whom have become the backbone of the organization; DUJS would be virtually unrecognizable without their hard work and dedication. We as a board leave DUJS inspired, and we are ecstatic to follow the journal as we leave it in the hands of new leadership who will undoubtedly take the organization to new heights. We have cherished our time with all of you, editors and writers alike, and cannot wait to see where you take our beloved organization next.

Warmly,
Nishi Jain
Editor-in-Chief
DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs.dartmouth.science@gmail.com
Copyright © 2020 The Trustees of Dartmouth College
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community and beyond by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EXECUTIVE BOARD
President: Sam Neff '21
Editor-in-Chief: Nishi Jain '21
Chief Copy Editors: Anna Brinks '21, Liam Locke '21, Megan Zhou '21

EDITORIAL BOARD
Managing Editors: Anahita Kodali '23, Dev Kapadia '23, Dina Rabadi '22, Kristal Wong '22, Maddie Brown '22
Assistant Editors: Aditi Gupta '23, Alex Gavitt '23, Daniel Cho '22, Eric Youth '23, Sophia Koval '21

STAFF WRITERS
Aarun Devgan '23
Abenezer Sheberu '24
Abigail Fischer '23
Aditi Gupta '23
Alexandra Limb '21
Amittai Wekesa '24
Anahita Kodali '23
Andrew Sasser '23
Andy Cavanaugh '24
Anna Brinks '21
Arshdeep Dhanoa '24
Ashna Kumar '24
Audrey Herrald '23
Ayushya Ajmani '24
Bamlak Messay '23
Basile Montagnese '22
Callum Forest, Forsyth Country Day School
Cameron Sabet '24
Caroline Conway '24
Chelsea-Starr Jones '23
Christopher Connors '21
Collins Kariuki, Pomona College
Daniel Chen '24
Daniel Kotrebai '24
Dev Kapadia '23
Erika Hernandez '22
Georgia Dawahare '23
Gil Assi '22
Grace Nguyen '24
Ian Hsu '23
Jason Dong '24
Jennifer Chen '23
Jenny Chen '21
Jessica Meikle '23
Jillian Troth '24
Julia Draves '23
Julia Patterson '24
Juliette Courtine '24
Justin Chong '24
Justin Fajar '24
Katie Walther '24
Kristal Wong '22
Lauren Ferridge '23
Leah Johnson '23
Liam Locke '21
Lucy Fu '22
Maddie Brown '22
Maeen Arslan '23
Mary-Margaret Cummings '24
Megan Zhou '21
Melanie Prakash '21
Miranda Yu '24
Nephi Seo '23
Nishi Jain '21
Noa Phillips '23
Owen Seiner '24
Rachel Matthew '24
Rori Stuart '22
Rujuta Purohit '24
Sam Neff '21
Samantha Palermo '24
Soyeon (Sophie) Cho '24
Spriha Pandey '24
Stephanie Damish '24
Stephanie Finley '23
Stephanie Lebby '20
Toma Benarfa Moyne '24
Tyler Chen '24
Vaani Gupta '24
Vaishnavi Katragadda '24
Valentina Fernandez '24
Varun Lingadal '23
Wilson Murane '22
SPECIAL THANKS
Dean of Faculty
Associate Dean of Sciences
Thayer School of Engineering
Office of the Provost
Office of the President
Undergraduate Admissions
R.C. Brayshaw & Company
Table of Contents

Individual Articles

Warp Speed: A Brief History of Vaccine Development – Aarun Devgan '23, Pg. 6
Periodontal Disease: Pathogenesis, Treatment, and How One Superhuman Drug Could Change the Future of Dentistry – Alexandra Limb '21, Pg. 10
Data-Driven Behavior Change: Machine Learning and Its Impacts on Other Industries – Amittai Wekesa '24, Pg. 16
Factors in the Onset of Obesity - An American Public Health Crisis – Anahita Kodali '23, Pg. 32
Super Selective Synthesis: The Evolution of Enantioselective Methods – Andrew Sasser '23, Pg. 38
The Motivators and Social Consequences of Gossip – Ashna Kumar '24, Pg. 48
Renewed Views on Gaming During the COVID-19 Pandemic – Callum Forest, Forsyth Country Day School, Pg. 54
Neuroscience, Narrative, and Never-Ending Stories – Caroline Conway '24, Pg. 60
Superconductivity: Past, Present, and Future – Collins Kariuki, Pomona College, Pg. 66
An Overview of the Nuclear Industry – Dev Kapadia '23, Pg. 76
The Chilling Reason for Goosebumps – Georgia Dawahare '23, Pg. 90
A Short History of DNA: From Discoverers to Innovators – Gil Assi '22, Pg. 94
Understanding Infertility and Its Determinants – Grace Nguyen '24, Pg. 102
A Year of COVID-19: Implications for the Heart and Lungs – Jennifer Chen '23, Pg. 112
The Effects of Anxiety on Reaction to COVID-19 and Prevention Efforts – Jessica Meikle '23 & Leah Johnson '23, Pg. 122
The Microbiota-Gut-Brain Axis May Be the Missing Link Between Autism Spectrum Disorder and Anorexia Nervosa – Jillian Troth '24, Pg. 132
A Call for Efforts to Address Asian American Health Disparities: Fighting Heart Disease, Liver Disease, and Obesity – Kristal Wong '22, Pg. 148
COVID Symptom Severity and the Immune Response – Lauren Ferridge '23, Pg. 158
Gallbladder Cancer—Emerging Treatment Methods Offer Hope for Better Prognosis – Lucy Fu '22, Pg. 164
Compost Tea: A Review of the Experimental Data and Its Potential Implications – Maddie Brown '22, Ian Hsu '23, Basile Montagnese '22, & Julia Draves '23, Pg. 170
The Health and Environmental Impacts of Mass Incarceration – Maeen Arslan '23, Pg. 178
From Microneedles to Nanogels: The Diverse Mechanisms of Drug Delivery into the Body – Miranda Yu '24, Pg. 184
Sublimable Adhesives: The New Way to Stick – Nephi Seo '23, Pg. 190
Plant Environmental Perception Signaling Pathways & Epigenetic Memory – Owen Seiner '24, Pg. 194
The Impact of Stories as Therapy – Rachel Matthew '24, Pg. 200
Passing Through Barriers: Quantum Tunneling and its Applications – Rujuta Purohit '24, Pg. 206
Treatment and Early Detection of Cardiogenic Pulmonary Edema – Soyeon (Sophie) Cho '24, Pg. 216
Carbon Recycling: A Novel Pathway to Renewable Fuel Production – Spriha Pandey '24, Pg. 224
The Arctic's 'Zombie' Wildfires: Burning Peat Could Prove Disastrous in the Climate Fight – Spriha Pandey '24, Pg. 230
Fundamentals of Language Learning and Acquisition – Tyler Chen '24, Pg. 234
The Role of Phagocytosis in Intracerebral Hemorrhage: An Avenue for Novel Treatments and Better Patient Outcomes – Vaishnavi Katragadda '24, Pg. 246
A Closer Look at GM2 Gangliosidoses in Tay-Sachs Disease – Valentina Fernandez '24, Pg. 256

Team Articles

Augmenting the Human Experience - the Story of Prosthetics – Pg. 264
Beyond the Genetic Code: The Era of Epigenetics – Pg. 286
Healthcare: Roots of Inequality – Pg. 304
The History of DNA Sequencing Techniques – Pg. 328
Organoids on a Chip: Celebrating Decades of Work – Pg. 340
Selected Review of Topics in Primary Care Affected by COVID-19 – Pg. 352
Warp Speed: A Brief History of COVID-19 Vaccine Development
BY AARUN DEVGAN '23

Cover: DNA Molecules in 3D. Source: Pixabay
Introduction

The novel coronavirus has been at the center of attention in national news and local communities alike for over a year. The virus was discovered in December 2019 after it emerged and rapidly spread in Wuhan, China. By March 2020, the World Health Organization had declared COVID-19 a worldwide pandemic as a result of the speed and reach of its spread. Due to its damaging respiratory effects and its similarity to the first SARS virus, which ravaged Asia in 2002-2003, the virus was named severe acute respiratory syndrome coronavirus 2, or SARS-CoV-2; the disease it causes was named COVID-19 (Yuki et al., 2020). The SARS pandemic of 2002-2003 (caused by SARS-CoV-1) infected 8,096 people and killed 774 (CDC, 2013). SARS-CoV-2 has high homology to SARS-CoV, sharing around 80% of its genetic makeup. However, SARS-CoV-2 has proven far more contagious and, in terms of total deaths, far more deadly: as of March 18th, 2021, there have been 122 million cases of COVID-19 worldwide, resulting in 2.69 million deaths (Ritchie, 2021).
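Taken together, the cited figures make the comparison concrete: per reported case, the 2002-2003 SARS outbreak was far deadlier, while SARS-CoV-2's much wider spread has produced vastly more total deaths. A minimal sketch of that crude arithmetic (illustrative only: naive deaths-to-cases ratios that ignore undercounting and reporting lag):

```python
def case_fatality_ratio(deaths: int, cases: int) -> float:
    """Crude case-fatality ratio: reported deaths per reported case, as a percentage."""
    return 100 * deaths / cases

# Figures cited in the text above.
sars_2003 = case_fatality_ratio(deaths=774, cases=8_096)               # SARS-CoV-1, 2002-03
covid_2021 = case_fatality_ratio(deaths=2_690_000, cases=122_000_000)  # SARS-CoV-2, March 2021

print(f"SARS (2002-03) crude CFR:  {sars_2003:.1f}%")   # ~9.6%
print(f"COVID-19 crude CFR so far: {covid_2021:.1f}%")  # ~2.2%
```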
Development of the Vaccine

Remarkably, the SARS-CoV-2 vaccine was developed in less than a year. This is a truly incredible feat, especially considering that the previous fastest-developed vaccine (for mumps) took four years, back in 1967. The speed is largely thanks to the fact that SARS-CoV-2 belongs to a family of coronaviruses that has been studied extensively in the past. The CDC explains that there are seven species of coronavirus: four common ones that cause the common cold and three more dangerous species that caused the SARS outbreak in 2002, MERS in 2012, and COVID-19 in 2019 (CDC, 2020). In fact, coronaviruses have been studied for over 50 years, which means that the viral structure, genome, and life cycle of these viruses are already well understood. In addition, vaccine development has been further accelerated by the fact that this is a global pandemic: scientists around the globe have shared findings and worked around the clock to speed up the development process and save lives.

Introduction to mRNA Vaccines

The vaccines ultimately developed to immunize individuals against COVID-19 are messenger RNA (mRNA) vaccines, a relatively new strategy for protecting against infectious diseases. Until now, the standard method of vaccination has been to introduce a weakened or inactive form of the virus into the body. This trains the immune system to fight off the virus so that, upon future exposure, it is ready to fight off the infection successfully. Messenger RNA vaccines, in contrast, provide a set of instructions that enable the body to produce viral proteins itself, triggering a similar immune response that provides protection against real infection (CDC, 2020).

The way mRNA vaccines work is actually quite simple. The vaccines contain genetic material (messenger RNA) encased in a special coating. Messenger RNA provides the instructions from which the ribosomes in our cells synthesize proteins, and the special coating allows the vaccine to reach the cells before the body's defense systems break it down. In this case, the messenger RNA encodes the SARS-CoV-2 spike protein. After cells have produced the protein, they display it on their surface. The immune system recognizes that the viral protein does not belong in the body and begins to make antibodies against it (CDC, 2020). These antibodies are large Y-shaped proteins used by the immune system to identify antigens present on foreign particles like SARS-CoV-2 (Ghose, 2020). Antibodies identify foreign antigens selectively, since each antibody binds only its specific antigen, making it possible to detect the presence of the coronavirus even among thousands of other proteins.

Figure 1: Close-up image of coronavirus particles. Source: Pixabay

Safety and Efficacy of the COVID-19 Vaccine

While many have expressed safety concerns about the COVID-19 vaccine, mRNA vaccines are actually quite safe. For starters, mRNA vaccines do not carry a live virus, so there is no danger of a disabled virus regaining infectious abilities. The mRNA never enters the nucleus of the cell, and therefore does not interact directly with human DNA. Additionally, mRNA is non-replicative and is cleared from the body through metabolic processes within a couple of days (Schlake, 2012).

The vaccines are not just safe but highly effective. The vaccine made by Pfizer/BioNTech showed 95% efficacy in a trial involving 44,000 people (Polack, 2020), and Moderna's vaccine demonstrated 94% efficacy in a trial of 30,000 people (Baden, 2021). These efficacies refer to receiving two doses, spaced three weeks apart for the Pfizer vaccine and four weeks apart for Moderna's. While not 100%, these vaccines are extremely effective compared to others: data from the U.S. Influenza Vaccine Effectiveness Network estimated the overall effectiveness of the 2019-2020 seasonal influenza vaccine at just 45% (CDC, 2020).
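The efficacy figures above come from comparing case counts between the vaccinated and placebo arms of each trial. A simplified sketch of the calculation (assuming equal-sized arms, so relative risk reduces to a ratio of case counts; the 8-versus-162 split mirrors the counts in the Pfizer/BioNTech trial report, though the published 95.0% figure was derived with more careful person-time methods):

```python
def vaccine_efficacy(cases_vaccinated: int, cases_placebo: int) -> float:
    """Efficacy = (1 - relative risk) x 100.

    Assumes the two trial arms are the same size, so the relative
    risk reduces to a simple ratio of case counts.
    """
    return 100 * (1 - cases_vaccinated / cases_placebo)

# Illustrative counts patterned on the Pfizer/BioNTech phase 3 trial:
# 8 COVID-19 cases among vaccinated participants vs. 162 among placebo.
print(f"Estimated efficacy: {vaccine_efficacy(8, 162):.1f}%")  # ~95.1%
```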
While the vaccine is widely regarded as safe, it does carry some mild side effects. As with most shots, there will be pain and soreness at the injection site for a few days after vaccination. There is also a possibility of swollen and painful lymph nodes in the arm receiving the vaccine. Tiredness, headaches, muscle aches, nausea, and fever are possible but less likely side effects (Shmerling, 2021). Many vaccine recipients have noted more severe symptoms after receiving the second dose than the first. This is because after the first dose, it takes the body time to recognize the antigen and for the immune system to rev up; when the second dose is received, the immune system is ready to attack the antigen, which results in slightly worse physical symptoms (Mayo Clinic, n.d.). It is important to keep in mind that the benefits of the vaccine, namely preventing or dampening the effects of COVID-19, greatly outweigh its potential side effects.

Figure 2: Patient receiving a vaccine. Source: Pixabay
In addition to these mild side effects, there is a small possibility of a severe, life-threatening allergic reaction called anaphylaxis. The chances of this are very rare: the CDC estimates that about 11 in a million recipients will experience this reaction. It tends to occur very soon after vaccination, so it is recommended that patients be monitored for 15 minutes afterwards. On the off chance that an allergic reaction does occur, an EpiPen can be used to treat it (Shmerling, 2021).
Comparing Vaccine Developers

As of February 2021, only the vaccines developed by Pfizer and Moderna are being distributed in the United States. While these vaccines have slight differences, they are more similar than different: both use mRNA technology, have efficacy of around 95%, and must be taken in two doses spaced a few weeks apart. One difference lies in dosage: the Pfizer vaccine is a 30-microgram dose while Moderna's is 100 micrograms. Since the smaller Pfizer dose does not sacrifice efficacy, Moderna has been asked to test the efficacy of lower doses so that resources can be allocated more widely. Another difference, and an obstacle for the Pfizer vaccine, is that it must be stored between -112 °F and -76 °F; the vaccines therefore have to be shipped in thermal containers with dry ice and stored in special freezers (Branswell, 2021).

In addition to these two manufacturers, Johnson & Johnson has developed a vaccine requiring only one dose. Rather than mRNA technology, J&J uses a viral vector vaccine: a harmless adenovirus (a common medium-sized virus known to cause fevers) carries the genetic code for the SARS-CoV-2 spike protein. J&J has used this technology before to develop an Ebola vaccine authorized by the European Medicines Agency. The J&J one-dose vaccine has proven to be 72% effective in the United States at warding off infection and 85% protective against developing severe symptoms (Branswell, 2021). It seems promising for quickly vaccinating the entire population of the United States: Johnson & Johnson received emergency use authorization from the U.S. FDA in early 2021, and there are plans to distribute 100 million shots within the first half of the year (J&J, 2021).

Vaccine Rollout Plan

The scientific consensus is that all people should eventually receive the COVID-19 vaccine. With the limited supply, however, each state government has come up with a plan to distribute vaccines first to those who need them most. While there are variations between states, the general rollout plan is broken into three phases. Phase 1 has two subgroups, 1A and 1B. The phase 1A group, made up of healthcare workers and first responders, is first to receive the vaccine. Next come the people in phase 1B: older adults living in group homes or overcrowded areas, and people of all ages with underlying conditions that put them at significantly higher risk of severe symptoms. Phase 2 encompasses a wide array of people: schoolteachers, childcare workers, residents of group homes for people with disabilities or mental illness, people in jail, people of all ages with an underlying condition putting them at moderate risk of severe symptoms, and all older adults. Phase 3 covers the rest of the population, including children and young adults (CDC, 2020).

While the CDC has proposed this rollout plan as the fairest and best way to end the pandemic, there is some controversy. The main question is whether frontline workers and first responders should be prioritized over the elderly. If the main goal is to vaccinate those at most risk, then it makes sense to vaccinate the at-risk elderly first. However, if the goal is to end the pandemic quickly and reduce the total number of deaths, then frontline workers should arguably be vaccinated first (Goodnough & Hoffman, 2020).
Conclusion

Since late 2019, the coronavirus has put the daily lives of people around the globe on hold, bringing unprecedented economic, social, and medical hardships. But thanks to the record-breaking speed of COVID-19 vaccine development, the end of the pandemic is in sight. While there are differences among the three vaccines (Pfizer's, Moderna's, and J&J's), at the end of the day they all do an excellent job of preventing coronavirus infection and dampening symptoms in those who do contract it. The main goal now is to vaccinate all residents of the United States as quickly as possible, starting with those at most risk.

References

Baden, L. R., et al., for the COVE Study Group. (2021, February 4). Efficacy and safety of the mRNA-1273 SARS-CoV-2 vaccine. New England Journal of Medicine. https://www.nejm.org/doi/full/10.1056/NEJMoa2035389

Centers for Disease Control and Prevention. (2013, April 26). CDC SARS response timeline. https://www.cdc.gov/about/history/sars/timeline.htm

Centers for Disease Control and Prevention. (2020, February 15). Human coronavirus types. https://www.cdc.gov/coronavirus/types.html

Centers for Disease Control and Prevention. (2020, March 26). Interim estimates of 2019–20 seasonal influenza vaccine effectiveness – United States, February 2020. https://www.cdc.gov/mmwr/volumes/69/wr/mm6907a1.htm?s_cid=mm6907a1_w

Centers for Disease Control and Prevention. (2020, December 18). Understanding mRNA COVID-19 vaccines. https://www.cdc.gov/coronavirus/2019-ncov/vaccines/different-vaccines/mrna.html

Ghose, T. (2020, July 17). What are antibodies? LiveScience. https://www.livescience.com/antibodies.html

Goodnough, A., & Hoffman, J. (2020, December 5). The elderly vs. essential workers: Who should get the coronavirus vaccine first? The New York Times. https://www.nytimes.com/2020/12/05/health/covid-vaccine-first.html

Johnson & Johnson. (2021, February 27). Johnson & Johnson COVID-19 vaccine authorized by U.S. FDA for emergency use – first single-shot vaccine in fight against global pandemic. https://www.jnj.com/johnson-johnson-covid-19-vaccine-authorized-by-u-s-fda-for-emergency-use-first-single-shot-vaccine-in-fight-against-global-pandemic

Mayo Clinic. (n.d.). Understanding COVID-19 vaccine side effects, why second dose could feel worse. https://newsnetwork.mayoclinic.org/discussion/understanding-covid-19-vaccine-side-effects-why-second-dose-could-feel-worse/

Polack, F. P., et al., for the C4591001 Clinical Trial Group. (2020, December 31). Safety and efficacy of the BNT162b2 mRNA Covid-19 vaccine. New England Journal of Medicine. https://www.nejm.org/doi/full/10.1056/NEJMoa2034577

Ritchie, H., et al. (2021, March 18). Coronavirus (COVID-19) deaths – statistics and research. Our World in Data. https://ourworldindata.org/covid-deaths?country=IND~USA~GBR~CAN~DEU~FRA

Schlake, T., Thess, A., Fotin-Mleczek, M., & Kallen, K.-J. (2012). Developing mRNA-vaccine technologies. RNA Biology, 9(11), 1319–1330. https://doi.org/10.4161/rna.22269

Shmerling, R. H. (2021, February 11). COVID-19 vaccines: Safety, side effects – and coincidence. Harvard Health Blog. https://www.health.harvard.edu/blog/covid-19-vaccines-safety-side-effects-and-coincidence-2021020821906

Solis-Moreira, J. (n.d.). COVID-19 vaccine: How was it developed so fast? Medical News Today. https://www.medicalnewstoday.com/articles/how-did-we-develop-a-covid-19-vaccine-so-quickly#Other-coronaviruses

The National Academies of Sciences, Engineering, and Medicine. (n.d.). A framework for equitable allocation of vaccine for the novel coronavirus. https://www.nationalacademies.org/our-work/a-framework-for-equitable-allocation-of-vaccine-for-the-novel-coronavirus

Yuki, K., Fujiogi, M., & Koutsogiannaki, S. (2020). COVID-19 pathophysiology: A review. Clinical Immunology, 215, 108427. https://doi.org/10.1016/j.clim.2020.108427
Periodontal Disease: Pathogenesis, Treatment, and How One Superhuman Drug Could Change The Future of Dentistry
BY ALEXANDRA LIMB '21

Cover: Visualization of Porphyromonas gingivalis, one of the bacterial strains responsible for the pathogenesis of periodontal disease. An overabundance of oral microbes can lead to the formation of dental plaque and eventually trigger inflammation of the gums. Source: Wikimedia Commons
What is Periodontal Disease?

Periodontal disease, more commonly known as gum disease, is the infection and inflammation of the gums within the mouth. It is one of the most common oral health concerns among adults, affecting 47.2% of individuals over the age of 30 in the U.S. The numbers are even higher in geriatric populations, with 70.1% of adults over the age of 65 experiencing periodontal disease. Common risk factors include smoking, diabetes, genetic predisposition, and immunodeficiencies (such as AIDS). However, the disease is most often the result of consistently poor brushing and flossing practices over a long period of time (Periodontal Disease, 2018) [Figure 1]. Inadequate brushing allows bacterial strains such as Porphyromonas gingivalis, Aggregatibacter actinomycetemcomitans, and Prevotella intermedia to persist within the oral cavity (Vieira, 2014). While the human
body naturally possesses a rich microbiome within the mouth, the overgrowth of these microorganisms due to poor oral hygiene alters the immune response that typically manages the ecological relationship between commensal bacteria and their human hosts (Cekici et al., 2014). Thus, the presence of these bacterial strains is not necessarily the direct cause of the disease; rather, it is the rapid increase in their growth rate that drives the immune and inflammatory responses associated with periodontal disease (Cekici et al., 2014). Periodontal disease often follows a progression: first, the abundance of microbial strains allows them to structurally organize into an ordered biofilm layer commonly known as dental plaque (Marsh, 2006). This layer can then harden alongside mucus to form calculus, which can spread further below the gum line (Marsh, 2006). If the calculus is not successfully removed by dental treatment, it begins to cause a myriad of problems, eventually leading to the development of periodontitis.

Figure 1: Image of an individual flossing between the teeth. Periodontal disease is most often caused by a lack of adequate brushing and flossing habits over a long period of time, which leads to an overgrowth of bacteria and subsequent development of plaque, calculus, and inflamed gums. Source: NIH Image Gallery

Once the dental plaque begins to form, the body launches an initial inflammatory response to the congregation of bacteria at the dentogingival margin (the interface between the gums and teeth). This is called gingivitis, an early stage of periodontal disease characterized by inflammation of the gingival epithelium and the connective tissue between the teeth and gums (Cekici et al., 2014). Physical indications of gingivitis include bad breath, tooth sensitivity, pain while chewing, and red, swollen, bleeding gums (Vieira, 2014). This stage of periodontal disease pathogenesis is referred to as the "initial lesion," a preliminary inflammatory response of nearby leukocytes, the white blood cells that target bacterial and other foreign invasions (Cekici et al., 2014; What Are White Blood Cells?, University of Rochester Medical Center, n.d.). The gingiva's epithelial cells trigger an immune pathway in which signaling proteins direct neutrophils, the immune cells that form the first line of defense against infection, to leave local blood vessels and migrate to the site of tissue inflammation in order to kill bacteria. As more immune cells such as neutrophils, macrophages, and lymphocytes arrive, further inflammation occurs, accompanied clinically by gingival bleeding. Attachment between tooth and connective tissue still remains intact at this stage (Cekici et al., 2014). The terminology "periodontitis" refers to a more progressive form of periodontal disease resulting in inflammation of supportive tissues
of the teeth alongside tissue attachment loss and bone destruction. Periodontitis is associated with gum recession from the teeth, bone loss, and tooth loss. It results from the transition from an “established lesion” to an “advanced lesion.” The established lesion, understood clinically as moderate to severe gingivitis, occurs when the body moves from its innate to acquired immune response. The innate immune response is the body’s initial, nonspecific line of defense against foreign invaders in which macrophages are dominant. The acquired or “adaptive” immune response is a slower line of defense, but it better targets the specific germ causing infection using T and B lymphocytes. An advanced lesion, the most severe form of periodontal disease, manifests when loss of tissue and bone occur. At this point, inflammation can extend much deeper, down to the alveolar bone (Cekici et al., 2014). An overall depiction of the physical indicators of periodontal disease, from its early to more advanced stages, is shown below. [Figure 2]
Figure 2: An illustration of the features of a healthy, normal tooth versus one affected by periodontal disease. Notably, periodontal disease can result in plaque and calculus buildup, inflamed gums, receding bone, and a deepened pocket between tooth and gum attachment. Source: Wikimedia Commons

Critical Impacts of Periodontal Disease on Cardiovascular Health

Despite its centralized location within the mouth, periodontal disease has severe implications for the rest of the body. Notably, it contributes significantly to the development of cardiovascular disease, a leading cause of death globally: in 2016, 31% of all global deaths were due to some form of cardiovascular disease (Cardiovascular Diseases, n.d.). Recent research establishes that oral infections are a significant contributor to atherosclerosis, a disease affecting the arteries that can compromise blood flow. This condition involves several steps: arterial fatty deposits known as atheromas accumulate within the walls of arteries, blood flow is restricted and blood clots subsequently form, tissue supplied by the affected blood vessels dies, and eventually, death can occur. [Figure 3]
Specifically, studies have shown that periodontal bacterial strains activate proteins that cause platelet aggregation, resulting in increased fatty buildup in artery walls. Another study, by Haraszthy et al., detected DNA from periodontal pathogens in carotid atheromas; notably, 26% of specimens were positive for P. gingivalis, a known periodontal pathogen (Dhadse et al., 2010). In another study, 59.9% of coronary artery samples contained periodontal bacterial DNA, and P. gingivalis was present in 52.9% of those samples (Vieira, 2014). There are also links between the presence of periodontal pathogens and increased C-reactive protein (CRP) levels. CRP is a major contributor to atheroma formation: by binding diseased muscle tissue during myocardial infarction (heart attack), it promotes inflammation. A study based in Scotland demonstrated that participants with very poor oral hygiene had elevated CRP levels and an increased risk of cardiovascular disease, suggesting a connection between periodontal bacteria and cardiovascular health (Dhadse et al., 2010).
Understanding Current Treatments for Periodontal Disease

Established treatment methods for periodontal disease depend heavily on the stage of the disease. As a preventative measure, brushing and flossing significantly reduce the growth of microbes in the oral cavity. Additionally, supragingival and subgingival irrigation methods, which inject water or acetylsalicylic acid into and around the gums, can successfully reduce microbial accumulation. This method washes away unattached plaque and blocks the initiation of periodontitis. However, once the initial inflammation begins, mechanical therapies must be used. "Periodontal scaling" is the gold standard of nonsurgical treatment for periodontal disease. Power-driven, vibrating instruments mechanically remove the microbial colonies on the teeth and at the gum line, eliminating plaque and calculus. After this process, microbial accumulation drops to 0.1% of its original level, but the bacteria recolonize within a week (Tariq et al., 2012). As a result, mechanical methods cannot be used alone. A one-stage, full-mouth disinfection combines mechanical and antimicrobial therapy to avoid further infection from microbial regrowth. For example, doxycycline at subantimicrobial doses can prevent the harmful aspects of the immune response to bacteria (Tariq et al., 2012). Chemotherapeutic agents can also help treat the disease by altering host responses to bacteria.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Figure 3: Diagram showing the progression of atherosclerosis within the arteries. As fatty deposits of plaque and cholesterols build up within the arterial walls, blood flow is restricted due to narrowed arteries. This can have severe consequences for cardiovascular health by causing blood clots, infarction of blood vessels, and even death. Source: Wikimedia Commons
Surgical methods can be used as a more invasive, but effective, form of treatment. Conservative surgical treatment can be performed to retain alveolar bone and soft tissue as well as to gain access to the root surfaces to remove plaque and calculus (Graziani et al., 2017). This includes flap surgery, which lifts the gums back to remove plaque and calculus and then repositions them to reduce the space between gum and tooth, shrinking the periodontal pocket where bacterial growth can occur. Other surgical treatments include bone grafting, which facilitates bone regrowth and attachment to the teeth, and gum grafting, which restores receding gums to a more natural level (Treating Gum Disease, n.d.). [Figure 4] Resective surgeries smooth and re-contour the alveolar bone and tissue in order to decrease craters, making it harder for bacteria to aggregate and grow (Graziani et al., 2017; Treating Gum Disease, n.d.).
Superhuman Drug Developments for the Future of Periodontal Disease Treatment

Although existing treatments mitigate the impacts of periodontitis, there is currently no way to reverse its effects. While surgical and nonsurgical therapies can certainly control the growth of bacteria and manage the progression of the disease, there is no way to completely return to a pre-periodontal disease state. However, scientists are now proposing innovative new treatment methods to eventually reverse periodontitis altogether
by complete restoration of diseased tissue, particularly in geriatric patients. In a recent study conducted within the University of Washington Department of Oral Health Sciences, aging mice were treated with rapamycin, an FDA-approved drug used to treat cancers and lymphomas that has more recently been established to have anti-aging capacities. The study focused on rapamycin's effectiveness as a novel treatment modality for slowing the progression of periodontal disease (An et al., 2020). The central understanding behind this research is that age is one of the most critical risk factors for periodontal disease. As the body ages, molecular changes within periodontal tissue cells contribute to greater bone loss in elderly periodontitis patients (Huttner et al., 2009). As a result, novel methods that target and reverse the biological aging process could show highly promising results for the future of periodontal treatment. Rapamycin is
Figure 4: Gum grafting is a surgical method for periodontal disease in which gum recession can be restored to the original gum line. While more invasive, this can be effective in preventing further bacterial growth. It also can be aesthetically pleasing by giving the gums a healthier appearance. Source: Wikimedia Commons
an inhibitory drug that targets the mechanistic target of rapamycin complex 1 (mTORC1). In experiments on mice, it has been shown to extend lifespan and improve age-related genetic phenotypes. Notably, it was found to extend the mean lifespan of mice by around 10% when administered in a short-term treatment course of 6-12 weeks during adulthood. Other effects of rapamycin on mice include improved cardiac and immune system function (Rapamycin Rejuvenates Oral Health in Aging Mice | eLife, n.d.).
Figure 5: A depiction of the p65 regulatory protein subunit that is part of the larger NF-κB transcription factor group. This biomolecule plays a significant role in the expression of target inflammatory genes that are responsible for gingival inflammation. Source: Wikimedia Commons
In the periodontal disease study, aged mice were treated with rapamycin or a placebo over the course of eight weeks, and micro-CT imaging was performed. Three major components of periodontal disease were monitored throughout the treatment process: loss of periodontal bone, inflammation of connective tissue, and pathogenic changes to the microbiome. CT scans were used to measure the amount of periodontal bone present at the maxilla and mandible sites. Measurements were conducted at 6 months (young), 14 months (adult), and 20 months (elderly). In both control and experimental groups, the results indicated that periodontal bone loss occurred with aging. However, the mice treated with rapamycin maintained significantly more periodontal bone than the controls. Furthermore, it was confirmed that this was not merely a slowing of existing bone loss, but rather the growth of entirely new bone. The second factor used to track periodontal disease progression was gingival inflammation among test subjects. Part of the biological aging process involves an accumulation of a group of pro-inflammatory transcription factors known as nuclear factor-κB (NF-κB). NF-κB is activated in both the normal aging process and periodontal disease pathogenesis, triggering the expression of specific gene regulators that subsequently induce expression of target inflammatory genes. The NF-κB group of regulatory proteins is composed of various subunits, such as p65, which was the primary focus of this experiment. In control mice, there was an increase in p65 regulatory protein expression within the gingival tissue and bone, triggering the signaling pathway that eventually led to greater inflammatory gene expression. Thus, increased p65 levels coincided with an exacerbated inflammatory response in the gingival tissues [Figure 5].
Mice treated with rapamycin showed successful reversal of the age-related inflammatory changes in the oral cavity caused by the p65 regulatory protein, indicating that the drug treatment has the potential to reduce gingival inflammation due to aging. Lastly, the oral microbiome was studied over the course of treatment to see if rapamycin could revert it to a healthier condition. Analysis of the bacterial diversity within the oral cavity showed that the rapamycin-treated mice had less of an increase in microbial species richness during aging than the control subjects. Specifically, there was a significant reduction of Bacteroidetes in the rapamycin-treated mice. This vast phylum of roughly 700 bacterial species includes the strains associated with periodontal disease, such as P. gingivalis. Notably, when researchers compared the levels of Bacteroidetes between young, untreated mice and old, rapamycin-treated mice, the treated mice had the same or lower levels. This suggests that the drug could successfully return an aging, bacteria-rich microbiota to its original, more balanced composition. Overall, these results indicate that treatment with rapamycin resulted in regeneration of periodontal bone, decreased inflammation of the gingiva and periodontal bone, and a reversal of the oral cavity microbiota to a non-pathogenic state. This study claims to be the first report of reversing an aged oral cavity to a youthful state, indicating the vast potential for further development in age-centered periodontal treatment methods. A critical area of further study is whether these age-reversing effects will persist in the long term or whether the oral cavity will revert to its aged state, as happens with many existing treatments. Moreover, it would be interesting to consider what other age-related oral health functions could benefit from treatment with
rapamycin (Rapamycin Rejuvenates Oral Health in Aging Mice | eLife, n.d.). Periodontal disease is an extremely prevalent global health concern and demands attention due to its consequences not only within the mouth but throughout the rest of the body as well. In particular, its critical link to cardiovascular disease demonstrates the importance of pursuing novel treatment methods that can eventually eradicate periodontitis altogether, as well as help prevent cardiovascular disease. While dentistry has come a long way in establishing successful treatment methods that allow patients to manage their periodontitis, the future of the field involves treatments that fully reverse damage so patients do not have to suffer in the long term. Reversing aging in mice, and potentially applying this treatment to humans in the future, is perhaps one of the most unexpected but fascinating upheavals within dentistry. The use of a dental drill or antibiotic treatment to mitigate the impacts of periodontitis is not unexpected, but a complete reversal of the oral cavity's aging at a biological level is groundbreaking, perhaps even superhuman.
References

An, J. Y., Kerns, K. A., Ouellette, A., Robinson, L., Morris, H. D., Kaczorowski, C., & Kaeberlein, M. (2020). Rapamycin rejuvenates oral health in aging mice. eLife, 9. https://doi.org/10.7554/eLife.54318

Cardiovascular diseases (CVDs). (n.d.). Retrieved March 9, 2021, from https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds)

Cekici, A., Kantarci, A., Hasturk, H., & Van Dyke, T. E. (2014). Inflammatory and immune pathways in the pathogenesis of periodontal disease. Periodontology 2000, 64(1), 57–80. https://doi.org/10.1111/prd.12002

Dhadse, P., Gattani, D., & Mishra, R. (2010). The link between periodontal disease and cardiovascular disease: How far we have come in last two decades? Journal of Indian Society of Periodontology, 14(3), 148–154. https://doi.org/10.4103/0972-124X.75908

Graziani, F., Karapetsa, D., Alonso, B., & Herrera, D. (2017). Nonsurgical and surgical treatment of periodontitis: How many options for one disease? Periodontology 2000, 75(1), 152–188. https://doi.org/10.1111/prd.12201

Huttner, E. A., Machado, D. C., de Oliveira, R. B., Antunes, A. G. F., & Hebling, E. (2009). Effects of human aging on periodontal tissues. Special Care in Dentistry: Official Publication of the American Association of Hospital Dentists, the Academy of Dentistry for the Handicapped, and the American Society for Geriatric Dentistry, 29(4), 149–155. https://doi.org/10.1111/j.1754-4505.2009.00082.x

Marsh, P. D. (2006). Dental plaque as a biofilm and a microbial community – implications for health and disease. BMC Oral Health, 6(Suppl 1), S14. https://doi.org/10.1186/1472-6831-6-S1-S14

Periodontal Disease | Oral Health Conditions | Division of Oral Health | CDC. (2018, December 14). https://www.cdc.gov/oralhealth/conditions/periodontal-disease.html

Tariq, M., Iqbal, Z., Ali, J., Baboota, S., Talegaonkar, S., Ahmad, Z., & Sahni, J. K. (2012). Treatment modalities and evaluation models for periodontitis. International Journal of Pharmaceutical Investigation, 2(3), 106–122. https://doi.org/10.4103/2230-973X.104394

Treating Gum Disease. (n.d.). Retrieved March 9, 2021, from https://my.clevelandclinic.org/health/treatments/10907-gum-disease-treatment

Vieira, R. W. (2014). Cardiovascular and periodontal diseases. Revista Brasileira de Cirurgia Cardiovascular, 29(1), VII–IX. https://doi.org/10.5935/1678-9741.20140003

What Are White Blood Cells? – Health Encyclopedia, University of Rochester Medical Center. (n.d.). Retrieved March 6, 2021, from https://www.urmc.rochester.edu/encyclopedia/content.aspx?ContentID=35&ContentTypeID=160
WINTER 2021
Data-Driven Behavior Change: Machine Learning and Its Impacts on Other Industries

BY AMITTAI WEKESA '24

Cover: A visualization of data on lunar impact craters, analyzed with machine learning and deep learning. Machine learning offers efficient analysis of massive datasets. Source: Wikimedia Commons
Introduction
Machine learning can be an elusive term, often known for being a subfield of artificial intelligence (AI). Artificial intelligence can generally be subdivided into five fields: machine learning, deep learning, computer vision, robotics, and natural language processing. Deep learning is more specialized than machine learning, extending neural networks, a machine learning model, to more sophisticated structures which usually yield better predictions. Computer vision and natural language processing are specific to the areas of image and language processing. On the other hand, robotics usually incorporates the other AI subfields for stimulus detection and response in order to make intelligent decisions in automated machinery and robot systems. Evidently, all subfields drive each other and are not clearly defined without one another; machine learning must be understood in the broader context of artificial intelligence.

A History of Artificial Intelligence
Developments in artificial intelligence theory started as early as the 1930s with Alan Turing and Kurt Gödel. In 1931, Gödel, an Austrian logician, developed a mathematical proof that, although all true statements are derivable in first-order systems, some are unprovable in higher-order systems. This called attention to the common practice scientists and mathematicians had at the time of extending simple first-order arithmetic axioms to higher-order systems (Ertel & Black, 2018). This development was a step towards understanding the possibilities of infinite choice and how axioms from simpler systems can and cannot be extended to more complex setups. Meanwhile, Alan Turing's proof that no general algorithm can decide the halting problem (the problem of predicting whether a program will finish running or continue running forever, based on a description of the program and its input) for all possible program-input pairs also went towards defining the computational limits that any intelligent machine would have to contend with (Ertel & Black, 2018). In 1943, McCulloch and Pitts modeled the first theoretical neural network, a predictive structure that is still used in machine learning and deep learning systems today. However, due to computational limitations, they could not build a fully functional model at the time (López-Cajún & Ceccarelli, 2018). Despite these theoretical formulations, artificial intelligence was still a merely academic field comparable to mathematics: theoretical proofs with no practical developments yet. It was an undefined field regarded under Automata Studies, a broader area that included the study of general automata theory not directly linked to artificial intelligence (Shannon & McCarthy, 1956). While this generalization and the underdeveloped technology of the time hindered focused research in artificial intelligence, the field still found its way into numerous symposiums and conferences, such as the 1948 Hixon Symposium on Cerebral Mechanisms in Behavior held at Caltech, which inadvertently found itself comparing the computer and the brain, albeit theoretically. The first stored-program computers were pioneered in 1949. A year later, Marvin Minsky and Dean Edmunds, two Ph.D. students at Harvard, designed a simple neural network, a basic deterministic prediction engine, long before the neural network was even formally conceptualized. Claude Shannon, a computing professor at the Massachusetts Institute of Technology, proposed a chess program and built many relay systems demonstrating features of intelligence (Sejnowski, 2018). In 1955, Arthur Samuel from IBM made a checkers program that learned to play better than its inventor, the first program to achieve this feat (Rada, 1986, p. 119). Other pioneers emerged with their new ideas and developments. The same year, Allen Newell and Herbert Simon from Carnegie Mellon University created the Logic Theorist, a computer program that could prove logical theorems in Principia Mathematica, a sophisticated mathematics text. However, a lack of coordinated effort to pursue artificial intelligence hindered progress in the field for a while (Sejnowski, 2018).
In 1956, Claude Shannon proposed a machine intelligence workshop that brought together many pioneers in the field, including Shannon, Minsky, Nathaniel Rochester, and John McCarthy. This workshop, held at Dartmouth College and broadly known as the "Dartmouth Workshop," became the first collaborative effort in AI and even coined the term "artificial intelligence" (McCarthy, 2006). McCarthy went on to establish an AI lab at Stanford University and to invent LISP, a high-level programming language that became a standard tool in developing machine learning and AI systems. In 1962, Frank Rosenblatt published Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. This book sought to go beyond the biology of brain matter and explore how human intelligence works. The same year, David Hubel and Torsten Wiesel published Receptive Fields, Binocular Interaction and Functional Architecture in the Cat's Visual Cortex, a paper that, for the first time, analyzed the response properties of individual neurons (Rada, 1986). These works offered interesting perspectives to early developers in artificial intelligence, which, even today, is often designed to mimic the processes of brain functionality in some capacity. Even then, scientists were only just beginning to understand AI's capabilities and limitations. By 1972, French scientist Alain Colmerauer invented yet another logic-based
Figure 1: People may compare artificial intelligence to human intelligence, but there are key differences between AI and human intelligence. Source: Flickr
Figure 2: A recreation of an Abacus machine, one of the earliest computational devices. While such a device supported basic mathematical computations, it could not support sophisticated machine learning and AI algorithms. Source: Wikimedia Commons
Figure 3: A simple linear regression model where the data points are mapped out onto some spatial space based on a specific feature (such as house prices). Then, a line of best fit (or regression) is determined. Predictions can then be made on new data points depending on where they lie on the generated model. Source: Wikimedia Commons
programming language, PROLOG, which sought to address the shortfalls of LISP and other programming languages (Ertel & Black, 2018). The new language provided advantages that encouraged people to pursue projects in computing. For example, around nine years later, Japan embarked on the "Fifth Generation Project" to develop the first fifth-generation computer based on PROLOG. In 1990, Pearl, Cheeseman, Whittaker, and Spiegelhalter officially brought probability theory into artificial intelligence with Bayesian networks (Sejnowski, 2018). With Bayesian models, developers could design algorithms that account for probabilistic uncertainty in inputs and outputs, resulting in better predictions that compensate for possible errors. This model popularized multi-agent systems and opened the door for new understanding and developments in the field. In 1992, yet another machine learning subfield, reinforcement learning, received its first significant validation when Gerald Tesauro's temporal difference learning algorithm demonstrated the superiority of reinforcement learning over traditional supervised learning (Tesauro, 1995). In 1997, the chess program Deep Blue defeated chess grandmaster Garry Kasparov (Kasparov, 1998). Advancements in the 2000s and 2010s came mainly in robotics, deep learning, and reinforcement learning, which have risen to be some of the most critical AI fields. From the awe-inspiring robotics soccer leagues of 2003, to Google's first self-driving cars in 2009, to Daimler's fully autonomous truck pioneered on the Autobahn in 2015, advancements in the field of AI have increased in pace due to
high computational power in the current age. In 2016, AlphaGo, a reinforcement learning algorithm developed by Google DeepMind, beat the European Go champion 5:0 and the Korean champion 4:1 in five-game matches. This feat demonstrated the remarkable power of machine intelligence, since Go allows vastly more possible games than other board games such as chess.
Machine Learning Revisited

But what is machine learning, and can a machine actually learn? By itself, "machine learning" refers to computer algorithms parsing and learning from data to build a prediction model, then using that model to generalize beyond the training data and make informed decisions about new cases (Grossfeld, 2020). Today, machine learning drives most of the typical AI systems around us. For example, on-demand music streaming services such as Spotify use machine learning to create prediction models around specific kinds of music. In this way, Spotify's algorithms only need to compare a user's music taste and usage trends against its recommendation database to determine which music to recommend and when to recommend it. However, there is no magic behind the predictions, much as we like to think of machines as being "intelligent." There are four fundamental types of machine learning algorithms, depending on the training mechanism: supervised learning, unsupervised learning, reinforcement learning, and recommender systems. The first type is supervised learning, in which algorithms are fed labeled data with varying conditions and taught to internalize
Figure 4: A basic representation of k-means clustering (with k = 3). The machine learning algorithm maps the data depending on specified features and subdivides the data into k groups. Source: Wikimedia Commons
the labels and their associations with different features in the data. Afterward, the algorithms can make similar predictions on unlabeled data by identifying features and recalling which label was associated with those features (Ertel & Black, 2018). For example, a supervised learning algorithm can use linear regression to predict housing prices in an area, knowing that houses with more rooms will cost more. However, this will usually be a very simplistic model. It is more common to model with multiple variables, developing a model of weighted linear regression sub-models for different features such as the number of rooms, the house's square footage, the house's age, and the house's location. It is important to note that supervised learning (like every other machine learning approach) does not involve machines understanding what is going on behind the given data. For example, in the linear regression model mentioned above, the model does not care why, for instance, houses on the eastern end of a city cost less than those on the western end. As a result, when a machine learning algorithm trains on data from one region, the model it develops can be very inaccurate for data from other areas or time periods if it is not retrained. Thus, it is essential to keep in mind the subjectivity of data in the functionality of machine learning models (Sejnowski, 2018). Unsupervised learning is akin to supervised learning: it is also pattern recognition, but from unlabeled data as opposed to labeled
data. A common way of achieving this is through clustering methods such as k-means clustering. In k-means clustering, a prediction engine seeks to identify related groups in a dataset instead of individual labels. The algorithm looks at different features in the data and partitions it into k groups. Predictions can then be made for a single data point (such as a user in the system) based on the features common among users in the same group (Ertel & Black, 2018). Reinforcement learning is arguably the most human-like method of learning. It is akin to when humans do something, process feedback, determine how favorable the outcome was, and then use that information to decide better in similar situations in the future. Likewise, reinforcement learning systems use weighted costs to determine the favorability of different outcomes in order to build a comprehensive model (Sejnowski, 2018). A system learning how to play chess can simulate billions of games against people or pre-built game sequences. At first, the system will make arbitrary and uninformed decisions, but over time it will learn which moves have the most favorable outcomes in every possible position, becoming nearly perfect. An example of a reinforcement learning system is DeepMind's AlphaGo program, a gaming engine that learned to play Go better than professional players (Schrittwieser et al., 2020). The board game Go has on the order of 250^150, or roughly 10^360, possible move sequences, giving it unprecedented complexity—and
Figure 5: A basic representation of reinforcement learning in which an agent interacts with its environment, creating a new state and a measure of reward. The measure of reward determines how the system reacts in the immediate state and in similar situations in the future. Source: Wikimedia Commons
the algorithm nonetheless mastered the game (Koch & Koch, 2016). After losing two consecutive games to AlphaGo, Lee Sedol, then the world's second-best Go player, admitted in an interview: "From the very beginning of the game, there was not a moment in time when I felt that I was leading" (Metz, 2017).
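The trial-and-error loop described above (act, observe a reward, update, repeat) can be sketched with a tiny tabular Q-learning example. The corridor environment, reward values, and hyperparameters below are invented purely for illustration and are not taken from AlphaGo or any other system mentioned in the article:

```python
import random

# Toy environment (invented for illustration): a corridor of 5 cells.
# The agent starts in cell 0 and earns a reward of +1 upon reaching cell 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    return next_state, (1.0 if done else 0.0), done

def train(episodes=300, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    """Tabular Q-learning: learn action values purely from trial and error."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action,
            # but explore a random one some of the time.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step(state, action)
            # Update: nudge the estimate toward the observed reward
            # plus the discounted value of the best next action.
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# After training, the greedy action in each non-terminal cell should be +1 (move right).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

The "weighted costs" the article mentions correspond here to the learning rate (alpha) and the discount factor (gamma), which weigh immediate rewards against future ones; systems like AlphaGo combine this kind of value update with deep neural networks and game-tree search, which this sketch omits.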
The final classification of machine learning algorithms is recommender systems. These usually involve both labeled data (as in supervised learning) and unlabeled data (as in unsupervised learning) but differ from the two in that the learning is continuous (Ertel & Black, 2018). For example, in Netflix's movie recommendation system, the fluidity of movie tastes (a user liking multiple disparate movie titles) and high variance ("similar" users on one metric having different tastes on other metrics) mean that recommender systems cannot afford to use a predetermined model for recommendations. Instead, user preferences are continuously tracked, and the recommendations are updated as frequently as needed. Models like k-means clustering are helpful for making predictions at a single point in time, but the data changes over time, so the model must change to maintain accuracy.
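To make the clustering idea concrete, here is a minimal from-scratch sketch of the k-means loop. The two-dimensional points (imagine users plotted by two usage features) and the choice k = 2 are invented for illustration; production systems would use an optimized library implementation and, as noted above, rerun the clustering as the data changes:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iterations=20, seed=0):
    """Plain k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)  # start from k random data points
    for _ in range(iterations):
        # Assignment step: group points by nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: recompute each centroid as its cluster's mean.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
    return centroids, clusters

# Two visibly separate groups of 2-D points, e.g. users plotted by two usage features.
points = [(1, 1), (1, 2), (2, 1), (2, 2), (8, 8), (8, 9), (9, 8), (9, 9)]
centroids, clusters = kmeans(points, k=2)
print(sorted(centroids))  # the two group centers: [(1.5, 1.5), (8.5, 8.5)]
```

A recommender built on top of such clusters could then suggest to a user whatever is popular within that user's cluster, which is the "features common among users in the same group" idea described earlier.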
Machine Learning in Other Industries

Most aspects of modern life are, in some way, being changed by machine learning. With advancements in machine learning, Netflix can recommend specific movies to its viewers, Google can suggest search terms based on user search histories, and Facebook can tune advertisements to specific users. In addition, machine learning holds the promise to truly revolutionize other industries beyond such mundane applications. The adoption of machine learning and artificial intelligence is not always as straightforward as
people may think; any company needs to comprehensively analyze its organizational and leadership structures and its workflows, and then plan for how a machine learning system would fit in with existing systems. This is usually a challenge for companies outside technology-oriented industries, as they often need to create entire departments, upgrade their systems, and hire specialists to support the new system. Such challenges make some companies and industries resistant to this change (Char et al., 2018). Healthcare is an example of an industry that has rapidly adopted machine learning in daily operations. Machine learning offers various advantages over traditional systems: first, it leads to improved efficiency by helping hospitals and health organizations manage data and make well-informed decisions from data trends. Recently, diagnostic algorithms have been found to be as good as, if not better than, human physicians at detecting and predicting the presence of cancerous cells (Beam & Kohane, 2018, p. 1317). Education has also seen improvements from the use of machine learning. For example, Khan Academy, a popular online learning website, employs machine learning to rate students on various "skills" based on their performance in periodic assessments. These ratings are then fed into a machine learning system, which determines what content users need to review and which content they have mastered, effectively creating a study roadmap for each individual student in the system. Through machine learning, Khan Academy is able to offer a level of personalized study that even real-world classes have not achieved (Alenezi & Faisal, 2020). Additionally, machine learning itself is a tool for learning. For instance, popular language translation websites such as
Google Translate have huge datasets on the structure and usage of different languages and can analyze a sentence in one language to make a context-aware translation into a different language. This can also allow people who speak different languages to hold a conversation without the need for a translator. Indeed, machine translation has multiple advantages over human translators: machine translation systems are readily available wherever and whenever needed, most are multilingual (Google Translate knows a total of 109 different languages, a feat unattainable for human translators), and machine translation is faster than human translation (Niño, 2009, p. 255). In agriculture, an entirely new field of research and practice has emerged due to the use
of machine learning. Agriculture plays an important role in most economies. With population growth, global warming, and volatile climate conditions, it is becoming harder for economies to sustain their domestic and export food needs. An application of machine learning termed “precision agriculture” aspires to overcome this challenge by studying every variable in an area—rainfall, soil type, temperature, presence of pests, etc.—to help optimize agricultural decisions in the area and ensure maximum produce (Sharma et al., 2021, p. 4868). Similarly, animal fertility, breeding patterns, growth patterns, and behavior can be tracked to calibrate the optimal mating time, ensuring the best outcome for farmers. Every variable can be and is tracked, making it easy to diagnose a problematic season and single out the environmental variable or decision that led to the problem.

Governments, courts, and other administrative agencies are often faced with the need to analyze massive datasets and make informed decisions that might affect millions or even billions of people. It becomes a challenge to comprehensively analyze all the data involved and identify every relevant feature and trend, so many organizations turn to machine learning (Coglianese & Lehr, 2017). With machine learning, the data does not have to be processed manually; instead, one only needs to feed it into the algorithm and specify which kinds of features and trends to look for. This saves time, allowing offices to operate faster and more efficiently.

Figure 6: A machine learning model in use to interpret x-ray scans. Diagnostics are one of the emerging applications of machine learning systems in healthcare. Source: Wikimedia Commons

Figure 7: Google Translate, a popular machine translation engine that uses machine learning to detect a language and translate a phrase into any of 108 other languages. Source: Wikimedia Commons

Figure 8: An autonomous car’s “mind”: it uses machine learning to detect different objects, lane markings on the road, pedestrians, and more. Source: Wikimedia Commons

The transport industry has also seen improvements in efficiency due to machine learning and AI. On the manufacturing end, machine learning is often used in optimizing fluid dynamics. Fluid dynamics, the study of the flow of liquids and gases, helps shape vehicles and airplanes to reduce air drag. It is particularly important in race car and aircraft design, but it is also finding its way into the design of regular cars as electric car companies such as Porsche and Tesla try to maximize the mileage a user gets out of a single charge. On the user’s end, machine learning is embedded in multiple products: navigation services such as Google Maps use machine learning to estimate traffic weight along different routes and suggest the fastest route from one place to another (Lau, 2020). This is particularly important in cities that experience traffic congestion, where the shortest route is not necessarily the fastest one. Additionally, autonomous cars, which rely heavily on machine learning and other AI algorithms, promise to be a better alternative to traditional cars in the future (Haydin, 2021).
Pitfalls of Machine Learning

Although machine learning offers advantages over traditional systems in many industries, it is never perfect. Ceding more control to algorithms should be done with the awareness that these systems offer no guarantees of fairness, equitability, or even veracity. Machine learning systems introduce a new set of challenges that most organizations are ill-equipped for. One such problem is AI bias, which can occur when a model is trained with very few parameters and ignores other important ones. Models with high bias make consistent but inaccurate predictions. High bias can also appear when a properly trained model is tasked with making predictions on a similar but fundamentally different dataset, resulting in consistent but inaccurate extrapolations (Char et al., 2018). For example, a system designed to make health predictions for local residents in one area should not be used in a different area without retraining, because residents in the second area may not share the physiological idiosyncrasies of residents in the first.
Figure 9: A visualization of an algorithm. The thought of complicated software algorithms deciding which people deserve which jobs and opportunities can be daunting, all the more so given the seeming randomness of unexpected predictions. Source: Wikimedia Commons
Why not just train the model on data from the whole world? For one, it is difficult to gather every last bit of the world’s data on healthcare or any other industry. But the bigger problem lies in the fact that while training machine learning models on more “general” data reduces bias, it also creates a new problem: high variance. High variance occurs when a model uses too many parameters and is overly sensitive to small fluctuations in data, leading to drastic differences in predictions for the tiniest changes in the data. For instance, multiple diseases are known to have similar symptoms, and many diseases have symptoms that do not show in every patient. A high-variance model might make correct but inconsistent predictions (Ertel & Black, 2018).
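The bias-variance tradeoff described above can be illustrated with simple polynomial curve fitting. The sketch below, in Julia (the language used later in this article), is illustrative only; the data and function names are this sketch's own, not drawn from any study cited here. A degree-1 fit underfits the curved trend (high bias: consistent but inaccurate), while a degree-10 fit passes through every noisy point (high variance: accurate on the training data but unstable for the tiniest changes).

```julia
using LinearAlgebra, Random

# Build a Vandermonde design matrix for a degree-d polynomial fit:
# each row is [1, x, x^2, ..., x^d] for one data point.
vandermonde(x, d) = [xi^p for xi in x, p in 0:d]

Random.seed!(1)
x = collect(0.0:0.5:5.0)
y = sin.(x) .+ 0.1 .* randn(length(x))   # noisy samples of an underlying trend

# High bias: a straight line underfits the curved trend.
coeffs_low = vandermonde(x, 1) \ y

# High variance: a degree-10 polynomial chases the noise in the
# 11 data points, so tiny changes in y swing the fit drastically.
coeffs_high = vandermonde(x, 10) \ y
```

The backslash operator solves the least-squares problem directly; no machine-learning library is needed to see the effect, since the same tradeoff governs neural networks with too few or too many parameters.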
A more interesting challenge arises when algorithms are trained on biased data. From elite college admissions to jobs, prison sentences, and even the most basic societal expectations, it is not difficult to find bias in society. These biases are captured when data is collected and carry over into machine learning models trained on that data. For instance, Facebook’s advertising algorithm showed an advert for a prep-school teaching job to an audience that was 94% women, while showing an advert for a truck-driving job to an audience that was 87% men (ProPublica, 2020). In law, programs designed to aid judges in sentencing convicts have been shown to racially profile and discriminate against groups that are already historically discriminated against by the American justice system (Char et al., 2018). This calls for caution in the implementation of machine learning models and awareness that data may be biased. It also raises ethical concerns over the appropriateness of machine learning in some fields, especially when the decisions being made affect other people’s lives. Others question the long-term effects of the extensive integration of machine learning into society. Elon Musk, the billionaire founder of Tesla, an industry-leading company in self-driving technology which uses machine learning, once likened the adoption of machine learning and artificial intelligence to “summoning the demon” and termed it the “biggest existential threat” (Coglianese & Lehr, 2017). Such worries temper enthusiasm for the advantages of machine learning in society. For instance, automation of
Figure 10: A visual representation of a neural network with an input layer, one hidden layer, and an output layer. Each node is referred to as a neuron, after the biological neuron. Source: Wikimedia Commons
"In law, programs designed to aid judges in sentencing convicts have been proven to racially profile and discriminate against certain races which are historically discriminated against by the American justice system."
Figure 11: A neural network is similar to a network of biological neurons (pictured here): at each level, a computation either triggers the activation function, transmitting a signal (analogous to a computational “1”), or does not (a computational “0”). Source: Wikipedia, Jennifer Walinga
manual labor in factories could render jobs obsolete en masse (Paus, 2018). Even where automation creates new positions, the displaced workers will often not fit those positions, which begs a bigger question: in an employment ecosystem dominated by machine learning, what place do those lacking technical skills have? This must be addressed before the widespread adoption of machine learning and artificial intelligence.
Building a Neural Network in Julia
A neural network, an important computational structure in machine learning and the foundation of deep learning, is a set of algorithms that seeks patterns in a dataset by mimicking the functionality of biological neural systems (Judd, 1990). Julia was used because of its speed over other numerical computing languages like Python (Julia is compiled while Python is interpreted), its native support for important operations like the dot (broadcasting) operator, which requires a library in Python, and its support for metaprogramming—the ability of a program to modify itself and other programs as it runs, as opposed to only modifying data (Bezanson, 2012).

Section 1.1: Relevant functions used in a neural network

1. Linear regression: This is of the form y = m*x + b. To make it more intuitive, the equation is usually adjusted, changing “m” to “W” to represent weights.

2. The sigmoid function: This is relevant where it is necessary to map an arbitrary numerical input to an output between 0 and 1.

3. The sigmoid derivative: during backpropagation, the gradient of the function at given points must be found for gradient-descent optimization. Since the sigmoid function will have been computed before its derivative is needed, it is most efficient to derive a differential equation (an equation for the derivative involving the original function itself) instead of explicitly computing the derivative.

4. The tanh function: This is a hyperbolic function that is particularly useful when we need to map arbitrary numerical inputs to the range -1 to 1.

5. The tanh derivative: the tanh derivative is also needed for gradient descent and, like the sigmoid derivative, can be expressed in terms of the original function’s own output.
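In standard notation (these are the textbook forms consistent with the descriptions above), the functions are:

```latex
y = Wx + b, \qquad
\sigma(x) = \frac{1}{1 + e^{-x}}, \qquad
\sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr),
\]
\[
\tanh(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}, \qquad
\frac{d}{dx}\tanh(x) = 1 - \tanh^{2}(x).
```

Note that both derivatives are written in terms of the function’s own output, which is why it is efficient to reuse the value computed during forward propagation.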
Section 1.2: Software implementation of relevant functions
View project code in the appendix at the end of this article. There is a code block for each paragraph from section 1.2 to section 6.
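As a flavor of what the Section 1.2 implementations look like, the functions above can be written in Julia roughly as follows. This is an illustrative sketch using standard definitions, not the project's appendix code; the names are this sketch's own.

```julia
# Sketch of the relevant functions from Section 1.1 (standard forms).
linear(W, x, b) = W .* x .+ b     # 1. linear regression: y = W*x + b
sigmoid(x) = 1 / (1 + exp(-x))    # 2. map any input into (0, 1)
dsigmoid(s) = s * (1 - s)         # 3. derivative, where s = sigmoid(x)
# 4. tanh is built into Julia as tanh(x)
dtanh(t) = 1 - t^2                # 5. derivative, where t = tanh(x)
```

The derivative functions take the already-computed activation value as their argument, matching the efficiency argument made for the sigmoid derivative above.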
Section 2.1: Neural network abstraction

In computer science, data abstraction is the process of masking the details and data in a program by creating an outer shell as a single representation of the collective data and logic. Rather than passing multiple lists across different functions, it is easier to create an outer shell to represent the neural network, then store all the relevant data inside the neural network abstraction. This outer shell (called a “class” in some languages such as Python and Java, or a “struct” in Julia and C) is then passed as a single object into each function.

Section 2.2: Initializing the neural network

An abstraction is not active until an instance of it is created. To do this, a function is created to handle the instantiation request with appropriate instance variables or parameters and return a neural network tuned to the requirements.

Section 3: Forward propagation

In a feed-forward neural network, forward propagation refers to processing inputs “forward” through the network of neurons to generate predictions.

Section 4: Back-propagation

Backward propagation goes in the reverse direction to forward propagation. While training a neural network, backward propagation is used to modify the parameters through a process known as gradient descent. Backward propagation is done after each iteration of forward propagation (Sejnowski, 2018).

Section 5: Training routine

The structure of the neural network is complete, but a routine for training it must be created. This is done by forward-propagating and then back-propagating the neural network for a predetermined number of iterations (Judd, 1990). The weights in the neural network are updated in each iteration and tend to stabilize after a number of iterations; a thousand iterations is a good compromise, enough to stabilize the weights while limiting redundant computation.

Section 6: Prediction routine

After building and training this neural network, the next step is to analyze predictions on an unseen dataset. This is similar to the training routine, but backward propagation and iteration are not needed since the parameters have already been optimized.

Section 7: Predictions

A neural network can be used for most pattern-recognition problems, including decision boundary problems; linear, quadratic, and exponential regression; and even image recognition and natural language processing, although those would require more sophisticated types of neural networks. Here are some predictions on data:

1. Exponential data sequence: Where needed, a neural network can be used for sequential linearization of higher-order or transcendental functions. This is useful where getting faster, relatively accurate estimates is better than getting exact values at a higher computational cost (Sejnowski, 2018).

Figure 12: Sequential linearization of an exponential data sequence. (Source: Image generated by the author)
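To make the abstraction, initialization, forward propagation, back-propagation, training, and prediction routines concrete, they can be sketched in Julia as a minimal one-hidden-layer network. This is an illustrative sketch with its own hypothetical names and dimensions, not the project's appendix code.

```julia
using Random

# Section 2.1: the "outer shell" - a struct holding all network data.
mutable struct NeuralNetwork
    W1::Matrix{Float64}   # input-to-hidden weights
    b1::Vector{Float64}   # hidden-layer biases
    W2::Matrix{Float64}   # hidden-to-output weights
    b2::Vector{Float64}   # output-layer biases
    lr::Float64           # learning rate for gradient descent
end

sigmoid(x) = 1 ./ (1 .+ exp.(-x))
dsigmoid(s) = s .* (1 .- s)   # derivative via the sigmoid's own output

# Section 2.2: instantiate a network tuned to the requested layer sizes.
function init_network(n_in::Int, n_hidden::Int, n_out::Int; lr=0.5)
    NeuralNetwork(0.5 .* randn(n_hidden, n_in), zeros(n_hidden),
                  0.5 .* randn(n_out, n_hidden), zeros(n_out), lr)
end

# Section 3: forward propagation - push an input through the layers.
function forward(nn::NeuralNetwork, x::Vector{Float64})
    h = sigmoid(nn.W1 * x .+ nn.b1)   # hidden activations
    y = sigmoid(nn.W2 * h .+ nn.b2)   # output activations
    return h, y
end

# Section 4: back-propagation - gradient descent on squared error.
function backward!(nn::NeuralNetwork, x, h, y, target)
    d2 = (y .- target) .* dsigmoid(y)    # output-layer error signal
    d1 = (nn.W2' * d2) .* dsigmoid(h)    # error propagated to hidden layer
    nn.W2 .-= nn.lr .* (d2 * h'); nn.b2 .-= nn.lr .* d2
    nn.W1 .-= nn.lr .* (d1 * x'); nn.b1 .-= nn.lr .* d1
end

# Section 5: training routine - forward then backward, many iterations.
function train!(nn::NeuralNetwork, xs, targets; iters=1000)
    for _ in 1:iters, (x, t) in zip(xs, targets)
        h, y = forward(nn, x)
        backward!(nn, x, h, y, t)
    end
end

# Section 6: prediction routine - forward propagation only.
predict(nn::NeuralNetwork, x) = forward(nn, x)[2]
```

For example, `nn = init_network(2, 4, 1)` followed by `train!(nn, xs, targets)` would train on a list of 2-element input vectors and 1-element targets; `predict(nn, x)` then runs only the forward pass, since the parameters are already optimized.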
2. Decision boundary problems: The neural network determines a boundary to separate data into two distinct groups. Neural networks try to learn the decision boundary that minimizes empirical error in the division of the data. The type and accuracy of the generated decision boundaries depend on the number of hidden layers in the neural network; a neural network without hidden layers can only predict linear decision boundaries (Sejnowski, 2018).
Figure 13: A decision boundary generated by a neural network with 4 hidden layers. (Source: Image generated by the author)

Figure 14: A linear decision boundary generated by a neural network with zero hidden layers. The presence of hidden layers is critical to a neural network’s ability to develop a sophisticated understanding of data and make useful decisions. (Source: Image generated by the author)

3. Higher order regression: Using a neural network generates a higher-degree model that better fits the data. While this is good, it might lead to overfitting when a neural network is used on an overly simple dataset.

Figure 15: A higher order regression model (red line) generated to fit the trend in the data (green line). (Source: Image generated by the author)

4. Noisy data: Neural networks are not useful on all kinds of data. It is important to process data before feeding it into a neural network (or any other prediction model) to bring out the important features or suppress the less important ones where necessary. Unprocessed data is often noisy, which results in nonsensical predictions.

Figure 16: An attempted prediction on a noisy dataset. While neural networks optimize to minimize the error in predictions, there is no guarantee of accurate predictions if the data is noisy. (Source: Image generated by the author)

Conclusion

There is a lot to be expected from machine learning and AI in the future. Firstly, integration of AI into other industries is set to increase as the computational methods in AI become more powerful. While machine learning may eventually lead to the loss of certain work positions, it also creates more efficient workflows and allows professionals to focus their efforts elsewhere by removing the need to manually track and maintain processes (Pai, 2020).
Another trend is the rise of deep learning. Termed “the deep learning revolution,” its rise is driven by the ability of deep neural networks to detect even the slightest trends in vast datasets (Sejnowski, 2018). Deep learning is itself a more sophisticated application of neural networks to data analysis. For instance, a deep neural network can have hundreds or thousands of hidden layers (the neural networks used in this article had at most four hidden layers). Besides the feed-forward neural networks of machine learning, deep learning employs recurrent neural networks (RNNs) and convolutional neural networks (CNNs). RNNs involve iteration at specific layers in the neural network, which helps capture dependencies in the data and is ideal for predictions on data such as context-specific grammatical sentences, while CNNs can identify features in data in the process of learning and predicting, making them ideal for problems such as image processing, where it is harder for a human to identify and label specific data points in the numeric representation of an image (Pai, 2020). With the rise of deep learning, GPU (Graphics Processing Unit) and TPU (Tensor Processing Unit) computing is also gaining attention because of its use in large-scale deep learning systems. GPUs are designed for processing and rendering graphics, which usually involves short bursts of intense workloads. As a result, GPUs are fundamentally optimized for parallel computing, unlike ordinary CPUs, which are optimized for sequential computing. This positions GPUs to better handle hundreds or thousands of concurrent, short-lived computation tasks from multiple neurons, speeding up the performance of neural networks by up to 20 times; an ordinary CPU, in contrast, would have to queue each task into an instruction pipeline and execute tasks individually or only a few at a time. TPUs, pioneered by Google, are specifically optimized for AI acceleration and
share much of the advantages of GPUs (Huang, 2018). As new technologies such as GPUs, TPUs, and improved CPUs optimize the computational capability of modern computers for AI, specialists are able to design and implement more powerful machine learning and deep learning models to solve challenging problems. This is, in part, a contributor to the rising adoption of machine learning and AI in other industries. More powerful applications of AI are emerging: NASA is using AI to hunt for exoplanets (Italiana, 2018), CERN is using AI to extract useful information from terabytes of data collected from its particle accelerator (Nature Editorial, 2018), meteorologists use AI to predict tropical cyclones with up to 99% accuracy (Italiana, 2018), and Eve, a robot “scientist” designed at Cambridge University and The University of Manchester, proved instrumental in the discovery that triclosan, a toothpaste ingredient, can be used to fight drug-resistant malaria (Italiana, 2018). Future advancements in processing and in AI can be expected to lead to more powerful applications and increased adoption of AI subfields in other industries. In fact, a 2017 PwC study estimates that AI could replace 38% of all current jobs in the United States in the next 15 years (Barrows, 2017). However, a second PwC study done in the UK in 2018 determined that AI creates about as many technical jobs as the manual jobs it displaces (PricewaterhouseCoopers, 2018). For industry workers, there is but one tough decision: keep up or be rendered obsolete.

References

Alenezi, H. S., & Faisal, M. H. (2020). Utilizing crowdsourcing and machine learning in education: Literature review. Education and Information Technologies, 25(4), 2971–2986. https://doi.org/10.1007/s10639-020-10102-w

Barrows, J. (2017, December 21). Artificial Intelligence Will Change The Job Landscape Forever. Here’s How To Prepare. Forbes. https://www.forbes.com/sites/quora/2017/12/18/artificial-intelligence-will-change-the-job-landscape-forever-heres-how-to-prepare/?sh=a13b81e27f40

Beam, A. L., & Kohane, I. S. (2018). Big Data and Machine Learning in Health Care. JAMA, 319(13), 1317. https://doi.org/10.1001/jama.2017.18391

Bezanson, J. S. K. (2012, February 14). Why We Created Julia. Julia Lang. https://julialang.org/blog/2012/02/why-we-created-julia/
Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing Machine Learning in Health Care — Addressing Ethical Challenges. New England Journal of Medicine, 378(11), 981–983. https://doi.org/10.1056/nejmp1714229

Coglianese, C., & Lehr, D. (2017). Regulating by robot: Administrative decision making in the machine-learning era. The Georgetown Law Journal, 105(5), 1147.

Ertel, W., & Black, N. T. (2018). Introduction to Artificial Intelligence (Undergraduate Topics in Computer Science) (2nd ed.). Springer.
Figure 17: A bi-directional recurrent neural network (BI-RNN). BI-RNNs are particularly important in natural language processing, where the words before and after each word provide important context for the usage of the word being classified. In a BI-RNN, layers computing in opposite directions are stacked together such that one generates predictions in the natural ordering of words in a sentence and the other computes predictions in the reverse ordering. Both predictions are then considered for the final interpretation of each word. Thus, the model can look as far ahead and as far behind as needed to make the best predictions (Goldberg & Hirst, 2017). (Source: Wikimedia Commons)
Goldberg, Y., & Hirst, G. (2017). Neural Network Methods in Natural Language Processing (Synthesis Lectures on Human Language Technologies). Morgan & Claypool Publishers.

Grossfeld, B. (2020, October 12). Deep learning vs machine learning: A simple way to understand the difference. Zendesk. https://www.zendesk.com/blog/machine-learning-and-deep-learning/

Haydin, V. (2021, March 5). How Machine Learning Algorithms Make Self-Driving Cars a Reality. Intellias. https://www.intellias.com/how-machine-learning-algorithms-make-self-driving-cars-a-reality/

Huang, J. (2018, December 21). Accelerating AI with GPUs: A New Computing Model. The Official NVIDIA Blog. https://blogs.nvidia.com/blog/2016/01/12/accelerating-ai-artificial-intelligence-gpus/

Italiana, U. D. S. (2018, October 15). AI: A limitless source of potential. Study International. https://www.studyinternational.com/news/ai-a-limitless-source-of-potential/

Judd, S. J. (1990). Neural Network Design and the Complexity of Learning (Neural Network Modeling and Connectionism) (Vol. 1). MIT Press.

Kasparov, G. (1998). My 1997 Experience with DEEP BLUE. ICGA Journal, 21(1), 45–51. https://doi.org/10.3233/icg-1998-21107

Koch, C. (2016, March 19). How the Computer Beat the Go Master. Scientific American. https://www.scientificamerican.com/article/how-the-computer-beat-the-go-master/

Lau, J. (2020, September 3). Google Maps 101: How AI helps predict traffic and determine routes. Google. https://blog.google/products/maps/google-maps-101-how-ai-helps-predict-traffic-and-determine-routes/
López-Cajún, C., & Ceccarelli, M. (2018). Explorations in the History of Machines and Mechanisms: Proceedings of the Fifth IFToMM Symposium on the History of Machines and Mechanisms (History of Mechanism and Machine Science (32)). Springer.

McCarthy, J. (2006, October 30). The Dartmouth Workshop--as planned and as it happened. Stanford University. http://www-formal.stanford.edu/jmc/slides/dartmouth/dartmouth/node1.html

Metz, C. (2017, June 3). In Two Moves, AlphaGo and Lee Sedol Redefined the Future. Wired. https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/

Nature Editorial. (2018, May 4). Particle physicists turn to AI to cope with CERN’s collision deluge. Nature. https://www.nature.com/articles/d41586-018-05084-2

Niño, A. (2009). Machine translation in foreign language learning: Language learners’ and tutors’ perceptions of its advantages and disadvantages. ReCALL, 21(2), 241–258. https://doi.org/10.1017/s0958344009000172

Pai, A. (2020, October 19). CNN vs. RNN vs. ANN – Analyzing 3 Types of Neural Networks in Deep Learning. Analytics Vidhya. https://www.analyticsvidhya.com/blog/2020/02/cnn-vs-rnn-vs-mlp-analyzing-3-types-of-neural-networks-in-deep-learning/

Paus, E. (2018). Confronting Dystopia: The New Technological Revolution and the Future of Work. ILR Press.

PricewaterhouseCoopers. (2018, July 17). AI will create as many jobs as it displaces by boosting economic growth. PwC. https://www.pwc.co.uk/press-room/press-releases/AI-will-create-as-many-jobs-as-it-displaces-by-boosting-economic-growth.html

ProPublica. (2020, March 2). Facebook Ads Can Still Discriminate Against Women and Older Workers, Despite a Civil Rights Settlement. https://www.propublica.org/article/facebook-ads-can-still-discriminate-against-women-and-older-workers-despite-a-civil-rights-settlement

Rada, R. (1986). Artificial intelligence. Artificial Intelligence, 28(1), 119–121. https://doi.org/10.1016/0004-3702(86)90034-2

Schrittwieser, J., Antonoglou, I., Hubert, T., Simonyan, K., Sifre, L., Schmitt, S., Guez, A., Lockhart, E., Hassabis, D., Graepel, T., Lillicrap, T., & Silver, D. (2020, December 23). Mastering Atari, Go, chess and shogi by planning with a learned model. Nature. https://www.nature.com/articles/s41586-020-03051-4

Sejnowski, T. J. (2018). The Deep Learning Revolution (1st ed.). The MIT Press.

Shannon, C. E., & McCarthy, J. (1956). Automata Studies. (AM-34), Volume 34 (Annals of Mathematics Studies). Princeton University Press.

Sharma, A., Jain, A., Gupta, P., & Chowdary, V. (2021). Machine Learning Applications for Precision Agriculture: A Comprehensive Review. IEEE Access, 9, 4843–4873. https://doi.org/10.1109/access.2020.3048415

Tesauro, G. (1995, March 1). Temporal Difference Learning and TD-Gammon. BackGammon. https://bkgm.com/articles/tesauro/tdl.html
Appendix: Project Code
Section 1.2: Software implementation of relevant functions
Section 2.1: Neural network abstraction
Section 2.2: Initializing the Neural Network
Section 3: Forward propagation
Section 4: Back-propagation
Section 5: Training Routine
Section 6: Prediction routine
Factors in the Onset of Obesity - An American Public Health Crisis

BY ANAHITA KODALI '23

Cover: Obesity is typically diagnosed based on BMI. The different levels of obesity are shown here. Source: Wikimedia Commons
Introduction

Obesity is one of America’s greatest public health concerns. For both children and adults, rates of obesity have generally been on the rise since 1980 (Hales et al., 2018). In 1980, 4.8% of men and 7.9% of women were obese; by 2008, those values had roughly doubled to 9.8% and 13.8%, respectively (Carallo, 2011). More recently, as of 2018, about 42% of American adults were obese, and as of 2016, about 18.5% of American children were obese (CDC, 2020; CDC, “Childhood Obesity Facts,” 2019). It is well understood that obesity impairs both mental and physical health; accordingly, obese individuals are estimated to have 30% more medical expenses than their non-obese counterparts. In 2006, obesity-related medical costs accounted for about 40% of the entire American healthcare budget. In addition, obesity causes significant drops in productivity – annually, obesity-related absenteeism in the
workplace costs between $79 and $132 per obese worker, amounting to billions of dollars across the American economy (Çakmur, 2017). Given both the healthcare and productivity costs, it is clear that obesity presents a large economic burden on a national scale. And, with trends showing that obesity levels are only on the rise, it is crucial to understand the determinants of and risk factors for obesity in order to curb its growth. From a medical standpoint, obesity is a serious issue. Individuals with obesity can have insulin resistance and are at higher risk for Type II diabetes (Barnes, 2011). They are at risk for several cardiovascular conditions, including hypertension and coronary heart disease, and often have high cholesterol. They also often have respiratory complications and lower vital and total lung capacity, along with the possibility of sleep apnea. Obese
Figure 1: Fast food is considered to be “energy dense.” Over the past several years, consumption of fast food has been on the rise, a possible contributor to rising obesity levels (Rennie et. al., 2005). Source: Wikimedia Commons
individuals have significantly higher chances of developing cancer and arthritis. Skin disorders, like intertrigo and eczema, are common. Furthermore, obese individuals are three times more likely to die prematurely than their non-obese peers (Ogunbode, 2009). On a biological level, the mechanism of obesity is not completely understood. Interestingly, however, it has become increasingly clear over the past years that obesity is not just a biological disorder. The growing prevalence of obesity is driven by external factors. In addition to health behaviors, there are clear disparities due to geography, socioeconomics, and race and ethnicity that are correlated with obesity. These make the obesity epidemic difficult to stop, as each factor is complex in its own way and requires different treatment by public health professionals. This paper aims to comprehensively review both what is known about obesity from a medical perspective and each of the external factors; the hope is that a better understanding of the obesity crisis will allow public health experts to research and implement more nuanced solutions to the problem.
Obesity Overview

There has been much debate within the scientific community about the classification of obesity given that it has determinants that are not solely biological. As such, many argue that obesity should be labeled as a behavioral
abnormality. Nonetheless, many renowned medical organizations, including the American Medical Association, have chosen to classify obesity as a disease: under this classification, obesity refers to the condition where a person has a Body Mass Index (BMI) of over 30 (Stoner & Cornwall, 2014). The biological mechanisms underlying the accumulation of fat that characterizes obesity are still unknown. There are several proposed ideas, but two mechanisms seem to hold the most support. The first of these mechanisms is genetic. Estimates vary, but various studies show that high BMI has between a 40% and 70% chance of being inherited. Both monogenic (characteristics arising from a single gene) and polygenic (characteristics arising from multiple genes) forms of obesity have been explored. On the monogenic side, researchers have been able to implicate mutations in the leptin and melanocortin-4 receptor genes. Both play critical roles in energy homeostasis (which is further explored in the next paragraph): leptin is a hormone that inhibits hunger, and the melanocortin-4 receptor binds several agonists, many of which also lead to hunger inhibition. On the polygenic side, several combinations of over 300 genetic loci have been marked as significant contributors to obesity. Epigenomic research on obesity is ongoing and may reveal further genetic influences on obesity (Heymsfield & Wadden, 2017).
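For reference, BMI is the standard ratio of body mass to the square of height:

```latex
\mathrm{BMI} = \frac{\text{mass in kilograms}}{(\text{height in meters})^{2}}
```

A BMI of 30 or above is the threshold for obesity under the disease classification described above.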
Figure 2: Obesity prevalence varies by region. The South (pictured in light red), and particularly the Southeast (pictured in bright red), has disproportionately high levels of obesity compared to the other regions of the US. Source: Wikimedia Commons
The second proposed idea is dysregulation of energy homeostasis mechanisms. Energy homeostasis refers to the balance between the consumption and burning of calories. Two main processes are thought to be involved in energy homeostasis's contribution to obesity. The first is sustained positive energy balance – a state in which a person overconsumes calories while not expending enough over a long period of time. The second is a resetting of the body-weight set point to a higher weight – for reasons that are not yet understood, the body may come to accept a higher weight as its "normal" body weight. This second process would explain why obese individuals cannot easily reduce their caloric intake and why even those who lose weight often regain it. The cause of this reset could be inherited or could arise from some biological or environmental trigger. A combination of both processes would lead to a buildup of excess weight, and eventually to obesity (Schwartz et al., 2017). Overall, it is clear that obesity onset is extremely complex and due to a variety of biological factors. More research into both genetic and energy-related mechanisms will elucidate more specific pathways underlying obesity in the future. However, the biological factors cannot be divorced from external determinants, including behavior, geographic location, and demographics.
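The sustained positive energy balance described above can be made concrete with a back-of-the-envelope projection. The sketch below is illustrative only; it uses the commonly cited, admittedly rough approximation that about 7,700 kcal of surplus energy corresponds to 1 kg of body fat, and real physiology is adaptive, so a linear projection overstates long-run gain:

```python
KCAL_PER_KG_FAT = 7700  # commonly cited rough approximation


def projected_gain_kg(daily_surplus_kcal: float, days: int) -> float:
    """Naive linear projection of fat gain from a sustained caloric surplus.

    Ignores metabolic adaptation and set-point effects, so it
    overestimates long-term weight gain.
    """
    return daily_surplus_kcal * days / KCAL_PER_KG_FAT


# A modest 200 kcal/day surplus, sustained for a year:
print(round(projected_gain_kg(200, 365), 1))  # -> 9.5 (kg)
```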
Behavioral Drivers of Obesity

There are a number of behavioral determinants related to obesity. Based on the effect that they have, these determinants can be classified into two general categories: energy intake and energy expenditure. Energy intake refers to food and drink consumption. Despite the increase in obesity over the past few decades, there has not been an equal increase in overall caloric intake, suggesting that there have instead been changes in the types of food and drink being consumed. One significant change has been an increase in the amount of energy-dense food being consumed. Energy-dense foods are those that contain high levels of fats; fast foods and processed foods are generally energy dense (Rennie et al., 2005). There has been an upward trend in fast food consumption in the US over the past few years, and while studies are not conclusive, there is some evidence supporting an association between consumption of ultra-processed foods and obesity (Rennie et al., 2005; Poti et al., 2017). This increase in consumption may be related to the significant increase over the past few decades in the portion size of energy-dense foods, including both fast foods and food items sold in discrete units, like prepackaged foods, snacks, and drinks (Livingstone & Pourshahidi, 2014). Concerningly, restaurant food is also more energy dense than homemade food. Both the increase in portion size and the energy density of the food consumed may contribute to rising obesity levels (Rennie et al., 2005). Another area of interest is the consumption of sugar-rich drinks, like sodas and juices. Over the past several years, there has been an upward trend in consumption of sugary drinks. As with energy-dense food consumption, the independent effect of sugary drinks on weight
and obesity still needs to be explored (Rennie et al., 2005). However, research suggests that sugary drinks are associated with obesity and that, at the very least, discouraging their consumption is important (Malik et al., 2006). The second behavioral factor to study is energy expenditure. This refers to the amount of physical activity a person does to burn the calories consumed. Over the past several decades, there has been a steady decrease in physical activity among the general American population, meaning less energy is being spent, contributing to the caloric imbalance caused by unhealthy eating behaviors (Rennie et al., 2005; Brownson et al., 2005). Five main factors are involved in this decline: leisure time, sedentary time, work-related physical activity, transportation-related physical activity, and general physical activity. There has been a slight increase in leisure time, which could potentially mean more time for exercise. However, this has been offset by a significant increase in sedentary time (time spent not doing physical activity). Additionally, there have been overall decreases in work-related physical activity, transportation-related physical activity, and general physical activity in the home – there are several reasons for this, including a general shift toward office jobs and away from physically taxing ones (Brownson et al., 2005). Overall, a combination of increased energy intake and decreased energy expenditure has led to unhealthy caloric imbalances; these create the aforementioned dysfunctional biological regulation pathways.
Disparities in Obesity

In addition to behavioral factors, obesity-causing factors can be further stratified across demographic groups, including geographic region, socioeconomic status, and race and ethnicity. By studying these groups, it becomes abundantly clear that there are several disparities in the American obesity landscape. When studying these groups, however, it is critical to note that though they are discussed here in relative isolation, they are interconnected, and their combined synergistic effects are what have produced obesity-related disparities.
Geographic Disparities

It is clear that geographic region plays a significant role in determining the risk of developing obesity. Generally, the West Coast and Northeast regions tend to have lower rates of obesity than the South, especially the Southeast (Wang & Beydoun, 2007). The comparison between the Midwest and the South is less clear – some studies have found that the Midwest has a lower rate of obesity than the South (Wang & Beydoun, 2007), while others have found the opposite (Sung & Etemadifar, 2019). Within these regions, there are clusters of high and low rates of obesity. These clusters vary by region, but there have been some trends related to race and employment status. For some demographics, clusters have positive effects, meaning a disproportionately high number of individuals within the demographic group are obese; for others, clusters have negative effects, meaning a disproportionately low number are obese. In the South, clusters have a positive effect on Black populations. In the Northeast, there is a negative effect on Hispanic populations. In both the West and the Midwest, there is a positive effect on unemployed individuals; the reasons these relationships exist are still being explored (Myers et al., 2015). Across all regions, rural communities in the US generally have higher levels of obesity than urban communities; reasons may include transportation issues and lack of access to healthy foods (Hill et al., 2014).
Socioeconomic Disparities

There is a clear relationship between socioeconomic status and obesity. Though obesity has risen among all demographic groups over the years, its prevalence amongst certain groups is disproportionately high. Generally, lower socioeconomic groups have higher obesity rates (Loring et al., 2014). For men overall, the prevalence of obesity is similar across income levels; however, for Mexican American and non-Hispanic Black men, obesity is generally more prevalent at higher incomes than at lower ones. For women, obesity is generally more prevalent at lower incomes (CDC, "Obesity and Socioeconomic Status in Adults", 2019). Studies have found that being under the poverty level, being unemployed, and receiving food stamps are strongly associated with obesity (Akil & Ahmad, 2011).
Racial & Ethnic Disparities

Obesity prevalence differs among racial and ethnic groups. Black and Hispanic Americans across the United States have higher obesity prevalence than White and Asian Americans (CDC, 2010). Obesity is also on the rise amongst Indigenous populations; this rise is of particular importance because traditional Western norms are often not aligned with Indigenous ways of living – thus, curbing obesity in Indigenous populations will potentially require a significantly different paradigm and set of public health guidelines (Bell et al., 2017). As discussed previously, as income increases, the prevalence of obesity tends to decrease across all demographic groups; however, this protective effect is not as significant for Black individuals as it is for White individuals. Thus, even across different socioeconomic groups, obesity tends to disproportionately impact people of color (Assari, 2018).
Conclusions

Obesity is one of the biggest public health threats the United States currently faces, and it also puts a significant burden on the American economy. The biological mechanisms of obesity are still being explored and may also be correlated with different external factors, including geography, socioeconomic status, and race and ethnicity. These factors all play a role in determining access to healthy food, job type, and healthcare, all of which may contribute to the onset of obesity. Thus, different populations have disproportionate levels of obesity. These diverse, unevenly weighted external factors, combined with inconclusive research regarding a particular individual's independent risk factors for obesity, make the disease especially challenging to tackle on a national scale. Different racial, ethnic, income, and gender groups will need to be targeted by public health measures at the local level to curb the rise in dangerous obesity.

References

Akil, L., & Ahmad, H. A. (2011). Effects of socioeconomic factors on obesity rates in four southern states and Colorado. Ethnicity & Disease, 21(1), 58–62.
Assari, S. (2018). Family income reduces risk of obesity for White but not Black children. Children, 5(6), 73. https://doi.org/10.3390/children5060073
Barnes, A. S. (2011). The epidemic of obesity and diabetes: Trends and treatments. Texas Heart Institute Journal, 38(2), 142–144.
Bell, R., Smith, C., Hale, L., Kira, G., & Tumilty, S. (2017). Understanding obesity in the context of an Indigenous population—A qualitative study. Obesity Research & Clinical Practice, 11(5), 558–566. https://doi.org/10.1016/j.orcp.2017.04.006
Brownson, R. C., Boehmer, T. K., & Luke, D. A. (2005). Declining rates of physical activity in the United States: What are the contributors? Annual Review of Public Health, 26(1), 421–443. https://doi.org/10.1146/annurev.publhealth.26.021304.144437
Çakmur, H. (2017). Obesity as a growing public health problem. In Adiposity – Epidemiology and Treatment Modalities. https://doi.org/10.5772/65718
Carallo, Kim. (2011, February 3). Worldwide obesity doubled over past three decades. ABC News. Retrieved January 22, 2021, from https://abcnews.go.com/Health/global-obesityrates-doubled-1980/story?id=12833461
CDC. (2019, June 10). Obesity and socioeconomic status in adults: United States, 2005–2008. Centers for Disease Control and Prevention. https://www.cdc.gov/nchs/products/databriefs/db50.htm
CDC. (2019, June 24). Childhood obesity facts. Centers for Disease Control and Prevention. https://www.cdc.gov/obesity/data/childhood.html
CDC. (2020, June 29). Obesity is a common, serious, and costly disease. Centers for Disease Control and Prevention. https://www.cdc.gov/obesity/data/adult.html
Hales, C. M., Fryar, C. D., Carroll, M. D., Freedman, D. S., & Ogden, C. L. (2018). Trends in obesity and severe obesity prevalence in US youth and adults by sex and age, 2007-2008 to 2015-2016. JAMA, 319(16), 1723. https://doi.org/10.1001/jama.2018.3060
Heymsfield, S. B., & Wadden, T. A. (2017). Mechanisms, pathophysiology, and management of obesity. New England Journal of Medicine, 376(3), 254–266. https://doi.org/10.1056/NEJMra1514009
Hill, J. L., You, W., & Zoellner, J. M. (2014). Disparities in obesity among rural and urban residents in a health disparate region. BMC Public Health, 14(1), 1051. https://doi.org/10.1186/1471-2458-14-1051
Livingstone, M. B. E., & Pourshahidi, L. K. (2014). Portion size and obesity. Advances in Nutrition, 5(6), 829–834. https://doi.org/10.3945/an.114.007104
Loring, B., Robertson, A., Organisation mondiale de la santé, & Bureau régional de l'Europe. (2014). Obesity and inequities: Guidance for addressing inequities in overweight and obesity. World Health Organization, Regional Office for Europe.
Malik, V. S., Schulze, M. B., & Hu, F. B. (2006). Intake of sugar-sweetened beverages and weight gain: A systematic review. The American Journal of Clinical Nutrition, 84(2), 274–288. https://doi.org/10.1093/ajcn/84.2.274
Myers, C. A., Slack, T., Martin, C. K., Broyles, S. T., & Heymsfield, S. B. (2015). Regional disparities in obesity prevalence in the United States: A spatial regime analysis. Obesity (Silver Spring, Md.), 23(2), 481–487. https://doi.org/10.1002/oby.20963
Ogunbode, A. M., Fatiregun, A. A., & Ogunbode, O. O. (2009). Health risks of obesity. Annals of Ibadan Postgraduate Medicine, 7(2), 22–25. https://doi.org/10.4314/aipm.v7i2.64083
Poti, J. M., Braga, B., & Qin, B. (2017). Ultra-processed food intake and obesity: What really matters for health—processing or nutrient content? Current Obesity Reports, 6(4), 420–431. https://doi.org/10.1007/s13679-017-0285-4
Rennie, K. L., Johnson, L., & Jebb, S. A. (2005). Behavioural determinants of obesity. Best Practice & Research Clinical Endocrinology & Metabolism, 19(3), 343–358. https://doi.org/10.1016/j.beem.2005.04.003
Schwartz, M. W., Seeley, R. J., Zeltser, L. M., Drewnowski, A., Ravussin, E., Redman, L. M., & Leibel, R. L. (2017). Obesity pathogenesis: An Endocrine Society scientific statement. Endocrine Reviews, 38(4), 267–296. https://doi.org/10.1210/er.2017-00111
Stoner, L., & Cornwall, J. (2014). Did the American Medical Association make the correct decision classifying obesity as a disease? Australasian Medical Journal, 462–464. https://doi.org/10.4066/AMJ.2014.2281
Sung, B., & Etemadifar, A. (2019). Multilevel analysis of socio-demographic disparities in adulthood obesity across the United States geographic regions. Osong Public Health and Research Perspectives, 10(3), 137–144. https://doi.org/10.24171/j.phrp.2019.10.3.04
Wang, Y., & Beydoun, M. A. (2007). The obesity epidemic in the United States—gender, age, socioeconomic, racial/ethnic, and geographic characteristics: A systematic review and meta-regression analysis. Epidemiologic Reviews, 29(1), 6–28. https://doi.org/10.1093/epirev/mxm007
Super Selective Synthesis: The Evolution of Enantioselective Methods

BY ANDREW SASSER '23

Cover: Two enantiomers of an amino acid. These molecules are non-superimposable mirror images of one another, in the same way human hands are. Because enantiomers can have different physiological effects, chemists have been greatly interested in designing pathways to selectively produce a desired enantiomer. Source: Wikimedia Commons
Introduction

During the development of new synthetic reactions, chemists seek to convert simple inorganic and organic reagents into more complex products like drugs and polymers. To optimize these reactions, chemists consider the costs of input materials, the time and energy requirements, and the quantity and quality of product produced (Pallardy, 2020). Many synthetic chemists are interested in controlling the ratio of different enantiomeric products – chiral (or "handed," like human hands) molecules that are non-superimposable mirror images of each other (Hunt, 2020). In particular, the objective is enantioselective synthesis, in which enantiomeric products are formed from achiral molecules – molecules, such as alkenes, that lack chiral centers. Specifically, in enantioselective synthesis, the formation of one enantiomer is favored over another (IUPAC, 1997).
To develop enantioselective reaction schemes, scientists must consider the impact on total yield, enantiomeric excess (how much more of one enantiomer is produced than the other), and the difficulty of forming and separating the desired enantiomers. Scientists have several options available for facilitating enantioselective synthesis. First, chemists can start with chiral materials – either by manipulating an already chiral starting material or by adding a chiral auxiliary, a molecule that can be temporarily attached to facilitate the formation of one enantiomer (Glorius & Gnas, 2006). Second, chemists can separate mixtures of different enantiomers (racemic mixtures) via recrystallization and kinetic resolution (Robinson & Bull, 2003). Third, chemists can use chiral catalysts that favor the formation of a single desired enantiomeric product through thermodynamic or kinetic favorability (Walsh, 2009). This paper will compare and contrast the benefits and drawbacks of each
Figure 1: Enantiomers of the drug thalidomide; the left image is (S)-thalidomide and the right image is (R)-thalidomide. The primary difference between these two drugs is the 3D arrangement of their atoms; the (S) enantiomer has a "dashed" bond (indicating that the left side of the molecule is pointing "into the page"), whereas the (R) enantiomer has a "wedged" bond (indicating that the left side of the molecule is pointing "out of the page"). This difference in structure is responsible for the pharmacological differences of these compounds – the (R) enantiomer is a sedative, whereas the (S) enantiomer can cause birth defects. Source: Wikimedia Commons
of these methods, as well as their ramifications for the synthesis of medicinally important compounds.
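The enantiomeric excess (ee) mentioned in the introduction has a standard quantitative definition: the difference between the amounts of the two enantiomers relative to their total. A minimal sketch (an illustrative example, not from the original article):

```python
def enantiomeric_excess(major: float, minor: float) -> float:
    """Enantiomeric excess as a percentage:

        ee = (major - minor) / (major + minor) * 100

    where `major` and `minor` are the amounts of the two enantiomers.
    A racemic mixture gives 0% ee; a single pure enantiomer gives 100% ee.
    """
    return (major - minor) / (major + minor) * 100


# A 90:10 mixture of two enantiomers:
print(enantiomeric_excess(90, 10))  # -> 80.0 (% ee)
```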
The Importance of Enantiomers

In order to understand the importance of enantioselective synthesis, one must consider the potentially significant differences in properties when different enantiomers of the same molecule interact with a chiral biomolecule. Although their chemical formulas are identical, enantiomers differ in the arrangement of their atoms in 3-dimensional space. Additionally, though enantiomers generally have similar appearances and physical properties (besides optical rotation), they interact with other chiral molecules in different ways. For example, while the two enantiomers of the molecule carvone are both colorless oils, one smells like mint and the other is used to flavor rye bread (Smith, 1998). As olfactory receptors are composed of chiral amino acids strung together into a chiral protein, the differing structures of the enantiomers of carvone affect their interactions with these receptors, leading to differences in their smell (Feng and Zhou, 2009). Differences in receptor interactions between enantiomer pairs not only affect their smells – they also have serious consequences for their pharmaceutical properties. One of the most infamous examples of the different biological properties of an enantiomer pair is that of thalidomide, a drug that was prescribed as
a sedative and treatment for morning sickness in pregnant women during the 1950s. However, its use was soon discontinued after over 10,000 children were born with birth defects (Franks et al., 2004). It was later determined that while the "R" enantiomer of thalidomide acted as a sedative, the "S" enantiomer caused birth defects by inhibiting proteins that facilitate the development of new blood vessels (Franks et al., 2004). Similar to thalidomide, the biological effects of the drug penicillamine vary greatly depending on the particular enantiomer administered. While the "D" enantiomer of the drug is effective in treating rheumatoid arthritis, the "L" enantiomer is toxic in the body because it inhibits the action of pyridoxine, otherwise known as vitamin B6 (Jaffe et al., 1964). Due to the drastically different biological properties of the enantiomers of certain drugs and other chemical compounds, it is important to understand and optimize methods that can selectively produce more of a desired enantiomer.
The Thermodynamics and Kinetics of Enantioselective Synthesis

At a fundamental level, the ability to control the enantioselectivity of a reaction is governed by the kinetics and thermodynamics of the competing reactions that produce the different enantiomers. In order for one reaction to outcompete the other, it must have a lower activation energy – the minimum amount
Figure 2: Energy plot of an enantioselective addition reaction. As one of the pathways (the pathway on the right) requires more energy for the reaction to occur, the pathway on the left, which forms the other enantiomer, is kinetically favored. Source: Wikimedia Commons
of energy needed for the reaction to proceed (Wade, 2013). Chemical reactions that result in the preferential formation of a particular enantiomer do so through a process called asymmetric induction, which is influenced by a chiral stereocenter present in one of the reactants, the catalyst, or the surrounding environment (IUPAC, 1994). Donald Cram developed one of the first models for asymmetric induction in the early 1950s. Cram's rule states simply that the stereochemistry of a new chiral center formed from a double bond is determined by the stereochemistry of an adjacent stereocenter (Cram & Elhafez, 1952). This rule is applicable to substrates that undergo nucleophilic addition, such as carbonyls (C=O groups), in which an electron-rich nucleophile attacks an electron-poor atom – such as the carbon of the carbonyl group, which carries a partial positive charge. Cram added that the nucleophile will attack the carbonyl from the least sterically hindered side of the molecule, meaning that it will approach from the side where the least bulky substituents sit on the adjacent, or alpha, stereocenter (Cram & Elhafez, 1952). While Cram's model successfully predicted the stereochemical configuration of 50 compounds with previously unknown configurations, it was far from perfect (Cram & Elhafez, 1952). Later work by Marc Cherest revealed that the Cram model had assumed an eclipsing interaction between the "R" substituent of a carbonyl and the bulkiest substituent on its alpha carbon. This would have suggested that increasing the size of R should decrease the stereoselectivity of reactions; in reality, increasing the size of R actually increased stereoselectivity (Cherest et al., 1968). This observation was attributed to the fact that the transition state for a nucleophilic attack is more favorable when gauche interactions – steric interactions between substituents on adjacent carbons – are minimized. When R is bulkier – an isopropyl group as opposed to a methyl substituent, for example – the gauche interactions between R and its closest substituent on the alpha carbon become more significant. Thus, the conformation of the molecule that pairs R with the smallest substituent on the same side of the planar carbonyl group is most favored (Cherest et al., 1968). The presence of stereocenters at positions other than alpha has also been observed to cause asymmetric induction. For example, a beta stereocenter – one that is two carbons away from a carbonyl group – has been found to influence stereoselectivity in the same way as an alpha stereocenter (Evans et al., 1996). Additionally, studies conducted by Kendall Houk suggest that alkenes with chiral carbon substituents also allow for stereoselective control by that substituent if the alkene is a Z alkene – meaning the two largest substituents are on the same side of the double bond (Clayden et al., 2012). When this
type of alkene undergoes electrophilic addition or epoxidation, the electron-poor electrophile must approach the alkene from its least hindered side. Therefore, Z isomers force the chiral stereocenter to assume a conformation in which the smallest R group eclipses the double bond in order to minimize steric interactions with the other alkene substituents (Clayden et al., 2012). Finally, macrocyclic systems – molecules with more than 8 atoms arranged in a ring – can also undergo asymmetric induction. Still and Galynker demonstrated that the addition of a single methyl substituent on a macrocyclic ring could manipulate the stereoselectivities of enolate alkylations, dimethylcuprate additions, and catalytic hydrogenations, with stereoselectivities above 90% in many cases – meaning that 90% of the product will be one stereoisomer and 10% the other (Still & Galynker, 1981). As scientists are currently investigating macrocycles for their potential use in drug delivery, the ability to control macrocyclic stereochemistry via asymmetric induction is of high importance (Marsault & Peterson, 2011; Still & Galynker, 1981). Although the enantioselectivity of a reaction is normally the same provided that the reactants involved are the same, outside conditions can also change the enantioselectivity of a reaction. One of the most important considerations is the temperature of the reaction. Since enantioselectivity is determined by the relative rates of the competing reactions, and the difference in the natural logarithms of their rate constants is inversely proportional to temperature, a lower temperature generally leads to higher enantioselectivity (Gawley & Aube, 2012). However, there are some instances where an increase in temperature can facilitate high enantioselectivity and even change the favored product. For example, Matsumoto et al.
(2016) demonstrated that the synthesis of pyrimidyl alkanols with a chiral initiator – a chiral reactant – led to the formation of the S enantiomer in significant enantiomeric excess at 0˚C; at -44˚C, however, the R enantiomer was the favored product. Similarly, Toth et al. (1993) reported that for the hydroformylation of styrene catalyzed by PtCl(SnCl3) complexes, enantioselectivity switched from a 60.6% excess of the S product at 30˚C to a 56.7% excess of the R product at 100˚C. Although the mechanism behind this change in selectivity is unknown, it has been suggested that changes in solvation – the interaction of reactants with solvent – might cause changes in the reaction's entropy that ultimately affect
enantioselectivity (Gawley & Aube, 2012). Thus, when designing synthetic schemes to increase enantioselectivity, it is important to consider not only the reactants and catalysts but also the reaction conditions, such as solvent type and temperature.
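The temperature effect described above can be made quantitative with transition-state theory: the ratio of the two competing rate constants is k1/k2 = exp(ΔΔG‡/RT), and the enantiomeric excess follows from that ratio. The sketch below is illustrative only; the 2.0 kcal/mol free-energy difference is an assumed figure, not one taken from the article.

```python
import math

R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)


def ee_percent(ddg_kcal_per_mol: float, temp_kelvin: float) -> float:
    """Enantiomeric excess (%) implied by an activation free-energy
    difference ddG between two competing enantiomer-forming pathways.

    k1/k2 = exp(ddG / RT); ee = (k1 - k2) / (k1 + k2) * 100.
    """
    ratio = math.exp(ddg_kcal_per_mol / (R_KCAL * temp_kelvin))
    return (ratio - 1) / (ratio + 1) * 100


# The same 2.0 kcal/mol difference is more selective in the cold:
print(round(ee_percent(2.0, 298), 1))  # room temperature, ~93% ee
print(round(ee_percent(2.0, 195), 1))  # dry-ice temperature, ~99% ee
```

This simple model reproduces the general trend noted in the text; cases like the temperature-induced reversals reported by Matsumoto and Toth require temperature-dependent ΔΔG‡ (i.e., entropic contributions), which this sketch does not capture.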
Method #1: Separating Racemic Mixtures

In designing an enantioselective synthesis, there are two main ways to produce enantiomerically purified products. First, scientists can develop a synthetic scheme in which chiral starting materials, catalysts, or auxiliaries generate one enantiomer in excess. The second option is to separate the enantiomers of racemic mixtures – mixtures that contain equal amounts of two enantiomers – from one another to make a purified final product. Both options involve the use of chiral materials, either to promote one reaction over another or to physically separate the different enantiomers. The methods used in separating racemic mixtures are evaluated below.
Chiral Resolution

Enantiomers are typically very difficult to separate from one another because many of their physical properties, like melting point and solubility, are identical (Clayden et al., 2012). In order to effectively separate a pair of enantiomers, they must be converted to diastereomers by adding a reagent that creates an additional chiral center. These diastereomers have different physical properties that allow them to be separated through conventional physical methods like crystallization and chromatography (Wyatt & Warren, 2007). This process of separating enantiomers is called chiral resolution (Clayden et al., 2012).
Effective chiral resolving agents must meet three main criteria: the agent should be enantiomerically pure, its reaction with both enantiomers should go to completion, and it should not racemize under the selected reaction conditions. Additionally, the reaction forming the diastereomer should be reversible so that the original enantiomer can be recovered (Gawley & Aube, 2012). Many of these reagents rely upon acid-base reactions with the enantiomers to be separated. For example, in the RRR (Resolution-Racemization-Recycle) synthesis of the antidepressant duloxetine, (S)-mandelic acid reacts with a racemic mixture
Figure 3: Example of kinetic resolution – a mixture of mandelic acid enantiomers is reacted with chiral menthol. One of the enantiomers of mandelic acid, the D enantiomer (the plus sign indicates the direction in which a solution rotates plane-polarized light), combines more readily with pure menthol; after saponification (treatment with aqueous base), pure mandelic acid can be produced. Source: Wikimedia Commons
of the drug to produce a diastereomer of the (S)-enantiomer (Fujima et al., 2006). When produced in a toluene/methanol mixture, this diastereomer is insoluble and can be filtered out; deprotonation with sodium hydroxide then removes the mandelic acid, liberating the (S)-enantiomer. This process ultimately yielded the (S)-enantiomer at 93% purity (Fujima et al., 2006).
Besides taking advantage of the differences in solubility between diastereomer products, chemists have also designed methods to separate enantiomers via chiral column chromatography. In chiral column chromatography, a racemic mixture is passed through a silica gel column with a chiral “stationary phase” of an enantiomerically pure substance like anthryl alcohol (Wyatt & Warren, 2007). One of the enantiomers will have a higher “affinity” for this stationary phase, causing it to travel down the column at a much slower speed than the other enantiomer (Clayden et al., 2012). Chiral column chromatography has proven to be an incredibly important method for separating enantiomers. For example, thanks to its resolution on a poly-N-acrylamide column, scientists found that the (+) enantiomer of the anti-malarial drug chloroquine was more effective and less toxic than the (-) enantiomer (Wyatt & Warren, 2007). Similarly, chemists at the company Dupont found that the fungicide hexaconazole could be purified to its biologically active enantiomer only through this method, as crystallization proved to be too difficult. While chiral column chromatography is efficient, it can sometimes be difficult to achieve. For example, this procedure for separating hexaconazole required precise solvent conditions, with a 918:80:2 solvent ratio of F3C-CCl3, MeCN, and Et3N, respectively (Wyatt & Warren, 2007). Additionally, it is difficult to scale chromatography separations to industrial levels, limiting its effectiveness in producing
enantiomerically pure drugs (Gawley & Aubé, 2012).

Kinetic Resolution
Unlike chiral resolution, which relies upon the different physical properties of diastereomers, kinetic resolution takes advantage of differences in reaction rates between enantiomers with a common chiral catalyst or reagent. In kinetic resolution, one enantiomer reacts much faster than the other; over time, this results in an excess of the slower-reacting enantiomer in the reaction vessel. To be practical, the starting material must be easily separated from the resolution product, and both the racemic mixture and the resolving agent should be cheap (Keith et al., 2001). One of the most popular reactions in kinetic resolution is the acylation of alcohol groups. In this method, a secondary alcohol (an alcohol with only one hydrogen substituent attached to the same carbon as the -OH group) is reacted with an acid anhydride to form an ester; this process is typically facilitated by a chiral analogue of the nucleophilic catalyst DMAP. When such a chiral nucleophilic catalyst is employed, enantiomeric purity values in excess of 97% have been observed (Wyatt & Warren, 2007). This method is highly practical, as the resolving agent (acetic anhydride) is fairly cheap and esters can be easily separated via fractional distillation (Keith et al., 2001). Another approach is epoxidation, in which a three-membered cyclic ether is formed; this reaction has been used for the resolution of allylic alcohols (Wyatt & Warren, 2007). Chiral catalysts used for epoxidation range from titanium alkoxides to fructose-based catalysts, and allylic alcohols are generally only available in their racemic form, making kinetic resolution a necessity (Keith et al., 2001). Overall, kinetic resolution is incredibly useful for enantioselective synthesis because it enables
Figure 4: The chemotherapy drug Paclitaxel can be produced from the natural terpene product verbenone, as seen on the left. As many natural products are already chiral, they can be important in facilitating enantioselective synthesis. Source: Wikimedia Commons
the use of cheaper and more readily available racemic mixtures of starting materials (Keith et al., 2001). However, the maximum possible yield of kinetic resolution is typically 50%, as under most circumstances the product formed from the faster-reacting enantiomer cannot be converted back to the desired compound (Keith et al., 2001). While chiral and kinetic resolution methods are good at separating mixtures of enantiomers, they have notable drawbacks. Typically, the yields of both separation techniques are 50% or less, meaning that a significant amount of starting material is wasted (Keith et al., 2001). Thus, many chemists focus on developing reactions that selectively produce one enantiomer in order to raise the total yield. Special types of reagents that can produce these materials, such as chiral pool starting materials, chiral auxiliaries, and chiral catalysts, will be evaluated below.
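As a quick illustration of the enantiomeric purity figures quoted throughout this article (a sketch added here for clarity, not part of the original studies), enantiomeric excess (ee) is simply the difference between the amounts of the two enantiomers divided by their total:

```python
def enantiomeric_excess(major: float, minor: float) -> float:
    """Percent enantiomeric excess (ee) from the amounts (moles, grams,
    or percentages) of the major and minor enantiomers."""
    return 100.0 * (major - minor) / (major + minor)

# A racemic mixture is 0% ee; a kinetic resolution that consumes only the
# fast-reacting enantiomer leaves the slow one enantiopure, but in at most
# 50% yield of the original material.
print(enantiomeric_excess(50, 50))     # 0.0
print(enantiomeric_excess(98.5, 1.5))  # 97.0 (the >97% purity cited above)
```

This makes clear why a 50:50 racemate has 0% ee while even a small residue of the minor enantiomer keeps ee below 100%.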
Method #2: Starting from Chiral Materials - The Natural Pool and Auxiliaries
When it comes to the development of enantioselective schemes, many synthetic chemists have turned to natural products and biological enzymes to facilitate enantioselective reactions. Many biological products, such as sugars and amino acids, have been used as key building blocks (Morrow, 2016). Additionally, through improvements in biotechnology, some enantioselective methods using natural and synthetic enzymes have had great success (Jaeger & Eggert, 2004). Both of these methods are explored further below.
Chiral Auxiliaries
Chiral auxiliaries are enantiomerically pure compounds that, when attached to an achiral starting material, bias the stereochemical outcome of a reaction to form a desired diastereomer. The auxiliary unit is then removed from the diastereomer in a way that does not racemize the final product, eliminating the need for a kinetic or chiral resolution (Gnas & Glorius, 2006). Many chiral auxiliaries are based on simple derivatives of biological molecules, such as amino acids, carbohydrates, and terpenes, which are already chiral and abundant in nature (Diaz-Muñoz et al., 2019).
"Many chiral auxiliaries are based off of simple derivatives of biological molecules, such as amino acids, carbohydrates, and terpenes, which are already chiral and abundant in nature."
Chiral auxiliaries have a lengthy track record of success in producing important pharmaceutical compounds. The technique was first introduced by E. J. Corey in 1975 with his enantioselective synthesis of an intermediate of prostaglandin, a hormone which can lower platelet count (Corey & Ensley, 1975). Corey’s auxiliary, a cyclic compound called 8-phenylmenthol derived from the terpene pulegone, has proven very versatile in facilitating a wide range of synthetic schemes. This compound, which can provide stereochemical control in cycloadditions, reduction-oxidation reactions, and photochemical reactions, has enabled efficient syntheses of the glaucoma medication latanoprost and of calyciphylline A, an alkaloid that has demonstrated anti-HIV activity (Diaz-Muñoz et al., 2019). Another popular class of auxiliaries comprises derivatives of the heterocycle oxazolidinone, a five-membered ring compound containing both an oxygen and a nitrogen atom in the ring. First developed by David Evans in 1981 from amino acids, this type of auxiliary has been used to facilitate asymmetric aldol reactions, alkylations, hydroxylations, and
Figure 5: Example of enantioselective hydrogenation using a rhodium catalyst; the * on the product indicates the selective formation of a new stereocenter adjacent to a phenyl ring (a ring of six carbons bonded to five hydrogens). This chemistry, developed by William Knowles, was not initially very effective, but thanks to continued research, metal-mediated catalysts have become the method of choice for enantioselective synthesis. Source: Wikimedia Commons
Diels-Alder cycloadditions (Heravi et al., 2016). Oxazolidinones are notable because they can facilitate the formation of two new stereocenters at the same time. As a result, this auxiliary has enabled new pathways to drugs with larger numbers of stereocenters, ranging from the anticonvulsant agent pregabalin to the antibiotic (-)-cytovaricin (Diaz-Muñoz et al., 2019).
"One of the most commonly employed methods in enantioselective synthesis is the chiral pool strategy, which draws upon the use of enantiomerically pure starting materials from nature to create enantiomerically pure compounds."
Chiral auxiliaries are very effective in facilitating enantioselective synthesis and are reusable, unlike the reagents used in traditional resolution methods (Gawley & Aubé, 2012). However, chiral auxiliaries must be used in stoichiometric quantities to have the desired effect, and the cost of producing them can be quite high. Additionally, while some chiral auxiliaries can help form multiple new stereocenters, each chiral auxiliary adds at least two steps to a synthesis (attachment and removal), which can reduce reaction yield (Gawley & Aubé, 2012).

Chiral Pool Synthesis
One of the most commonly employed methods in enantioselective synthesis is the chiral pool strategy, which draws upon enantiomerically pure starting materials from nature to create enantiomerically pure compounds. Under this strategy, achiral reagents can be used to manipulate a starting material while retaining the desired chiral centers (Kumar, 2014). Among the most popular starting materials for chiral pool synthesis are amino acids. Naturally used to build proteins, every amino acid except glycine is chiral and available in an enantiomerically pure form (Clayden et al., 2012). This strategy has been successful in developing efficient routes to important bioactive drugs. For instance, Aratikatla and Bhattacharya reported the synthesis of
R-lacosamide, a drug used to treat epilepsy, from the amino acid serine. Unlike previous methods, which produced low yields due to the kinetic resolution of a key intermediate, the group’s synthesis produced the R enantiomer of the drug in 80% yield (Aratikatla & Bhattacharya, 2015). Similarly, Stuk et al. (1994) reported the synthesis of hydroxyethylene dipeptide isosteres, a key component of HIV protease inhibitors, from phenylalanine. Besides amino acids, chemists have also turned to other compound classes from the chiral pool. Of particular interest are the terpenes, fragrant oils comprised of repeating C5H8 units that are commonly found in plants (Clayden et al., 2012). Syntheses performed with terpenes may have incredible importance for treating both infectious diseases and cancer. For example, the anti-malarial agent (+)-cardamom peroxide was synthesized from the terpene pinene and molecular oxygen (O2) (Brill et al., 2017). Similarly, the compounds ingenol and englerin A, both synthesized from terpenes, have shown potential in treating renal cancer and precancerous actinic keratosis (Brill et al., 2017). On the whole, chiral pool synthesis has greatly simplified the process of manipulating and introducing chiral centers with cheap and readily available starting materials. However, it generally works best only if the starting material’s structure closely resembles the final product; otherwise, more steps are required, which can drastically reduce the overall reaction yield (Morrow, 2016).
Method #3: Synthetic Catalysts
Due to the low yields generated by chiral auxiliaries, modern enantioselective techniques have focused on another method: transition-metal mediated catalysis. Unlike main group elements, transition metals can easily change
their oxidation states, meaning that they can accept or donate electrons as needed. Additionally, transition metals are capable of binding multiple reactants, holding them close to one another and lowering the activation energy of the reaction (Oxtoby et al., 2016). Importantly, when paired with chiral ligands – groups of atoms bound to a central atom – transition metal complexes can facilitate enantioselective reactions in a highly efficient manner. The theory, history, and methods of this type of process are explored below.

The key approach for introducing chirality into catalysts used for enantioselective synthesis is to use chiral C2-symmetric ligands. These ligands do not have superimposable mirror images but can be superimposed onto themselves if rotated 180° about the so-called C2 axis (Pfaltz & Drury, 2004). In general, transition metal complexes with C2-symmetric ligands work by first coordinating (forming a bond) with a nucleophilic substituent on one of the reactants, like the oxygen in a carbonyl. Steric interactions between the ligands and the reactant then typically favor the formation of one enantiomer over another due to the so-called “chiral fence,” which refers to the differing activation energy barriers of the possible reaction pathways (Nishiyama et al., 1989). The C2 axis present in these molecules is crucial, as it limits the number of transition states, eliminating competing reaction pathways that might have lower selectivity (Pfaltz & Drury, 2004). One particular benefit of these ligands is that some of them are considered “privileged,” meaning that they can be applied to the synthesis of a wide variety of enantiomeric substrates. For example, manganese complexes with the salen ligand can catalyze the epoxidation of many alkenes with high selectivity, whereas chromium and cobalt complexes with the same ligand can catalyze epoxide ring-opening reactions (Yoon & Jacobsen, 2003).
Similarly, the BINOL and BINAP ligands, which use bulky polycycles – meaning that they possess more than one connected molecular ring – are effective at catalyzing reactions ranging from hydrogenation to the Heck reaction for producing substituted alkenes (Yoon & Jacobsen, 2003). The development of metal-catalyzed enantioselective reactions has a long history. The first such reaction was developed in 1968 by William Knowles and Leopold Horner using a
chiral derivative of the commercially available rhodium catalyst [RhCl(PPh3)3] (Knowles, 2002). By switching out the three PPh3 ligands for the chiral methylpropylphenylphosphane, the group was able to hydrogenate an alkene substrate with 15% enantiomeric excess (Knowles, 2002). However, with repeated experimentation on chiral phosphorus ligands, the group was able to achieve enantioselectivities of up to 88%. Their work was eventually used by the company Monsanto in the synthesis of L-Dopa, a drug used to combat Parkinson’s disease (Knowles, 2002). Later work from Ryoji Noyori expanded this hydrogenation process to other substrates, such as aldehydes and ketones. Through the use of a ruthenium complex with the BINAP ligand, Noyori’s group was able to convert ketones and aldehydes to the chiral alcohols necessary for the synthesis of the antihistamine agents orphenadrine and neobenodine (Noyori et al., 2002). Another key advancement in asymmetric catalysis came from Karl Barry Sharpless, who developed a series of asymmetric oxidation reactions. Through the use of a titanium tartrate catalyst, Sharpless was able to transform allylic alcohols into their epoxides with over 95% enantiomeric excess (Katsuki & Sharpless, 1980). Sharpless also developed a method that can dihydroxylate an alkene – add two alcohol groups – using OsO4 and the readily available quinine ligand; this reaction has been applied in the synthesis of the anticancer agent (20S)-camptothecin (Sharpless et al., 1994). For their combined work, Sharpless, Noyori, and Knowles were awarded shares of the 2001 Nobel Prize in Chemistry (“The Nobel Prize in Chemistry 2001,” 2001).
"Many of the original successful enantioselective catalysts required the use of precious and rare metals, such as osmium, rhodium and ruthenium. To make these processes more economical, scientists have focused on expanding the scope of potential metals."
Due to the versatility of reactions enabled by enantioselective catalysis, many scientists have focused their efforts on expanding upon the Nobel Prize-winning work. One particular point of improvement centers on the types of metals involved in synthesis. Many of the original successful enantioselective catalysts required precious and rare metals, such as osmium, rhodium, and ruthenium. To make these processes more economical, scientists have focused on expanding the scope of potential metals. For example, scientists have recently designed nickel- and copper-based catalysts that can selectively produce chiral phosphines like BINAP and DIPAMP, both of which are incredibly important ligands
for metal-catalyzed enantioselective synthesis (Glueck, 2020). Previously, these ligands could only be produced through chiral resolutions or with more expensive palladium catalysts. However, thanks to new studies of the reactivity of metal-ligand intermediates, Ni and Cu catalysts can now enantioselectively catalyze the alkylation – addition of long carbon chains – needed to form ligands like BINAP and DIPAMP (Glueck, 2020). Studies on cheaper transition metals have also directly improved on the prior Nobel Prize-winning work. Previously, enantioselective hydrogenation methods relied on rhodium-based catalysts; while these were effective, rhodium is found in Earth’s crust at a rate of only 0.0007 parts per million (Wen et al., 2021). Recent work found that by taking titanium – a metal more than 8 million times as abundant in Earth’s crust as rhodium – and adding bulky substituted cyclohexanes, hydrogenation can be achieved with 69% enantiomeric excess (Wen et al., 2021). Similarly, cobalt – a metal present in Earth’s crust at 25 parts per million – has been found to aid in the synthesis of the active agent of the anticonvulsant Levetiracetam with 98% enantiomeric excess, thanks to the introduction of bulky bisphosphine ligands (Wen et al., 2021).
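The abundance comparison above can be sanity-checked with simple arithmetic; the rhodium figure is taken from the text, while the titanium figure of roughly 5,650 ppm is a commonly quoted crustal value assumed here for illustration:

```python
# Crustal abundances in parts per million (rhodium value from the text;
# titanium value is an assumed, commonly quoted estimate).
rhodium_ppm = 0.0007
titanium_ppm = 5650.0

ratio = titanium_ppm / rhodium_ppm
print(f"Titanium is ~{ratio / 1e6:.1f} million times as abundant as rhodium")
```

Under these assumed values the ratio comes out just above 8 million, consistent with the figure cited from Wen et al. (2021).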
Conclusion
The development of enantioselective synthesis methods demonstrates how extensively chemical synthesis has changed over time. Initially, much of the effort in enantioselective synthesis focused on chiral auxiliaries, chiral pool synthesis, and enantiomer separation methods. However, thanks to continued innovation, scientists can now take advantage of chiral transition metal catalysts, which both limit waste and increase the versatility of synthesis plans.
References

Aratikatla, E. K., & Bhattacharya, A. K. (2015). Chiral pool approach for the synthesis of functionalized amino acids: Synthesis of antiepileptic drug (R)-lacosamide. Tetrahedron Letters, 56(42), 5802–5803. https://doi.org/10.1016/j.tetlet.2015.08.077
Chérest, M., Felkin, H., & Prudent, N. (1968). Torsional strain involving partial bonds. The stereochemistry of the lithium aluminium hydride reduction of some simple open-chain ketones. Tetrahedron Letters, 9(18), 2199–2204. https://doi.org/10.1016/S0040-4039(00)89719-1
Clayden, J., Greeves, N., & Warren, S. G. (2012). Organic chemistry (2nd ed.). Oxford University Press.
Corey, E. J., & Ensley, H. E. (1975). Preparation of an optically active prostaglandin intermediate via asymmetric induction. Journal of the American Chemical Society, 97(23), 6908–6909. https://doi.org/10.1021/ja00856a074
Cram, D. J., & Elhafez, F. A. A. (1952). Studies in stereochemistry. X. The rule of “steric control of asymmetric induction” in the syntheses of acyclic systems. Journal of the American Chemical Society, 74(23), 5828–5835. https://doi.org/10.1021/ja01143a007
Diaz-Muñoz, G., Miranda, I. L., Sartori, S. K., Rezende, D. C. de, & Diaz, M. A. N. (2019). Use of chiral auxiliaries in the asymmetric synthesis of biologically active compounds: A review. Chirality, 31(10), 776–812. https://doi.org/10.1002/chir.23103
Evans, D. A., Dart, M. J., Duffy, J. L., & Yang, M. G. (1996). A stereochemical model for merged 1,2- and 1,3-asymmetric induction in diastereoselective Mukaiyama aldol addition reactions and related processes. Journal of the American Chemical Society, 118(18), 4322–4343. https://doi.org/10.1021/ja953901u
Feng, G., & Zhou, W. (2019). Nostril-specific and structure-based olfactory learning of chiral discrimination in human adults. eLife, 8, e41296. https://doi.org/10.7554/eLife.41296
Franks, M. E., Macpherson, G. R., & Figg, W. D. (2004). Thalidomide. The Lancet, 363(9423), 1802–1811. https://doi.org/10.1016/S0140-6736(04)16308-3
Gawley, R. E., & Aubé, J. (2012). Principles of asymmetric synthesis (2nd ed.). Elsevier.
Glueck, D. S. (2020). Catalytic asymmetric synthesis of P-stereogenic phosphines: Beyond precious metals. Synlett. https://doi.org/10.1055/s-0040-1707309
Gnas, Y., & Glorius, F. (2006). Chiral auxiliaries - principles and recent applications. Synthesis, 2006(12), 1899–1930. https://doi.org/10.1055/s-2006-942399
Heravi, M. M., Zadsirjan, V., & Farajpour, B. (2016). Applications of oxazolidinones as chiral auxiliaries in the asymmetric alkylation reaction applied to total synthesis. RSC Advances, 6(36), 30498–30551. https://doi.org/10.1039/C6RA00653A
Hunt, I. (2020). Ch 7: Enantiomers. http://www.chem.ucalgary.ca/courses/350/Carey5th/Ch07/ch7-2-2.html
IUPAC. (1994). Asymmetric induction (A00483). IUPAC Gold Book. https://doi.org/10.1351/goldbook.A00483
IUPAC. (1997). Stereoselective synthesis (S05990). IUPAC Gold Book. https://doi.org/10.1351/goldbook.S05990
Jaffe, I. A., Altman, K., & Merryman, P. (1964). The antipyridoxine effect of penicillamine in man. The Journal of Clinical Investigation, 43(10), 1869–1873. https://doi.org/10.1172/JCI105060
Katsuki, T., & Sharpless, K. B. (1980). The first practical method for asymmetric epoxidation. Journal of the American Chemical Society, 102(18), 5974–5976. https://doi.org/10.1021/ja00538a077
Keith, J. M., Larrow, J. F., & Jacobsen, E. N. (2001). Practical considerations in kinetic resolution reactions. Advanced Synthesis & Catalysis, 343(1), 5–26. https://doi.org/10.1002/1615-4169(20010129)343:1<5::AID-ADSC5>3.0.CO;2-I
Knowles, W. S. (2002). Asymmetric hydrogenations (Nobel Lecture). Angewandte Chemie International Edition, 41(12), 1998–2007. https://doi.org/10.1002/1521-3773(20020617)41:12<1998::AID-ANIE1998>3.0.CO;2-8
Kumar, K. (2014). Chiral synthesis: An overview. International Journal of Pharmaceutical Research and Development, 6, 70.
Marsault, E., & Peterson, M. L. (2011). Macrocycles are great cycles: Applications, opportunities, and challenges of synthetic macrocycles in drug discovery. Journal of Medicinal Chemistry, 54(7), 1961–2004. https://doi.org/10.1021/jm1012374
Matsumoto, A., Fujiwara, S., Hiyoshi, Y., Zawatzky, K., Makarov, A. A., Welch, C. J., & Soai, K. (2017). Unusual reversal of enantioselectivity in the asymmetric autocatalysis of pyrimidyl alkanol triggered by chiral aromatic alkanols and amines. Organic & Biomolecular Chemistry, 15(3), 555–558. https://doi.org/10.1039/C6OB02415G
Morrow, G. W. (2016). Bioorganic synthesis: An introduction. Oxford University Press.
Nishiyama, H., Sakaguchi, H., Nakamura, T., Horihata, M., Kondo, M., & Itoh, K. (1989). Chiral and C2-symmetrical bis(oxazolinylpyridine)rhodium(III) complexes: Effective catalysts for asymmetric hydrosilylation of ketones. Organometallics, 8(3), 846–848. https://doi.org/10.1021/om00105a047
Nobel Foundation. (n.d.). The Nobel Prize in Chemistry 2001. NobelPrize.org. Retrieved February 5, 2021, from https://www.nobelprize.org/prizes/chemistry/2001/summary/
Noyori, R. (2002). Asymmetric catalysis: Science and opportunities (Nobel Lecture). Angewandte Chemie International Edition, 41(12), 2008–2022. https://doi.org/10.1002/1521-3773(20020617)41:12<2008::AID-ANIE2008>3.0.CO;2-4
Oxtoby, D. W., Gillis, H. P., & Campion, A. (2016). Principles of modern chemistry (8th ed.). Cengage Learning.
Pallardy, R. (2020). Chemical synthesis. Encyclopedia Britannica. https://www.britannica.com/science/chemical-synthesis
Pfaltz, A., & Drury, W. J. (2004). Design of chiral ligands for asymmetric catalysis: From C2-symmetric P,P- and N,N-ligands to sterically and electronically nonsymmetrical P,N-ligands. Proceedings of the National Academy of Sciences, 101(16), 5723–5726. https://doi.org/10.1073/pnas.0307152101
Porter, W. H. (1991). Resolution of chiral drugs. Pure and Applied Chemistry, 63(8), 1119–1122. https://doi.org/10.1351/pac199163081119
Robinson, D. E. J. E., & Bull, S. D. (2003). Kinetic resolution strategies using non-enzymatic catalysts. Tetrahedron: Asymmetry, 14(11), 1407–1446. https://doi.org/10.1016/S0957-4166(03)00209-X
Sharpless, K. B., Kolb, H. C., & VanNieuwenhze, M. S. (1994). Catalytic asymmetric dihydroxylation. Chemical Reviews, 94(8), 2483–2547. https://doi.org/10.1021/cr00032a009
Shen, Z., Lv, C., & Zeng, S. (2016). Significance and challenges of stereoselectivity assessing methods in drug metabolism. Journal of Pharmaceutical Analysis, 6(1), 1–10. https://doi.org/10.1016/j.jpha.2015.12.004
Still, W. C., & Galynker, I. (1981). Chemical consequences of conformation in macrocyclic compounds: An effective approach to remote asymmetric induction. Tetrahedron, 37(23), 3981–3996. https://doi.org/10.1016/S0040-4020(01)93273-9
Stuk, T. L., Haight, A. R., Scarpetti, D., Allen, M. S., Menzia, J. A., Robbins, T. A., Parekh, S. I., Langridge, D. C., & Tien, J.-H. J. (1994). An efficient stereocontrolled strategy for the synthesis of hydroxyethylene dipeptide isosteres. The Journal of Organic Chemistry, 59(15), 4040–4041. https://doi.org/10.1021/jo00094a006
Toth, I., Guo, I., & Hanson, B. E. (1993). Influence of the reaction temperature on the enantioselection of styrene hydroformylation catalyzed by PtCl(SnCl3) complexes of p-aryl-substituted chiral ligands. Organometallics, 12(3), 848–852. https://doi.org/10.1021/om00027a038
Wade, L. G. (2013). Organic chemistry (8th ed.). Pearson.
Walsh, P. J. (2009). Fundamentals of asymmetric catalysis. University Science Books. http://archive.org/details/fundamentalsofas0000wals
Wen, J., Wang, F., & Zhang, X. (2021). Asymmetric hydrogenation catalyzed by first-row transition metal complexes. Chemical Society Reviews. https://doi.org/10.1039/D0CS00082E
Wyatt, P., & Warren, S. G. (2007). Organic synthesis: Strategy and control. http://www.dawsonera.com/depp/reader/protected/external/AbstractView/S9780470061206
Yoon, T. P., & Jacobsen, E. N. (2003). Privileged chiral catalysts. Science, 299(5613), 1691–1693. https://doi.org/10.1126/science.1083622
The Motivators and Social Consequences of Gossip
BY ASHNA KUMAR '24
Cover: Gossip, despite its negative connotation, can be used both as a weapon and a tool in social and professional environments. Source: Shutterstock
Abstract
Gossip, the exchange of information about an absent third party, is pervasive in modern society, slipping into everyday conversations intentionally or unintentionally. The majority of gossip is neutral, but more negative gossip is exchanged than positive. The main motivators for gossip include information gathering and validation, strengthening relationships, defensiveness, increasing influence, and social enjoyment. People have the urge to gossip mostly because they want to verify their existing knowledge and gain new pieces of information from others; hurting the subject of gossip is generally the weakest motivator. While gossip does have negative impacts (deterioration of trust, cooperation, and inclusion, for example), it can also have positive impacts. Gossip helps maintain social order, contributes to cohesiveness within groups, promotes trust between those sharing gossip, and serves as entertainment. Furthermore, the trust and
cooperation generated by gossip are not fully undermined by its inaccuracy. Gossip is rarely shared with truly cruel intent, even by people with “dark triad” personality traits – narcissism, Machiavellianism, and psychopathy – all of which usually carry an air of malevolence.
Introduction
People generally view gossip as a negative phenomenon, considering it a social weapon and associating it with bullying, harassment, and exclusion. While the term gossip carries a negative connotation, nearly everyone continues to partake in it. Despite its destructive tendencies, particularly for the victims of spiteful gossip, its effects are not all dark. It has the ability to develop relationships, increase cooperation in groups, and promote the best interests of a community (Hartung et al., 2019). Gossip helps maintain social order and sometimes stems from a strangely altruistic
motive: to protect individuals from others who violate social norms. In groups, gossip may serve as a deterrent, warning people not to act selfishly. It is also a convenient way to spread information, allowing individuals to better understand their place in a particular social environment. However, this does not detract from the obvious downside of gossip: the damage it can do to feelings and attitudes. It can also be used for the selfish purpose of social promotion at the expense of others’ reputations. Within teams, gossip has been shown to decrease trust and to increase victims’ self-consciousness, emotional exhaustion, and discretionary prosocial behavior (positive voluntary behavior) (Dores Cruz et al., 2019). Evidently, gossip has a variety of positive and negative influences on individuals and groups from a social, emotional, and professional standpoint, depending on the intent behind the gossip.
Characterizing Gossip
Ellwardt and colleagues previously classified three straightforward categories of gossip: positive, neutral, and negative (Ellwardt et al., 2012). Positive gossip, gossip that praises or defends the subject of the exchange, has an effect similar to social support, which cultivates positive relationships. Neutral gossip includes observations that are neither positive nor negative. Negative gossip is information that possibly stems from hostile intent and can be damaging to the gossip victim’s social status. Positive and negative gossip are known as evaluative gossip, due to the judgments that are made, while neutral gossip lacks these judgments and is purely observational. In a study conducted by researchers at the University of California, Riverside, 467 participants aged 18-58 years old (269 women and 198 men) wore portable listening devices throughout the day so that 10% of their conversations could be analyzed remotely (Robbins & Karan, 2020).
Figure 1: Image of information being exchanged in secret. Source: Pixabay

A wide variety of participants were analyzed in this study, including undergraduates at the University of Texas, Austin and the University of California, Los Angeles, a community sample from Atlanta, Georgia, and patient samples from couples coping with breast cancer and rheumatoid arthritis. Scientists counted 4,003 recorded instances of gossip in total before categorizing them into positive, neutral, and negative. Despite the general negative connotations of the word “gossip,” this analysis revealed that most of the gossip people engage in throughout the day is not negative. Nearly 75% of the gossip was neutral. Of the remaining 25%, negative gossip proved to be more common than positive gossip, with 604 instances of the former and 376 instances of the latter. Unsurprisingly, extroverts were found to engage in all categories of gossip much more than introverts. Most gossip centered around acquaintances (3,292 instances) rather than celebrity figures (369 instances). It was also found that people in younger age groups tended to participate in more negative gossip. This analysis debunked several prevalent stereotypes as well. Many associate women with snide, catty gossip, but this paper showed that there was no gender difference with respect to amounts of negative gossip. However, women did gossip more overall, with the majority of that difference coming from neutral gossip. Also, this study dispelled the theory that people of relatively lower-income and less educated backgrounds gossip more than wealthier, well-educated people, as there was no significant difference between these groups.
With approximately 15% of total gossip instances being negative, this analysis reinforced the idea that not all gossip is detrimental to individuals. With such a large number and variety of participants, this study offers good insight into gossip trends representative of a large population. Perhaps another study could examine these same categories of gossip in age groups above 58 or below 18 to get a better spread of the population.
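The category shares discussed above follow directly from the raw counts; a small sketch using only the numbers cited from Robbins & Karan (2020):

```python
# Gossip counts from the study as reported above.
total_instances = 4003
negative, positive = 604, 376
neutral = total_instances - negative - positive  # 3023 instances

print(round(100 * neutral / total_instances, 1))   # 75.5 -> "nearly 75%"
print(round(100 * negative / total_instances, 1))  # 15.1 -> "approximately 15%"
```

The computed shares match the article's "nearly 75% neutral" and "approximately 15% negative" figures.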
Gossip as a Tool
As mentioned earlier, although many people claim to look down upon gossip for its supposed superficiality and negativity, gossip continues to play a large role in people’s daily lives, taking up 65% of speaking time (Ellwardt et al., 2012). So why is gossip such a well-established social practice? Most gossip is not evaluative and simply serves as an efficient vehicle for spreading information and observations. Disseminating information allows people to
Figure 2: Graph showing that negative influence is the weakest driving factor for gossip, while information validation is the strongest. Source: Hartung et al., 2019; Creative Commons Attribution License (CC BY)
"The person that is privy to an exciting piece of gossip is perceived as influential due to their connections and knowledge. The most influential people within organizations, as perceived by others, are those that spend the most time gossiping."
50
better understand their social surroundings and the role they fulfill within that environment (Hartung et al., 2019). Gossip can be classified as a method of learning; by hearing about the lives of others, the significant and the trivial, people are able to advance their social and cultural knowledge. Gossip is an indirect way people can learn about others without personal interactions with them. Another motivator for gossip is its intellectual stimulation. People find gossip an enjoyable activity and a welcome distraction in the workplace, especially for those working in mundane, monotonous positions. As these jobs may provide little cognitive stimulation, gossip and banter prevent workers from becoming extremely bored. Sociologist Donald F. Roy spent two months as a factory worker, describing the experience as standing in one spot in a gloomy environment the entire day. He noted that other factory workers took part in informal communication including gossip to help stimulate their minds during such tedious and dull tasks. The factory had a “pickup man” who visited every day to gather materials, and he arrived with gossip about other factory workers. Gossip from the “pickup man” was often viewed as the best part of the workers’ day, as they were exposed to new, exciting information. Roy concluded gossip brings many workers enjoyment, job satisfaction, and endurance (Grosser et al., 2012).
Gossip may also impact influence and power. A person who is privy to an exciting piece of gossip is perceived as influential due to their connections and knowledge. The most influential people within organizations, as perceived by others, are those who spend the most time gossiping. They also have the power to shift opinions about others, making gossip a form of social influence. When a person prone to gossip critiques someone's abilities, it damages the critiqued person's image and reputation in everyone else's eyes. Moreover, gossip can be utilized by people of lower formal status to gain informal influence over their superiors. In one analysis of Japanese organizational dynamics, female "office ladies" (OLs) holding low administrative positions were found to shape the perceptions people have of workers in higher positions. The OLs' gossip was mainly targeted at the males in their company, discussing everything from their appearance to how they treated others. Since gossip between OLs was found to circulate widely, when negative gossip was heard about particular males, OLs adapted their views of these men to incorporate the new information. More OLs then quickly came to hold negative opinions of the gossip subjects. The firm's managers admitted that if one OL disliked them, it was the same as every OL disliking them. This groupthink mentality, in turn, leads to a lack of support for those men given their diminishing reputations. Consequently, men in superior positions
make sure they are not behaving in ways that might incite negative gossip among OLs. In an attempt to maintain their status, they resort to uncharacteristic measures, even buying OLs meals and other expensive presents (Grosser et al., 2012). Furthermore, individuals can utilize gossip strategically to elevate their own reputations by spreading positive information about their friends and family. Positive gossip not only helps boost the statuses of people the gossiper has strong relationships with, but it also serves as a way for the gossiper to gain influence: associating with the subjects of positive gossip reflects well on the gossiper's character, improving their reputation. Positive gossip thus serves a protective function, assuring the high status of oneself and one's loved ones. In summary, gossip is a matter of influence, as it holds the ability to change people's perceptions of one another, raise or lower reputations, and allow people to gain power over others (Grosser et al., 2012).

Aside from its effect on individuals, gossip also impacts group functionality as a whole. Exchanging information is associated with friendship, as it allows groups to develop knowledge, build trust, and establish social norms (Hartung et al., 2019). Although negative gossip may be harmful to the victim, it can benefit the group overall when used as a deterrent. In order to avoid being ridiculed and spurned, people within groups conform to group norms (Ellwardt et al., 2012). This controls uncooperative behavior and compels people to operate according to the collective values of the group. This concept is very similar to the previous study's finding that upper managers acted the way OLs wanted them to in order to preserve their status. Despite being known for the social divides it engenders, gossip is crucial to enhancing cohesion and bonding in groups (Ellwardt et al., 2012).
As one of the main methods of building relationships within organizations, gossip is capable of strengthening or hindering cohesion. Thus, gossip affects workplace outcomes and productivity. In the workplace, gossip helps form meaningful, organic relationships between colleagues beyond the professional ones they already possess. This cohesiveness allows work groups to accomplish organizational goals they might not otherwise have been able to.
When the subject of gossip feels validated and appreciated, has a good reputation, and is part of robust friendships, their job performance and satisfaction improve. The gossip people partake in throughout the day is overwhelmingly neutral, but 15% of gossip is negative (Robbins & Karan, 2020). Negative gossip can have destructive consequences for individuals, but it can have certain positive impacts for groups. In order for people to transmit negative gossip to one another, they must consider each other trustworthy friends (Ellwardt et al., 2012). If the gossip recipient is not trusted, the gossiper will not confide in them, as any information they share could be misused and backfire against them. Negative gossip is thus an effective way to develop relationships between those partaking in it. This type of gossip is also used as a means of controlling the social environment and norms in groups. If a member is behaving uncooperatively, they are at risk of being the subject of negative gossip. Even if that gossip has adverse effects on its victim and does not benefit the group or organization as a whole, it still serves as a form of intimacy for the individual gossipers.
Negative Impacts of Gossip

While certain gossip brings people together, that is not always the case. Victims of negative gossip face a multitude of consequences (Ellwardt et al., 2012). Subjects of this unfavorable attention are often already low in social or professional status and may struggle to find a sense of belonging in their environment. Research studying bankers revealed that victims of negative gossip often had difficulty maintaining and strengthening work relationships, possessed a poor reputation among their colleagues, and were often compelled to leave their place of work (Grosser et al., 2012). Negative gossip can lead to a lack of trust between the victim and their colleagues and make their social environment feel uncontrollable, impeding their success in the workplace. While interpersonal relationships between gossipers might be enhanced, the opposite is true for the victim.
"While certain gossip brings people together, that is not always the case. Victims of negative gossip face a multitude of consequences."
But these findings are neither universal nor clear-cut, as some studies offer conflicting evidence suggesting that gossip can be detrimental to both individuals and groups, leading to decreased trust within teams, increased self-consciousness, and negative emotions like fear (Dores Cruz et al., 2019). Gossip has also been observed to lower prosocial behavior (voluntary behavior intended to benefit others) and proactive behavior, weakening work engagement. More research needs to be done to clarify the extent of these positive and negative effects and to understand why gossip leads to high integration and cohesiveness in some environments while the opposite occurs in others.

Given all this, is the predominantly negative opinion people hold of gossip justified? One study tackled this question by examining the reasons for gossip in individuals who possess the dark triad traits: narcissism, psychopathy, and Machiavellianism (Hartung et al., 2019). Narcissism refers to an abnormally high sense of self-importance. Machiavellianism is similar in that individuals are incredibly focused on self-interest and personal gain; the distinction is that individuals exhibiting Machiavellianism resort to manipulating and deceiving others. Psychopathy refers to apathy toward and disregard for others and their emotions. If people possessing these characteristics used gossip as a weapon, intending to harm others, this would reinforce the negative reputation gossip already holds. But if these individuals rarely use gossip as a destructive force, the positive aspects of gossip would be emphasized. Their personality traits indicate that they would be willing to ignore societal norms and hurt others for personal gain, and thus negative gossip seems a likely part of their daily lives. The study reported that even those capable of and willing to act selfishly mostly used gossip as a means to understand others and themselves, not with malicious intentions. Narcissism was the only trait found to correlate with the aforementioned motivators of gossip. Other studies observed that narcissistic people employed "soft tactics" like gossip and subtle persuasion to exert their influence on others and sustain their status.
All in all, if even those with dark triad personality traits engage in minimal negative gossip, the positive aspects of gossip are worth acknowledging as a society. The extent of gossip's influence on individual and group behavior is evident, regardless of whether it is positive or negative. But does this gossip need to be accurate in order to have these effects? A study by researchers Fonseca and Peters (2018) involving trust games suggests that gossip is riddled with inaccuracy for various strategic reasons, and that this inaccuracy does not impair gossip's function in
boosting cooperation as much as previously thought. Accurate gossip is more effective in increasing cooperation than inaccurate gossip, but inaccurate gossip is still effective compared to no gossip. This holds true even with varying levels of inaccuracy, indicating the robustness of this conclusion. Exogenous inaccuracy introduced to the experiment (through researchers deliberately facilitating the erroneous exchange of messages between participants) did affect levels of trust and trustworthiness, but endogenous inaccuracy from participants did not impact trust. Therefore, gossip still has the potential to create cooperation even when inaccurate.
Conclusion

Overall, gossip, while primarily viewed through a negative lens, has both positive and negative social, emotional, and professional impacts. On one hand, gossip facilitates the exchange and validation of information, helps form more secure relationships, is a source of entertainment, reinforces group norms, protects individuals, increases one's influence, and improves job satisfaction and performance. Although gossip is capable of producing such positive effects, its destructive abilities are also apparent. Targets of negative gossip struggle to find their place within the social environment and are burdened with feelings of mistrust and fear. Gossiping is not inherently unethical, but it can be utilized in a number of ways to reach a desired outcome with widespread impacts; whether that impact is beneficial or detrimental is up to the gossiper.

References

Dores Cruz, T. D., Beersma, B., Dijkstra, M. T. M., & Bechtoldt, M. N. (2019). The bright and dark side of gossip for cooperation in groups. Frontiers in Psychology, 10, 1374. https://doi.org/10.3389/fpsyg.2019.01374

Ellwardt, L., Labianca, G., & Wittek, R. (2012). Who are the objects of positive and negative gossip at work?: A social network perspective on workplace gossip. Social Networks, 34(2), 193-205. https://doi.org/10.1016/j.socnet.2011.11.003

Fonseca, M. A., & Peters, K. (2018). Will any gossip do? Gossip does not need to be perfectly accurate to promote trust. Games and Economic Behavior, 107, 253-281. https://doi.org/10.1016/j.geb.2017.09.015

Grosser, T. J., Lopez-Kidwell, V., Labianca, G., & Ellwardt, L. (2012). Hearing it through the grapevine: Positive and negative workplace gossip. Organizational Dynamics, 41(1), 52-61. https://doi.org/10.1016/j.orgdyn.2011.12.007

Hartung, F. M., Krohn, C., & Pirschtat, M. (2019). Better than its reputation? Gossip and the reasons why we and individuals with "dark" personalities talk about others. Frontiers in Psychology, 10, 1162. https://doi.org/10.3389/fpsyg.2019.01162

Robbins, M. L., & Karan, A. (2020). Who gossips and how in everyday life? Social Psychological and Personality Science, 11(2), 185-195. https://doi.org/10.1177/1948550619837000
Renewed Views on Gaming During the COVID-19 Pandemic

BY CALLUM FOREST, FORSYTH COUNTRY DAY SCHOOL

Cover: A father playing on the Wii with his son. Source: Flickr
Introduction

The COVID-19 pandemic undoubtedly affected everyone, regardless of age. As the virus spread, the harsh realities of physical illness, death, economic burden, and social isolation quickly impacted the lives of billions of people around the world. During the height of the pandemic in 2020, businesses and schools were closed, and people were forced to quarantine in their homes, leading to increased levels of stress, anxiety, and depression. Virtual interactions quickly became the new norm. There were few silver linings during this time, but as days turned into weeks and weeks into months, one medium emerged as a "game changer" to help people cope with the newfound isolation: online video games.
Negative Effects of Gaming

As gaming's popularity has increased, concerns have arisen surrounding its physical, psychological, and behavioral health effects, such as an increased prevalence of anxiety, excessive violence, depression, low self-esteem, and addiction (Anderson et al., 2010; Ferguson, 2013; Lemola et al., 2011). As of 2017, individuals classified as "gamers" spent an average of 6.5 hours a week playing games, over an hour more than the average gaming time of 2011 "gamers" (The Nielsen Company, 2017). This increase in screen time among children and young adults has led many healthcare professionals and adults to question potential deleterious effects on ocular health in particular. Excessive blue light exposure, which can cause myopia, or nearsightedness, is very common among gamers. Another effect of prolonged screen time is digital eye strain, caused when someone stares at a screen for too long and does not blink as much as usual. These conditions can cause permanent damage to a person's eyes (Wu, 2013; French et al., 2013). Beyond blue
Figure 1: Violence in video games. Source: Flickr

light exposure, some arguably more concerning negative impacts of excessive gaming include its effects on behavioral health, especially violent behavior, anxiety, and depression. In the video game Grand Theft Auto (GTA), violence is encouraged in order to complete in-game tasks and challenges. Researchers such as psychologist Brad J. Bushman caution that this violent behavior will be transferred to the player in real life. To test this idea, Bushman conducted an experiment in which 172 high school students were divided into two groups: those who would play violent games and those who would not. When told to eat candy from a bowl and that eating too much was unhealthy, the students who played violent games ate three times more candy. When given the opportunity to select raffle tickets to receive prizes they earned for every correct question they answered, those who played violent games took eight times more raffle tickets than those who did not play violent games. The results of Bushman's experiment suggest that dishonest behavior in violent video games can affect the individual's personality in real life (Anderson et al., 2001).
Another study found that maladaptive coping is one reason why games may be associated with elevated violence, anxiety, and depression. Maladaptive coping refers to behavior that feels rewarding at first but proves harmful later; this practice can lead to an addiction to playing video games (Loton et al., 2015). Furthermore, studies found that the toxic communities that form around video games can also lead to symptoms of anxiety and depression. Such communities often consist of cyber-bullies who cause considerable stress to individuals. At the very extreme, suicides have been linked to cyber-bullying (Todd, 2020).
"Although there are negative effects from playing video games, there are also positive results including improved focus, better memory, and improved cooperative skills."
Positive Effects of Gaming

Figure 2: A young child playing Grand Theft Auto. Source: Flickr

Figure 3: A highlight of specific games since 1999 with a focus on benefits and market impact. Table created by author. Sources: Walton, 2012; Sarkar, 2014; Warren, 2020; Orland, 2011; University of Rochester, 2014; Silverman, 2011; Tassi, 2019; Spangler, 2020; Health Talk, 2018; Makuch, 2017; Sarkar, 2016; Crecente, 2018; Makuch, 2017; Lien, 2012; Needleman, 2019

"With every generation, entertainment has been defined by the technology available and culture at the time. For Gen X, defined as people being born between 1965 and 1980, and beyond, the use of video games and online activities has steadily increased its role not only in entertainment, but also in education and socialization."

Although there are negative effects from playing video games, there are also positive results including improved focus, better memory, and improved cooperative skills (Primack et al., 2012; Granic et al., 2014; Colder Carras et al., 2018). Games have also been known to provide a form of social connection and serve as an escape outlet from everyday struggles (Kowert et al., 2014b; Mazurek et al., 2015; Taquet et al., 2017). Games can be divided into genres based on their assumed benefits and detriments. For example, strategic games provide cognitive enhancement (Dobrowolski et al., 2015; Bediou et al., 2018). Moreover, video games have also helped people develop skills that have improved their daily performance in academics. For example, Minecraft allows players to learn about minerals and ores while also learning about different survival tactics. It further allows players to demonstrate their creativity by building houses and recreating real-life images (Shaffer et al., 2005). With every generation, entertainment has been defined by the technology available and the culture at the time. For Gen X, defined as people born between 1965 and 1980, and beyond, the use of video games and online activities has steadily increased its role not only in entertainment, but also in education and socialization. This phenomenon has led to a positive influence on social interaction, improved problem-solving skills, enhanced
cooperative skills, and alleviation of anxiety and depression (Lemola et al., 2011; Granic et al., 2014; Dobrowolski et al., 2015). Most online games involve group or team settings, in which players can communicate with others and even form long-term relationships. Massively Multiplayer Online Role-playing Games (MMORPGs) are known for developing these relationships. A very popular MMORPG is World of Warcraft, in which players embark on quests and complete missions in order to receive various rewards. A big part of the game is forming guilds: every member of a guild works together to complete quests and missions, developing social and cooperative skills (Chen et al., 2006). Online games also often involve difficult tasks and challenges, which can improve a person's performance in everyday activities. Shooting
Figure 4: A mother and her son playing on the Wii together. Source: Flickr
games such as Call of Duty improve cognitive performance, and these gamers develop an improved ability to predict future events (University of Rochester, 2014). Escape-style puzzle games improve problem-solving skills and overall intelligence because, in order to find the exit, the player must devise a mental model and come up with a specific strategy for the difficult task at hand. Games with modes such as duos, trios, or squads tremendously improve cooperative skills, since in these modes players must work together with their teammates in order to succeed.
Impacts of Gaming During the Pandemic

During a time when everyone must isolate to prevent contraction of the coronavirus, playing video games is one of the main ways to link people and prevent loneliness or depression. Online video games allow for a connection to the outside world, so even when individuals are forced to separate from others, they are never truly alone. In addition to expanding one's social network outside of the home, video gaming can help reconnect family members during quarantine. Just playing games with your siblings or parents while in quarantine is a great way to bond and develop a closer relationship (Gregory, 2020). Games such as Super Smash Bros. Brawl or Wii Sports are great for keeping entertained with your family.
In Super Smash Bros. Brawl, players select a character based on performance, in hopes of defeating the characters that other players chose. In Wii Sports, players select different sports challenges to participate in. This also lends itself to exercising and doing physical activity together in an enjoyable and fun setting (Kahlbaugh et al., 2011; Bleakley et al., 2013; Miller et al., 2014). NBA2K is another example: during the pandemic, professional sports lost their luster because watching a game is not the same without fans in the stands. However, sporting games like NBA2K have allowed some of the excitement of sports to be brought back while also triggering conversations about teams of different generations. Sports games such as NBA2K can help solidify bonds between parents and children, and between grandparents and grandchildren, in a way that is safe and does not require being together. Ironically, they have the potential to bring communication back to something resembling pre-internet times (Marston et al., 2020; Wang et al., 2014). During the pandemic, it became apparent that gaming is no longer just for kids. Grandparents and older family members, who are at the highest risk of the often fatal symptoms of COVID-19, can still safely interact with younger family members. These games provide an excellent source of entertainment during periods of social isolation while everyone is at home.
"During a time when everyone must isolate to prevent contraction of the coronavirus, playing video games is one of the main ways to link people and prevent loneliness or depression."
Clearly, playing video games has the potential to provide entertainment while specifically addressing the psychosocial dilemmas of quarantine such as anxiety and depression (Jacobson et al., 2017). While immersed in the virtual world of gaming, players can socialize with people all over the globe. Especially at a time when an individual's world had shrunk due to limitations on travel, education, and even daily activities, online video games have allowed people to escape their homes and connect again (Drennan et al., 2008; Heylen, 2010).
Conclusion Research on gaming has mostly described its negative impacts on adolescents and young adults. These range from physical concerns with excessive blue light exposure to potential psychosocial and cognitive harm. However, forced isolation during the pandemic has created unforeseen hardships on individuals with depression and heightened anxiety levels (Dolan, 2002). Online gaming is a way to connect to and interact with friends and represents a safe way to communicate during the pandemic. While contrary to concerns surrounding gaming, video games serve as mediums for interaction and should be promoted during these times since it can support social development. Gaming in the comfort of home allows individuals to maintain a sense of normalcy in their lives while also solidifying bonds with family members as “game night” is reinvented. It helps with adapting to a different environment, building confidence and increasing the ability to deal with challenges in a resilient manner. Gaming also has many individual and group benefits such as the development of problem solving skills, serving as a source of entertainment, connecting socially during quarantine, alleviating anxiety and depression, serving as a distraction from contemporary struggles, evading loneliness, and enhancing mental health. With these benefits, remaining safe at home during a global pandemic becomes a lot easier and more promising. References Anderson, Craig & Bushman, Brad. (2001). Effects of Violent Video Games on Aggressive Behavior, Aggressive Cognition, Aggressive Affect, Physiological Arousal, and Prosocial Behavior: A Meta-Analytic Review of the Scientific Literature. Psychological science. 12. 353-9. 10.1111/1467-9280.00366. Anderson, C. A., Shibuya, A., Ihori, N., Swing, E. L., Bushman, B. J., Sakamoto, A., & Saleem, M. (2010). Violent video game effects on aggression, empathy, and prosocial behavior in Eastern and Western countries: A meta-analytic review. Psychological
58
Bulletin, 136, 151– 173. doi:10.1037/a0018251 Bediou, B., Adams, D. M., Mayer, R. E., Tipton, E., Green, C. S., and Bavelier, D. (2018). Meta-analysis of action video game impact on perceptual, attentional, and cognitive skills. Psychol. Bull. 144, 77–110. doi: 10.1037/bul0000130 Bleakley CM, Charles D, Porter-Armstrong A, McNeill MDJ, McDonough SM, McCormack B. Gaming for health a systematic review of the physical and cognitive effects of interactive computer games in older adults. J Appl Gerontol. 2013;34(3):NP166-89. Chen V.HH., Duh H.BL., Phuah P.S.K., Lam D.Z.Y. (2006) Enjoyment or Engagement? Role of Social Interaction in Playing Massively Mulitplayer Online Role-Playing Games (MMORPGS). In: Harper R., Rauterberg M., Combetto M. (eds) Entertainment Computing - ICEC 2006. ICEC 2006. Lecture Notes in Computer Science, vol 4161. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11872320_31 Colder Carras, M., Van Rooij, A. J., Spruijt-Metz, D., Kvedar, J., Griffiths, M. D., Carabas, Y., & Labrique, A. (2017). Commercial video games as therapy: A new research agenda to unlock the potential of a global pastime. Frontiers in Psychiatry, 8, 300. https://doi.org/10.3389/ fpsyt.2017.00300. Dobrowolski, P., Hanusz, K., Sobczyk, B., Skorko, M., and Wiatrow, A. (2015). Cognitive enhancement in video game players: the role of video game genre. Comput. Hum. Behav. 44, 59–63. doi: 10.1016/ j.chb.2014.11.051 Crecente, Brian (January 15, 2018). "'Fortnite: Battle Royale': The Evolution of World's Largest Battle Royale Game". Glixel. Dolan, R.J. Neuroscience and psychology: emotion, cognition, and behavior. Science, 5596 (298) (2002), pp. 1191-1194. https://doi.org/10.1126/science.1076358 Drennan J., Treacy M., Butler M., Byrne A., Fealy G., Frazer K., Irving K. The experience of social and emotional loneliness among older people in Ireland. Ageing and Society. 2008;28:1113–1132. doi: 10.1017/ s0144686×08007526. Ferguson, C. J., & Olson, C. K. (2013). 
Friends, fun, frustration and fantasy: Child motivations for video game play. Motivation and Emo- tion, 37, 154–164. doi:1007/s11031-0129284-7 French AN, Ashby RS, Morgan IG & Rose KA (2013a): Time outdoors and the prevention of myopia. Exp Eye Res 114: 58–68. Granic, I., Lobel, A., & Engels, R. C. M. E. (2014). The benefits of playing video games. American Psychologist, 69(1), 66–78. https://doi.org/ 10.1037/a0034857. Gregory, Sean. “Don’t Feel Bad If Your Kids Are Gaming More Than Ever. In Fact, Why Not Join Them?”Time, 22 April 2020 https://www.google.com/amp/s/time.com/5825214/videogames-screen-time-parenting-coronavirus/%3famp=true Heylen L. The older, the lonelier? Risk factors for social loneliness in old age. Ageing and Society. 2010;30:1177–1196. doi: 10.1017/ s0144686×10000292. Kahlbaugh PE, Sperandio AJ, Carlson AL, Hauselt J. Effects of playing Wii on well-being in the wlderly: physical activity, loneliness, and mood. Act Adapt Aging. 2011;35:331–344. Jacobson, N.C., Newman, M.G. Anxiety and depression as bidirectional risk factors for one another: a meta-analysis of longitudinal studies. Psychol. Bull., 143 (2017), pp. 1155-1200
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
WINTER 2021
Neuroscience, Narrative, and Never-Ending Stories BY CAROLINE CONWAY '24 Cover: A field of poppies. The myths of Persephone and Helen of Troy suggest that the ancient Greeks were aware of the role the opium poppy plays in pain management. Source: Wikimedia Commons
Introduction At first glance, many people see science and storytelling as distinct and separate pursuits. A considerable portion of British authors from the Victorian era agreed — so much so that they aimed to supplant traditional fairy tales with educational materials, arguing that parents did their children a disservice in allowing them to waste time on fiction when the 19th century offered so many scientific revelations. Of this movement, children’s nonfiction author Priscilla Wakefield simply said, “Nonsense has given way to reason” (Keene, 2015, p. 10). One fact this perspective fails to account for is that human storytelling is a result of evolution and is therefore, in a sense, driven by science. Storytelling is universal across human cultures, as are many narrative archetypes. As evidenced by ancient works of art including pottery, cave paintings, and carvings, storytelling predates writing (Zipes, 2012). As for how storytelling
might increase evolutionary fitness in humans, one theory is that storytelling provides individuals with an opportunity to practice Theory of Mind, the psychological capacity to understand the perspectives and motivations of others. A strong Theory of Mind is essential for social interaction and would have enabled early humans to better predict each other’s behavior, benefiting individuals in an array of cooperative activities ranging from hunting to warfare (Williams, 2012). The use of narratives to structure how one views the world can lighten cognitive load, especially when it comes to processing embedded mental states (Willems et al., 2020). Van Duijn et al. (2015) describe embedded mental states as “A believes that B thinks that C intends (etc.)” (p. 148). In other words, embedded mental states allow an individual to consider how another person applies Theory of Mind to others, with the original individual utilizing their own Theory of Mind in the process. The more perspectives
embedded, the more cognitively demanding comprehension becomes. While embedded mental states can quickly become confusing if presented in isolated sentences, they are relatively easy for people to understand in the context of a story. For instance, Shakespeare’s Othello requires the audience to grasp that Iago wants Othello to believe that Desdemona cares for Cassio, but the presentation of this situation within a narrative allows it to be understood without undue cognitive strain (van Duijn et al., 2015). It can therefore be argued that narrative lightens one’s cognitive load, which might be evolutionarily advantageous because it can increase processing efficiency, allowing the brain to parse information with less energy. In other words, narrative is a built-in tool the human brain uses to understand the world. However, more than just evolution binds science to the tradition of storytelling. As this article will show through examples in Greek mythology, Indian mythology, and Victorian fairy tales, cultural narratives have a long and storied history of concealing morsels of scientific truth.
Greek Mythology The ties between mythology and science are evident from the etymology of “mythology” itself — “mythos” references traditional narrative, while “logos” refers to logic (Karakis, 2019). It is impossible to pinpoint the exact inspirations of legends created thousands of years ago, so modern historians and scientists settle instead for educated guesses based on a myth’s cultural, geographic, and technological context. To begin, one needs only to look at the foundation — literally. The field of paleontology may provide key insights into some of Greek mythology’s greatest monsters. Of course, “paleontology” was a foreign concept to the Greeks. What did they make of fossils? Science correspondent Matt Kaplan hypothesized that these buried bones gave rise to mythical beasts like the Chimera and dragons and may even have inspired the legend of King Midas. According to legend, the Chimera was a fire-breathing lion with a goat head rising from its back and a snake for a tail. No single fossil could account for such a creature. Kaplan believes instead that multiple animals fell into a tar pit and were fossilized together, creating the appearance of one bizarre monster. Kaplan
described the food chain at work, claiming that a goat, distressed at finding itself trapped in a tar pit, might attract a hungry lion, and the vultures coming to feed on the dead goat and lion could attract a bird-eating snake. As for the Chimera’s ability to breathe fire, Kaplan noted that Homer placed the Chimera in Lycia, where natural gas gradually leaks out, and the resulting blackened rocks and perpetual petroleum smell could have been thought to be supernatural in origin (Nissan, 2018). The existence of only one Chimera in Greek mythology lends credibility to Kaplan’s theory, since it is plausible that the tar pit scenario occurred once and became infamous. Kaplan explained a potential origin of Crete’s Minotaur as well, focusing on the activity of local tectonic plates. Crete rests on the Aegean Block (a section of continental crust), and the Nubian Block (a section of oceanic crust) slides directly beneath the Aegean Block. This arrangement constitutes a subduction zone — a region of Earth’s crust where tectonic plates meet. The movement of the Nubian Block can displace the Aegean Block, resulting in abrupt uplift (at times over 30 feet) in Crete. Kaplan proposed that the resulting earthquakes could have given rise to the legend of the bellowing Minotaur lurking in an underground labyrinth. Kaplan even attempted to find scientific origins for the likes of Medusa and King Midas. Medusa’s horrible gaze was said to turn creatures to stone, and Kaplan theorized that this was how the Greeks explained the existence of fossils. Along the same lines, Kaplan pointed out that pyrite (fool’s gold) can accumulate within bone, giving a possible explanation for the myth of King Midas and his power to turn objects to gold (Nissan, 2018).
"The ties between mythology and science are evident from the etymology of “mythology” itself — “mythos” references traditional narrative, while “logos” refers to logic."
In addition to describing fearsome monsters, Greek mythology reflects some of the medicinal practices of ancient Greece. Helen of Troy supposedly used mandrake or opium to ease her anguish following her abduction and the start of the Trojan War. Persephone responded similarly to her abduction at the hands of Hades, using poppies to abate her emotional pain (Karakis, 2019). Today, opium-based drugs like morphine and other opioids remain key to alleviating pain. Interestingly, Greek mythology not only implies that the ancient Greeks understood the opium poppy’s role in pain management, but also indicates that the Greeks recognized that mental as well as physical conditions can require treatment (though they did not grasp that such mental conditions originated in the brain).
Figure 1: An illustration from John Cargill Brough’s The Fairy Tales of Science. Brough compared dinosaurs to more traditional fairy tale monsters. Source: Wikimedia Commons
Hellebore is another medicine featured prominently in Greek mythology. In one legend, the Greek seer-healer Melampus healed the mad princesses of Argos (who were daughters of King Proetus and were therefore known as the Proetides) using hellebore. The concept of hellebore curing madness in mythology is not isolated to this story — similarly, Antikyreus is said to have cured Hercules with hellebore after Hera induced a fit of madness that led Hercules to murder his family. Olivieri et al. (2017) found that black hellebore in particular (Helleborus niger) is a psychoactive substance with calming, antipsychotic, and antidepressant capabilities that could have treated the madness of the Proetides and Hercules as described in legend, though there is insufficient evidence to justify the widespread use of black hellebore for such purposes today. While many myths of Greek antiquity seem ludicrous, there may be more truth to them than meets the eye.
Indian Mythology "The proposed origin of the Hindu god Vishnu is tied to paleontology. When geologist Hugh Falconer discovered a giant tortoise (Colossochelys atlas) fossil in northern India in the 19th century, he believed the existence of such a creature (or its remains) inspired the legend of Vishnu’s tortoise avatar."
Like many of Kaplan’s hypotheses concerning Greek myths, the proposed origin of the Hindu god Vishnu is tied to paleontology. When geologist Hugh Falconer discovered a giant tortoise (Colossochelys atlas) fossil in northern India in the 19th century, he believed the existence of such a creature (or its remains) inspired the legend of Vishnu’s tortoise avatar. According to the story, Vishnu transformed into a tortoise and carried the world on his back. In a separate Hindu myth concerning the bird-god Garuda, a battle raged between an 80-mile-long tortoise and a 160-mile-long elephant. Considering the monstrous proportions of the actual giant tortoise (over seven feet tall with a shell diameter of over eighteen feet), it seems plausible that the tortoise’s fossil could have inspired both legends of monumental tortoises. Garuda, too, might have origins in a real creature. Chakrabarti and Sen (2016) hypothesized that Garuda himself was inspired by India’s gigantic crane (Ciconia gigantea). The war between Garuda and the Nagas (serpents) in both Hindu and Buddhist mythology may have been based on an observation of the natural enmity between birds of prey and snakes in nature, though Buddhist legends describe garudas as a mythical species, not a single deity (Nissan, 2018). Like paleontology, astronomy was influential in Indian mythology. The Skanda Purana (a Hindu religious text) compared the Earth to a spinning top. The Indian mathematician-astronomer Aryabhata concluded from his work that the
Earth spun, and the Skanda Purana is thought to reference Aryabhata’s findings — which, while correct, were not widely accepted — when it described the spinning of Earth (Kochhar, 2009).
Victorian Fairy Tales As discussed earlier, Queen Victoria’s Great Britain underwent something of a revolution when it came to the content of children’s entertainment. The result was a surge of science-based literature aimed at children, ranging from straightforward nonfiction to elaborate fantasies inspired by scientific concepts. As with Greek and Indian mythology, paleontology continued to play a major role in storytelling. John Cargill Brough’s The Fairy Tales of Science, for instance, sensationalized the newly discovered fossils of dinosaurs like the Plesiosaurus, Megalosaurus, and Cetiosaurus. Brough compared these extinct creatures to fairy tale monsters like dragons and griffins, penning detailed descriptions of fights between ferocious dinosaurs (Keene, 2015). However, this new brand of children’s literature was not produced by adults alone. Teenagers like Madalene and Louisa Pasley also contributed. The Pasley sisters published an illustrated entomology album, incorporating both actual and fictional adventures of theirs as they hunted for insects. This was part of a larger surge in scrapbooking among upper-middle-class women at the time, and such scrapbooks frequently included insect illustrations (Keene, 2015).
Figure 2: Dante and the River Lethe. The Lethe supposedly erased the memories of the dead and is the origin of the modern word “lethargy.” Source: Wikimedia Commons
However, this scientific emphasis in storytelling did not mean that the fairies of traditional fairy tales disappeared. In Fairy Know-a-bit: A Nutshell of Knowledge, author Charlotte Maria Tucker also dove into the world of entomology, using an insect fairy to explain various types of insects to young readers. Lucy Rider Meyer also sought to imbue science with a touch of magic in her work. Instead of focusing on entomology, however, Meyer emphasized chemistry, creating a specific type of fairy for each element and using the fairies to demonstrate basic molecular behavior. For example, Meyer introduced the states of matter as descriptions of the fairies’ varying activity levels. Part of the reason for the focus on small objects such as insects and molecules was that, during the
Victorian era, parlor microscopes became affordable for many families in Great Britain. This was likely a significant influence for “Down the Microscope and What Alice Found There,” which some University of Cambridge students published in Brighter Biochemistry (a satirical illustrated journal inspired by the process and politics of laboratory research). A bacterium called Pyo (short for Bacillus pyocyaneus) served as Alice’s guide as she explored a droplet of water at a microscopic level, diving into this strange new world in a manner imitating the plot of Lewis Carroll’s Alice’s Adventures in Wonderland (Keene, 2015). “Down the Microscope and What Alice Found There” reflected the merging of scientific writing with children’s literature. To be clear, these scientific fairy tales did not entirely replace traditional stories with dragons, knights, and mermaids; the two storytelling approaches coexisted as society at large continued to debate which one
"Part of the reason for the focus on small objects such as insects and molecules was that, during the Victorian era, parlor microscopes became affordable for many families in Great Britain."
most benefited children. However, many of the new Victorian-age fairy tales were undeniably scientific.
Conclusion
Though they have their differences, storytelling and science need not be divided. There is a rich history of literature embracing both disciplines. The relationship between science and storytelling can even be considered reciprocal. Just as scientific phenomena like Earth’s rotation and the discovery of fossils have influenced mythology, Ioannis Karakis (2019) noted that mythology has inspired many modern scientific terms like “lethargy” (after the river Lethe in the Greek underworld, which caused the dead to forget their lives), “panic” (after the Greek god Pan, who emitted a blood-curdling scream when woken from a nap), and “narcissism” (after Narcissus, who was cursed to fall in love with his own reflection). Similarly, the freshwater hydra, whose tentacles grow back when severed, borrowed its name from the Greek Hydra of legend, which grew two heads in the place of each one that was chopped off (Power & Rasko, 2008). Willems et al. (2020) even argued that narrative is a valuable tool that should be used more frequently in neuroscience research due to the key role narrative formation plays in human cognition.
In many storytelling traditions, such as that of ancient India, the transmission of mythology was oral (Kochhar, 2009). As a result, Indian storytelling retained a certain fluidity — different individuals told slightly different variations of traditional tales, and, over time, the tales changed as a result. This is one of the most fascinating aspects of human narratives — their remarkable capacity for change. Williams (2012) reflected that a person’s investment in any story relies on their personal past and memories. People are only able to understand narratives using experiences from their own lives. This article is a demonstration of that very concept. It is the availability of modern scientific findings that allows society to reexamine old stories with a more scientific lens. One could argue that the legends discussed in this article have no true end — like a language, they continue to evolve as long as they are in use.
References
Chakrabarti, P., & Sen, J. (2016). 'The World Rests on the Back of a Tortoise': Science and mythology in Indian history. Modern Asian Studies, 50(3), 808–840. https://doi.org/10.1017/S0026749X15000207
Karakis, I. (2019). Neuroscience and Greek mythology. Journal of the History of the Neurosciences, 28(1), 1–22. https://doi.org/10.1080/0964704X.2018.1522049
Keene, M. (2015). Science in wonderland: The scientific fairy tales of Victorian Britain. Oxford University Press.
Kochhar, R. (2009). Scriptures, science and mythology: Astronomy in Indian cultures. Proceedings of the International Astronomical Union, 5(S260), 54–61. https://doi.org/10.1017/S1743921311002146
Nissan, E. (2018). Monsters from Myth, and Quests for a Scientific Rationale, Or, a Science Journalist’s Take on Mythological Animals: An Exploration of Matt Kaplan’s The Science of Monsters, with a Foray into Aetiologies and Cultural Uses of Medusa. Amaltea, Journal of Myth Criticism, 10, 103–126. https://doi.org/10.5209/AMAL.58836
Olivieri, M. F., Marzari, F., Kesel, A. J., Bonalume, L., & Saettini, F. (2017). Pharmacology and psychiatry at the origins of Greek medicine: The myth of Melampus and the madness of the Proetides. Journal of the History of the Neurosciences, 26(2), 193–215. https://doi.org/10.1080/0964704X.2016.1211901
Power, C., & Rasko, J. E. J. (2008). Whither Prometheus’ Liver? Greek Myth and the Science of Regeneration. Annals of Internal Medicine, 149(6), 421–426. https://doi.org/10.7326/0003-4819-149-6-200809160-00009
van Duijn, M. J., Sluiter, I., & Verhagen, A. (2015). When narrative takes over: The representation of embedded mindstates in Shakespeare’s Othello. Language and Literature, 24(2), 148–166. https://doi.org/10.1177/0963947015572274
Willems, R. M., Nastase, S. A., & Milivojevic, B. (2020). Narratives for Neuroscience. Trends in Neurosciences, 43(5), 271–273. https://doi.org/10.1016/j.tins.2020.03.003
Williams, D. (2012). The trickster brain: Neuroscience, evolution, and narrative. Rowman & Littlefield.
Zipes, J. (2012). The irresistible fairy tale: The cultural and social history of a genre. Princeton University Press.
Superconductivity: Past, Present and Future BY COLLINS KARIUKI, POMONA COLLEGE Cover: Dutch physicist Heike Kamerlingh Onnes. Onnes laid the foundations for the field of superconductivity. He was able to liquefy helium down to a temperature of 0.9 K (-272 °C), and the liquefaction of helium enabled him to discover superconductivity in mercury at a very low temperature that was previously unattainable. Subsequent tests of tin and lead showed that superconductivity was a property of numerous metals if they were cooled sufficiently (Ouboter, 1997; The Nobel Prize, n.d.). Source: Wikimedia Commons
Background Superconductivity is defined by the complete disappearance of electrical resistance (the measure of the opposition to flow of electric current) of certain solids when they are cooled below a specific point known as the critical temperature. Scientific studies of superconductivity began in 1911 with experiments on mercury in the laboratory of Dutch physicist Heike Kamerlingh Onnes. Onnes showed that when mercury was cooled to 4 Kelvin (-269 degrees Celsius), its electrical resistance decreased to zero (Delft & Kes, 2011). The finding of zero electrical resistance is important because it could provide pathways to more efficient electrical applications, as a lot of energy tends to be lost due to electrical resistance. For instance, such zero-resistance superconductors could be used in national electric power lines and grid systems to minimize potential energy losses.
Most metallic superconductors (such as mercury and lead) are termed low-temperature superconductors since they only superconduct at temperatures below 73 Kelvin. Low-temperature superconductors require expensive cooling systems to maintain. On the other hand, materials with unconventionally high critical temperatures are aptly named high-temperature superconductors (HTS). HTS have critical temperatures above 77 Kelvin and are comparatively easy to maintain using nitrogen as a coolant, as 77 Kelvin is the boiling point of nitrogen. HTS are mainly ceramics containing copper, oxygen, and other metals such as rare earths, bismuth, or thallium. HTS that contain copper and oxygen normally have very complex stoichiometries; that is, the ratios of the particular elements are often complex. For instance, Hg2Ba2Ca2Cu3O8 and Nd1.85Ce0.15CuO4 are examples of HTS with uncommon ratios. For this reason,
they are sometimes called Unidentified Superconducting Objects (USOs). It is normally difficult to replicate superconductivity experiments in USOs, as varying the oxygen content and heat drastically changes the HTS critical temperatures. Copper and oxygen HTS are crystalline in nature, with the level of superconductivity highly dependent on the planar orientation of copper and oxygen atoms (Ginsberg, 2018; OpenStax College, ‘High-temperature Superconductors,’ n.d.). Despite the complicated science, investigations of superconductivity have gained traction. The discovery of the ‘Meissner Effect,’ which explores the magnetic properties of superconductors, by Walther Meissner and Robert Ochsenfeld sparked more interest in superconductivity (Meissner & Ochsenfeld, 1933). The Meissner Effect states that when a certain material is in a superconducting state (a state where there is zero or imperceptible electrical resistance), it expels any externally applied magnetic field from penetrating it. The Meissner Effect is essentially what makes magnets levitate over superconducting materials (see Figure 4). Another notable find was made by Bardeen, Cooper, and Schrieffer in 1957, which sought to explain superconductivity quantitatively through what are termed Cooper Pairs (see Figure 5). The Bardeen-Cooper-Schrieffer (BCS) theory highlights the role of these Cooper Pairs in minimizing electron-atom collisions, a major cause of electrical resistance (Bardeen et al., 1957). The joint achievement of J. Georg Bednorz and K. Alex Müller from IBM also pushed the field forward. Through their years of testing ceramic compounds, they were able to develop, test, and observe superconductivity in a brittle ceramic compound (BaxLa5−xCu5O5(3−y)) at 30 Kelvin (-243.15 degrees Celsius).
Though far below the conventional 73 K threshold, 30 Kelvin was a remarkably high temperature and a significant breakthrough for experimental scientists in the 1970s and 1980s, because before this time no material had been superconductive at a temperature as high as 30 K (Bednorz & Müller, 1988). Records in this field are being broken continuously as new, more complex superconductors are discovered. Moreover, more suitable candidates that display superconductivity are also being discovered. More recently, researchers at the University of Rochester discovered another candidate by the name of carbonaceous hydrogen sulphide,
Figure 1: The positive charges represent cations (parent metal ions), and the negative charges represent free electrons within a metallic lattice. These free electrons can move easily throughout the lattice structure, making it easy for a metal to transport electric charge. Source: Wikimedia Commons
which currently holds the highest critical temperature at 15 degrees Celsius (Snider et al., 2020). In the case of cuprates and iron compounds, technical difficulties such as achieving the right elemental ratios during the synthesis of crystals present hurdles, both in time and efficacy. Beyond these discoveries, huge strides are being made to realize superconductors at even higher temperatures. Many elements and compounds, including cuprates, graphene, metallic hydrogen, hydrates, and hydrides, are being tested for higher-temperature superconductivity.
Conductivity and Resistance As mentioned earlier, electrical conductivity is the measure of the ease with which electric charge can pass through a material. Materials that conduct electricity are called ‘conductors,’ and those which do not conduct electricity are called ‘non-conductors’ or ‘insulators’ (Lovatt & Shercliff, n.d.). Materials that lie between the conducting and insulating extremes are called semiconductors; examples include silicon, germanium, and gallium arsenide (Petruzzello, 2020). The conductivity of a particular material tells us how well electrical current flows through it, and in most cases the best conductors are metals. Examples of excellent metallic conductors include silver, gold, aluminum, copper, and iron; poor conductors include lead, tungsten, and bismuth.
"Electrical conductivity is the measure of the ease with which electric charge can pass through a material. Materials that conduct electricity are called ‘conductors,’ and those which do not conduct electricity are called ‘non-conductors’ or ‘insulators’"
Metals conduct electricity due to their structure. They are often composed of closely packed atoms that form crystal lattices with a high degree of symmetry. The free-electron theory explains the resulting high conductivity: it asserts that the atoms in the metallic lattice have lost their valence electrons (those in the outermost electron shell). The valence electrons are free
to dissociate from their parent atoms and move around the lattice, leading to the formation of the famous ‘sea’ of electrons. These electrons are the ones responsible for carrying charges around the metal and producing a flowing electric current. Insulators do not possess these highly mobile electrons and thus are not good conduits of electric current (Blaber & Shrestha, 2014).
Figure 2: Running current through a material with resistance results in energy loss. In this picture, energy is lost as heat and light. Source: Wikimedia Commons
"Condensed matter physicists have examined the correlation between temperature and electrical resistance and have noted an almost proportional relationship between the two variables; that is, the increase in temperature results in an increase in resistance, but only up to a certain level."
Electric current, even in metals, does not flow unimpeded. Electrical resistance, the measure of the opposition to the flow of current, arises from the very crystalline metal lattice that makes conduction possible. When the mobile electrons in the lattice carry charges through the conductor, they encounter their parent ions in the metal lattice (from which the electrons have broken free), which impede their motion. This, in turn, reduces the electrical conductivity of that particular conductor. The higher the resistance of a particular conductor, the lower the conductivity, and vice versa. Condensed matter physicists have examined the correlation between temperature and electrical resistance and have noted an almost proportional relationship between the two variables; that is, an increase in temperature results in an increase in resistance, but only up to a certain level (Elert, n.d.). And while 19th-century physicists thought that the plot of resistance vs. temperature would be linear for all temperatures, after Heike Onnes first liquefied helium in 1908, it was discovered that the electrical resistance of mercury completely disappeared at temperatures close to 4 Kelvin (Tretkoff, 2006). Why the relationship between resistance and temperature? A higher temperature means that the parent ions in the metal have more kinetic energy, vibrating more extensively and engaging in a higher number of collisions with
Figure 3: A transmission line connected to the National Grid System. National grid systems all over the world lose a lot of energy due to the electrical resistance of the cables carrying electrical current. A suitable superconductor could help reduce these energy losses. Source: Wikimedia Commons
electrons (“New World Encyclopedia,” n.d.). This is not entirely a negative phenomenon, as electrical resistance has certain technological applications (Gregersen, 2020). Because the tungsten filament in certain light bulbs has a high electrical resistance, the filament gets very hot and glows brightly when electric current passes through it – it is resistance that allows buildings to be illuminated even in the dead of night (McGrayne et al., 2020). According to Jordan Wirfs-Brock, a data journalist at the University of Colorado Boulder, about 69 trillion British Thermal Units (BTU) of energy were lost in 2013 just by transmitting electricity through the transmission and distribution grid in the USA, which utilizes aluminum conductor steel-reinforced (ACSR) cable, one of the best conductors of electricity (Wirfs-Brock, 2015; ElProCus, 2020). But with the study of superconductors there doesn’t have to be this much energy loss; if the right materials are used, wasted energy can be recovered and the grid can be made more energy efficient (African Development Bank Group, n.d.).
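The ideas above can be sketched numerically. The snippet below uses the standard linear approximation for a normal metal's resistance, R(T) = R_ref(1 + α(T − T_ref)), together with the Joule-heating formula P = I²R; the resistance, current, temperature-coefficient, and critical-temperature values are illustrative assumptions for a hypothetical transmission line, not figures from this article.

```python
# Sketch: why resistance rises with temperature and why grid losses follow.
# A normal metal obeys R(T) = R_ref * (1 + alpha * (T - T_ref)) over moderate
# temperature ranges; a superconductor drops to R = 0 below its critical
# temperature. All numbers here are illustrative assumptions.

def resistance(t_kelvin, r_ref=10.0, alpha=0.004, t_ref=293.0, t_critical=None):
    """Resistance in ohms at temperature t_kelvin.

    If t_critical is given and t_kelvin is below it, the material is
    treated as superconducting (zero resistance).
    """
    if t_critical is not None and t_kelvin < t_critical:
        return 0.0
    return r_ref * (1.0 + alpha * (t_kelvin - t_ref))

def joule_loss_watts(current_amps, resistance_ohms):
    """Power dissipated as heat in a conductor: P = I^2 * R."""
    return current_amps ** 2 * resistance_ohms

# A hypothetical line carrying 500 A:
r_warm = resistance(293.0)                  # ordinary conductor at room temperature
r_cold = resistance(77.0, t_critical=90.0)  # HTS cooled with liquid nitrogen

print(joule_loss_watts(500.0, r_warm))  # megawatts lost as heat
print(joule_loss_watts(500.0, r_cold))  # 0.0 -- no resistive loss
```

The quadratic dependence on current is why even a modest resistance wastes enormous power at grid scale, and why a zero-resistance conductor eliminates that loss entirely.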
Types of Superconductors and the Meissner Effect The Meissner Effect explores the relationship between different categories of superconductors and their response to applied external magnetic fields. The Meissner Effect is a phenomenon whereby, below its critical temperature, a superconductor expels externally applied magnetic fields from its interior (Diebner, 2016). Diamagnetism is a phenomenon that causes materials to line up at right angles to a nonuniform magnetic field in which they are placed. It is prominent in materials such as rare gases (Gregersen et al., 2015). A type-I superconductor, also called a soft superconductor, will expel the external magnetic field completely when it is superconducting – it is then said to be in perfect diamagnetism. Examples of type-I superconductors include tin, zinc, and lead; they are mostly pure metals. By contrast, a type-II superconductor, or a hard superconductor, only portrays a partial Meissner Effect when external magnetic fields pass through certain regions. Niobium-titanium is an example of a type-II superconductor (Diebner, 2016). Type-I superconductors do not maintain the non-permeated state at higher temperatures, and therefore the Meissner Effect in these superconductors is limited to lower temperatures. As a result, type-II superconductors are preferred and are widely used even in enormous devices such as particle accelerators. Unfortunately, the mechanisms underlying this class of superconductors are currently not known and can only be conjectured. Understanding how these highly efficient type-II superconductors work will be crucial to help scientists solve some of the problems that block the efficacy of type-I superconductors (Hyperphysics, n.d.).
The Meissner Effect, through the different types of superconductors, has important applications in the health and biomedical engineering sector. Magnetic Resonance Imaging (MRI) scanners function by locating the precession (the slow rotation of the axis of an object around another axis due to exerted torques) of the magnetization of atomic nuclei within the human body. The greater the magnetization, the more efficient and clearer the MRI scans are. Very large superconducting magnets are used to generate enormous magnetic fields and enhance the magnetization of nuclei, since the zero-resistance property of superconductors allows more current to flow and therefore facilitates a greater magnetization effect. If another material such as copper were used as an electromagnet, very large electrical resistance would be generated in the copper coils and would thus lead to energy loss through thermal dissipation or melting of the coils (Alloul et al., n.d.). Particle colliders, such as the Large Hadron Collider, also employ superconducting magnets to allow particles to speed up inside the 27-km tunnel to near-light speeds without losing any energy (Gourlay, 2017). These superconducting magnets are especially important in guiding the particles through corners as they travel at high speeds.
Finally, the Meissner Effect is useful in that it allows for magnetic levitation (maglev) trains. Maglev trains use superconducting magnets, which, apart from creating their own magnetic fields, also use the Meissner Effect to levitate over the tracks on which they are laid. The non-permeability of the superconductors towards the magnetic field and the fact that magnetic poles repel are what make maglev trains possible (Whyte, 2016).
Cooper Pairs and the Problem of Temperature
Figure 4: Levitation of a magnet on top of a superconductor of cuprate type YBa2Cu3O7 cooled at -196°C. A magnet can levitate above a superconductor due to the Meissner Effect. If this phenomenon can be harnessed on a large scale, it could mean a revolution for our transport sector as magnetic levitation trains could be made widely accessible. Source: Wikimedia Commons, Julien Bobroff and Frederic Bouquet
Once again considering conductivity, recall that the metal lattice in a metallic conductor contains atoms that are always in random motion, vibrating around their average positions. This random motion impedes the movement of electrons and as a result reduces the amount of electric current that flows inside the conductor (Galsin, 2002). Temperature has the most significant effect on these random vibrations and hence is an important factor in the conductivity of a material (Barron & Ashton, n.d.). As such, when the temperature is reduced close to absolute zero, the random motion of the atoms in the metal lattice almost comes to a halt and the vibration of the atoms about their mean positions decreases (Delpierre & Sewell, 2005). When an electron approaches the metal lattice, it encounters a multitude of positively charged atoms that have lost their valence electrons. Due to electrostatic attraction, the negatively charged electron and the positively charged atoms are drawn toward each other; as the electron moves through the lattice, the surrounding positive charges are pulled in around the electron to form an electron-phonon disturbance. This deformation moves through
the lattice and becomes even more positively charged due to the conglomeration of positive atoms around the first electron. When a second electron moves through the lattice, it is strongly attracted to the electron-phonon disturbance because the disturbance contains atoms that are more positively charged (Bardeen et al., 1957). The first and second electrons then form what is known as a Cooper pair, which spans around 100 to 1000 atoms in length – a distance known as the coherence length (Kadin, 2007).
Figure 5: Cooper pairs within an atomic lattice (not to scale). Cooper pairs are pairs of electrons in a superconductor that are attractively bound and have equal and opposite momentum and spin (Merriam-Webster, n.d.). Source: Wikimedia Commons
The connections that make up a Cooper pair are relatively weak and would break if the conducting material were exposed to even slightly higher temperatures. Thus, the very cold environment provided by liquid helium is needed to sustain a Cooper pair (Hyperphysics, n.d.). The number of Cooper pairs across the metal lattice multiplies over time. Due to their vast length, the Cooper pairs overlap to form many entanglements, which strengthen their "weak" connections. Hence, the set of all Cooper pairs in the system acts as one large, essentially unbreakable unit. If one Cooper pair connection is broken, the freed electrons can instantly form another Cooper pair with other free electrons and rejoin the vast entangled system (Superconductivity, n.d.). The Cooper pairs then move through the conducting material unimpeded, mainly because the vibrations of the positively charged atoms are at a minimum (Superconductivity, n.d.). Fermions are subatomic particles that, like electrons, have half-integer spins (e.g., 1/2, 3/2, 5/2), while bosons are subatomic particles with integer spins (e.g., 0, 1, 2). Fermions and bosons fall under a larger umbrella of subatomic particles called 'quantons,' which refers to particles that exhibit
Figure 6: Historical plaque on the campus of the University of Illinois at Urbana-Champaign (UIUC) commemorating John Bardeen's role in developing a theory of superconductivity. Source: Wikimedia Commons
both wave-like and particle-like characteristics (Moore, 2003). What makes Cooper pairs interesting from a particle physics perspective is that they allow the "transformation" of fermions into more advantageous bosons. Electrons are fermions and must obey the Pauli exclusion principle, which states that no two fermions can share the same quantum state or energy level (Keilmann & García-Ripoll, 2008). However, this restriction does not apply to bosons. When two electrons pair up into a Cooper pair, they act like a boson and can thus share the same quantum state. This bosonic characteristic of Cooper pairs has been tested by Samuelsson and Büttiker (de Llano et al., 2006). The bosonic properties of Cooper pairs minimize the collisions that happen across the conducting material, allowing for zero resistance and thus the phenomenon of superconductivity. Essentially, without Cooper pairs, superconductivity would not be possible. Furthermore, experimental evidence supporting the BCS theory has cemented the notion of Cooper pairs as an integral part of superconductivity (Keilmann & García-Ripoll, 2008). Despite this great feat, superconductivity is rendered impractical for real-life applications when the sub-zero temperatures required are taken into account. Quantum computers that utilize superconductors, for example, need to be kept at very low temperatures. IBM's quantum computers are kept at near absolute zero (0.015 K / −459.643 °F / −273.135 °C), a condition maintained by huge vats of liquefied helium (Fisher, 2019). Similarly, the Large Hadron Collider (which also uses superconducting magnets) is kept at a temperature of 1.9 Kelvin (Rossi, 2003). And the superconducting magnets used in magnetic levitation (maglev) trains are currently kept at approximately 6 Kelvin (Whyte, 2016).
Maintaining such low temperatures is expensive, which makes superconductivity applications accessible mainly to large companies and government-funded research.
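As a sanity check on the cryogenic figures quoted above, the conversions are simple arithmetic (a minimal sketch; the Kelvin values are the ones given in the text, and the conversion formulas are the standard ones):

```python
# Convert the operating temperatures quoted in the article from Kelvin
# to Celsius and Fahrenheit: C = K - 273.15, F = C * 9/5 + 32.
def kelvin_to_celsius(k: float) -> float:
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    return kelvin_to_celsius(k) * 9 / 5 + 32

for name, kelvin in [("IBM quantum computer", 0.015),
                     ("LHC superconducting magnets", 1.9),
                     ("Maglev superconducting magnets", 6.0)]:
    print(f"{name}: {kelvin} K = {kelvin_to_celsius(kelvin):.3f} °C "
          f"= {kelvin_to_fahrenheit(kelvin):.3f} °F")
```

Running this confirms the figures in the text: 0.015 K works out to −273.135 °C and −459.643 °F.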
Superconductivity in Cuprates

Cuprates are compounds with a layered, alternating structure in which two-dimensional superconducting copper (II) oxide planes are interleaved with layers of other metals or elements that act as electron donors and stabilize the lattice. In most superconducting models involving cuprates, the superconducting activity occurs at the juncture between the oxide layers and those of the doped substance. The arrangement and layering pattern of the copper (II) oxide layers can affect the temperature at which superconductivity is attained; the greater the number of layers, the higher the critical temperature. Unlike conventional superconductors, whose superconductivity is based on electron-phonon interactions, superconductivity in cuprates depends on a more exotic form of electronic transmission involving electron-electron interactions. Most cuprates (which are type-II superconductors) allow magnetic penetration in quantized fluxes and can consequently sustain stronger magnetic fields (Diebner, 2016). Because cuprates are made from multiple elements with a variety of physical and chemical properties, theoretical models to explain the mechanisms of superconductivity in cuprates have proven elusive. Currently, two theories exist, although neither is fully substantiated: the Weak Coupling Theory and the Interlayer Coupling Model. The former proposes that the unconventional superconductivity emerges as a result of antiferromagnetic spin fluctuations in a doped system, while the latter proposes that it emerges as an intrinsic, self-enhancing BCS-type superconductivity within a layered structure (Chakravarty et al., 1993; Monthoux et al., 1992). Further in-depth research is therefore needed to better understand the mechanisms of unconventional superconductivity.

Superconductivity in Graphene

Graphene is an allotrope (a distinct structural form of a particular element) of carbon consisting of a single, one-atom-thick hexagonal layer of carbon atoms (see Figure 8). In 2018, researchers made a subtle discovery about graphene. When two layers of atom-thick graphene are stacked together, they form a bilayer that is twisted at an angle of 1.1° along its main axis. When a voltage is applied across this twisted matrix, the resulting increase in electron density morphs the graphene bilayer into a superconductor. Paradoxically, a marginal fluctuation in the electron density turns the graphene into an insulator that is impervious to electron flow. For this reason, the angle of twisting has memorably been referred to as the "magic angle." Both the insulating and superconducting phases were attained at very low temperatures approaching absolute zero, although under special circumstances twisted bilayer graphene (TBG) can exhibit a critical temperature of up to 3 Kelvin (Cao et al., 2018). Achieving the magic angle requires such dexterity that if thousands of samples are subjected to the mechanics of twistronics (manipulating the electronic properties of two-dimensional layered structures by twisting them through small angles), only a few would emerge with superconducting ability (Carr et al., 2016; Christos et al., 2020). This close transition between superconducting and insulating phases has spurred great interest among researchers, as it could potentially aid in the understanding of higher-temperature superconducting mechanisms. The direct relationship also offers new insights into geometric topology and electrical conductivity in nature. Recently, researchers at Caltech were able to dope the graphene bilayers with selenium and tungsten.
The researchers achieved superconductivity at a relatively wider angle than the magic angle, and the superconductivity was also more stable: the insulator state was not triggered, even after varying the electron density (Arora et al., 2020).
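To give a feel for the length scales involved, the period of the moiré superlattice formed by a small twist can be estimated from the standard small-angle relation L ≈ a / (2·sin(θ/2)). This sketch is illustrative and not from the article itself; the lattice constant a ≈ 0.246 nm is the textbook value for graphene:

```python
import math

# Estimate the moiré superlattice period of twisted bilayer graphene.
# L = a / (2 * sin(theta / 2)), with a = 0.246 nm (graphene lattice constant).
def moire_period_nm(twist_deg: float, a_nm: float = 0.246) -> float:
    theta = math.radians(twist_deg)
    return a_nm / (2 * math.sin(theta / 2))

magic = moire_period_nm(1.1)  # the "magic angle" discussed above
print(f"Moiré period at 1.1°: {magic:.1f} nm")
```

At the 1.1° magic angle this gives a period of roughly 13 nm – about fifty times the atomic spacing – which is why such tiny twists reshape the bilayer's electronic behavior so dramatically.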
Figure 7: A sample of a BSCCO crystal, a cuprate, which is currently one of the most practical high-temperature superconductors. Source: Wikimedia Commons
Figure 8: A hexagonal layer of a one-atom-thick graphene sheet. Source: Wikimedia Commons
Hydrogen Superconductivity and the Problem of Pressure
As early as 1968, scientists such as Neil Ashcroft proposed that if hydrogen were modified or converted into a metal, it would exhibit high-temperature superconductivity (Ashcroft, 1968). Scientists have since investigated this property in hydrogen and, in particular, in hydride molecules. One reason hydrogen-rich compounds are good candidates for superconductive materials is that hydrogen allows for a higher degree of electron-phonon disturbance, which results in an increased number of Cooper pairs and hence an increased flow of electrical current at zero resistance (Snider et al., 2020). Increased local disturbance also augments the Cooper pair bonds by strengthening them further. This reduces the chance of the quantum bonds between the pairs breaking, ensuring higher electrical current flow (Snider et al., 2020). Hydrogen is the lightest and most abundant element, and it naturally exists as a molecular gas. In its molecular state, hydrogen is not of much use in superconductivity experiments (Jolly, 2020). To make it useful for conductivity applications, molecular hydrogen must be converted into metallic hydrogen, and the only way to metallize molecular hydrogen is through extreme pressure (Dias & Silvera, 2017). It is not enough, however, to only include hydrogen in the mix. Hydrides, the negatively
charged analogs of hydrogen that are commonly conjugated to other atoms, prove to be even more formidable candidates than pure metallic hydrogen when it comes to superconductivity. The more hydrogen atoms a particular compound contains, the greater the strength of the collective bonds and hence the higher the chance of superconductivity occurring. As a result, many scientists have been experimenting with a wide array of hydrides, observing their critical temperatures to see if any exhibit high-temperature superconductivity (Huang et al., 2019). One promising hydride is lanthanum hydride (LaH10), which has attained superconductivity at 250 Kelvin, or −23.15 degrees Celsius (Drozdov et al., 2019). Other "covalent super-hydrides" have been doped with different elements to induce stronger bonds that enhance the formation of even more Cooper pairs than normal hydrides (Pickard et al., 2020). An example of the latter is the carbonaceous super-hydrides, which have been the most promising candidates thus far, achieving superconductivity at almost 15 degrees Celsius (Snider et al., 2020). In their initial superconductivity experiments, Eremets and his team of scientists from the Max Planck Institute for Chemistry investigated the superconductive properties of hydrogen at extreme pressures of around 440 GPa (compared to the pressure inside the earth's core
at 360 GPa) (Eremets Group, n.d.; Evers, 2015). Comparatively, in studying carbonaceous super-hydrides, Snider's team investigated superconductivity at pressures around 275 GPa. To do so, they employed pocket-sized Diamond Anvil Cells (DACs), which compress their target hydrides into samples on the order of micrometers. The research team reported superconductivity at a maximum critical temperature of about 15 degrees Celsius at around 267 GPa (Snider et al., 2020). Despite the achievement of room-temperature superconductivity, the hydride can only be made into a superconductor at such extreme pressures, which can only be reached inside DACs and are unfeasible for practical applications in larger-scale models. This necessity for extreme pressures has limited the efficacy of current superconductors, even as higher-temperature superconductivity is being attained. A dilemma exists in that whatever benefit may be derived from newly discovered superconductors must be balanced against the high cost of maintaining them in a high-pressure or low-temperature state.
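To put these pressures in everyday units, a quick conversion helps (a minimal sketch; the GPa figures are the ones quoted above, and 1 atm = 101,325 Pa is the standard definition):

```python
# Express the article's experimental pressures in atmospheres.
ATM_PA = 101_325  # pascals per standard atmosphere

def gpa_to_atm(gpa: float) -> float:
    return gpa * 1e9 / ATM_PA

earth_core = gpa_to_atm(360)  # pressure at Earth's core
snider = gpa_to_atm(267)      # carbonaceous sulfur hydride experiments
eremets = gpa_to_atm(440)     # Eremets group hydrogen experiments

print(f"267 GPa ≈ {snider / 1e6:.1f} million atmospheres")
```

The 267 GPa used by Snider's team corresponds to roughly 2.6 million atmospheres, and the Eremets group's 440 GPa exceeds even the pressure at the Earth's core, which illustrates why diamond anvil cells are the only way to reach these conditions.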
Conclusions

Superconductivity has the potential to transform the world as we know it. The prospect of utilizing high-temperature superconductors could help solve the energy crisis by using suitable superconductors as electric transmission cables to minimize heat loss. Furthermore, the global race to construct quantum computers depends on efforts to find suitable superconductors. The transport sector could also be transformed if maglevs were able to operate at higher temperatures, and advancements in maintaining a proper Meissner Effect in type-II superconductors could likewise prove useful in manufacturing maglevs. However, the field of condensed matter physics – which deals with superconductors – has much research to do before these practical achievements are realized. On a wider scale, superconductors are still a long way from making big impacts on the world, despite having been discovered more than a century ago. Condensed matter physicists need to find a way to offset the problems of extremely low temperatures and extremely high pressures to make the dream of high-temperature superconductivity feasible. But, given the rate at which discoveries are being
made now, these kinds of findings may just be on the horizon.

References

ACSR Conductor: Types, Properties and Its Advantages. (2020, August 9). ElProCus – Electronic Projects for Engineering Students. https://www.elprocus.com/what-is-an-acsr-conductor-types-and-its-advantages/

Alloul, H., Antoine, C., & Balibar, S. (n.d.). Magnetic Resonance Imagery. Supraconductivity. Retrieved March 8, 2021, from http://www.supraconductivite.fr/en/index.php?p=applications-medical-irm-more

Ashcroft, N. W. (1968). Metallic Hydrogen: A High-Temperature Superconductor? Physical Review Letters, 21(26), 1748–1749. https://doi.org/10.1103/PhysRevLett.21.1748

Bank, A. D. (2019, June 7). Light Up and Power Africa – A New Deal on Energy for Africa. African Development Bank Group. https://www.afdb.org/en/the-high-5/light-up-and-power-africa-%E2%80%93-a-new-deal-on-energy-for-africa

Bardeen, J., Cooper, L., & Schrieffer, J. (1957). Theory of Superconductivity. Physical Review, 108(5).

Barron, J., & Ashton, C. (n.d.). The Effect of Temperature on Conductivity Measurement. REAGECON, 3, 5.

BCS Superconductivity Theory. (n.d.).

Bednorz, J., & Müller, K. (1988). Perovskite-type oxides – The new approach to high-Tc superconductivity. 60(3).

Blaber, M., & Shrestha, B. (2014, November 18). 7.6: Metals, Nonmetals, and Metalloids. Chemistry LibreTexts. https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_Chemistry_-_The_Central_Science_(Brown_et_al.)/07._Periodic_Properties_of_the_Elements/7.6%3A_Metals_Nonmetals_and_Metalloids

Carr, S., Massatt, D., Fang, S., Cazeaux, P., Luskin, M., & Kaxiras, E. (2016). Twistronics: Manipulating the Electronic Properties of Two-dimensional Layered Structures through their Twist Angle. Physical Review B, 95. https://doi.org/10.1103/PhysRevB.95.075420

Cooper Pairs and the BCS Theory of Superconductivity. (n.d.).
Retrieved January 14, 2021, from http://hyperphysics.phy-astr.gsu.edu/hbase/Solids/coop.html

de Llano, M., Sevilla, F. J., & Tapia, S. (2006). Cooper pairs as bosons. International Journal of Modern Physics B, 20(20), 2931–2939. https://doi.org/10.1142/S0217979206034947

Delft, D. V., & Kes, P. (2011). The discovery of superconductivity. American Institute of Physics.

Delpierre, G., & Sewell, B. (2005). Temperature and Molecular Motion. https://www8.physics.utoronto.ca/~jharlow/teaching/everyday06/reading10.pdf

Dias, R. P., & Silvera, I. F. (2017). Observation of the Wigner-Huntington transition to metallic hydrogen. Science, 355(6326), 715–718. https://doi.org/10.1126/science.aal1579
Diebner, A. (2016, July 28). Meissner Effect. Engineering LibreTexts. https://eng.libretexts.org/Bookshelves/Materials_Science/Supplemental_Modules_(Materials_Science)/Magnetic_Properties/Meissner_Effect
Drozdov, A., Kong, P., Minkov, V., Kuzovnikov, M., Mozaffari, S., Balicas, L., Balakirev, F., Prakapenka, V., Greenberg, E., Knyazev, D., Tkacz, M., & Eremets, M. (2019). Superconductivity at 250 K in lanthanum hydride under high pressures. Nature. https://doi.org/10.1038/s41586-019-1201-8
Merriam-Webster. (n.d.). Cooper pair. In Merriam-Webster Dictionary.
Elert, G. (n.d.). Electric Resistance. The Physics Hypertextbook. Retrieved January 14, 2021, from https://physics.info/electric-resistance/
OpenStax College. (n.d.). High-temperature Superconductors | Physics. Lumen Physics. Retrieved March 8, 2021, from https://courses.lumenlearning.com/physics/chapter/34-6-high-temperature-superconductors/
Emerson. (2010, January). Theory and Application of Conductivity. https://www.emerson.com/documents/automation/application-data-sheet-theory-application-of-conductivity-rosemount-en-68442.pdf

Eremets, M. I. (n.d.). Eremets Group. Retrieved January 14, 2021, from https://www.mpic.de/3538358/Eremets_Group

Evers, J. (2015, August 17). Core. National Geographic Society. https://www.nationalgeographic.org/encyclopedia/core/

Fisher, C. (2019, April 2). IBM | What is Quantum Computing? https://www.ibm.com/quantum-computing/learn/what-is-quantum-computing

Galsin, J. S. (2002). Physical Effects of Impurities in Metals. In J. S. Galsin, Impurity Scattering in Metallic Alloys (pp. 93–123). Springer US. https://doi.org/10.1007/978-1-4615-1241-7_5

Ginsberg, D. (2018, February 13). Superconductivity – Higher-temperature superconductivity. Encyclopedia Britannica. https://www.britannica.com/science/superconductivity

Gourlay, S. (2017, August 11). Powering the field forward. CERN Courier. https://cerncourier.com/a/powering-the-field-forward/

Gregersen, E. (2015, November 24). Diamagnetism. Encyclopedia Britannica. https://www.britannica.com/science/diamagnetism

Gregersen, E. (2020, October 8). Resistance | electronics. Encyclopedia Britannica. https://www.britannica.com/technology/resistance-electronics

Huang, X., Wang, X., Duan, D., Sundqvist, B., Li, X., Huang, Y., Yu, H., Li, F., Zhou, Q., Liu, B., & Cui, T. (2019). High-temperature superconductivity in sulfur hydride evidenced by alternating-current magnetic susceptibility. https://doi.org/10.1093/nsr/nwz061

Jolly, W. (2020, June 1). Hydrogen | Properties, Uses, & Facts. Encyclopedia Britannica. https://www.britannica.com/science/hydrogen

Kadin, A. M. (2007). Spatial Structure of the Cooper Pair. Journal of Superconductivity and Novel Magnetism, 20(4), 285–292. https://doi.org/10.1007/s10948-006-0198-z
Meissner, W., & Ochsenfeld, R. (1933). Ein neuer Effekt bei Eintritt der Supraleitfähigkeit. Die Naturwissenschaften, 21(44), 787–788. https://doi.org/10.1007/BF01504252
Moore, T. (2003). Six Ideas That Shaped Physics (2nd ed.). http://www.physics.pomona.edu/sixideas/
Ouboter, R. de B. (1997, March). Heike Kamerlingh Onnes's Discovery of Superconductivity. https://web.njit.edu/~tyson/supercon_papers/Heike_Kamerlingh_Onnes_Discover.pdf

Petruzzello, M. (2020, November 5). Semiconductor | Definition, Examples, Types, Materials, Devices, & Facts. Encyclopedia Britannica. https://www.britannica.com/science/semiconductor

Pickard, C. J., Errea, I., & Eremets, M. I. (2020). Superconducting Hydrides Under Pressure. Annual Review of Condensed Matter Physics, 11(1), 57–76. https://doi.org/10.1146/annurev-conmatphys-031218-013413

Rossi, L. (2003). The LHC Superconducting Magnets. European Organization for Nuclear Research, LHC Project Report 660, 6.

Snider, E., Dasenbrock-Gammon, N., McBride, R., Debessai, M., Vindana, H., Vencatasamy, K., Lawler, K. V., Salamat, A., & Dias, R. P. (2020). Room-temperature superconductivity in a carbonaceous sulfur hydride. Nature, 586(7829), 373–377. https://doi.org/10.1038/s41586-020-2801-z

Superconductivity. (n.d.). Retrieved January 14, 2021, from http://hyperphysics.phy-astr.gsu.edu/hbase/Solids/scond.html

The Nobel Prize. (n.d.). The Nobel Prize in Physics 1913: Heike Kamerlingh Onnes. NobelPrize.Org. Retrieved March 15, 2021, from https://www.nobelprize.org/prizes/physics/1913/onnes/facts/

Tretkoff, E. (2006, January). This Month in Physics History: January 1938: Discovery of Superfluidity. http://www.aps.org/publications/apsnews/200601/history.cfm

Whyte, C. (2016, June 14). How Maglev Works. Department of Energy. https://www.energy.gov/articles/how-maglev-works

Wirfs-Brock, J. (2015, November 6). Lost In Transmission: How Much Electricity Disappears Between A Power Plant And Your Plug? Inside Energy. http://insideenergy.org/2015/11/06/lost-in-transmission-how-much-electricity-disappears-between-a-power-plant-and-your-plug/
Keilmann, T., & García-Ripoll, J. J. (2008). Dynamical Creation of Bosonic Cooper-Like Pairs. Physical Review Letters, 100(11), 110406. https://doi.org/10.1103/PhysRevLett.100.110406

Lovatt, A., & Shercliff, H. (n.d.). Conductivity. About Conductivity. Retrieved January 14, 2021, from https://www.lehigh.edu/~amb4/wbi/kwardlow/conductivity.htm

McGrayne, S., Suckling, E., Kashy, E., & Robinson, F. (2020, November 12). Electricity: Additional Information. Encyclopedia Britannica. https://www.britannica.com/science/electricity/additional-info
An Overview of the Nuclear Industry

BY DEV KAPADIA '23

Cover: Nuclear power plants, such as those shown above, have fallen out of favor due to the high-impact negative effects of reactor meltdown incidents. Consequently, the production of these reactors has been severely curtailed, with nuclear power plant licensing actually stopping in the United States for almost three decades after the Three Mile Island accident in 1979. However, new improvements in the safety and energy efficiency of these plants could put them back in the energy conversation as governments frantically look for an alternative to fossil fuels. Source: Libreshot
Introduction Of the many problems facing our world today, one of the most troubling is climate change. It is well-documented that the Earth has been warming at a shocking rate; the average surface temperature has increased by around 1.8°F since 1880. One key cause of this has been the rising amount of carbon dioxide released into the atmosphere, trapping heat from the sun inside the atmosphere and leading to a phenomenon called the “greenhouse effect.” Since the beginning of the industrial revolution in 1800, the atmospheric CO2 concentration has increased almost 50% from 280 parts per million to 410 parts per million today. NASA has determined that the probability is greater than 95% that this rise in carbon dioxide concentration is a result of human activity (NASA, 2020). There are many factors that can affect the amount of carbon dioxide released into the
environment, but one of the most prominent is the burning of fossil fuels. Fossil fuels are fuels made from the buried, fossilized remains of plants and animals. These fuels include natural gas, coal, and oil, as well as other substances commonly used for energy production. Fossil fuels have high carbon content and release large amounts of carbon dioxide when burned, damaging the environment and contributing to the warming of the Earth. The practice of burning fossil fuels became significant during the industrial revolution, when factory production and its energy needs sharply increased and technology was invented to make the burning of fossil fuels practical. It was not until a few decades ago – with issues of the shrinking ozone layer, rising surface temperatures, and more extreme weather becoming more apparent – that world leaders began paying significant attention to the damage done by burning fossil fuels. Now, governments across the world have
Figure 1: As shown above, nuclear power plant development, particularly in the United States, took a massive cut in the years following the Three Mile Island incident in 1979. The industry took another hit after the Chernobyl accident in 1986, and once more in 2011 in the aftermath of the Fukushima incident. Because of these accidents, it may take time before this energy source returns to public favor and its benefits can be seen again. Source: Wikimedia Commons
started to prepare for the shift to renewable energy sources, like solar and wind, as well as non-renewable, low-carbon solutions, such as geothermal and nuclear. In fact, even the Crown Prince of Saudi Arabia, Mohammed bin Salman, has started to shift Saudi Arabia – a country almost defined by its fossil fuel production – away from an economy dominated by oil and toward more diversified energy interests (Celebi et al., 2016). Despite awareness of the damage that fossil fuels can cause, carbon emissions have barely shifted from their initial trajectory. From 1988 to 2018, annual CO2 emissions increased from 21.68 billion tons per year to 36.15 billion tons per year. In absolute terms, this is a larger increase than the rise from 8 billion to 21.68 billion tons per year between 1958 and 1988 (Ritchie & Roser, n.d.). Clearly, there must be some reason why the shift away from fossil fuels has not resulted in a sharp slowdown of carbon emissions. One possibility is the simultaneous shift away from the nuclear industry. Nuclear energy is a low-carbon-emission energy source that produces high amounts of energy relative to the mass of uranium used in fuel cells. This type of energy production has been met with extensive negative press resulting from global accidents with detrimental environmental impact, the high cost of nuclear power plants, and the extremely long timeline for plant construction projects. Ever since the Three Mile Island incident in the United States in 1979, the Chernobyl accident in Ukraine in 1986, and the Fukushima accident in 2011, where small failures in
operations caused reactor meltdowns that affected the environment and wildlife globally, governments have been wary of further incentivizing the spread of nuclear energy, favoring safer options like wind and solar despite advancements in nuclear technology (MIT Energy Initiative, 2018). Though the Three Mile Island meltdown was caused by a coolant failure that released radioactive gas, no deaths have been directly linked to the meltdown; however, there have been higher rates of cancer in the surrounding areas. The Chernobyl disaster, on the other hand, directly caused the deaths of 31 people, and it is expected that another 4,000 may die as a result of radioactive exposure over time; similarly, the Fukushima accident has led to the deaths of over 2,000 people, with more expected in the coming years. These horrible incidents have prompted the nuclear industry to tighten restrictions and increase safety mechanisms for these sorts of reactors. They have also spurred the development of novel technologies. This article is both a technical and an economic review of the current state of the nuclear industry, a field that has understandably been shunned relative to lower-risk but less reliable and/or less productive alternatives like wind, solar, geothermal, and even fossil fuels (Celebi et al., 2016).
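The two quantitative claims in the introduction – that atmospheric CO2 has risen "almost 50%" since 1800, and that the 1988–2018 emissions increase exceeds the 1958–1988 increase – can be checked directly from the figures quoted (a minimal arithmetic sketch using only the article's numbers):

```python
# Concentration claim: 280 ppm (circa 1800) to 410 ppm (today).
ppm_1800, ppm_today = 280, 410
pct_rise = (ppm_today - ppm_1800) / ppm_1800 * 100
print(f"CO2 concentration rise: {pct_rise:.1f}%")  # ~46%, i.e. "almost 50%"

# Emissions claim: compare the two 30-year increases (billion tons CO2/year).
rise_1958_1988 = 21.68 - 8.0
rise_1988_2018 = 36.15 - 21.68
print(f"1958-1988 increase: {rise_1958_1988:.2f}; "
      f"1988-2018 increase: {rise_1988_2018:.2f}")
```

Both claims hold: the concentration rise works out to about 46%, and the later 30-year emissions increase (14.47 billion tons/year) is indeed larger than the earlier one (13.68 billion tons/year).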
The Nuclear Power Process

The cycle of energy production for nuclear power plants begins with sourcing the fuel material. Because nuclear reactors utilize the radioactive decay of atoms to produce energy, an element with a high propensity to decay is
mined for fuel: uranium. Uranium has a highly radioactive isotope that decays readily and releases a large amount of energy during fission, properties that will be explored in the following sections of the article. Uranium ore is mined from the earth and then converted to a form that is suitable for travel. Often, the element is transported as uranium ore concentrate, a solid and more compact form of the material that can be more safely contained than raw ore (NEI, n.d.).
The uranium is transported to the nuclear power site and used for power via a process called ‘nuclear fission.’ Nuclear fission is a form of radioactive decay in which a highly reactive atomic nucleus splits into two smaller, more stable, and lighter nuclei. During this process, high-energy photons called “gamma photons” are released along with free neutrons and a large amount of energy. If these released neutrons come in contact with the nucleus of another U-235 atom, then the same process can occur again and cause a chain reaction, which allows for a large energy release with little input energy (NEI, n.d.). After the uranium is mined and converted into its solid form, it must be converted, enriched, and reconverted into uranium oxide via several different processes and intermediaries. The uranium oxide is eventually converted into ceramic pellets through ‘sintering,’ a process that essentially bakes the uranium at temperatures of over 1400 °C. These pellets are then collected into compact fuel rods and sent on to power generation and burn-up (World Nuclear Association, 2020b). Before discussing how fission is facilitated in the nuclear power process, it is important to note the isotopic specificity of the uranium used for nuclear fission. Natural uranium is found in two common isotopes: U-238 and U-235. U-238 is very stable, with a half-life of about 4,500 million years, and is therefore not a strong candidate for nuclear power generation (since a quickly decaying isotope that subsequently releases more energy is needed). U-235 has a half-life about six times shorter (roughly 700 million years) and is therefore much more radioactive; additionally, it is an extremely heavy atom, allowing a high amount of energy to be released in fission. Although uranium as an element is 100 times more prevalent than silver, U-235, the isotope used for nuclear power generation, composes only about 0.7% of natural uranium (NEI, n.d.).
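The half-life comparison can be made concrete. The decay constant λ = ln(2)/t½ gives the per-atom decay rate, so the ratio of decay rates is just the inverse ratio of the half-lives (a quick sketch using the approximate figures above):

```python
import math

# Half-lives in millions of years, as given in the text (approximate).
half_life_u238 = 4500.0
half_life_u235 = 700.0

# Decay constant lambda = ln(2) / half-life; a shorter half-life means
# a larger decay constant, i.e. more decays per atom per unit time.
lam_u238 = math.log(2) / half_life_u238
lam_u235 = math.log(2) / half_life_u235

# U-235 decays roughly six times as fast per atom as U-238.
print(round(lam_u235 / lam_u238, 1))  # → 6.4
```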
Despite the fact that U-235 is more directly useful for power generation, there are still large amounts of U-238 in the reactor core, and it often constitutes most of the fuel. Although these atoms are not directly useful for energy production, they can produce energy indirectly. U-235 is known as a “fissile” isotope because it can readily undergo fission. U-238 is known as “fertile” because if a neutron strikes a U-238 atom, the atom can absorb the neutron and become a fissile isotope through the process of beta decay. Beta decay occurs when a neutron changes into a proton, thereby increasing the atomic number of the element by 1 while leaving the mass number unchanged. If the atom absorbs the neutron and undergoes two beta decays, a process that can take a few weeks, it becomes plutonium-239. Pu-239, like U-235, is a fissile atom and produces a similar amount of energy (EIA, 2020). Pu-239, though, is not certain to release usable energy. About half of the Pu-239 burns, which contributes about one third of the total nuclear power plant output, but the other half captures a neutron without subsequently undergoing fission, instead becoming Pu-240, the main waste component of used fuel. Therefore, as more nuclear fuel is spent, a higher proportion of Pu-240 can be found. This accumulation of Pu-240 is extremely important because a pressing concern for those fearful of nuclear energy is that used fuel waste could be used for nuclear bombs. As the fission process proceeds, the used fuel contains more and more Pu-240, which has the distinctive characteristic of slow, spontaneous fission. This spontaneous fission is not nearly significant enough to cause a chain reaction in nuclear reactors, and it is the main reason why used fuel cannot be used in nuclear bombs.
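The neutron-capture and beta-decay bookkeeping described above can be sketched as a toy calculation (not a physics simulation): capture raises the mass number by one, each beta decay raises the atomic number by one while leaving the mass number unchanged, and the chain passes through neptunium-239 on the way to plutonium-239:

```python
# Toy nuclide bookkeeping: track (atomic number Z, mass number A).
def capture_neutron(z, a):
    return z, a + 1          # neutron capture: A increases by 1

def beta_decay(z, a):
    return z + 1, a          # beta decay: Z increases by 1, A unchanged

z, a = 92, 238               # start with U-238
z, a = capture_neutron(z, a) # U-239
z, a = beta_decay(z, a)      # Np-239
z, a = beta_decay(z, a)      # Pu-239
print(z, a)  # → 94 239, i.e. plutonium-239
```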
The inclusion of Pu-240 in a nuclear bomb could cause fission to begin before the core implodes on itself, thereby burning up the fissile material prematurely and severely limiting the impact of the bomb. For this reason, nuclear power plants are seen as a way to limit the proliferation of nuclear weapons within countries and to use the radioactive material for good (World Nuclear Association, 2020b). After eighteen to thirty-six months, used fuel is removed from the reactor due to its high proportion of fission fragments. Used fuel, on average, will have about 1.0% U-235 left, 0.6%
fissile plutonium, 95% U-238, and the rest will be fission products. This used fuel is radioactive and is therefore sent for storage to decay, which lowers its radioactivity. After storage, the used fuel is either sent to a reprocessing facility (in which the fragments are removed and stored in liquified form to preserve the uranium and plutonium products) or sent to long-term storage. There are no disposal facilities currently in operation by any country for the waste that has been sent to long-term storage; there is simply not enough need to establish a facility given that most waste goes into reprocessing for new fuel. However, there are talks to install centralized, deep geological repositories in a number of countries, though no plans have been finalized (World Nuclear Association, 2020b).
Evolution of Nuclear Energy Nuclear reactors can be classed into separate “generations” largely accepted by the industry, ranging from Generation I reactors to Generation IV reactors. Each new generation has focused on and improved upon a specific set of features that the previous ones had not addressed, with safety being one of the main focuses of improvement throughout the innovation process. Generation I reactors were developed globally following World War II, in the 1950s and 1960s. These reactors were not actually in operation to produce power; instead, they were early prototypes, producing small amounts of energy exclusively for experimental and research purposes (Goldberg and Rosner, 2011). Generation II reactors are the main population of nuclear reactors in operation today. These reactors began operating in the late 1960s and introduced the now-popular pressurized water reactors (PWR) and boiling water reactors (BWR). These reactors have several internal safety features – both electrical and mechanical – that initiate automatically in the case of failure in certain locations of the reactor, called ‘passive nuclear safety mechanisms.’ However, the majority of safety mechanisms were still active nuclear safety features, designs that required human intervention in order to work properly. Unfortunately, many believed that passive features would either set thresholds too high, such that their effect would do little to minimize the consequences of a meltdown, or too low, triggering in the absence of an accident and hindering the production of energy.
This mindset led to less careful safety systems and greatly contributed to the outcome of the Three Mile Island and Chernobyl disasters (Goldberg and Rosner, 2011). Design and development of Generation III reactors started in response to the Chernobyl nuclear accident in 1986. While incorporating common energy production improvements such as enhanced fuel technology and thermal efficiency, engineers realized the value of having not just active nuclear safety, which had been the main safety feature of previous reactor generations, but also many more passive nuclear safety mechanisms. Passive safety systems were tailored to the specific design of the nuclear reactor, the environment it is in, the size of the plant, and more. This allowed for more effective utilization of these systems in comparison to Generation II reactors. While several countries that are active supporters of nuclear energy have Generation III reactors (including China, Russia, and India), the United States does not, though several designs have been approved by the Nuclear Regulatory Commission (NRC) (Goldberg and Rosner, 2011). After Generation III reactors, the industry turned to what are known as Generation III+ reactors, starting in 1996. Because their designs are similar to those of Generation III reactors, they were not given a full step up in level from their predecessors. Nevertheless, they certainly addressed many of the problems found in Generation III reactors. First, nuclear engineers knew that certain natural phenomena could drive many of the passive safety systems more reliably than engineered mechanisms, which could fail due to electrical failure of the system or due to the human intervention required to trigger steps before the passive mechanisms engaged.
These natural phenomena included gravity, buoyancy, pressure differences, conduction, and heat convection, and they have proven instrumental in the growing acceptance of Generation III+ reactors. Nuclear engineers also made another push to design reactors that could be standardized across different environments and locations. They saw standardization as a feat that would simplify the design, approval, and construction of nuclear power plants, thereby driving down capital costs so that the technology could contend with natural gas on a pure cost basis. Though the engineers did succeed in creating more standardized solutions, they unfortunately did
Figure 2: The above image shows the process of a pressurized water reactor. In the reactor, the fuel heats water that passes through a heat exchanger, so the radioactive coolant never contaminates the water that drives the turbine, leaving that water reusable in future iterations. The heated water is converted to steam, which turns the turbine. The steam is lastly sent to the condenser, where it is condensed back into water through contact with cold water, and then cycles through the process again. Source: Wikimedia Commons
not reach the low levels of cost of other energy sources (Goldberg and Rosner, 2011). Generation IV reactors are still being designed. These reactors are expected to make big strides in safety, sustainability, efficiency, and cost to become competitive not only with fossil fuels but also with renewable energy sources. Designs are still extremely preliminary, and, as of 2021, none is expected to be implemented within the next five years; many preliminary designs, though, utilize alternative cooling methods that allow the reactors to reach extremely high temperatures, creating higher-temperature, and thus higher-energy, reactions. These designs were discussed during the annual Generation IV International Forum (GIF), which includes countries that are committed to the development of Generation IV technology. Countries active in this conference include Australia, Canada, China, France, the United States, Russia, Japan, and the United Kingdom, among others (Goldberg and Rosner, 2011).
Light Water Reactors As stated previously, the vast majority of nuclear plants in operation are Generation II reactors. This is because the accidents that occurred in the non-passive safety systems of Generation II reactors, just as Generation III reactors were being designed, cast a shadow over the nuclear industry and many of the new Generation III designs with it, severely limiting the development of future reactors and leaving those Generation II reactors still in operation. While there are currently many different Generation II
reactor designs in operation, the most common by far are ‘Light Water Reactors.’ The reason for the widespread acceptance of these designs is that they are lower-temperature and lower-power systems, which makes them easier to operate and causes them to produce less fissile material; less fissile material decreases the chance of a spontaneous reaction that can cause incidents (EIA, 2020). There are two different types of Light Water Reactors: the pressurized water reactor (PWR) and the boiling water reactor (BWR). The main difference between the two lies in how they transfer the heat produced from fission to the surrounding water. In the BWR, the heat generated by the fission of uranium is put directly in contact with water so that it turns to steam. In the PWR, the heat generated by uranium fission is sent to a secondary loop that heats up water through a heat exchanger, so the radioactive primary coolant never comes into direct contact with the water that drives the turbine. This difference is key, as it has led to the favor not only of Light Water Reactors generally but of PWRs specifically, since the turbine water is never polluted with radioactive material and can be reused (EIA, 2020). After the water is heated and turned into steam, the steam enters the turbine and spins it like a traditional steam generator. The steam then goes to the condenser, is cooled with cool water from a nearby river or ocean, and is then sent back to its original location. In BWRs, the water returns to the reactor vessel; in PWRs, the water returns to the steam generator. The cooling
water simultaneously exits the condenser and is released back into the ocean to cool down (EIA, 2020). This is a very simple process for producing nuclear energy, and its simplicity is one of the reasons that Light Water Reactors are some of the most common Generation II reactors. Additionally, the same workflows and mechanisms have been employed in later reactor generations. Besides Light Water Reactor designs, other reactors use different fuels, coolants, heat exchangers, combustion processes, and more. The choice between the different reactors depends on many factors and is beyond the scope of this article. However, the simplicity and safety of Light Water Reactor technology is expected to keep the technology, or at least further iterations of it, relevant for years to come (EIA, 2020).
Foundations in Finance Given the unique economics of nuclear power plant construction compared to other forms of energy production, there is a discussion to be had about the financing of nuclear plants and the best way to sell energy. First, however, it is important to understand some critical foundations of finance to appreciate the key points of this discussion. One key point is that electricity is, in financial terms, a “commodity.” A commodity is a resource where, no matter what the production process is like, there is virtually no differentiation in the end-product. For instance, consumers cannot differentiate between electricity produced by a nuclear plant, a solar panel, or a natural gas plant – all three turn the lights on, so to speak. This means that consumers often choose their electricity sources based primarily on price. Occasionally, consumers will opt for the more environmentally friendly source by installing their own solar panel, but electricity producers essentially must do everything they can to compete on costs with other producers (World Nuclear Association, 2020). However, even though individual consumers cannot tell the difference between electricity produced by different processes, differences in electricity production can have significant impacts on the environment and society. Different sources of electrical power emit different amounts of pollutants, have different risks associated with them, cost different amounts to construct power plants, and so
on. Therefore, to make an “apples-to-apples” comparison of the total costs – including financial, environmental, and social – the energy industry uses what is known as the Levelized Cost of Electricity (LCOE), which represents the average revenue needed per unit of electricity for the plant to recover its total costs over its lifetime. So, when assessing two plants, one with a lower LCOE than the other, the former would be regarded as more financially viable and profitable than the latter. The next component that needs to be considered is that money now has more value than the same amount of money later. This happens because if you have a certain amount of money now, you have the option to grow the sum of money through investments over time. For instance, if someone has ten million dollars, they have the option to invest that ten million in the stock market, buy bonds, or put it in their bank accounts – all options that could increase their ten million now into even more in the future. Consequently, the principle that money now is more valuable than the same amount in the future is well-known and well-accepted in the world of finance.
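The LCOE idea can be illustrated with a toy calculation (a quick sketch in Python with entirely made-up figures, ignoring discounting for simplicity): the levelized cost is the revenue per megawatt-hour a plant would need to recover its lifetime costs.

```python
# A minimal, undiscounted LCOE sketch with illustrative figures:
# lifetime costs divided by lifetime electricity output.
total_lifetime_costs = 9_000_000.0   # construction + operations, in dollars
total_lifetime_output = 600_000.0    # electricity produced, in MWh

lcoe = total_lifetime_costs / total_lifetime_output
print(lcoe)  # → 15.0 dollars per MWh needed to break even
```

In real analyses both the cost and output streams are discounted year by year, which is where the discount rate discussed next enters the calculation.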
However, knowing that ten million dollars today is worth more than ten million dollars in the future, how can someone compare the two values if they need to? After all, investors in nuclear power plants who are putting down money now may not get money back for another five years due to the construction timelines. Investors use what is known as the “discount rate” to discount what they expect to earn in the future to how much they think that money would be worth if they had it today. This discount rate can be calculated in different ways and can incorporate many different factors, including the amount of debt the project is using, how much investors are investing, the tax rate, the volatility of the company, and more. One key point to notice is that the discount rate is applied for every year that needs to be discounted. For example, at a 5% discount rate, $100 next year is worth today: $100 / 1.05 ≈ $95.24.
Similarly, at the same rate, $100 in two years is worth today: $100 / 1.05² ≈ $90.70.
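The discounting rule behind these two examples can be written as a short helper function (a sketch in Python, using the same 5% rate):

```python
# Present value: discount a payment received n years from now
# back to today at annual rate r, i.e. PV = FV / (1 + r) ** n.
def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

print(f"{present_value(100, 0.05, 1):.2f}")  # → 95.24
print(f"{present_value(100, 0.05, 2):.2f}")  # → 90.70
```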
Therefore, not only is money today worth more than money tomorrow, but it is worth even more than money the day after tomorrow. The last component of basic financing is that investors expect some sort of compensation for risk. Almost everyone would prefer a guaranteed ten-dollar payment to a 50% chance at a ten-dollar payment and a 50% chance at nothing. Humans are naturally risk-averse, and therefore will only take a risk when the potential payoff is significantly higher than that of the riskless option. Almost everyone would prefer the guaranteed payment given those two options, but many would reconsider if given the choice between a guaranteed ten-dollar payment and a 50% chance at a thirty-dollar payment with a 50% chance at nothing. This is due to the “expected value” of the second option: (0.5)×$0 + (0.5)×$30 = $15, meaning the gamble offers $5 of compensation for the risk.
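The expected-value comparison above can be sketched directly:

```python
# Expected value of a gamble: sum of each payoff times its probability.
def expected_value(outcomes):
    return sum(payoff * prob for payoff, prob in outcomes)

guaranteed = 10.0
risky = expected_value([(0.0, 0.5), (30.0, 0.5)])

# The gap between the two is the compensation offered for bearing risk.
print(risky, risky - guaranteed)  # → 15.0 5.0
```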
Financing and Selling Nuclear Energy Financing nuclear energy upfront is particularly expensive due to the large initial plant costs
during the planning and construction phases, so investors often have to put large sums of money down. However, these plants also take longer to build than nearly every other type due to the complexity of the technology and the necessity of following the many safety protocols set in place by the industry and government regulatory bodies. Nuclear plants, particularly within the United States, can take five years to construct, while solar farms can take as little as three months (World Nuclear Association, 2020). Consequently, the revenue generated from nuclear plants is worth far less to investors today than that of other energy producer types. The complexity of constructing these plants also adds to the risk that they will fail, which increases the financial return that investors demand as compensation for this risk. For these two reasons, as the discount rate increases across different energy production technologies, nuclear plants’ LCOE grows at the largest rate, making nuclear by far the least financially viable option in economies with high discount rates. In fact, a 2015 study by the Organization for Economic Co-operation and Development (OECD) Nuclear Energy Agency showed that at a 3% discount rate, nuclear is the lowest-cost option for all countries. At a 7% rate, nuclear energy is roughly equal in cost to coal. At a 10% rate, nuclear energy is one
of the most expensive options for all countries. In fact, at each of the three discount rates, nuclear energy was more cost-effective than all renewable energy sources, except for onshore wind at a 10% discount rate, further speaking to the potential benefit of nuclear energy (World Nuclear Association, 2020a). A main issue is that although the discount rate for many countries is below 10%, this rate can fluctuate daily with movements in the market, so it is extremely hard to pinpoint one rate to use for the lifetime of a plant construction project. However, these LCOE versus discount rate analyses are often carried out assuming a simple transactional revenue model for the electricity market. In fact, the industry has found two primary means of lowering the LCOE via contract structure: increasing the predictability of revenues, thereby lowering the risk investors take on, or collecting revenues earlier, making the longer timeline of nuclear energy less of a deterrent to investing. One or both of these approaches are found in the three main contract types for nuclear plants: power purchase agreements (PPA), contracts for difference (CfD), and the regulated asset base model (World Nuclear Association, 2020). PPAs are common not just in nuclear energy markets but across the entire electricity market. These agreements dictate a price, amount, and term at which the buyer will purchase power from the plant. This provides stable revenue and, if the price is set high enough, will ensure profitability for the plant. CfDs are more complex and require a counterparty in addition to the operator and electricity customers. The counterparty, which is often a government or other regulatory group, is a body that acts in the interest of the electricity consumers.
The operator will expect some profit margin on the costs to build and operate the plant, so a price for electricity called the “strike price” will be set to ensure a margin agreed upon by the counterparty and operator. In the event that the market price of electricity at the time of operation is higher than the strike price, the operator will pay the difference between the market price and the strike price. In the event that the market price is below the strike price, the counterparty will pay the difference. In this model, the strike price is the price effectively paid, so both parties forgo the upside, though agreeing on a strike price can be difficult, especially over long contract lifetimes, as it becomes harder to project a fair
price farther into the future (World Nuclear Association, 2020). In the previous two contract structures, the industry developed a means of ensuring stability in revenues, but it failed to capture revenues earlier, which would lower the LCOE. The regulated asset base model was developed specifically for the nuclear industry. In this contract model, there are the operators, an independent regulator, and the customers. Under this model, the operators receive a license from the regulator after a thorough investigation of their construction plans. The regulator then analyzes the cost of financing the plant, operating costs, depreciation expenses, and more to agree upon an “allowable revenue” for the plant in return for a certain percentage of the electricity produced by the plant. This allowable revenue, much like the strike price, ensures that the operator will earn enough to recover their costs and a reasonable profit. The difference between the two contract structures lies in the operator’s ability to charge this revenue to customers before the plant begins operating. This allows for stable and less heavily discounted revenue, which lowers the LCOE and increases the approval of investors (World Nuclear Association, 2020).
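The contract-for-difference settlement described above reduces to a simple rule: whatever the market price does, the operator effectively receives the strike price. A quick sketch with illustrative prices:

```python
# CfD settlement per MWh: positive means the counterparty pays the
# operator; negative means the operator pays the counterparty back.
def cfd_settlement(market_price, strike_price):
    return strike_price - market_price

strike = 50.0
for market in (40.0, 50.0, 65.0):
    top_up = cfd_settlement(market, strike)
    # Market revenue plus settlement always equals the strike price.
    assert market + top_up == strike
    print(market, top_up)
```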
Costs of Nuclear Energy The different costs associated with building power plants can be broken down into three different categories: capital costs, plant operating costs, and external costs. Capital costs are those costs incurred by the developer or operator while they are constructing the plant, including equipment, engineering, labor, financing, designing, and other expenses. Depending on the type of power source, these costs can vary widely, but for nuclear plants they are especially important because the large plants are so complex to build. In fact, they can account for almost 60% of the LCOE (World Nuclear Association, 2020).
The next type of cost is operating costs. These are costs incurred from producing fuel, running the plant to produce energy, and decommissioning the plant at the end of its lifetime. Though wind, solar, and hydro have little to no operating costs, considering that the operations are largely autonomous and the “fuel” is free, nuclear plant operating costs are still much less than those of coal and natural gas plants. In fact, the fuel costs for nuclear power are about one-third to one-half those of coal-fired power plants (NEA, 2020). This is due to energy density, which measures the joules of energy produced per unit of fuel; the energy density of uranium is over twenty thousand times that of coal (World Nuclear Association, 2020). The last type of cost for energy producers is external costs. These costs encapsulate all the costs to the community that are not paid by the electricity consumers and not included in the building and operation of the plant. These costs are largely ignored by the general economy, as they are difficult to calculate and not necessarily paid by any individual. They include environmental damage, the public financing needed to assess and regulate nuclear designs, and the risk of disasters or accidents that affect the surrounding area. Consequently, these external costs work against nuclear energy, as accidents like Three Mile Island, Fukushima, and Chernobyl have left the public with a negative perception of nuclear energy. On the other hand, nuclear energy can bring positive externalities as well. Compared to coal and other fossil fuels, nuclear energy is an extremely low-pollutant option, and nuclear plants require a large number of workers to construct and operate (NEA, 2020).
In fact, a report by the Brattle Group showed that the typical revenue deficit for money-losing nuclear power plants is $10/Megawatt-Hour, but the plants’ low CO2 emissions save society between $12 and $20 per ton of CO2 (Celebi et al., 2016).
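To put the energy-density figure in perspective, one can compare the fuel mass needed for the same energy output, taking the twenty-thousand-fold claim above at face value and assuming a typical coal heating value of roughly 24 MJ/kg (an outside, order-of-magnitude assumption, not a figure from the source):

```python
# Rough, order-of-magnitude comparison; not precise plant data.
coal_mj_per_kg = 24.0                          # assumed heating value
uranium_mj_per_kg = coal_mj_per_kg * 20_000    # per the 20,000x claim

target_energy_mj = 1e6   # one terajoule of thermal energy
coal_needed_kg = target_energy_mj / coal_mj_per_kg
uranium_needed_kg = target_energy_mj / uranium_mj_per_kg

# Tens of tonnes of coal versus a couple of kilograms of fuel.
print(round(coal_needed_kg), round(uranium_needed_kg, 2))  # → 41667 2.08
```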
Figure 3: The three common renewable energy sources are biofuels, wind, and solar energy. Each of these three sources is seen as the future of energy and has subsequently seen much support from the government, the media, and the general population. However, each of these three energy sources still has major hurdles to overcome before it can even begin to challenge the giants of the fossil fuel industry. For this reason, it makes sense to diversify energy interests into other sources, such as nuclear, that can produce high amounts of energy and take away the lion’s share of energy production from the fossil fuel industry. Source: Pixabay
Governments have attempted to reward nuclear energy operators for their low-emission production via tax credits. These credits seek to compensate nuclear developers for avoiding some of the external costs associated with the production of fossil fuels and other high-emission sources (Nuclear Energy Institute, n.d.). For instance, in 2005, when the United States began pushing away from fossil fuels, the Energy Policy Act established a tax credit
of $18/Megawatt Hour for advanced nuclear power facilities, applying to the first 6,000 megawatts of nuclear generating capacity. Similarly, in April 2017, New York State established its Clean Energy Standard initiative, which applied tax credits of $17.48/Megawatt Hour to its nuclear plants. While this is still lower than the $23/Megawatt Hour that wind power plants received, it is nevertheless a step in the right direction for governmental support of the industry (Horvath & Rachlew, 2016).
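The scale of the 2005 credit is easy to estimate. Assuming, for illustration, a 1,000-megawatt plant running at a 90% capacity factor (both figures are assumptions, not from the source), the $18/Megawatt Hour credit works out to:

```python
# Annual tax-credit revenue for an illustrative large reactor.
capacity_mw = 1000          # assumed plant size
capacity_factor = 0.90      # assumed fraction of the year at full output
credit_per_mwh = 18.0       # Energy Policy Act of 2005 figure

annual_mwh = capacity_mw * capacity_factor * 24 * 365
annual_credit = annual_mwh * credit_per_mwh
print(f"${annual_credit:,.0f}")  # → $141,912,000
```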
The Comparative Effects of Renewable Energy
Understanding interest in nuclear energy requires an understanding of the comparative pros and cons of the energy sources commonly used today. While the benefits and drawbacks of nonrenewable sources such as coal, oil, and natural gas are commonly known, those of renewables are often not discussed beyond the basic benefit of low to zero emissions. Renewables include hydroelectric, biofuels, wind power, and solar energy, and they will be discussed in that order. Hydropower contributes about 6.4% of the world’s power production (Ritchie, n.d.). Hydropower plants often utilize dams upon major sources of running water to generate power. Hydropower first became popular in 1882 and powered many of the textile plants that developed after the industrial revolution. This source of power scaled up with the first “megaplant,” the Hoover Dam, completed in 1936. Unfortunately, the early popularity of these sources led to extensive use that has left few bodies of water suitable for hydroelectric production. Further, these dams can change the ecosystem of the surrounding water significantly. In the standing water of the dam reservoir, organic remains can release methane that indirectly pollutes the air and alters air humidity in the region (Bielecki et al., 2020). Biofuels, fuels generated from the combustion of living matter like plants or animal waste, are considered renewable, as the material burned could theoretically be reproduced. Biofuels make up only 0.7% of the world’s power and, despite their renewable nature, actually have several significant drawbacks (Ritchie, n.d.). First, producing biofuel requires dedicated land use, which lowers the amount of usable land for food. In turn, this can lower the food supply and subsequently increase the price of food. Without the advent of vertical farming, where farmers
use walls and scaffolds as foundations for farms instead of simple two-dimensional plots, this demand leads to greater levels of deforestation. Deforestation has many negative environmental impacts, including higher atmospheric carbon levels, as there are fewer plants to absorb the molecules, and fewer habitats for wildlife. Second, burning organic material can release harmful substances from the combustion of protein or fat. For these two reasons, biofuels have major hurdles to overcome if they are to fill the shoes of fossil fuels (Bielecki et al., 2020). Wind and solar power have similar pros and cons. With wind accounting for 2.2% of global energy production and solar for 1.1%, both sources are completely renewable and emission-free (Ritchie, n.d.). While the two do not have many negative consequences, they have been documented to have a noticeable effect on bird and bat populations in the areas where wind and solar plants exist. This is because the plants both take up habitat space and present obstacles (wind turbines being physical obstacles and intense rays from the sun a non-physical one). These issues aside, the biggest problem these two types of energy face is known as “intermittency.” While their fuel is completely renewable, the sun does not shine 24 hours a day and winds do not blow consistently at high velocities. There are periods of high energy production, with solar production peaking close to the afternoon and wind production peaking at night and at higher elevations. This fluctuation is known as “intermittency” and requires batteries to store energy so that production stays consistent, meaning that the efficiency of these two power sources depends on state-of-the-art battery capacity and the price of storage. The U.S.
is known to be among the leading lithium-ion battery developers due to one of the country’s poster-child companies, Tesla. Tesla uses lithium-ion batteries to power its cars and has consequently spent an immense amount of capital improving state-of-the-art battery design. As battery storage capability increases in the U.S., solar and wind power become cheaper (Research and Markets, 2020). These problems are large hurdles and have contributed to the slow acceptance of these technologies. Nonetheless, their proliferation is increasing (Bielecki et al., 2020).
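The intermittency problem can be made concrete with a toy simulation: a day of solar generation that peaks at midday, flat demand, and a battery that shifts the midday surplus into the evening. All figures here (a 12 MW midday output, 5 MW flat demand, a 60 MWh battery, 90% round-trip efficiency) are hypothetical, chosen only to illustrate the mechanism, not drawn from real plant data.

```python
def simulate_day(solar_mw, demand_mw, battery_mwh, efficiency=0.9):
    """Return (unmet_demand_mwh, final_charge_mwh) over hourly steps."""
    charge = 0.0
    unmet = 0.0
    for gen, load in zip(solar_mw, demand_mw):
        surplus = gen - load
        if surplus > 0:
            # Store what fits, losing a little to round-trip inefficiency.
            charge = min(battery_mwh, charge + surplus * efficiency)
        else:
            draw = min(charge, -surplus)
            charge -= draw
            unmet += -surplus - draw
    return unmet, charge

# Sun shines only in hours 8-16; demand is a flat 5 MW around the clock.
solar = [0] * 8 + [12] * 8 + [0] * 8
demand = [5] * 24

no_battery, _ = simulate_day(solar, demand, battery_mwh=0)
with_battery, _ = simulate_day(solar, demand, battery_mwh=60)
print(no_battery, with_battery)
```

Without storage, every non-daylight hour goes unserved; with the battery, the evening hours run off stored midday surplus, which is exactly why solar economics hinge on storage capacity and price.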
Introduction to Small Modular Reactors (SMRs) Between 1979 and 2012, no new construction permits were issued for nuclear power plants within the United States. Since the Three Mile Island incident in 1979, the government has been highly skeptical of the safety of these reactors. Furthermore, many investors were not exactly disappointed to hear that new nuclear projects would not be undertaken, given the tendency for nuclear plants to run both over budget and behind schedule. Consequently, the nuclear industry remained quiet for over three decades (World Nuclear Association, 2020d). In recent years, however, a new development has revitalized interest in the industry: the Small Modular Reactor (SMR). SMRs have two distinctive features that separate them from their larger counterparts: their size and their modularity (U.S. Department of Energy, n.d.-a). SMRs are orders of magnitude smaller than large nuclear power plants, but they are also less powerful to the same degree: large nuclear plants can have capacities of thousands of megawatts, while SMRs generate only tens to hundreds of megawatts. The modularity of SMRs allows their components to be built separately and then assembled on site. Modularity is the ability to construct a large object out of parts that are much smaller and can easily be removed and replaced. This allows for more dispersed design and production of parts, which can make construction more economical and efficient. It is also helpful in the event of irregularities or malfunctions during operation: instead of needing to replace a large portion of the plant, with potentially large demolition and reconstruction costs, modularity allows operators to simply remove the malfunctioning part and replace it with a new one (World Nuclear Association, 2020d).
The Economic Benefits of SMRs These two distinctive features of SMRs have numerous effects on the cost and feasibility of these plants. For one, the size of SMRs can lead to lower LCOEs for nuclear technology through lower capital costs that make initial investments in these technologies much easier for investors to swallow (Boarin & Ricotti, 2014). Not only are investors much more willing to write smaller checks, but nuclear developers will not have to find as many investors to
contribute to the total initial costs. The smaller size of these devices also helps overcome one of the defining economic drawbacks of building nuclear reactors: the long construction timeline (Mignacca & Locatelli, 2020). Typical nuclear reactor construction timelines deter investors and operators for two reasons: they are extremely costly, and they delay when the plants can start generating revenue (thereby increasing the discount applied to those future cash flows). Because SMRs are smaller, they are often completed on a shorter timeline, estimated at about three years on average, roughly half the time a larger reactor takes. Consequently, if an operator chooses to build several SMRs to mimic the energy output of a larger reactor, building all of the SMRs will take longer, but the operator can receive portions of the total expected cash flows as the initial SMRs begin operation. Cash that arrives sooner is discounted less when determining its present value, which constitutes one of the key points in favor of SMR technology (Boarin & Ricotti, 2014). Also stemming from SMRs’ size is the possibility of “co-siting” these reactors (U.S. Department of Energy, n.d.-a). Co-siting is the practice of placing two or more reactors on the same site; for SMRs, this is often done to match the energy output of a large nuclear reactor. Through co-siting, operators can save on costs and time for permits and regulatory approval, as regulators are often more likely to approve similar projects in an already approved area on an expedited timeline (Mignacca & Locatelli, 2020).
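The discounting argument above can be sketched numerically. The figures below (an 8% discount rate, a six-year large-reactor build, four SMRs finishing in years 3, 5, 7, and 9) are hypothetical, not industry estimates; the point is only that staggered, earlier cash flows discount to a higher present value even when total capacity is the same.

```python
def present_value(cash_flows, rate):
    """Discount a list of yearly cash flows back to year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

rate = 0.08
years = 12

# Large reactor: six years of construction, then 4 revenue units per year.
large = [0] * 6 + [4] * (years - 6)

# Four SMRs coming online in years 3, 5, 7, and 9, each earning 1 unit/year
# once running: the same eventual capacity, but partial revenue starts sooner.
smrs = [sum(1 for done in (3, 5, 7, 9) if year >= done) for year in range(years)]

print(present_value(large, rate), present_value(smrs, rate))
```

Even though both schedules reach the same annual revenue by year 9, the SMR stream has a higher present value because its early cash flows face smaller discount factors.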
"SMRs have two distinctive features that separates them from their larger counterparts: their size and their modularity (U.S. Department of Energy, n.d.- a). The small size is orders of magnitude smaller than large nuclear power plants, but they are also less powerful to the same degree [...] The modularity of SMRs allows for their components to be built separately and The next key economic benefit of SMR then assembled on technology incorporates the “learning effect.” site." The learning effect is a common idea not just in energy plant construction but in many different industries. The learning effect simply states that future projects similar to those completed earlier are more likely to be operationally and economically viable due to information that was learned from the initial projects. The learning effect specifically benefits SMRs that are co-sitting as many of the features of the initial project can be incorporated into the construction and operation of the future projects. Further, as previously stated, SMRs are cheaper and will often be co-sitting to mimic the energy output of large reactors, meaning that more SMRs can be produced and will benefit more from the learning effect than their
larger counterparts (Boarin & Ricotti, 2014). While modularization is often seen specifically as an operational benefit in the event of malfunction, it also carries more general economic benefits. For instance, the dispersed building process allows for off-site construction and on-site assembly, which can be far cheaper than hiring labor to work on-site for large reactors, many of whose components are too large to ship (Levitan, 2020). Similarly, the decentralized construction process allows for quicker construction timelines, which can be a huge plus for investors (U.S. Department of Energy, n.d.-b).
"SMRs have brought new hope for the nuclear industry (especially within the United States) and could potentially bring new labor demand. A 2010 study estimated that a standard 100 MWe SMR would create nearly 7,000 jobs, $404 million in payroll, and $35 million in indirect business taxes."
The modularity of SMRs can also make nuclear technology more economically attractive, given that the easy swapping of parts decreases the chance that these reactors will have a malfunction that results in an accident (Levitan, 2020). Similarly, the size of SMRs makes accidents, should they occur, less dangerous for the environment relative to large nuclear reactors. The likelihood and impact of accidents are factored into the LCOE, meaning that the lower occurrence and smaller effects of these accidents will lower the LCOE for SMRs relative to large nuclear reactors (Boarin & Ricotti, 2014). The last benefit of SMR technology is the number of jobs its construction brings. Job growth is often not considered when discussing the advantages of further research into a new field but is vital nonetheless. SMRs have brought new hope for the nuclear industry (especially within the United States) and could potentially bring new labor demand. A 2010 study estimated that a standard 100 MWe SMR would create nearly 7,000 jobs, $404 million in payroll, and $35 million in indirect business taxes (U.S. Department of Energy, n.d.-b).
The Economic Concerns of SMRs While SMRs do have many advantages that encourage their acceptance into the energy industry, there are still plenty of drawbacks. Like many of the advantages of the technology, the drawbacks are also mainly economic. The first major drawback is that sourcing capital for these projects is difficult. Despite the many differences between SMRs and large nuclear power plants, many investors are still wary of nuclear energy projects in general due to the long, delayed, and over-budget construction timelines that have defined the industry in the past. The damage that can be caused by a potential accident is another factor that keeps investors away. For
this reason, developers and operators of SMRs fear that they will have difficulty sourcing the amount of capital needed to build these reactors (Mignacca, Locatelli, & Sainati, 2020). The second main concern regarding SMR technology is its actual cost. As stated previously, SMRs produce only around a tenth of the energy output of large nuclear power plants. However, cost estimates for the most recent SMR design range from about a third to a half of the cost of large nuclear plants (Levitan, 2020). So, while the smaller designs do save some of the initial costs of construction, the discount is nowhere near proportional to the reduction in power output. For this reason, the LCOE is often higher for SMRs, particularly at low discount rates when borrowing is cheaper, which makes the long construction times of large nuclear power plants more economically feasible. Furthermore, SMRs face cost competition beyond their larger counterparts: wind and solar are both often cited as having lower LCOEs than nuclear, and the difference in LCOEs is particularly significant for SMRs (Mignacca, Locatelli, & Sainati, 2020). The last source of concern for SMR technology is directed towards the regulatory approval process. Because the technology is still not perfected, many are worried about governments’ abilities to properly assess and approve these designs in a reasonable time frame (Liu & Fan, 2014). One assumption was that the smaller, simpler designs could be approved much more quickly than large nuclear power plants, but First-Of-A-Kind (FOAK) technologies often face higher bars to clear in order to become approved. Experts are less worried about this problem for countries in Asia and western Europe whose governments have taken an active role and interest in the development of nuclear technology.
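The cost arithmetic above can be sketched with a deliberately simplified LCOE (capital cost spread over lifetime output, ignoring discounting, fuel, and operating costs). The capital figures below are hypothetical, not vendor estimates; the point is that half the cost for a tenth of the output works out to roughly five times the cost per megawatt-hour.

```python
def simple_lcoe(capital_cost, mw, capacity_factor=0.9, years=40):
    """Capital cost divided by lifetime energy output, in $/MWh."""
    lifetime_mwh = mw * capacity_factor * 24 * 365 * years
    return capital_cost / lifetime_mwh

large = simple_lcoe(capital_cost=7e9, mw=1000)  # hypothetical large plant
smr = simple_lcoe(capital_cost=3.5e9, mw=100)   # half the cost, a tenth the output

print(round(large, 2), round(smr, 2))
```

Under these toy numbers, the SMR's cost per megawatt-hour is exactly five times the large plant's, which is the shape of the disadvantage the text describes.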
However, there is significant concern about this hurdle within the United States, where politicians have been less than constructive towards the development of nuclear energy (Mignacca, Locatelli, & Sainati, 2020). This, along with a bipartisan preference for “squeaky-clean” renewables like wind and solar, has been one of the main reasons that the government has provided those technologies with much more support than nuclear. Consequently, it is unlikely that the United States will do much to streamline the approval process for SMR technology. In fact, U.S. SMR licensing capabilities are currently limited to Light Water Reactor designs because the NRC does not
have the “technical or regulatory capability” to assess more advanced technologies (Levitan, 2020).
Nuclear Fusion One potential avenue to foster the expansion of the nuclear industry beyond SMR technology is investment in nuclear fusion. Nuclear fusion harnesses processes similar to those that occur inside the sun to produce large amounts of energy. In fusion reactions, light atoms smash into each other and create heavier ones. The reaction is exothermic, releasing large amounts of energy, for nuclei lighter than iron-56; for nuclei heavier than iron-56, the reaction actually requires an external energy source. However, despite the large amount of energy released for lighter nuclei, high amounts of energy must still be put into the system in order to satisfy the conditions for nuclear fusion. This principle leads to the main challenge for nuclear fusion: the temperature threshold of around 150 million degrees Celsius. Efforts to produce technologies that can sustain such conditions have gone much like past nuclear reactor projects: over budget and behind schedule. Notably, ITER, formerly known as the International Thermonuclear Experimental Reactor, has already incurred construction expenses of $22 billion without a single test having been carried out. The first experiments to test the reactor are now scheduled for 2025, seven years later than the originally scheduled date of 2018.
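The energy released by fusing light nuclei can be computed directly from the mass defect, E = Δm·c². The worked example below uses standard published atomic masses for the deuterium-tritium reaction, the fuel cycle that experiments such as ITER target.

```python
AMU_MEV = 931.494  # energy equivalent of 1 atomic mass unit, in MeV

# Standard atomic masses (u) for deuterium, tritium, helium-4, and a neutron.
masses = {"D": 2.014102, "T": 3.016049, "He4": 4.002602, "n": 1.008665}

delta_m = (masses["D"] + masses["T"]) - (masses["He4"] + masses["n"])
energy_mev = delta_m * AMU_MEV
print(round(energy_mev, 1))  # about 17.6 MeV per D + T -> He-4 + n event
```

A tiny fraction of the reactants' mass (about 0.4%) disappears and reappears as roughly 17.6 MeV of kinetic energy, millions of times the energy of a chemical bond, which is why fusion is so attractive despite its engineering hurdles.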
Conclusion Although the current state of the nuclear energy industry may not be as attractive to investors, nor as ripe with governmental support, as renewables like solar and wind, the future could be very bright. With the introduction of SMR technology and the potential of nuclear fusion, nuclear engineers might have found a way to make the technology not just safer but also more economically sensible. These technologies in the works will likely be an easier capital-cost pill for investors to swallow relative to the current costs of nuclear power plants (Phillips, 2019). Nevertheless, the technological capabilities required for these innovations still involve many challenges to overcome; the support of the government would also be necessary to not just approve the technology but also
financially support it until full economic acceptance, just as it is doing with the solar and wind industries. Nuclear energy currently comprises approximately 10% of world energy output, and many studies project that number staying stagnant or even decreasing in the years to come (Ritchie, n.d.). Yet there are those who believe nuclear energy will play an integral role in the shift away from fossil fuels. The MIT Energy Initiative argues that nuclear power constitutes the best path away from fossil fuels, as renewables still struggle to match the reliability and cost-efficiency of non-renewables (MIT Energy Initiative, 2018). Nuclear energy could very well play a larger role in the years to come, as even the production of a few plants or SMRs could greatly decrease the world's reliance on fossil fuels. As the world cries out for a savior from a polluted energy production industry, nuclear has been, and likely will continue to be, the answer (Phillips, 2019).

References

Bielecki, A., Ernst, S., Skrodzka, W., & Wojnicki, I. (2020). The externalities of energy production in the context of development of clean energy generation. Environmental Science and Pollution Research, 27(11), 11506–11530. https://doi.org/10.1007/s11356-020-07625-7

Boarin, S., & Ricotti, M. E. (2014). An evaluation of SMR economic attractiveness. Science and Technology of Nuclear Installations. https://doi.org/10.1155/2014/803698

Celebi, M., Chupka, M., Graves, F., Murphy, D., & Karkatsouli, I. (2016). Nuclear retirement effects on CO2 emissions. The Brattle Group. https://brattlefiles.blob.core.windows.net/system/news/pdfs/000/001/158/original/brattle_nuclearcarbon_whitepaper_-_dec2016.pdf

EIA. (2020, April 16). Nuclear power plants. U.S. Energy Information Administration. https://www.eia.gov/energyexplained/nuclear/nuclear-power-plants.php

Goldberg, S. M., & Rosner, R. (2011). Nuclear reactors: Generation to generation. American Academy of Arts and Sciences.

Horvath, A., & Rachlew, E. (2016). Nuclear power in the 21st century: Challenges and possibilities. Ambio, 45(1), 38–49. https://doi.org/10.1007/s13280-015-0732-y

Levitan, D. (2020, September 9). First U.S. small nuclear reactor design is approved. Scientific American. https://www.scientificamerican.com/article/first-u-s-small-nuclear-reactor-design-is-approved/

Liu, Z., & Fan, J. (2014). Technology readiness assessment of Small Modular Reactor (SMR) designs. Progress in Nuclear Energy, 70, 20–28. https://doi.org/10.1016/j.pnucene.2013.07.005

Mignacca, B., & Locatelli, G. (2020). Economics and finance of Small Modular Reactors: A systematic review and research agenda. Renewable and Sustainable Energy Reviews, 118, 109519. https://doi.org/10.1016/j.rser.2019.109519

Mignacca, B., Locatelli, G., & Sainati, T. (2020). Deeds not words: Barriers and remedies for Small Modular nuclear Reactors. Energy. https://doi.org/10.1016/j.energy.2020.118137

MIT Energy Initiative. (2018). The future of nuclear energy in a carbon-constrained world.

NASA. (2020). Climate change evidence: How do we know? Climate Change: Vital Signs of the Planet. https://climate.nasa.gov/evidence

NEA. (2020, August 6). Projected costs of generating electricity—2015 edition. Nuclear Energy Agency. https://www.oecd-nea.org/jcms/pl_14756/projected-costs-of-generating-electricity-2015-edition?details=true

NEI. (n.d.). How a nuclear reactor works. Nuclear Energy Institute. Retrieved November 25, 2020, from https://www.nei.org/fundamentals/how-a-nuclear-reactor-works

Nuclear Energy Institute. (n.d.). Nuclear production tax credit. Retrieved January 5, 2021, from https://www.nei.org/advocacy/build-new-reactors/nuclear-production-tax-credit

Phillips, L. (2019, February 27). The new, safer nuclear reactors that might help stop climate change. MIT Technology Review. https://www.technologyreview.com/2019/02/27/136920/the-new-safer-nuclear-reactors-that-might-help-stop-climate-change/

Research and Markets. (2020, October 19). Global Li-ion battery market report 2020: While China focuses on building future batteries, Tesla remains the only US company in race. GlobeNewswire. http://www.globenewswire.com/news-release/2020/10/19/2110146/0/en/Global-Li-ion-Battery-Market-Report-2020-While-China-Focuses-on-Building-Future-Batteries-Tesla-Remains-the-Only-US-Company-in-Race.html

Ritchie, H. (n.d.). Electricity mix. Our World in Data. Retrieved January 20, 2021, from https://ourworldindata.org/electricity-mix

Ritchie, H., & Roser, M. (n.d.). CO2 emissions. Our World in Data. Retrieved January 20, 2021, from https://ourworldindata.org/co2-emissions

U.S. Department of Energy. (n.d.-a). Advanced Small Modular Reactors (SMRs). Energy.gov. Retrieved November 25, 2020, from https://www.energy.gov/ne/nuclear-reactor-technologies/small-modular-nuclear-reactors

U.S. Department of Energy. (n.d.-b). Benefits of Small Modular Reactors (SMRs). Energy.gov. Retrieved December 18, 2020, from https://www.energy.gov/ne/benefits-small-modular-reactors-smrs

World Nuclear Association. (2020a, March). Nuclear power economics | Nuclear energy costs. https://www.world-nuclear.org/information-library/economic-aspects/economics-of-nuclear-power.aspx

World Nuclear Association. (2020b, May). Nuclear fuel cycle overview. https://www.world-nuclear.org/information-library/nuclear-fuel-cycle/introduction/nuclear-fuel-cycle-overview.aspx

World Nuclear Association. (2020c, October). Financing nuclear energy. https://www.world-nuclear.org/information-library/economic-aspects/financing-nuclear-energy.aspx

World Nuclear Association. (2020d, December). Small nuclear power reactors. https://www.world-nuclear.org/information-library/nuclear-fuel-cycle/nuclear-power-reactors/small-nuclear-power-reactors.aspx
The Chilling Reason for Goosebumps BY GEORGIA DAWAHARE '23 Cover: An image of hair follicles. Hair growth is stimulated by the dermal papilla (DP), which activates stem cells to form the hair follicle. Environmental stimuli, such as cold temperatures, cause muscle contractions that pull the hair up in a phenomenon known as goosebumps. Source: Wikimedia Commons, Creator: Gaelle, L. and Cédric
Introduction Everyone has experienced goosebumps at some point in their lives. Perhaps the chilly weather coaxed the small bumps to rise on their arms or maybe it was the suspense of a horror movie or the sound of a particularly moving song. Whatever the cause, the peculiar reaction (scientifically called ‘piloerection’) has seemingly no purpose in humans. Piloerection in animals with thick fur, on the other hand, raises the hair and insulates the body by creating motionless air near the skin’s surface which effectively protects them from the cold (Chaplin et al., 2014). Humans, however, have evolved to have relatively little body hair, so piloerection is not enough to retain warmth (Reynolds, 2020). What, then, is the purpose of goosebumps?
The Sympathetic Nervous System: A Balancing Act The body is constantly working to maintain biological balance – a phenomenon known as ‘homeostasis’ – in an ever-changing environment. Maintaining body temperature, blood pressure, and glucose levels are all examples of how the body keeps itself at homeostasis. The sympathetic nervous system (SNS), in particular, is important for maintaining body physiology under steady state and mediating ‘‘fight-or-flight’’ responses such as increases in heart rate and respiration rate after external insults (Shwartz et al., 2020). The cell bodies of sympathetic neurons reside in the sympathetic ganglia close to the spinal cord, while the axons extend out and innervate essentially all organs (Borden et al., 2013; Karemaker, 2017; Suo et al., 2015). Sympathetic neurons are active within certain ranges of heart rate, respiration, blood pressure, and more to maintain a steady, safe state for the body. External stimuli, such as cold or danger, elevate sympathetic nerve activity, allowing rapid changes in body physiology that enable animals to respond (Shwartz et al., 2020). In the skin, the sympathetic innervation (nerve), arrector pili muscle (APM; bundles of smooth muscle cells), and the hair follicle form a trilineage unit, meaning that each component originates from a different type of tissue (Shwartz et al., 2020). The sympathetic nerve innervates APMs which, in turn, are attached to the bulge region of the hair follicle where hair follicle stem cells (HFSCs) reside (Fujiwara et al., 2011). Environmental stimuli, such as cold temperatures, cause elevated impulses from sympathetic nerves to trigger the contraction of APM bundles, pulling the hair upward, the previously discussed phenomenon known as goosebumps (Shwartz et al., 2020) [Figure 1]. But while the cellular mechanism by which goosebumps work may be well understood, scientists, until now, still didn’t know why they occur, since modern humans seemingly have no use for them. Harvard scientists have recently discovered that the cell types that cause goosebumps are also important for regulating the stem cells that regenerate the hair follicle and hair (Lau, 2020). As mentioned previously, the sympathetic nerve reacts to the cold by contracting the muscle and causing goosebumps in the short term, but the research has found that if the cold and goosebumps persist, the sympathetic nerve also drives hair follicle stem cell activation and new hair growth over the long term (Lau, 2020).

Figure 1: The arrector pili muscle (2) is connected to the hair follicle (3) and the sympathetic nerve [which lies in the epidermis (1)]. Source: Wikimedia Commons, Creator: AnthonyCaccese

Hair Follicle Development
While hair production is often important to one’s psychological state, producing hair is not the sole purpose of hair follicles. Hair follicles are self-renewing and contain reservoirs of multipotent stem cells that are capable of regenerating the epidermis (the outermost layer of skin) and are thought to be involved in wound healing (Millar, 2002). The structural, or ‘pilosebaceous,’ unit of a hair follicle consists of the hair follicle itself with an attached sebaceous gland (a small gland which produces oil, or sebum, which lubricates the hair and skin) and APM. The hair follicle begins at the surface of the epidermis and cycles through three different growth phases: anagen, catagen, and telogen (Martel et al., 2020).
"Harvard scientists have recently discovered that the cell types that cause goosebumps are also important for regulating the stem cells that regenerate the hair follicle and hair."
The anagen phase is the proliferation phase and occurs when the hair follicle is growing a new hair shaft. The length of this phase varies depending on the location of hair growth. For example, the growth phase for hair on the scalp can last two to six years, while hair for eyebrows and eyelashes may only last a few months (Martel et al., 2020). The anagen phase begins when the dermal papilla (a structure which strengthens the adhesion between the dermal and epidermal layers) signals to the multipotent epithelial stem cells in the bulge (an area of the follicle marked by the insertion of the APM). Once these stem cells are stimulated, the bottom of the hair follicle can grow downwards, forming a bulb around the dermal papilla. The dermal papilla can then signal matrix cells in the bulb to proliferate, differentiate, and grow upwards, forming a new hair (Martel et al., 2020).

Figure 2: Hair growth has three distinct stages: anagen, catagen, and telogen. Anagen is the proliferative phase in which the hair follicle is growing a new hair shaft. In the catagen phase, cell division stops and the bottom of the hair follicle begins to regress. The last phase, the telogen phase, is the phase in which hair is released and shed, and the anagen phase can start again to grow a new hair. Source: Hair 2013, Creator: Cenveo

Figure 3: Stem cells are able to change and transform into other types of cells found in the body. This process, called cell differentiation, is in charge of the development of all the cells in one’s body, including muscle cells, blood cells, and more. Shwartz et al. found that neurons form synapse-like structures with epithelial stem cells, which allows them to maintain the stem cells in a poised state ready for regeneration. Under prolonged cold, the nerve causes the stem cells to activate quickly, regenerate the hair follicle, and grow new hair. Source: Wikimedia Commons, Creator: Haileyfournier
"In the skin, the hair follicle forms a trilineage unit made of three types of tissue: epithelium, mesenchyme, and nerve."
The catagen phase (also known as the transition or regression phase) is the shortest of the three phases, and may only last a few weeks (Martel et al., 2020). During this phase, cell division in the matrix ceases, and the bottom of the hair follicle begins to regress. Eventually, this segment of the hair follicle no longer exists, and the dermal papilla has moved upwards to contact the bulge once again. During this process, a club hair (a hair which has stopped growing) is formed with a white, hard node on the end (Martel et al., 2020). The telogen phase is the last phase and it is often referred to as the resting phase. Club hairs on the scalp, essentially dead, are typically held for about 100 days. Eventually these hairs are released and shed so that the anagen phase can begin again with a new hair (Martel et al., 2020) [Figure 2]. The researchers from Harvard who determined why we get goosebumps did so in part by investigating this initial development of the hair follicle system. Specifically, they determined how the APM and sympathetic nerve reach the hair follicle in the first place (Lau, 2020). As it turns out, the hair follicle secretes a protein that regulates the formation of the smooth
muscle, which then attracts the nerve. Then when the hair follicle matures, the nerve and APM together regulate the hair follicle stem cells to regenerate the new hair follicle (anagen phase). HFSCs in the bulge and hair germ remain quiescent, or inactive, throughout most hair cycles but become proliferative for a short time at anagen onset. During this period, they produce their transit-amplifying progeny which then undergoes massive proliferation and differentiation to fuel the growth of new hair (Shwartz et al., 2020).
Hair Follicle Regulation In the skin, the hair follicle forms a tri-lineage unit made of three types of tissue: epithelium, mesenchyme, and nerve. The sympathetic nerve, nerve tissue, connects with the APM in the mesenchyme. The APM in turn connects to HFSCs, a type of epithelial stem cell critical for regenerating the hair follicle as well as repairing wounds (Lau, 2020). The connection between the sympathetic nerve and the muscle was previously described as the cellular basis behind goosebumps: the cold triggers sympathetic neurons to send a nerve signal, and the muscle reacts by contracting and causing the hair to stand on end. Using electron microscopy, Shwartz et al. studied this connection and found that the sympathetic nerve fibers not only associated with the muscle but also formed a direct connection to the hair follicle stem cells by
wrapping around the hair follicle stem cells like a ribbon (Lau, 2020). Through this discovery, they were able to determine how the nerve and stem cell interact. Neurons tend to regulate excitable cells, so the researchers were surprised to find that they also form similar synapse-like structures with an epithelial stem cell, which is not a typical target for neurons (Lau, 2020) [Figure 3]. To confirm that the nerve did indeed target the stem cells, they studied the activity level of the sympathetic nervous system. The sympathetic nervous system is normally activated at a constant low level to maintain body homeostasis. Under prolonged cold, however, the nerve was activated at a much higher level, and more of the neurotransmitter norepinephrine (which binds to adrenergic receptors on target cells) was released, causing the stem cells to activate quickly, regenerate the hair follicle, and grow new hair (Lau, 2020). From this, the researchers concluded that the low level of nerve activity maintained the stem cells in a poised state, ready for regeneration (Lau, 2020). Collectively, these data suggest that HFSC activity is tightly linked with sympathetic nerve activity: loss of sympathetic nerve innervation makes HFSCs more dormant, whereas elevated sympathetic tone promotes HFSC activation (Shwartz et al., 2020). The researchers also investigated what maintained the nerve connections to the HFSCs (Lau, 2020). When they removed the APM, the sympathetic nerve retracted and the nerve connection to the HFSCs was lost, showing that APMs are crucial for the formation and maintenance of the sympathetic innervation, or bridge, to HFSCs (Shwartz et al., 2020).
Responding to the Environment The erection of hairs, feathers, and spines plays a role in thermoregulation, courtship, and aggression, features essential for evolutionary success across the animal kingdom (Darwin, 1872). The anatomical connection between APMs and HFSCs is conserved across mammals, raising the possibility that there are evolutionary advantages to preserving this connection beyond goosebumps. Through several experiments, a group of scientists at Harvard found that the cell types enabling goosebumps form a dual-component system that regulates hair follicle stem cells (Shwartz et al., 2020). The nerve is the signaling component that activates the stem cells through neurotransmitters, while the muscle is the supporting component that allows the nerve fibers to directly connect with
hair follicle stem cells (Lau, 2020). It is possible the APM is evolutionarily conserved because of its indispensable role as a hub that attracts and maintains sympathetic innervation in the skin. Goosebumps provide quick, short-term relief from the cold; when the cold persists, however, the same circuit becomes a mechanism that signals the stem cells to regenerate more hair (Lau, 2020). In the future, the researchers will further explore how the external environment might influence the stem cells in the skin, both under homeostasis and in repair situations such as wound healing (Lau, 2020).

References

Borden, P., Houtz, J., Leach, S.D., & Kuruvilla, R. (2013). Sympathetic innervation during development is necessary for pancreatic islet architecture and functional maturation. Cell Reports, 4, 287–301.

Chaplin, G., Jablonski, N. G., Sussman, R. W., & Kelley, E. A. (2014). The role of piloerection in primate thermoregulation. Folia Primatologica, 85(1), 1–17.

Darwin, C., & Prodger, P. (1998). The Expression of the Emotions in Man and Animals. Oxford University Press.

Fujiwara, H., Ferreira, M., Donati, G., Marciano, D.K., Linton, J.M., Sato, Y., Hartner, A., Sekiguchi, K., Reichardt, L.F., & Watt, F.M. (2011). The basement membrane of hair follicle stem cells is a muscle cell niche. Cell, 144, 577–589.

Karemaker, J.M. (2017). An introduction into autonomic nervous function. Physiological Measurement, 38, R89–R118.

Lau, J. (2020, July 20). The hair-raising reason for goosebumps: The same cell types that cause goosebumps are responsible for controlling hair growth. ScienceDaily. Retrieved January 5, 2021, from https://www.sciencedaily.com/releases/2020/07/200720112325.htm

Martel, J. L., Miao, J. H., & Badri, T. (2020). Anatomy, Hair Follicle. In StatPearls. StatPearls Publishing. http://www.ncbi.nlm.nih.gov/books/NBK470321/

Millar, S. E. (2002). Molecular mechanisms regulating hair follicle development. Journal of Investigative Dermatology, 118(2), 216–225. https://doi.org/10.1046/j.0022-202x.2001.01670.x

Reynolds, S. (2020, July 27). What goosebumps are for. National Institutes of Health (NIH). https://www.nih.gov/news-events/nih-research-matters/what-goosebumps-are

Shwartz, Y., Gonzalez-Celeiro, M., Chen, C., Tseng, Y., Lin, S., & Hsu, Y. (2020). Cell types promoting goosebumps form a niche to regulate hair follicle stem cells. Cell, 182, 578–593.

Suo, D., Park, J., Young, S., Makita, T., & Deppmann, C.D. (2015). Coronin-1 and calcium signaling governs sympathetic final target innervation. Journal of Neuroscience, 35, 3893–3902.
A Short History of DNA: From Discoverers to Innovators

BY GIL ASSI '22

Cover: The image above depicts Gregor Mendel, an Augustinian monk who is regarded as the father of modern genetics. He experimented with pea plants and theorized that the traits passed from one generation to the next originate from an element in pea plant cells. Source: Wikimedia Commons
Introduction
Understanding the complexity of biological inheritance is one of the most difficult research fields in science. At the most basic level, it is how terrestrial life begins, how one species differs from another, and, within a species, how one individual differs from another. Scientists therefore set out long ago to identify the agent responsible for passing genetic information from parent to offspring. The human interest in genes thus has a long history, and this article attempts to highlight the different generations of scientists who studied inheritance, identified this unknown agent, innovated techniques to comprehend its biology, and contributed to the ever-growing field of genetics. This history begins with the father of modern genetics.

In the Beginning

In 1856, Gregor Mendel, an Augustinian monk at St. Thomas' Abbey in the Austrian Empire (in what is now the Czech Republic), examined the physical appearance of pea plants. He noticed that plants of this species had different physical appearances: their colors, heights, and seed shapes varied. Many biologists used the Blend Theory of Inheritance to explain such differences (Andrei, 2013). The theory suggests that offspring are a mixture of parental traits and that, as a result, the original parental traits can never be recovered. Mendel was skeptical of this theory. He experimented with seven characteristics of pea plants, shown in Figure 1. Since pea plants reproduce and mature quickly, he crossed the plants and observed the selected features over several generations. Mendel found that offspring in successive generations often displayed the original characteristics of their parents (Andrei, 2013). Mendel went a step further. He suggested that a certain
Figure 1: In order to study inheritance, Mendel experimented with seven characteristics of pea plants: the shape and color of seeds, flower color, pod shape and color, as well as stem size and the position of the inflorescences. Source: Wikimedia Commons
element must be present in pea plants that was responsible for the traits passed on from one generation to the next. Now known as Mendelian genetics, Mendel's findings discredited the Blend Theory and laid the groundwork for subsequent studies of genes (Andrei, 2013). Several years after Gregor Mendel conducted his studies on genetic inheritance in peas, Walter Sutton suggested that chromosomes were the hereditary factor behind Mendel's results (Mishra, 2014). Understanding the underlying molecules at work within chromosomes then became essential to furthering the field of genetics. Phoebus Levene was among the very first to propose the identity of the genetic element responsible for inheritance.
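Mendel's monohybrid crosses can be illustrated with a short simulation. This is a modern sketch for intuition only (Mendel, of course, worked with real plants, and the trait names below are illustrative): each parent passes on one of its two alleles at random, and dominant-recessive pairs yield the famous 3:1 phenotype ratio.

```python
import random
from collections import Counter

random.seed(1)

def cross(parent1, parent2):
    """Each parent contributes one randomly chosen allele."""
    return random.choice(parent1) + random.choice(parent2)

# Monohybrid cross of two heterozygous pea plants (Aa x Aa), where
# 'A' (say, round seeds) is dominant over 'a' (wrinkled seeds).
offspring = [cross("Aa", "Aa") for _ in range(10_000)]
phenotypes = Counter("round" if "A" in genotype else "wrinkled"
                     for genotype in offspring)
print(phenotypes)  # close to the 3:1 ratio Mendel observed
```

Because each parent's allele is drawn independently, roughly three quarters of offspring inherit at least one dominant allele, which is exactly the pattern that led Mendel to reject blending inheritance.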
A Long-Forgotten Biochemist Phoebus Aaron Theodor Levene was born in Sagor, Russia, on February 25, 1869. He was one of the very few Jews allowed to enter the Imperial Military Medical Academy in St. Petersburg to study medicine (Jacobs & Van Slyke, 1943). While in medical school, he studied under renowned professors including Ivan Pavlov, the physiologist whose conditioning experiments with dogs laid the groundwork for the school of thought in psychology known as behaviorism. Levene developed a passion for biochemistry during his work on the condensation reactions of aldehydes and ketones with phenols (Jacobs & Van Slyke, 1943). Unfortunately, his newfound passion was cut short by growing anti-Semitism in Russia; as a result, he and his family emigrated to America in 1891 in search of a safer and more fruitful life (Jacobs & Van Slyke, 1943).
Upon his arrival in the United States, Levene practiced medicine on the Lower East Side of New York for several years. During this time, however, Levene was still drawn to research in physiological chemistry. As a doctor, he continuously took time off to work in the department of physiology at Columbia University (Jacobs & Van Slyke, 1943). He simultaneously enrolled at the University's School of Mines to further his chemical training. In due time, Levene was granted a position as Associate in Physiological Chemistry in the laboratory of the Pathological Institute of the New York State Hospitals (Jacobs & Van Slyke, 1943). His focus on nucleic acids, amino acids, and genetic inheritance would emerge soon thereafter. Interestingly, Levene would cross paths with one of his old professors in an unexpected way years later. Ivan Pavlov and his son were visiting the United States when three robbers seized Pavlov and snatched his wallet, which contained his visa and money. With nowhere to go, Pavlov contacted Levene, who gladly agreed to house the duo. Levene helped Pavlov and his son obtain new visas and provided them with money for the remainder of their stay in the United States (Thomas, 1997).
"In 1910, Levene published On The Biochemistry of Nucleic Acid where he explored recent discoveries and his understanding of nucleic acids. By studying plants and animal tissue nucleic acids, he and his associate discovered a sugar: ribose, the sugar backbone of RNA."
Inheritance: DNA or Proteins? In 1910, Levene published On The Biochemistry of Nucleic Acid where he explored recent discoveries and his understanding of nucleic acids. By studying plants and animal tissue nucleic acids, he and his associate discovered a sugar: ribose, the sugar backbone of RNA (Levene, 1910). Furthermore, in a section titled "The Constitution of Nucleic Acid," Levene introduced nomenclature that we still use today: nucleotides. Levene described these nucleotides as nitrogenous base nucleosides
Figure 2: Phoebus Levene, a biochemist known for his work on nucleic acids. He isolated DNA and RNA, discovered the basic nucleotides, and characterized the bonds that link the nucleotides together. Source: Wikimedia Commons
"By the 1940s, the scientific community was in agreement that chromosomes were either proteins or nucleic acids."
combined with ribose and phosphoric acid. This, however, was only one of Levene's discoveries; he published another paper in which he and other investigators uncovered "deoxyribose," the sugar backbone of DNA (Levene et al., 1930). Additionally, Levene's research lab would go on to propose a structure of DNA (shown below) and one of RNA. These feats only made Levene more ambitious. He set out on a much more difficult task, facing a question that had stalled the scientific community for a long time: what was the molecule responsible for heredity? By the 1940s, the scientific community was in agreement that chromosomes were either proteins or nucleic acids. Most of the amino acids that make up proteins had been discovered, and the nucleotides that comprise nucleic acids had also been studied, although their structure and function were only vaguely understood (Hernandez, 2019). The most "reasonable" answer appeared almost immediately for most scientists at the time: proteins. At the head of this movement was Levene (Hernandez, 2019). At this point in his career, Levene had established himself in the field of biochemistry. He was respected by many, and scientists in his domain revered his intelligence. It was around this time that Levene proposed the tetranucleotide hypothesis, based on his tetranucleotide structure of nucleic acid. He asserted that proteins held much more complexity than nucleic acids and therefore must be the long-sought genetic material (Levene, 1917). In an article published in 1917, Levene wrote that "...nucleic acids... which occur in all tissues, all organs of all species...are indispensable for life, but carry no individuality, no specificity, and it may be just to accept the conclusion of the biologist that they do not determine species specificity, nor are they carriers of the Mendelian characters" (Levene, 1917). Levene's infamous hypothesis
Figure 3: Along with his discovery of nucleotides and the sugar backbones, Levene proposed a possible structure of DNA known as the tetranucleotide. This structure, however, was later proven to be incorrect. Source: Wikimedia Commons
cost him his place in most biology textbooks and class discussions today (Frixione & Ruiz-Zamarripa, 2019). His discoveries are overshadowed by his erroneous conclusion that nucleic acid was of no importance to the study of inheritance. The history of scientific research is filled with assumptions, perspectives backed by incomplete knowledge, and a drive to uncover the complexities of a phenomenon. These factors can lead to false conclusions, but they can also contribute to scientific discoveries. Levene's life is one such example. Although his tetranucleotide hypothesis led many to neglect the study of nucleic acids, he should be credited for what he did accomplish in the field of genetics.
Hershey and Chase's Answer: DNA In 1952, Hershey and Chase conducted a series of experiments that placed DNA at the forefront of genetics (Hernandez, 2019). The focus of their study was bacteriophages: viruses that infect bacteria. At the time of their research, the mechanism by which a bacteriophage infects a bacterium was well known: a phage attaches itself to the surface of a bacterium, injects a piece of itself into the cell, replicates inside the host, and eventually causes the cell to burst, killing it and releasing several new copies of the phage (Hernandez, 2019). Hershey and Chase deduced that this replication must depend on the piece of the phage injected into the host, and that this piece must carry the genetic information that instructs the host cell to produce new phages. However,
Figure 4: The Hershey and Chase experiment made use of radiolabeled proteins and DNA that were injected in bacteria via phages. The process, depicted in the figure, showed concretely that DNA was responsible for inheritance. Source: Wikimedia Commons
bacteriophages are made of both proteins and DNA (Hernandez, 2019). So, Hershey and Chase needed to design an experimental method that could determine whether DNA or protein is injected into the host during the injection phase. After countless attempts, Hershey and Chase determined that radioactive isotopes could be the key to solving the problem (Hernandez, 2019). Chemists had long known that chemical elements can take different forms, known as isotopes: versions of the same element that differ in mass due to differing numbers of neutrons. A useful property of some isotopes is that they are radioactive and give off signals that can be detected (Kishore, 1981). With that knowledge in mind, Hershey and Chase grew one group of phages in E. coli cultured in radioactive sulfur and another group in radioactive phosphorus, since DNA contains phosphorus but no sulfur, whereas proteins contain sulfur but essentially no phosphorus (Hernandez, 2019). Consequently, Hershey and Chase hypothesized that if the phosphorus isotope was found inside the infected bacteria, then DNA must carry the genetic material; if the sulfur isotope was found instead, then protein must be the hereditary molecule (Hernandez, 2019). After the radioactive phosphorus was found inside the cells, the Hershey and Chase experiment became famous for proving that DNA does in fact carry genetic information. Only one puzzle remained: the structure of DNA.
Three Scientists and the Structural Nature of DNA Almost a year after the Hershey and Chase experiment, Franklin, Watson, and Crick settled the debate on the exact structure of DNA. At the time, DNA was known to consist of building blocks made of sugars, phosphate groups, and nitrogenous bases: adenine, guanine, thymine, and cytosine (Khan Academy, n.d.). In 1953, American biologist James Watson and British scientist Francis Crick published "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid." In this paper, they introduced their model of the structure of DNA. They received help from Rosalind Franklin, a physical chemist with a PhD who worked at King's College London. Her training in X-ray crystallography allowed her to use "fine-focus X-ray equipment" and DNA samples to produce clear images of DNA crystals (Khan Academy, n.d.). With her help and their own investigation, Watson and Crick were able to reject other structures proposed at the time. For instance, they commented on one leading model, which consisted of three intertwined chains with the sugar-phosphate groups located on the inside and the bases on the outside. Watson and Crick deemed the structure unsatisfactory for two reasons: for one, the structure lacked acidic hydrogens, and with the negatively charged phosphate groups positioned inside the molecule, it was unclear what held the structure together; additionally, the spaces between the nitrogenous bases, known as "van der Waals distances," seemed too small (Crick & Watson, 1953). Franklin, Watson, and Crick went on to present what is now taught in elementary biology courses everywhere around the world: the double helix DNA molecule.

"In 1953, American biologist James Watson, and British scientist Francis Crick published "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid." In this paper, they introduced their version of the structural nature of DNA."

Figure 5: Rosalind Franklin, a physical chemist whose work proved instrumental in determining the correct structure of DNA. Source: Wikimedia Commons

Figure 6: A double helix DNA showing its sugar phosphate backbone and the basic nucleotides known as adenine, cytosine, thymine, and guanine. Adenine base pairs with thymine (green and yellow) and cytosine base pairs with guanine (blue and red). Source: Wikimedia Commons
"The double helix DNA consists of two polynucleotide chains, known as “DNA strands,” composed of the four nucleotides discovered by Levene and his laboratory partners."
The double helix DNA consists of two polynucleotide chains, known as "DNA strands," composed of the four nucleotides discovered by Levene and his laboratory partners. The DNA strands are complementary, rather than identical, and are held together by hydrogen bonds between the nucleotides. Contrary to the structure described previously, the hydrogen bonds are located on the inside, providing structural stability to the molecule, with the sugar-phosphate backbones located on the outside (Alberts et al., 2002). The two ends of each DNA strand are distinguished by their terminal group: one end, the five prime (5') end, terminates in a phosphate group, and the other, the three prime (3') end, terminates in a hydroxyl group (Alberts et al., 2002). The double helix is about 20 angstroms wide, and a full turn spans about 34 angstroms and contains 10 bases. With the double helix established as the most accurate structure of DNA, scientists faced another problem: determining the order of the bases. The scientific community knew that hereditary information lay in the order of nucleotides along the DNA strands (Heather & Chain, 2016). To decipher that information, scientists would have to "read" DNA, a process known as sequencing.
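The complementarity described above means that either strand fully determines the other. A minimal sketch of that rule (this is a standard teaching example, not anything from the papers discussed here):

```python
# Watson-Crick base pairing: A pairs with T, and G pairs with C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the partner strand, read 5' to 3'. Because the two
    strands are antiparallel, the complement must also be reversed."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATTACG"))  # CGTAAT
```

Applying the function twice returns the original strand, which is precisely why one strand can serve as a template for regenerating the other.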
RNA First, DNA Second Early attempts at sequencing nucleic acids involved methods developed by scientists to infer the sequences of RNA species: transfer RNA (tRNA),
ribosomal RNA, and the genomes of single-stranded RNA bacteriophages (Heather & Chain, 2016). Unlike DNA, an RNA molecule is single-stranded. Additionally, RNA strands tend to be much shorter than DNA strands, and RNase enzymes, proteins that cleave RNA chains at specific sites, were already available. Biologist Robert Holley and his colleagues took advantage of these features of RNA and of the RNase enzymes to determine the first nucleic acid sequence, that of alanine tRNA, in a model organism study involving Saccharomyces cerevisiae. Following this accomplishment, Frederick Sanger, a scientist who would later become a key figure in DNA sequencing, engineered a technique that allowed other labs to sequence readily and add to the growing pool of transfer RNA sequences (Heather & Chain, 2016). Notable examples include the complete nucleotide sequence of N-formylmethionyl tRNA (which carries a derivative of the amino acid methionine) from Escherichia coli (E. coli), and the nucleotide sequence of a coat protein gene from a bacteriophage RNA (Heather & Chain, 2016). The most important accomplishment using Sanger's technique came from Walter Fiers' laboratory: he and his partners produced the first complete protein-coding gene sequence in 1972. The gene encoded the coat protein of the bacteriophage MS2, and four years later, Fiers and his colleagues presented the phage's complete genome (Heather & Chain, 2016). RNA sequencing became routine practice because biologists could quickly produce vast amounts of these RNA species in culture. Nevertheless, the same approach could not be applied to DNA. DNA molecules are comparatively longer, and their double helix structure provides stability, making them more difficult to process (Heather & Chain,
2016). Additionally, enzymes that could readily cut DNA at specific sites were not yet available. However, by combining the techniques used to sequence RNA with the accumulating knowledge of the nature of DNA, early progress in DNA sequencing emerged (Heather & Chain, 2016).
Breakthrough: DNA Polymerase and Model Organisms The years that followed the development of RNA sequencing saw a rise in innovative techniques to analyze DNA. For example, the number of in-depth studies of model organisms vastly increased. In 1968, Ray Wu and Dale Kaiser published a paper that shed light on the nucleotides found at the ends of bacteriophage lambda DNA. Wu and Kaiser discovered that the 5' ends of lambda DNA strands tended to be 20 nucleotides longer than the 3' ends of the complementary strands. To deduce the sequence of the 20 nucleotides in this 5' overhang, the researchers needed to lengthen the shorter 3' end and identify the matching nucleotides. To do so, they used DNA polymerase, the enzyme that carries out replication. During DNA replication, enzymes called helicases unwind the double helix by breaking the hydrogen bonds between base pairs, producing a bubble that exposes the nucleotides. A primer (a short nucleic acid sequence) is added to initiate replication, whereby two strands begin to grow from the ends of the bubble, known as the replication fork (McMurry, 2015). DNA polymerase synthesizes DNA by incorporating deoxynucleotides into the growing strands. Each deoxynucleotide is composed of a nitrogenous base, a sugar (deoxyribose in this case), and phosphate groups. DNA polymerase requires these deoxynucleotides in their triphosphate forms, the deoxynucleoside triphosphates (dNTPs), for incorporation to be successful. The four dNTPs are deoxyadenosine triphosphate (dATP), deoxythymidine triphosphate (dTTP), deoxycytidine triphosphate (dCTP), and deoxyguanosine triphosphate (dGTP) (Carroll et al., 2000). Wu and Kaiser used DNA polymerase to fill in the 3' ends with radioactive dNTPs, adding them one at a time to deduce the sequence of the 5' overhang (the longer end).
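The fill-in logic can be sketched in a few lines. This is an illustrative model only, with a made-up overhang (not lambda's actual cohesive-end sequence): the polymerase reads the overhang template 3' to 5', so the order of incorporated bases is the reverse complement of the overhang.

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def fill_in_order(overhang: str) -> list:
    """Order in which DNA polymerase adds bases while filling in a
    recessed 3' end across a 5' overhang (given 5'->3'): the template
    is read 3'->5', so additions complement the overhang in reverse."""
    return [COMPLEMENT[base] for base in reversed(overhang)]

def deduce_overhang(additions: list) -> str:
    """Recover the overhang sequence (5'->3') from the observed order
    of incorporated labeled nucleotides."""
    return "".join(COMPLEMENT[base] for base in reversed(additions))

overhang = "GGGCG"  # hypothetical 5' overhang, for illustration only
additions = fill_in_order(overhang)
assert deduce_overhang(additions) == overhang
```

This mirrors the experiment's logic: by watching which radiolabeled dNTP is incorporated at each step, one can work backward to the overhang's sequence.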
Kaiser and Wu's work inspired many labs to adopt this new DNA sequencing technique, which used radiolabeled nucleotides to infer the order of nucleotides at the end termini of bacteriophage genomes (Heather & Chain, 2016). As researchers became more adept at using the technique, the incorporation of radioactive nucleotides was generalized to specific
oligonucleotides, short single DNA strands that act as starting points for DNA synthesis (no longer limited to the end termini of bacteriophage genomes). Unfortunately, only short stretches of DNA could be analyzed base by base, owing to the considerable analytical chemistry and fractionation procedures required (Heather & Chain, 2016). Eventually, this incremental yet necessary improvement would, years later, allow Frederick Sanger to dramatically alter the progress of DNA sequencing.
The Way Forward: Electrophoresis Through Polyacrylamide Gels The next practical change was the single-step separation of DNA fragments by electrophoresis through polyacrylamide gels, a technique that separates DNA fragments according to their size. The DNA samples are loaded at one end of the gel into pocket-like indentations called wells. When an electric current is applied, the samples travel through the gel at speeds that vary according to their size. By staining the gel with DNA-binding dyes, the fragments can be visualized as bands (Khan Academy, n.d.). The use of DNA polymerase in combination with gel electrophoresis inspired two well-known studies conducted by Frederick Sanger and his laboratory partners. In 1975, Alan Coulson and Frederick Sanger presented the "plus and minus" system to the scientific community. The method used DNA polymerase to synthesize copies of the template strand from a primer in the presence of the four deoxynucleoside triphosphates, incorporating radiolabeled nucleotides, before performing the "plus" and "minus" polymerization reactions (Sanger & Coulson, 1975). In the plus reaction, only a single type of nucleotide is present, resulting in strands that end with that specific base. In the minus reaction, only three of the four bases are supplied, producing strands that extend up to the position before the next missing nucleotide. After running the products on a polyacrylamide gel and comparing the lanes side by side, researchers could deduce the position of nucleotides in the sequence (Sanger & Coulson, 1975). Using the plus and minus system, Sanger and Coulson sequenced the very first DNA genome: bacteriophage ϕX174, commonly known as "PhiX" in sequencing labs today. The major breakthrough that marked the birth of the first generation of DNA sequencing was now within reach: Sanger sequencing, as it became known, developed by Frederick Sanger, the scientist who had already contributed so much to the field.
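The read-out of the minus reaction can be sketched as an idealized simulation with a hypothetical template. This is a toy model of the deduction step only: in the real method, synthesis starts from a population of randomly extended strands, and interpreting the gel was far messier than this.

```python
def minus_bands(sequence: str, missing_base: str) -> set:
    """Idealized 'minus' reaction: with one dNTP withheld, synthesis
    halts just before each position where the missing base would be
    incorporated, so the band lengths mark that base's positions."""
    return {i for i, base in enumerate(sequence) if base == missing_base}

sequence = "ATGGCAT"  # hypothetical template product
lanes = {base: minus_bands(sequence, base) for base in "ACGT"}

# Comparing the four lanes side by side assigns a base to every position,
# just as comparing gel lanes lets researchers deduce the sequence.
deduced = [None] * len(sequence)
for base, bands in lanes.items():
    for length in bands:
        deduced[length] = base
print("".join(deduced))  # ATGGCAT
```

Each lane alone reveals only where one base sits; it is the side-by-side comparison of all four that reconstructs the full sequence, which is exactly how the gel was read.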
"The years that followed the development of RNA sequencing saw a rise in innovative techniques to analyze DNA. For example, the number of in-depth studies of model organisms vastly increased."
Figure 7: Frederick Sanger, known for the chain-termination technique used to sequence DNA; the technique is also named after him, Sanger sequencing. Source: Wikimedia Commons

Sanger Sequencing: First Generation Sequencing

How can scientists incorporate nucleotides of their choosing if the sequence of a DNA molecule is unknown to them? DNA strands are composed of four nucleotides arranged in a seemingly disordered yet specific way. During DNA replication, errors can emerge in the form of mutations such as deletions, substitutions, and translocations. Sometimes, however, a nucleotide analog can differ from the norm in a useful way. Another laboratory had discovered that if 2',3'-dideoxythymidine triphosphate (ddTTP) replaces thymidylic acid (dT), the monophosphate form of thymidine, in a growing oligonucleotide chain, DNA polymerase can no longer carry out its replicating activity (Sanger et al., 1977). Sanger observed that, unlike dNTPs, dideoxynucleotides (ddNTPs) contain no 3'-hydroxyl group, a group that is necessary for the extension of DNA chains. Consequently, DNA polymerase can no longer extend the molecule, and replication stops (Sanger et al., 1977). Since the chain cannot be extended any further, termination is bound to occur at positions where a ddNTP is incorporated (Sanger et al., 1977). Sanger took advantage of the inhibitory activity of ddNTPs on DNA polymerase to develop a new method for determining nucleotide sequences in DNA. By incubating a primer with DNA polymerase in the presence of the dNTPs and very small amounts of the ddNTPs, Sanger could obtain DNA fragments of varying lengths, each terminating with one of the four dye-labeled dideoxynucleotides. Using gel electrophoresis, the product mixture is separated according to size, and the identity of the terminal dideoxynucleotide in each piece is determined.

"In 1990, the Human Genome Project set out to sequence the entire human genome and determine the order of all 3 billion nucleotides. In order to complete this daunting task, scientists developed sequencing techniques that emphasized speed and greater accuracy."
Consequently, the sequence of the restriction fragment is determined by the color of the attached dye on the terminal nucleotides (McMurry, 2015). By performing
four parallel reactions, each containing one of the four ddNTP bases, and running the results on gel electrophoresis, Sanger was able to infer the sequence of nucleotides in the original template. The dideoxy chain-termination method, or simply Sanger sequencing, became the most widely used technology for sequencing DNA, and it is still used in some laboratories today.
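The logic of chain termination can be sketched as an idealized simulation (a toy model for intuition, not a laboratory protocol): across the many template copies in the reaction, termination occurs at every position, so each position yields one fragment tagged with its terminal base, and sorting fragments by length recovers the sequence.

```python
import random

def chain_termination_fragments(sequence: str) -> list:
    """Idealized dideoxy reaction: across many template copies, a
    labeled ddNTP terminates synthesis at every position, yielding
    one fragment per position tagged with that position's base."""
    return [(i + 1, base) for i, base in enumerate(sequence)]

fragments = chain_termination_fragments("GATTACA")
random.shuffle(fragments)  # the reaction mixture is unordered

# Gel electrophoresis sorts fragments by length; reading the terminal
# labels from shortest to longest recovers the original sequence.
read = "".join(base for _, base in sorted(fragments))
print(read)  # GATTACA
```

The key idea the sketch captures is that the gel does the sorting: shorter fragments migrate farther, so reading terminal labels in order of fragment length is equivalent to reading the template base by base.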
Conclusion In 1990, the Human Genome Project set out to sequence the entire human genome and determine the order of all 3 billion nucleotides. In order to complete this daunting task, scientists developed sequencing techniques that emphasized speed and greater accuracy. These techniques were derived from Sanger sequencing (McMurry, 2015). Early in 2001, the preliminary sequence information for the entire human genome was announced and by 2003, the completed project was released. The finished sequence covers about 99% of the human genome, with a 99.99% accuracy
Figure 8: The sequence of a restriction fragment determined by Sanger sequencing can be read by noting the colors of the dyes attached to the ends of the DNA fragments. Source: Wikimedia Commons
(National Human Genome Research Institute, n.d.). This achievement, however, is also the result of dedicated scientists, only some of whom were mentioned here, who worked relentlessly to study DNA, its structure, and the first techniques for determining its sequence. Along the way, many of them were awarded Nobel Prizes, others were set aside because of their errors, and some were never acknowledged because of their sex. And even though the vast majority did not live to witness the completion of the Human Genome Project, their work lives on as the basis for many discoveries and innovations to come.
References

Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., & Walter, P. (2002). The structure and function of DNA. In Molecular Biology of the Cell (4th ed.). https://www.ncbi.nlm.nih.gov/books/NBK26821/

Andrei, Amanda. (2013, September 4). "Experiments in Plant Hybridization" (1866), by Johann Gregor Mendel. Embryo Project Encyclopedia. ISSN: 1940-5030. http://embryo.asu.edu/handle/10776/6240

Barnes, M. Elizabeth. (2014, July 11). The Origin of Species: "Chapter Thirteen: Mutual Affinities of Organic Beings: Morphology: Embryology: Rudimentary Organs" (1859), by Charles R. Darwin. Embryo Project Encyclopedia. ISSN: 1940-5030. http://embryo.asu.edu/handle/10776/8033

Bensaude-Vincent, Bernadette. (2018). Dmitri Mendeleev—Other scientific achievements. Britannica. Retrieved February 6, 2021, from https://www.britannica.com/biography/Dmitri-Mendeleev/Other-scientific-achievements

Carroll, S., Doebley, J., Griffiths, A., & Wessler, S. (2000). Introduction to Genetic Analysis (10th ed., pp. 284–287). New York: W. H. Freeman and Company.

Crick, F. H. C., & Watson, J. D. (1953). Molecular structure of nucleic acids: A structure for deoxyribose nucleic acid. Nature. Retrieved February 6, 2021, from https://www.nature.com/articles/171737a0

Discovery of the structure of DNA (article). (n.d.). Khan Academy. Retrieved February 6, 2021, from https://www.khanacademy.org/science/high-school-biology/hs-molecular-genetics/hs-discovery-and-structure-of-dna/a/discovery-of-the-structure-of-dna

Frixione, E., & Ruiz-Zamarripa, L. (2019). The "scientific catastrophe" in nucleic acids research that boosted molecular biology. The Journal of Biological Chemistry, 294(7), 2249–2255. https://doi.org/10.1074/jbc.CL119.007397

Gel electrophoresis (article). (n.d.). Khan Academy. Retrieved March 11, 2021, from https://www.khanacademy.org/science/ap-biology/gene-expression-and-regulation/biotechnology/a/gel-electrophoresis

Heather, J. M., & Chain, B. (2016). The sequence of sequencers: The history of sequencing DNA. Genomics, 107(1), 1–8. https://doi.org/10.1016/j.ygeno.2015.11.003

Hernandez, Victoria. (2019, June 23). The Hershey-Chase experiments (1952), by Alfred Hershey and Martha Chase. Embryo Project Encyclopedia. ISSN: 1940-5030. https://embryo.asu.edu/pages/hershey-chase-experiments-1952-alfred-hershey-and-martha-chase

Jacobs, A. W., & Van Slyke, D. D. (1943). Biographical memoir of Phoebus Aaron Theodore Levene. In National Academy of Sciences of the United States of America Biographical Memoirs (Vol. XXIII, fourth memoir). Retrieved February 6, 2021, from http://www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/levene-phoebus-a.pdf

Kishore, R. (1981). Radiolabeled microorganisms: Comparison of different radioisotopic labels. Reviews of Infectious Diseases, 3(6), 1179–1185. https://doi.org/10.1093/clinids/3.6.1179

Levene, P. A. (1910). On the biochemistry of nucleic acids. 2. Journal of the American Chemical Society, 32(2), 231–240. https://doi.org/10.1021/ja01920a010

Levene, P. A. (1917). The chemical individuality of tissue elements and its biological significance. 3. Journal of the American Chemical Society, 39(4), 828–837. https://doi.org/10.1021/ja02249a037

Levene, P. A., Mikeska, L. A., & Mori, T. (1930). On the carbohydrate of thymonucleic acid. Journal of Biological Chemistry, 85(3), 785–787. https://doi.org/10.1016/S0021-9258(18)76947-0

McMurry, J. (2015). Organic Chemistry (pp. 947–956). Monterey, Calif.: Brooks/Cole Pub. Co.

Mishra, Abhinav. (2014, June 27). Walter Stanborough Sutton (1877-1916). Embryo Project Encyclopedia. ISSN: 1940-5030. http://embryo.asu.edu/handle/10776/8014

National Human Genome Research Institute. (2018, November 12). Human Genome Project results. Genome.gov. www.genome.gov/human-genome-project/results

Phoebus Levene: DNA from the Beginning. (n.d.). Retrieved February 7, 2021, from http://www.dnaftb.org/15/bio-2.html

Sanger, F., & Coulson, A. R. (1975). A rapid method for determining sequences in DNA by primed synthesis with DNA polymerase. Journal of Molecular Biology, 94(3), 441–448. https://doi.org/10.1016/0022-2836(75)90213-2

Sanger, F., Nicklen, S., & Coulson, A. R. (1977). DNA sequencing with chain-terminating inhibitors. Proceedings of the National Academy of Sciences, 74(12), 5463–5467. https://doi.org/10.1073/pnas.74.12.5463

Thermo Fisher Scientific. (2015, June 17). How does Sanger sequencing work? – Seq It Out #1 [Video]. YouTube. https://www.youtube.com/watch?v=e2G5zx-OJIw

Thomas, R. K. (1997). Correcting some Pavloviana regarding "Pavlov's bell" and Pavlov's "mugging." The American Journal of Psychology, 110(1), 115. https://doi.org/10.2307/1423704

Wu, R., & Kaiser, A. D. (1968). Structure and base sequence in the cohesive ends of bacteriophage lambda DNA. Journal of Molecular Biology, 35(3), 523–537. https://doi.org/10.1016/S0022-2836(68)80012-9
WINTER 2021
Understanding Infertility and Its Determinants
BY GRACE NGUYEN '24
Cover: The sole of an infant's foot held in an adult hand for size comparison. Although having children is a hope for many couples, 10-15% of couples worldwide attempting pregnancy struggle with infertility (Blaževičienė et al., 2014). Despite recent improvements in fertility treatments, many barriers to conceiving a child remain. Source: Wikimedia Commons
Introduction

Many believe infertility to be solely a women's health issue, and consequently it is frequently categorized as a reproductive issue. While it is indeed a reproductive issue, many people fail to see its further implications for both men and women. Infertility's association with women's health often places the blame on women, and this "pointing of fingers" can harm both mental and physical health, with severe consequences for both men and women. In fact, equating infertility with the mere inability to conceive has overshadowed its deeper implications, such as the physiological stress that follows and the genetic conditions that can result from fertility treatment. Understanding the genetics of infertility—and how they interact with the environment—can enable earlier identification and prevention protocols for individuals with a higher susceptibility to infertility.
Beyond being a quality-of-life issue, infertility is a disease of the reproductive system that can also be a risk factor for future illnesses in other body systems (Jacobson et al., 2018). The inability to reproduce, however, has become the condition's main identifier. This common assumption results from the lack of a standard definition, and much of the confusion comes from the varying amount of time that must pass before an individual is diagnosed as "infertile." This paper uses the World Health Organization's definition: "[infertility is] a disease of the reproductive system defined by the failure to achieve a clinical pregnancy after 12 months or more of regular unprotected sexual intercourse" (Zegers-Hochschild et al., 2009). Infertility affects approximately 48 million couples and 148 million individuals globally, and it affected 10% of American women between the ages of 15 and 44 in 2019 (Chandra et al., 2013). The effects of infertility are multifaceted, as it can induce further biological consequences as well as psychological distress: 56% of women and 32% of men at an infertility clinic in northern California reported notable symptoms of depression, and 76% of the women and 61% of the men reported notable symptoms of anxiety (Pasch et al., 2016). It becomes clear, then, that infertility demands solutions, yet its blurred definition and complex domino effects on the body make it difficult to study. Only by thoroughly examining infertility and its cascading nature will its implications be understood and potentially overcome.

Figure 1: Infertility is not determined solely by genetics, nor is it only a female issue; this graph breaks down infertility causes by gender and other causes. Infertility can be associated with anomalies of either the female or male reproductive system, or of both partners' systems. Female infertility can result from polycystic ovary syndrome, hormonal disorders, premature ovarian failure, genital infections, fallopian tube obstruction, congenital uterine anomalies, uterine synechiae, or indirectly from other medical complications such as diabetes and thyroid disorders. Male infertility, on the other hand, can result from hormonal imbalances and sperm abnormalities (Benksim et al., 2018). Source: Wikimedia Commons
Causes of Infertility

One challenge in understanding the fundamental basis of infertility is that its exact cause—or causes—have yet to be determined. Considering the broad nature of infertility, this difficulty is understandable: there are different types of infertility, including primary infertility (the inability to conceive) and secondary infertility (the inability to conceive following a preceding pregnancy), as well as different sex-specific fertility issues—for example, primary ovarian insufficiency in females and idiopathic oligospermia in males (Benksim et al., 2018; Inhorn & Patrizio, 2015). Complications with ovarian function, abnormal sperm production and function, STDs, and aging are common indicators of infertility, but there is considerable evidence linking genetic factors to the condition. Mutations in the CSMD1 gene have been found to be associated
with gonadal failure in both sexes (Lee et al., 2019). While Lee and colleagues performed this study on mice, multiple disturbances of CSMD1, particularly deletions within introns 1-3 of the gene, have also been associated with infertility in humans. In the male mice, the gonadal failure was observed as "profound anatomic and histopathologic derangement of the testes" and a decrease in sperm production. Similarly, CUL4B, an E3-ubiquitin ligase connected to human X-linked intellectual disability, has been found to be a key regulator of the early stages of post-meiotic sperm development (Lin et al., 2016). CUL4B-deficient male mice were infertile and showed a significant reduction in sperm production prior to release from the testes compared to wildtype mice. This decrease in sperm count impaired sperm motility, increased sperm apoptosis, and resulted in fewer and more defective sperm exiting the testes. Infertility, particularly secondary infertility, can also be the result of certain genetic defects, such as developmental, endocrine, and metabolic defects (Zorrilla & Yatsenko, 2013). Secondary infertility includes genetic conditions, sex development disorders, and reproductive dysgenesis disorders, among other notable syndromes. Chromosomal mutations are the most common genetic cause of male infertility, with Klinefelter syndrome (KS), karyotype 47,XXY, being the most prevalent (Walsh et al., 2009). Klinefelter testes experience germ cell degeneration, and while the sperm of KS men have a normal 23,X or 23,Y haploid genome,
there is an increased rate of sex chromosome aneuploidies in these men's offspring (Hennebicq et al., 2001). Researchers have also found the BRCA2 gene to be associated with primary ovarian insufficiency (POI), as BRCA2-deficient mice were infertile due to a combination of stunted follicle development and nonfunctional oocytes resulting from DNA damage (Miao et al., 2019). Not only is BRCA2 commonly known as one of the strongest breast cancer susceptibility genes; its mutations are also responsible for the majority of hereditary ovarian cancer cases (Malone et al., 2006). The widespread impacts of BRCA2 mutations highlight the connection between infertility and other major health issues.
Genetic diseases can also account for infertility. Cystic fibrosis, a recessive disease resulting from mutations in the CFTR gene on chromosome 7, is known for the overproduction of thick mucus in organs (Noone & Knowles, 2001). This mucus can contribute to infertility by clogging spermatic ducts, and the congenital bilateral absence of the vas deferens that accompanies the condition can also obstruct sperm outflow (Xu et al., 2014). Similarly, Klinefelter syndrome (KS), caused by the presence of an extra X chromosome, results in "small testicles, degenerative changes in spermatic ducts, azoospermia, and decay of potency" (Molnar et al., 2010). Kallmann syndrome, the result of mutations in the KAL1 gene on the X chromosome, manifests in delayed maturation, an underdeveloped penis, and small testicles (Laitinen et al., 2011). Infertility, then, can result from chromosomal disorders or from single genes, and it can be attributed to genetic factors that prevent gametes from meeting or from being produced in the first place (Shah et al., 2003). Indeed, there is both a strong genetic and heritable basis for infertility in both men and women.
Environmental Impacts on Infertility

Considering the genetic basis of infertility, the question of how infertility persists comes into play. Is infertility heritable because of its genetic basis? While recessive alleles may account for some of this persistence, how else can infertility be passed down if infertile individuals cannot have children and thus cannot pass on the genes responsible for the infertility? Epigenetic alterations induced by environmental pressures are therefore an important consideration. Epigenetic mechanisms such as DNA methylation, histone modification, and RNA transcripts have been
found in gametes, specifically in sperm (James & Jenkins, 2018; Jenkins & Carrell, 2012). Altered density of histone markers is thought to cause protamine transcripts to be improperly processed, which has been implicated as a potential cause of poor semen parameters (Oliva, 2006). Differences in methylation have also been found at various promoter regions in the sperm of men with idiopathic male infertility (Benchaib et al., 2005). Diet, in particular, seems to have a large influence on sperm motility. A study conducted by Salas-Huetos and colleagues found that adherence to a Mediterranean diet (MD; high in fruits, nuts, legumes, vegetables, and olive oil; moderate in poultry, fish, and wine; low in sugars, dairy, and red meats) was positively correlated with sperm motility: among 244 subjects, those in the top tertile of adherence to a MD had significantly lower sperm immobility (Estruch et al., 2013; Salas-Huetos et al., 2019). Similar studies have found that the MD and healthy eating patterns likely correlate positively with both sperm quality and concentration (Arab et al., 2018; Salas-Huetos et al., 2017). Another example of diet impacting fertility is the relationship of alcohol and coffee with fertility. Consumption of three or more cups of decaffeinated coffee per day has been linked with an increased risk of spontaneous abortion, the loss of a pregnancy without outside intervention before the 20th week of gestation (Fenster et al., 1997; Griebel et al., 2005). High alcohol intake has been associated with altered hormone levels, anovulation, and an increased risk of fetal loss (Juhl et al., 2003). While it is important to note that correlation does not imply causality, these positive relationships indicate that the variables may be related. Other environmental factors have also been found to influence infertility.
Figure 2: Assisted reproductive technology (ART) has revolutionized reproductive medicine, transforming a once-uncontrollable life process into one that can be controlled, manipulated, and altered. One of the most popular forms, in vitro fertilization (IVF), has facilitated the formation of human embryos for couples who struggle to conceive naturally, as seen in this image of early human embryos that resulted from IVF. Source: ZEISS Microscopy / Flickr

Physically, research on a Danish population has found that factory workplaces—male exposure to heat and female exposure to textile dyes, noise, and metals—are associated with infertility (Rachootin & Olsen, 1983). Chemically, a study found that Danish welders' constant exposure to metal and solder vapors put them at greater risk for poor sperm quality and reduced fertility (Bonde et al., 1990; Mortensen, 1988). Another study observed that infertility occurred more often in couples living in heavily polluted areas of New York than in reference populations living in less polluted areas of the state (Carpenter et al., 2001). Behaviorally, infertility is a heavy inducer of stress on couples. Amenorrhea, the absence or halting of menstrual cycles in women of reproductive age, is common among women with elevated cortisol levels (Biller et al., 1990), and physical stress is related to low testosterone levels and a decrease in sperm count and motility in men (Andreeff, 2014; Aono et al., 1972; McGrady, 1984). Such examples are only a surface-level demonstration of the environment's powerful influence on infertility.

Infertility, then, can be influenced by both genetics and environment, especially when the two are intertwined. For example, prediabetes, defined as blood glucose levels that are abnormally high but below the diagnostic threshold for diabetes, is a high-risk state for developing diabetes with both genetic and environmental roots (Tabák et al., 2012). Diabetes mellitus (DM) may have similar roots, and DM has been suspected to contribute to infertility. A study in mice found that diabetic paternal mice had altered testicular expression of protamine genes crucial for spermatogenesis; altered levels of protamine are associated with lower sperm count and viability and can result in DNA damage in mice and humans
(Oliva, 2006). DM has biological roots: those with DM—especially those with type 2 diabetes mellitus—have insulin deficiency or resistance that can disrupt the metabolism of carbohydrates, fats, and proteins (American Diabetes Association, 2014). The disruption can also result from environmental and behavioral risk factors, such as the availability of walking areas to promote physical activity or access to supermarkets for sustaining healthy diets (Rachootin & Olsen, 1983; Sallis & Glanz, 2009). Thus, the factors that contribute to prediabetes and the eventual development of DM may also cause, or amplify, decreases in sperm concentration and induced sperm apoptosis.
Combating Infertility Today

Given the large impact of infertility (affecting 15% of couples—an estimated 48.5 million couples globally), it is unsurprising that treatments have become available (Agarwal et al., 2015). In fact, approximately 10% of couples seek fertility evaluation (Cocuzza & Agarwal, 2007). There are three main types of treatment: medication, surgical procedures, and assisted conception, including the most commonly known form, in vitro fertilization (IVF), which was originally designed to circumvent obstructed fallopian tubes (Inhorn &
Patrizio, 2015). Fertility treatment is not limited to IVF, nor does it only treat women. Among women who seek treatment, ovulation drug therapy is the most popular (86%), followed by artificial insemination (30%), corrective surgery for uterine tube blockage (20%), and IVF (13%) (Kessler et al., 2013). For men, options include gonadotropin-releasing hormone therapy, gonadotropins, and assisted reproductive technology (ART) such as intrauterine insemination (IUI) and intracytoplasmic sperm injection (ICSI) (Turner et al., 2020). IVF, the external fertilization of an egg in a laboratory setting before it is placed back into the uterus, has proven among the most successful fertility treatments: in a study of 2006-2010 data, the chance of a live birth by the fifth cycle of IVF was 80.1% (Wade et al., 2015). Similarly, ICSI prevents more total fertilization failures than standard insemination and, when paired with testicular sperm extraction (TESE), can address azoospermia by directly injecting an isolated spermatozoon into the ooplasm (Lewin et al., 1999; McGrady, 1984; Palermo et al., 1992).
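The 80.1% cumulative figure can be unpacked with a little arithmetic. As a rough sketch (assuming, unrealistically, that every cycle has the same independent success probability; real cohorts show dropout and patient heterogeneity), one can back out the constant per-cycle live-birth probability that would reproduce it:

```python
# Back-of-the-envelope sketch: what constant per-cycle live-birth
# probability would reproduce the 80.1% cumulative success rate by
# the fifth IVF cycle reported by Wade et al. (2015)?
# Assumes independent cycles with identical success probability,
# which real cohorts do not satisfy; illustrative only.

cumulative_by_cycle_5 = 0.801

# Solve 1 - (1 - p)**5 = 0.801 for p.
p_per_cycle = 1 - (1 - cumulative_by_cycle_5) ** (1 / 5)  # ≈ 0.276

print(f"Implied per-cycle live-birth probability: {p_per_cycle:.1%}")
# Cumulative success after each of the first five cycles under this model:
for n in range(1, 6):
    print(f"cycle {n}: {1 - (1 - p_per_cycle) ** n:.1%}")
```

Under this toy model, a per-cycle probability of roughly 27.6% compounds to the reported 80.1% by the fifth attempt, which illustrates why multiple rounds of treatment are the norm rather than the exception.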
Despite these technological advancements that have allowed otherwise sterile couples to have children, these treatments are not foolproof.
The majority of ART treatments require multiple rounds to achieve a pregnancy, and the cost of doing so can be significant. In 2012, a fresh transfer cycle of ART cost approximately $15,715 and a frozen cycle approximately $3,812; in 2010, an ART birth was estimated to cost $28,829 per single infant, $123,402 per twin delivery, and $465,464 per triplet or higher-order delivery (Crawford et al., 2016). This introduces an affordability barrier, as there is a positive correlation between socioeconomic status and ovarian reserve—the number and quality of viable eggs for reproduction—in women of reproductive age; this is likely explained by the intensified stress levels, diet limitations, and environmental contaminant exposures that low-income women face (Barut et al., 2016). Moreover, the cost of utilizing ART to achieve fertility is not just monetary. Children conceived through ICSI are at higher risk for multiple gestation (which carries higher susceptibility to preterm delivery, low birth weight, and perinatal mortality), chromosomal abnormalities, congenital defects, and epigenetic syndromes, including Beckwith-Wiedemann and Angelman syndromes (Alukal & Lamb, 2008). Furthermore, ART treatments are most successful in patients without preexisting conditions, which further limits the treatment
accessibility. Yang and colleagues (2016) found that couples in which the male partner was overweight had decreased sperm activity, semen volume, and total sperm count compared with couples in which the male partner had a normal BMI. This, along with the finding that paternal obesity negatively affects assisted reproduction outcomes, suggests that male adiposity compromises IVF cycle outcomes (Campbell et al., 2015). ART's costs, risks, and complex relationships with preexisting conditions demand further research, and a better understanding of these factors is crucial as ART's popularity increases.
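To make the affordability barrier concrete, the per-cycle cost and success figures quoted earlier can be combined into a rough expected-cost model. This is a minimal illustrative sketch, not an actuarial estimate: it assumes a constant per-cycle success probability (the roughly 27.6% implied by the 80.1% five-cycle figure), at most five fresh cycles, and ignores frozen transfers, dropout, medication costs, and multiple births:

```python
# Rough expected-cost sketch for up to five fresh ART cycles, using
# the ~$15,715 per fresh cycle figure (Crawford et al., 2016) and a
# constant per-cycle success probability implied by the 80.1%
# cumulative live-birth rate by cycle 5 (Wade et al., 2015).
# Illustrative only: assumes independent cycles and no dropout.

COST_PER_FRESH_CYCLE = 15_715  # USD, 2012 figure from the text
P_SUCCESS = 0.276              # implied constant per-cycle probability
MAX_CYCLES = 5

expected_cycles = 0.0
prob_no_success_yet = 1.0
for k in range(1, MAX_CYCLES + 1):
    # Probability that the k-th cycle is reached and succeeds,
    # weighted by the k cycles paid for along the way.
    expected_cycles += k * prob_no_success_yet * P_SUCCESS
    prob_no_success_yet *= 1 - P_SUCCESS
# Couples who never succeed still pay for all MAX_CYCLES attempts.
expected_cycles += MAX_CYCLES * prob_no_success_yet

expected_cost = expected_cycles * COST_PER_FRESH_CYCLE
print(f"Expected cycles attempted: {expected_cycles:.2f}")
print(f"Expected outlay: ${expected_cost:,.0f}")
```

Even this simplified model yields an expected outlay of roughly $45,000 over about 2.9 cycles on average, which helps explain why cost, rather than efficacy alone, is such a decisive barrier to access.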
Future Directions

While ART is generally able to treat individuals with common causes of infertility (such as bilateral tubal occlusion), it remains inaccessible to a large portion of the infertile population—especially in the developing world (Nachtigall, 2006). IVF, in particular, has been available for more than four decades, yet IVF services are often absent in areas where the infertility rate exceeds the global rate; the sub-Saharan Africa region sees infertility prevalence rates of 30-40%, compared to an international prevalence of 9% (Boivin et al., 2007; Leke et al., 1993). This gap has led to the low-cost IVF (LCIVF) movement, a reproductive justice movement that aims to aid those who are infertile in resource-poor settings (Hammarberg & Kirkman, 2013). One promising LCIVF technique is The Walking Egg (WE), which simplifies embryo culture methods and eliminates high-end instruments; it has been used to successfully deliver 12 healthy babies (Ombelet, 2014). The WE lab IVF culture system can "fit in a shirt pocket," relies on inexpensive, common chemicals, and does not require complex microprocessor-controlled incubators, costing only about 200 euros per IVF cycle (Inhorn & Patrizio, 2015). This and other LCIVF strategies, like "Friends of Low-Cost IVF" (FLCIVF)—which has simplified the ovarian stimulation protocol without using injectable gonadotrophins—have been described by the ESHRE (European Society of Human Reproduction and Embryology) Task Force on Ethics and Law as having the potential to "make treatment more accessible and thus reduce injustice" (ESHRE Task Force on Ethics and Law et al., 2009). These methods are promising because they address one of the major impediments preventing ART from becoming an effective solution to one of the most difficult medical issues.
New technologies have also emerged. Treatments for male infertility specifically have become a focus of reproductive assistance, where sperm DNA fragmentation is a new, potentially valuable tool for evaluating male fertility (Fainberg & Kashanian, 2019). Because increased sperm DNA fragmentation is known to negatively impact pregnancy rates, magnetic-activated, flow-cytometric, and microfluidic sperm sorting have been used to identify semen samples with low DNA fragmentation indexes in viable sperm (Quinn et al., 2018; Spanò et al., 2000). Capacitation—the maturation that sperm undergo in vivo along the female reproductive tract—has gained particular attention, and diagnostic tests such as MiOXSYS, which measures oxidative stress, have also gained interest (Cardona et al., 2017; Agarwal et al., 2016). Furthermore, Pantou and colleagues (2020) found that in couples with mild male-factor infertility and at least three failed IVF attempts, women who underwent laparoscopy had significantly higher clinical pregnancy and ongoing pregnancy and/or live-birth rates; correction of endometriosis (39.53% vs. 16.67%) or of pelvic adhesions (31.82% vs. 16.67%) both yielded higher pregnancy rates than in women who did not undergo laparoscopy. Additionally, Garolla and colleagues (2018) observed an increased pregnancy rate, a higher delivery rate of healthy babies, and a lower rate of miscarriage in couples in which the male partner was vaccinated against human papillomavirus (HPV). The absence of HPV on sperm likely contributes to this finding, and it speaks to how other sectors of medicine may also play a role in reproductive medicine as experts broaden their research methods to develop new, more accessible ART.

Also worth noting is the option that complementary or alternative medicine (CAM) offers infertile couples. Examples of CAM include pelvic physical therapy, yoga, homeopathy, spiritual healing, acupuncture, herbal therapy, and hypnosis, and some couples perceive CAM as more affordable, safer, and more effective (Vincent & Furnham, 1996; van Balen et al., 1997). While CAM has been used by infertile patients in South Australia (Stankiewicz et al., 2007) and Canada (Zini et al., 2004), it is not yet popular in the United States, where CAM is often only a resort after conventional fertility treatments fail. It may also be used to increase the probability of having a child or to avoid standard
medical treatments (van Balen et al., 1997). Further removed still from "standard" medical treatment is adoption, once a "natural solution to infertility" that is now deemed secondary to medical treatments (Bell, 2019). Adoption has been stigmatized in the United States because of the lack of a biological tie between the child and its parent(s) and the implication of infertility (Bell, 2019), but it remains a viable alternative to ART because it is generally more economical. Even so, the stigma of adoption is yet another barrier for infertile couples to overcome; Joshi and colleagues (2015) found that only 54% of couples studied were willing to adopt if ART was not successful. Other available options include surrogacy and sperm, egg, or embryo donation.

Infertility and its implications evidently run deeper than just the ability to conceive a child. Beyond the genetic determinants of infertility, many factors contribute to its persistence: preexisting conditions, the environment in which a couple resides, and financial barriers that prevent infertility treatments from being an option. The inability of the reproductive field to address all of these factors has made the ability to have children a privilege. The World Health Organization accordingly deems infertility one of the top global health issues, and many questions in the field remain to be explored. Even so, it must be acknowledged that the field has seen massive advancements over the last couple of decades, most of which yield promising results.

References

Agarwal, A., Mulgund, A., Hamada, A., & Chyatte, M. R. (2015). A unique view on male infertility around the globe. Reproductive Biology and Endocrinology, 13(1), 37. https://doi.org/10.1186/s12958-015-0032-1

Agarwal, A., Sharma, R., Roychoudhury, S., Du Plessis, S., & Sabanegh, E. (2016).
MiOXSYS: A novel method of measuring oxidation reduction potential in semen and seminal plasma. Fertility and Sterility, 106(3), 566-573.e10. https://doi.org/10.1016/j.fertnstert.2016.05.013

Alukal, J. P., & Lamb, D. J. (2008). Intracytoplasmic sperm injection (ICSI)—What are the risks? The Urologic Clinics of North America, 35(2), 277–288. https://doi.org/10.1016/j.ucl.2008.01.004

American Diabetes Association. (2014). Standards of medical care in diabetes—2014. Diabetes Care, 37(Suppl. 1), S14–S80; Diagnosis and classification of diabetes mellitus. Diabetes Care, 37(Suppl. 1), S81–S90. Erratum: Diabetes Care, 37(3), 887. https://doi.org/10.2337/dc14-er03

Andreeff, R. (2014). Amenorrhea. Journal of the American
Academy of PAs, 27(10), 50–51. https://doi.org/10.1097/01.JAA.0000453871.15689.a2

Aono, T., Kurachi, K., Mizutani, S., Hamanaka, Y., Uozumi, T., Nakasima, A., Koshiyama, K., & Matsumoto, K. (1972). Influence of major surgical stress on plasma levels of testosterone, luteinizing hormone and follicle-stimulating hormone in male patients. The Journal of Clinical Endocrinology & Metabolism, 35(4), 535–542. https://doi.org/10.1210/jcem-35-4-535

Arab, A., Rafie, N., Mansourian, M., Miraghajani, M., & Hajianfar, H. (2018). Dietary patterns and semen quality: A systematic review and meta-analysis of observational studies. Andrology, 6(1), 20–28. https://doi.org/10.1111/andr.12430

Barut, M. U., Agacayak, E., Bozkurt, M., Aksu, T., & Gul, T. (2016). There is a positive correlation between socioeconomic status and ovarian reserve in women of reproductive age. Medical Science Monitor, 22, 4386–4392. https://doi.org/10.12659/MSM.897620

Bell, A. V. (2019). "Trying to have your own first; it's what you do": The relationship between adoption and medicalized infertility. Qualitative Sociology, 42(3), 479–498. https://doi.org/10.1007/s11133-019-09421-3

Benchaib, M., Braun, V., Ressnikof, D., Lornage, J., Durand, P., Niveleau, A., & Guérin, J. F. (2005). Influence of global sperm DNA methylation on IVF results. Human Reproduction, 20(3), 768–773. https://doi.org/10.1093/humrep/deh684

Benksim, A., Elkhoudri, N., Ait Addi, R., Baali, A., & Cherkaoui, M. (2018). Difference between primary and secondary infertility in Morocco: Frequencies and associated factors. International Journal of Fertility & Sterility, 12(2), 142–146. https://doi.org/10.22074/ijfs.2018.5188

Biller, B. M., Federoff, H. J., Koenig, J. I., & Klibanski, A. (1990). Abnormal cortisol secretion and responses to corticotropin-releasing hormone in women with hypothalamic amenorrhea. The Journal of Clinical Endocrinology and Metabolism, 70(2), 311–317. https://doi.org/10.1210/jcem-70-2-311

Blaževičienė, A., Jakušovaitė, I., & Vaškelytė, A. (2014). Attitudes of fertile and infertile woman towards new reproductive technologies: A case study of Lithuania. Reproductive Health, 11, 26. https://doi.org/10.1186/1742-4755-11-26

Boivin, J., Bunting, L., Collins, J. A., & Nygren, K. G. (2007). International estimates of infertility prevalence and treatment-seeking: Potential need and demand for infertility medical care. Human Reproduction, 22(6), 1506–1512. https://doi.org/10.1093/humrep/dem046

Bonde, J. P., Hansen, K. S., & Levine, R. J. (1990). Fertility among Danish male welders. Scandinavian Journal of Work, Environment & Health, 16(5), 315–322. https://doi.org/10.5271/sjweh.1778

Campbell, J. M., Lane, M., Owens, J. A., & Bakos, H. W. (2015). Paternal obesity negatively affects male fertility and assisted reproduction outcomes: A systematic review and meta-analysis. Reproductive BioMedicine Online, 31(5), 593–604. https://doi.org/10.1016/j.rbmo.2015.07.012

Cardona, C., Neri, Q. V., Simpson, A. J., Moody, M. A., Ostermeier, G. C., Seaman, E. K., Paniza, T., Rosenwaks, Z., Palermo, G. D., & Travis, A. J. (2017). Localization patterns of the ganglioside GM1 in human sperm are indicative of male fertility and
108
independent of traditional semen measures. Molecular Reproduction and Development, 84(5), 423–435. https://doi. org/10.1002/mrd.22803 Carpenter D O, Shen Y, Nguyen T, Le L, & Lininger L L. (2001). Incidence of endocrine disease among residents of New York areas of concern. Environmental Health Perspectives, 109(suppl 6), 845–851. https://doi.org/10.1289/ ehp.01109s6845 Chandra, A., Stephen, E. H., & Copen, C. (2013). Infertility and Impaired Fecundity in the United States, 1982–2010: Data From the National Survey of Family Growth. National Health Statistics Report, 67, 19. Cocuzza, M., & Agarwal, A. (2007). Nonsurgical treatment of male infertility: Specific and empiric therapy. Biologics : Targets & Therapy, 1(3), 259–269. Coombes, E., Jones, A. P., & Hillsdon, M. (2010). The relationship of physical activity and overweight to objectively measured green space accessibility and use. Social Science & Medicine, 70(6), 816–822. https://doi.org/10.1016/j. socscimed.2009.11.020 Crawford, S., Boulet, S. L., Mneimneh, A. S., Perkins, K. M., Jamieson, D. J., Zhang, Y., & Kissin, D. M. (2016). Costs of achieving live birth from assisted reproductive technology: A comparison of sequential single and double embryo transfer approaches. Fertility and Sterility, 105(2), 444–450. https://doi. org/10.1016/j.fertnstert.2015.10.032 ESHRE Task Force on Ethics and Law, Pennings, G., de Wert, G., Shenfield, F., Cohen, J., Tarlatzis, B., & Devroey, P. (2009). Providing infertility treatment in resource-poor countries†. Human Reproduction, 24(5), 1008–1011. https://doi. org/10.1093/humrep/den503 Estruch, R., Ros, E., Salas-Salvadó, J., Covas, M.-I., Corella, D., Arós, F., Gómez-Gracia, E., Ruiz-Gutiérrez, V., Fiol, M., Lapetra, J., Lamuela-Raventos, R. M., Serra-Majem, L., Pintó, X., Basora, J., Muñoz, M. A., Sorlí, J. V., Martínez, J. A., & Martínez-González, M. A. (2013, April 3). Primary Prevention of Cardiovascular Disease with a Mediterranean Diet (world) [Research-article]. 
Https://Doi.Org/10.1056/NEJMoa1200303; Massachusetts Medical Society. https://doi.org/10.1056/NEJMoa1200303 Fainberg, J., & Kashanian, J. A. (2019). Recent advances in understanding and managing male infertility. F1000Research, 8. https://doi.org/10.12688/f1000research.17076.1 Fenster, L., Hubbard, A. E., Swan, S. H., Windham, G. C., Waller, K., Hiatt, R. A., & Benowitz, N. (1997). Caffeinated beverages, decaffeinated coffee, and spontaneous abortion. Epidemiology (Cambridge, Mass.), 8(5), 515–523. https://doi. org/10.1097/00001648-199709000-00008 Garolla, A., De Toni, L., Bottacin, A., Valente, U., De Rocco Ponce, M., Di Nisio, A., & Foresta, C. (2018). Human Papillomavirus Prophylactic Vaccination improves reproductive outcome in infertile patients with HPV semen infection: A retrospective study. Scientific Reports, 8(1), 912. https://doi.org/10.1038/s41598-018-19369-z Griebel, C. P., Halvorsen, J., Golemon, T. B., & Day, A. A. (2005). Management of Spontaneous Abortion. American Family Physician, 72(7), 1243–1250. Hammarberg, K., & Kirkman, M. (2013). Infertility in resourceconstrained settings: Moving towards amelioration. Reproductive Biomedicine Online, 26(2), 189–195. https://doi.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
WINTER 2021
A Year of COVID-19: Implications for the Heart and Lungs
BY JENNIFER CHEN '23
Cover: Illustration of the SARS-CoV-2 virus, showing the spike (S) proteins on the viral membrane. Source: Wikimedia Commons
Introduction to COVID-19 The novel coronavirus, known as severe acute respiratory syndrome coronavirus 2 or SARS-CoV-2, was first identified in infected patients on December 8th, 2019 in Wuhan, Hubei province, China (Bansal, 2020). By March 11th, 2020, the World Health Organization had declared the SARS-CoV-2 outbreak a pandemic (“WHO Timeline,” 2020). The virus has caused more than 121 million cases of the coronavirus disease, COVID-19, and more than 2.6 million deaths worldwide as of March 2021 (“COVID-19 Dashboard,” n.d.). Coronaviruses are a family of single-stranded RNA viruses. In the two decades leading up to the SARS-CoV-2 pandemic, two other coronaviruses – the severe acute respiratory syndrome coronavirus (SARS-CoV) and the Middle East respiratory syndrome coronavirus (MERS-CoV) – emerged, causing similar symptoms such as pneumonia. Starting in 2002, SARS-CoV spread
to over two dozen countries, with a total of 774 of the 8,098 people infected dying of the virus (“SARS Basics Fact Sheet,” n.d.). In 2012, outbreaks of MERS-CoV followed, with 3 to 4 out of every 10 infected patients dying of the virus (“About MERS,” n.d.). The spread of SARS-CoV-2, however, has been unprecedented (Melenotte et al., 2020). SARS-CoV-2 has proven to be highly infectious, transmissible even during the incubation period, when the infected have no symptoms and a low viral load. To enter and infect human host cells, SARS-CoV-2 binds to angiotensin-converting enzyme 2 (ACE2) receptors on cells (Walls et al., 2020). ACE2, a central enzyme in the renin-angiotensin-aldosterone system (RAAS), is a transmembrane glycoprotein expressed in most tissues (Amirfakhryan & Safari, 2020). The spike protein on the surface of SARS-CoV-2 mediates the binding between the virus and the host cell membrane. The spike protein has two domains: the S1 domain, which participates in receptor binding, and the S2 domain, which acts to help fuse the viral and cellular membranes. Researchers discovered that SARS-CoV-2 has a furin cleavage site, meaning that furin, an enzyme found in the host cell, can cleave the spike protein at the S1/S2 boundary. This cleavage site distinguishes SARS-CoV-2 from SARS-CoV-1 and may explain the novel virus’ increased capacity for infection and disease transmission, though this has not been confirmed (Walls et al., 2020). Once the virus has entered the cell, its genome is replicated and transcribed by host cell machinery. New virus is then released from the cell to repeat the process of infection (He et al., 2020). Since SARS-CoV-2 binds to the ACE2 receptor, the virus can infect endothelial cells of many tissues, such as the lungs, heart, and kidneys (Uday, 2020). The incubation period follows the initial infection of the patient and lasts up to two weeks (Melenotte et al., 2020). Once SARS-CoV-2 has begun infecting cells, an immune response is triggered. The trajectory of the disease can be described in three stages.
Figure 1: Illustration of the SARS-CoV-2 virus and its structure, the spike protein, and the ACE2–spike protein complex. The ACE2 receptor is the point of contact SARS-CoV-2 uses to enter host cells. Source: Wikimedia Commons
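The fatality figures cited above can be turned into rough case fatality rates. A minimal sketch in Python, using only the numbers quoted in the text (the labels and rounding are my own; case fatality rate is a crude measure that ignores undetected infections and the lag between case counts and deaths):

```python
# Case fatality rate (CFR) = deaths / reported cases, from figures in the text.
figures = {
    "SARS-CoV (2002 outbreak)": (774, 8_098),                 # deaths, reported cases
    "COVID-19 (as of March 2021)": (2_600_000, 121_000_000),
}

for name, (deaths, cases) in figures.items():
    cfr = 100 * deaths / cases
    print(f"{name}: {cfr:.1f}% of reported cases were fatal")
```

Run as written, this yields roughly 9.6% for SARS-CoV and 2.1% for COVID-19, illustrating that SARS-CoV-2's unprecedented toll comes from its transmissibility rather than a higher per-case fatality rate.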
The first stage is the early innate immune response to the virus, in which the first clinical symptoms – predominantly pulmonary – manifest as coughs, fevers, and other flu-like symptoms (Melenotte et al., 2020). One study of 1,099 patients in China reported that 43.8% of patients presented with fever upon admission and 88.7% developed fever during hospitalization; 67.8% of patients in the same study presented with cough (Guan et al., 2020).
"The spike protein has two domains: the S1 domain, which participates in receptor binding, and the S2 domain, which acts to help fuse the viral and cellular membranes."
Many individuals with coronavirus do not progress beyond the first stage, but some exhibit more severe symptoms. The second stage is the inflammatory phase; it is characterized by symptoms such as shortness of breath and hypoxia due to immune-induced inflammation of the respiratory system. Inflammatory markers can also be found in various parts of the body, such as the heart, blood vessels, and kidneys. The third stage is the hyperinflammatory phase, marked by a sudden deterioration in the patient's condition. It is usually caused when white blood cells begin releasing a large quantity of pro-inflammatory molecules called cytokines into the body. This phenomenon is often referred to as a
“cytokine storm”; the patient may go into acute respiratory distress, cardiac failure, or multi-organ dysfunction, and exhibit other severe complications. Depending on the patient's immune response and level of inflammation, complications at this stage may be fatal (Dhakal et al., 2020).
Figure 2: A chest X-ray of a patient with pneumonia due to COVID-19. The X-ray is blurry in the regions where the lungs are, most likely indicating fluid build-up as a complication of SARS-CoV-2 infection. Source: Wikimedia Commons
In one study, patients infected with SARS-CoV-2 were found to have elevated levels of monocytes, which secrete a wide range of cytokines, including type I interferons (IFN). In severe COVID-19 cases, physicians observed a reduced type I IFN response, and when they administered type I interferon to patients, the length of time SARS-CoV-2 remained detectable in the upper respiratory tract was reduced. Scientists are currently investigating the potential of type I IFN therapies for treating COVID-19 patients. Paradoxically, in some cases the patient may develop a stronger immune response to the virus in the third phase, causing organ injury and worsening the patient's condition (Melenotte et al., 2020).
"COVID-19 patients with severe pulmonary complications usually produce a detectable amount of sputum and present with shortness of breath as well as bilateral ground-glass opacities."
In the last weeks of 2020 – roughly a year after the virus first appeared – multiple countries, including the United States and the United Kingdom, approved vaccines and began distributing them. These vaccines provide a light at the end of the tunnel; however, as major vaccine suppliers Pfizer, Moderna, AstraZeneca, and others work to supply the world, the virus continues to take lives and disrupt many more. Furthermore, scientists, physicians, and recovering COVID-19 patients are already drawing attention to potential long-term impacts of the virus on the body.
COVID-19 and the Lungs The ACE2 receptor is found on the surface of ciliated bronchial epithelial cells in the bronchial passages of the lung (Itoh et al., 2020). It is also found on the surface of alveolar type II cells, facing the lumen of the alveoli – the small, terminal air sacs where carbon dioxide and oxygen are exchanged (Gavriatopoulou et al., 2020). The virus’ mechanism of infection via the ACE2 receptor can therefore explain its transmissibility and prevalent impact on the lungs, and it is no surprise that physicians have found pulmonary symptoms to be the most common in COVID-19 patients, ranging from coughing to lethal pneumonia or acute respiratory distress syndrome (ARDS) (Bansal, 2020).
Physicians and scientists have set forth different categories to classify the pulmonary complications seen in COVID-19 patients. Specifically, physicians have categorized COVID-19 patients as having pneumonia with or without ARDS. According to the Berlin Definition, ARDS is characterized by hypoxemia (low blood oxygen levels) and bilateral pulmonary infiltrates (abnormal substances that have accumulated in both lungs), most commonly due to pulmonary edema. ARDS is classified as a respiratory syndrome in patients without cardiac complications such as cardiac failure (Huppert et al., 2020). The main mechanism that causes ARDS is increased permeability of the alveolar epithelium and lung endothelium. In a study in China of 138 hospitalized COVID-19 patients, 19.6% presented with ARDS (Wang et al., 2020). Physicians have also distinguished patients with ARDS alone from those with ARDS and an abnormal set of symptoms, such as the unusual presentation of hypoxia called “silent hypoxia” (Gavriatopoulou et al., 2020). Pneumonia can be caused by many pathogens, including bacteria, viruses such as SARS-CoV-2, and fungi. The most common symptoms of pneumonia are fever, cough, chest pain, and hypoxia. In more severe cases, patients may suffer from ARDS, organ damage, and sepsis (“Pneumonia,” n.d.). The subsequent inflammatory response causes the alveoli to fill with fluid, which may cause difficulty breathing. Because the virus commonly infects the lungs, COVID-19 was initially named novel coronavirus-infected pneumonia. Regardless of the nomenclature, COVID-19 patients with severe pulmonary complications usually produce a detectable amount of sputum and present with shortness of breath as well as bilateral ground-glass opacities. According to another study in China, 33.7% of
patients presented with sputum production, and 18.7% had shortness of breath. The same study reported that out of 975 chest CT scans, 86.2% had abnormal results: 56.4% showed ground-glass opacities, and 51.8% had bilateral patchy shadowing (Guan et al., 2020). Ground-glass opacities appear as hazy gray regions in chest CT scans and can arise for various reasons; in COVID-19 patients, they most likely reflect accumulated fluid. The mechanism behind these symptoms lies in the inflammatory response in the lungs. SARS-CoV-2 infection of host cells initiates a systemic inflammatory response in the body in order to eliminate the virus. However, excess or delayed inflammation causes the lungs to fill with fluid, contributing to COVID-19 pneumonia. Furthermore, ACE2 plays an important role in the RAAS, which is involved in regulating blood pressure, oxidative stress, and inflammation. ACE2's usual substrate is angiotensin II, which it converts to angiotensin-(1-7). Conversely, the transmembrane angiotensin-converting enzyme (ACE) converts angiotensin I into angiotensin II (Amirfakhryan & Safari, 2020). Angiotensin II then binds to either angiotensin II receptor type 1 (AT1R) or angiotensin II receptor type 2 (AT2R). On one hand, angiotensin II's binding to AT1R triggers a cascade pathway that promotes vasoconstriction, elevated blood pressure, and inflammation. On the other hand, the binding of angiotensin II to AT2R leads to the opposite: vasodilation, lower blood pressure, and reduced inflammation (Amirfakhryan & Safari, 2020). ACE2 can therefore indirectly regulate ACE activity by reducing angiotensin II levels. When viruses like SARS-CoV-2 bind to ACE2 and infect host cells, angiotensin II accumulates in the body, dysregulating the RAAS and promoting further inflammation (Gao et al., 2020). Researchers have shown that high angiotensin II levels and low ACE2 expression can cause acute lung injury (Imai et al., 2005). Scientists have been testing the possibility of increasing ACE2 activity in the body to attenuate inflammation and decrease damage to tissues (Hrenak & Simko, 2020).
Figure 3: A chest CT scan of a patient with pneumonia due to COVID-19. Haziness in the image (red arrows) shows pulmonary edema due to COVID-19 pneumonia. Source: Wikimedia Commons
In severe inflammatory responses, white blood cells are found at high levels in the lungs. Neutrophils, a type of white blood cell, can disrupt endothelial cells in the lung that are connected by VE-cadherin junctions, thus increasing lung endothelial permeability (Huppert et al., 2019). In healthy lungs, the lymphatic system drains a certain amount of fluid from the interstitium, the region between the alveoli and pulmonary vessels. With increased permeability, however, fluid can leak from the capillaries into the lung interstitium at a faster rate than normal. The disrupted capillary endothelial barrier leads to interstitial edema in some COVID-19 cases, and in cases where the amount of fluid is overwhelming and leaks across the alveolar epithelial barrier, alveolar edema can develop (Zemans & Matthay, 2004). Diffuse alveolar damage, characterized by increasing alveolar epithelial permeability, is another way alveolar edema can accumulate: inflammatory molecules, such as those secreted by white blood cells and other immune cells, can interfere with E-cadherin bonds and consequently compromise the alveolar blood-air epithelial barrier. In healthy lungs, active ion transport across the alveolar epithelium to the interstitium allows fluid to clear, optimizing gas exchange across the alveoli. Ions are transported across the alveolar epithelium to create an osmotic gradient, which fluid follows out of the alveolus (Huppert et al., 2019). Sodium ions are actively transported across the alveolar epithelium via the epithelial sodium channel and the Na+/K+-ATPase pump (Matthay et al., 2002). Chloride ions are pumped out of the alveoli primarily through the cystic fibrosis transmembrane conductance regulator (CFTR). Water is transported out of the alveoli through aquaporin channels.
However, when the lung endothelium and epithelium are compromised, as in patients with severe COVID-19 pneumonia or ARDS, the alveolar epithelium is damaged and the lungs' ability to clear fluid is impaired (Huppert et al., 2019). As a result, permeability increases, allowing fluid from the capillaries to “leak” into the alveoli (Huppert et al., 2019).
"SARS-CoV-2 infection of host cells initiates a systemic inflammatory response in the body in order to eliminate the virus. However, excess or delayed inflammation causes the lungs to fill with fluid, contributing to COVID-19 pneumonia."
Figure 4: A flow chart showing the renin-angiotensin-aldosterone system (RAAS). SARS-CoV-2 binds to ACE2 receptors, preventing normal ACE2 activity and dysregulating the RAAS. Source: Wikimedia Commons
"Unlike the pulmonary complications of COVID-19, which researchers have studied more extensively, the pathogenesis and mechanisms underlying the cardiovascular complications induced by COVID-19 remain unclear."
Patients whose lungs have increased endothelial and epithelial permeability can develop non-cardiogenic pulmonary edema (Huppert et al., 2019). This condition is caused by excess fluid in the alveoli and interstitium of the lungs and is characterized by dyspnea, or difficulty breathing (Mayo Clinic Staff, 2020). Non-cardiogenic pulmonary edema is pulmonary edema that does not arise from cardiovascular disease; in COVID-19 patients, it can develop as a result of acute respiratory distress syndrome, pulmonary embolism, and viral infection (Mayo Clinic Staff, 2020). Patients with ARDS, and most cases of pneumonia, have hypoxemia (low blood oxygen levels) and difficulty breathing due to impaired gas exchange in the alveoli and pulmonary edema. Respiratory failure may also occur and, in the most severe cases, the patient may go into multi-organ failure and potentially die as a result (Gavriatopoulou et al., 2020). Some COVID-19 patients have presented with abnormal symptoms, including “silent hypoxia” or “happy hypoxia.” Strangely, patients with “silent hypoxia” do not experience the breathlessness that usually accompanies hypoxia, which can make their condition seem less serious to physicians. COVID-19 patients with this abnormal presentation of hypoxia often have pneumonia as well (Gavriatopoulou et al., 2020). Physicians have suggested that silent hypoxia may occur in patients with hypoxia accompanied by hypocapnia – decreased levels of carbon dioxide in the blood, most often caused by hyperventilation – which suppresses the outward breathlessness that usually signals hypoxia (Ottestad et al., 2020). Because of this unusual presentation, many patients, especially early
in the pandemic, were not diagnosed and treated until more severe symptoms arose. This ultimately prompted scientists to diagnose patients with pneumonia and distinguish them as having or not having ARDS (Gattinoni et al., 2020).
COVID-19 Cardiac Complications Patients also frequently develop cardiovascular disease when infected with SARS-CoV-2. However, unlike the pulmonary complications of COVID-19, which researchers have studied more extensively, the pathogenesis and mechanisms underlying the cardiovascular complications induced by COVID-19 remain unclear. A number of case studies still puzzle researchers. For example, a patient with COVID-19 and ARDS died of sudden cardiac arrest after hospitalization, yet his autopsy did not reveal any substantial damage to the heart tissue (Xu et al., 2020). The most common cardiovascular symptom in COVID-19 patients is acute cardiac injury, with an incidence of about 8-12% of all COVID-19 patients (Bansal, 2020). Scientists have used different standards to diagnose myocardial injury, but the condition is generally characterized by elevated cardiac troponin levels: acute cardiac or myocardial injury is commonly diagnosed when the patient exhibits cardiac troponin I levels above the 99th percentile upper reference limit (Thygesen et al., 2018). Among the first 41 patients diagnosed with COVID-19 in Wuhan, 5 (about 12.2%) had elevated troponin levels (Zheng et al., 2020). Another study in Wuhan found elevated levels of troponin I in 24 (17%) of 145 cases (Zhou et al., 2020). Cardiac symptoms can also serve as a predictive indicator of patient outcomes. Patients treated in the intensive care unit had
significantly higher levels of myocardial injury biomarkers such as troponin I and creatine kinase (Zhou et al., 2020). It should be noted, though, that these statistics can only estimate the true number of cases due to the novel nature of the disease, the differing diagnostic standards used between reports, and the overlap between acute cardiac injury and other cardiovascular symptoms.

Researchers have proposed various mechanisms to explain the cardiovascular complications in COVID-19 patients based on lab work, autopsy reports, and prior knowledge of the heart. The main proposed mechanisms are direct infection of the heart via ACE2, systemic inflammation and cytokine storm that disrupt heart function, and COVID-19-induced hypoxemia resulting from severe pulmonary complications (Zhou et al., 2020). Direct myocardial infection by SARS-CoV-2 would explain symptoms attributed only to myocarditis. According to the Dallas Criteria, myocarditis or fulminant myocarditis can be diagnosed if the patient presents with myocardial injury or infarction with an inflammatory infiltrate next to the damaged tissue (Aretz, 1987). In less technical terms, myocarditis occurs when a virus, in this case SARS-CoV-2, directly infects the myocardium, or heart muscle, and causes immune-induced inflammation of the heart (Wang et al., 2020). Myocarditis is associated with a wide range of symptoms such as heart failure, myocardial failure, and arrhythmia (Cooper, 2009). Direct SARS-CoV-2 infection of the heart is certainly plausible given the high level of ACE2 in the heart under normal circumstances (Bansal, 2020). ACE2 is known to be expressed in the epithelium of blood vessels in the heart (Donoghue et al., 2000) as well as in the endocardial cells of the heart (AbdelMassih et al., 2020).
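The troponin-based criterion for acute myocardial injury cited above (troponin I above the 99th-percentile upper reference limit, per Thygesen et al., 2018) can be sketched in code. This is an illustrative sketch only, not clinical guidance; the reference cohort, values, and function names are hypothetical.

```python
# Illustrative sketch (not clinical guidance): flag acute myocardial injury
# when cardiac troponin I exceeds the 99th-percentile upper reference limit
# (URL) of a healthy reference population. All values here are made up.

def upper_reference_limit(reference_values, percentile=99):
    """Nearest-rank percentile of a healthy reference cohort."""
    ordered = sorted(reference_values)
    rank = max(1, round(percentile / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

def is_acute_myocardial_injury(troponin_i, url):
    """Troponin I above the 99th-percentile URL suggests myocardial injury."""
    return troponin_i > url

# Hypothetical reference cohort of troponin I values (ng/L)
reference = [2, 3, 3, 4, 4, 5, 5, 6, 7, 9]
url = upper_reference_limit(reference)
```

In practice the URL is assay-specific and published by the assay manufacturer; the nearest-rank computation above is only one of several percentile conventions.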
In an autopsy report of a COVID-19 patient who suffered from cardiogenic shock and ultimately died of septic shock, SARS-CoV-2 particles were found in the myocardium of the patient, suggesting that the virus can infect cardiomyocytes (Tavazzi et al., 2020). However, the researchers could not confirm whether the virus infected myocardium directly or an infected macrophage migrated from the lungs to the heart. Similar to when the virus directly infects the lungs, direct infection of the heart muscle cells would reduce available ACE2 enzymes,
thus dysregulating the RAAS and increasing angiotensin II levels. Dysregulation of the RAAS has been associated with cardiovascular disease (Der Sarkissian et al., 2008). Researchers have long known the important role ACE2 plays in regulating heart function (Crackower et al., 2002). In an experiment that deleted ACE2 in mice, the mice presented with increased levels of angiotensin II, reduced cardiac function, and cardiac hypertrophy (Yamamoto et al., 2006). In another study, researchers demonstrated that ACE2 expression protects heart function and attenuates cardiac dysfunction in mice already presenting with myocardial ischemia (Der Sarkissian et al., 2008). ACE2 has been shown to protect cardiomyocytes as well as endothelial cells (AbdelMassih et al., 2020). Therefore, if SARS-CoV-2 directly infects a patient's myocardium and reduces the number of functioning ACE2 enzymes in the heart, the risk for cardiovascular disease may increase. Direct myocardial infection by SARS-CoV-2 would also explain cases where patients' first clinical symptoms of COVID-19 were cardiovascular in nature rather than the typical fever or cough (Wang et al., 2020). For example, some COVID-19 patients have reported heart palpitations and chest tightness as their first symptoms when consulting a doctor (Zheng et al., 2020).
In contrast to direct infection of the myocardium, systemic inflammation and "cytokine storms" can also inflict myocardial injury and may help explain the cardiovascular symptoms in some COVID-19 patients (Wang et al., 2020). In this model, inflammatory molecules such as cytokines infiltrate the heart, disrupting heart function and causing myocardial injury and even, potentially, heart attack (Adeghate et al., 2020). This mechanism can explain cases where cardiovascular complications occur concurrently with those in other systems such as the pulmonary or renal system. For example, in patients with ARDS or severe pneumonia, hypoxemia may induce myocardial injury (Esakandari et al., 2020). Inflammation can also increase the metabolic needs of tissues, including myocardial cells (Gavriatopoulou et al., 2020). Since myocarditis includes a wide range of symptoms, scientists have struggled to determine whether patients suffered from direct myocardial infection or from cardiovascular manifestations of systemic inflammation. Regardless of the underlying mechanism, the resulting inflammation
upon infection can cause a wide range of cardiovascular symptoms and diseases. Researchers have shown that high levels of cytokines such as interleukin-1 and interleukin-6 can damage cardiomyocytes and endothelial cells (AbdelMassih et al., 2020). Damage to endothelial cells, namely blood vessels, can impede coronary artery perfusion and cause myocardial injury, one of the most common cardiovascular complications in COVID-19 patients, in addition to cardiac arrhythmias, heart failure, and cardiogenic shock (Dhakal et al., 2020). One study of 191 COVID-19 patients found that 23% presented with heart failure. Another study in China including 138 COVID-19 patients reported that 16.7% experienced arrhythmia and 7.2% had acute cardiac injury (Wang et al., 2020).
The virus can also induce cardiac damage in patients without any underlying cardiac conditions: 11.8% of COVID-19 deaths reported by the National Health Commission of China were patients who had substantial heart damage without any underlying cardiovascular disease before hospitalization (Zheng et al., 2020). At the same time, pre-existing cardiovascular disease can predispose patients to more severe COVID-19 symptoms upon infection (Zheng et al., 2020). For example, in a study involving COVID-19 patients in China, 25% of the patients in the ICU had pre-existing cardiovascular conditions (Wang et al., 2020). In another study, 52% of deceased patients had heart failure prior to acquiring the virus, while only 12% of surviving patients did (Zhou et al., 2020). Accordingly, patients with pre-existing cardiovascular diseases have a higher mortality rate than those without (Esakandari et al., 2020).

In addition to virus-induced cardiac complications, physicians have found that drugs used to treat COVID-19 can have unintended cardiovascular effects. For example, chloroquine and hydroxychloroquine have both been shown to prolong the QTc interval, the interval of time over which the heart contracts and relaxes, and to induce torsade de pointes (an abnormal heart rhythm) in some cases (Dhakal et al., 2020). Both QTc interval prolongation and torsade de pointes increase a patient's risk of death due to cardiac symptoms (Giudicessi et al., 2020). Other treatments such as the antibiotic azithromycin and the drug lopinavir have been shown to have the same effects, as well as additional complications such as bradycardia and PR interval prolongation (Dhakal et al., 2020).
Future Questions

Due to the novelty of the SARS-CoV-2 virus, researchers still have many questions about the resulting symptoms found in patients. The underlying mechanisms of the cardiovascular manifestations in COVID-19 patients remain unclear, and many researchers have begun laboratory investigations to better elucidate the mechanisms behind these cardiac complications. Understandably, many scientists have focused on the pathogenesis of COVID-19 and the comorbidities that predispose patients toward one outcome or set of symptoms. However, few researchers have studied the long-term effects of contracting SARS-CoV-2 on the lungs or heart, perhaps due to the lack of long-term data on patients who have survived.

Generally, researchers expect that the acute phase of the disease, during which the patient's immune system either clears the virus or does not, lasts about 21 days. After the acute phase, the convalescent (recovery) and potentially the chronic stages of the disease follow. Specifically, the convalescent stage is the period during which the patient recovers from the infection. Patients may recover fully or enter the chronic stage, in which they experience a persistent set of symptoms due to the infection. The timing of these stages is still unknown. A few preliminary reports, however, give scientists a glimpse of what may await recovering patients. In one case, one week after her respiratory symptoms subsided, a woman in the convalescent phase developed fulminant myocarditis (Akhmerov & Marbán, 2020). This case suggests that inflammatory molecules can persist, even when the virus has been successfully eliminated, and continue to wreak havoc in the body. The symptoms typical of these two phases cannot be determined at this time, however, because of the lack of data beyond these initial reports.
Some scientists have looked to previous outbreaks of viruses similar to SARS-CoV-2, such as SARS-CoV-1 (the SARS virus), in order to gain insight into the implications of COVID-19 for surviving patients. For example, some survivors of the SARS-CoV-1 epidemic developed pulmonary and cardiovascular symptoms such as cardiac necrosis and pulmonary fibrosis, as well as an increased risk for cardiovascular disease. Reduced pulmonary function was also reported in 27% of SARS-CoV-1 survivors up to 6 months after hospital discharge (Ahmed et al., 2020). Longitudinal or cross-sectional studies on survivors of viral outbreaks and
their symptoms, however, are rare (Xiong et al., 2020). Other scientists have looked at the long-term effects of common COVID-19 symptoms such as pneumonia to predict the long-term prognosis of COVID-19 survivors. For example, researchers found that hospitalization for pneumonia was associated with increased cardiovascular disease risk: 34.9% of patients had at least one cardiovascular event in the ten years after hospitalization (Corrales-Medina et al., 2015).
The spread of SARS-CoV-2 has had devastating consequences for millions of people around the globe, and scientists have worked rapidly to understand the virus and its effect on the human body to help COVID-19 patients. The mechanism behind SARS-CoV-2 infection of the lungs has been well-researched in an attempt to understand the common respiratory symptoms in COVID-19 patients. However, non-respiratory symptoms have not been as closely investigated, including the sometimes inexplicable cardiovascular complications of COVID-19. Furthermore, the long-term consequences of contracting COVID-19 are still not clear, but as more time passes, researchers will likely gain more insight.

References

AbdelMassih, A., Ramzy, D., Nathan, L., Aziz, S., Ashraf, M., Youssef, N., . . . Agha, H. (2020). Possible molecular and paracrine involvement underlying the pathogenesis of COVID-19 cardiovascular complications. Cardiovascular Endocrinology & Metabolism, 121-124.

About MERS. (n.d.). Retrieved from Centers for Disease Control and Prevention: https://www.cdc.gov/coronavirus/mers/about/index.html

Adeghate, E., Eid, N., & Singh, J. (2020). Mechanisms of COVID-19-induced heart failure: a short review. Heart Failure Review.

Ahmed, H., Patel, K., Greenwood, A., Halpin, S., Lewthwaite, P., Salawu, A., . . . Sivan, M. (2020). Long-term clinical outcomes in survivors of severe acute respiratory syndrome and Middle East respiratory syndrome coronavirus outbreaks after hospitalisation or ICU admission: A systematic review and meta-analysis. Journal of Rehabilitation Medicine.

Akhmerov, A., & Marbán, E. (2020). COVID-19 and the Heart. Circulation Research, 1443-1455.

Amirfakhryan, H., & Safari, F. (2020). Outbreak of SARS-CoV2: Pathogenesis of infection and cardiovascular involvement. Hellenike Kardiologike Epitheorese.

Archived: WHO Timeline - COVID-19. (2020, April 27). Retrieved from World Health Organization: https://www.who.int/news/item/27-04-2020-who-timeline---covid-19

Aretz, H. T. (1987). Myocarditis: The Dallas Criteria. Human Pathology, 619-624.

Bansal, M. (2020). Cardiovascular disease and COVID-19. Diabetes & Metabolic Syndrome, 247-250.

Cooper, L. T. (2009). Myocarditis. The New England Journal of Medicine, 1526-1538.

Corrales-Medina, V., Alvarez, K., Weissfeld, L., Angus, D., Chirinos, J., Chang, C.-C., . . . Yende, S. (2015). Association between hospitalization for pneumonia and subsequent risk of cardiovascular disease. JAMA, 264-274.

COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins (JHU). (n.d.). Retrieved from Johns Hopkins: https://coronavirus.jhu.edu/map.html

Crackower, M., Sarao, R., Oudit, G., Kozieradzki, I., Scanga, S., Oliveira-dos-Santos, A., . . . Penninger, J. (2002). Angiotensin-converting enzyme 2 is an essential regulator of heart function. Nature, 822-828.

Der Sarkissian, S., Grobe, J., Yuan, L., Narielwala, D., Walter, G., Katovich, M., & Raizada, M. (2008). Cardiac Overexpression of Angiotensin Converting Enzyme 2 Protects the Heart From Ischemia-Induced Pathophysiology. Hypertension, 712-718.

Dhakal, B., Sweitzer, N., Indik, J., Acharya, D., & William, P. (2020). SARS-CoV-2 Infection and Cardiovascular Disease: COVID-19 Heart. Heart, Lung & Circulation, 973-987.

Diamond, M., Peniston Feliciano, H., Sanghavi, D., & Mahapatra, S. (2020). Acute Respiratory Distress Syndrome. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing.

Donoghue, M., Hsieh, F., Baronas, E., Godbout, K., Gosselin, M., Stagliano, N., . . . Acton, S. (2000). A Novel Angiotensin-Converting Enzyme–Related Carboxypeptidase (ACE2) Converts Angiotensin I to Angiotensin 1-9. Circulation Research.

Esakandari, H., Nabi-Afjada, M., Fakkari-Afjadi, J., Farahmandian, N., Miresmaeili, S.-M., & Bahreini, E. (2020). A comprehensive review of COVID-19 characteristics. Biological Procedures Online.

Gao, Y.-L., Du, Y., Zhang, C., Cheng, C., Yang, H.-Y., Jin, Y.-F., . . . Chen, S.-Y. (2020). Role of Renin-Angiotensin System in Acute Lung Injury Caused by Viral Infection. Infection and Drug Resistance, 3715-3725.

Gattinoni, L., Chiumello, D., & Rossi, S. (2020). COVID-19 pneumonia: ARDS or not? Critical Care, 154.

Gavriatopoulou, M., Korompoki, E., Fotiou, D., Ntanasis-Stathopoulos, I., Psaltopoulou, T., Kastritis, E., . . . Dimopoulos, M. (2020). Organ-specific manifestations of COVID-19 infection. Clinical and Experimental Medicine, 493-506.

Giudicessi, J., Noseworthy, P., Friedman, P., & Ackerman, M. (2020). Urgent Guidance for Navigating and Circumventing the QTc-Prolonging and Torsadogenic Potential of Possible Pharmacotherapies for Coronavirus Disease 19 (COVID-19). Mayo Clinic Proceedings, 1213-1221.

Guan, W.-j., Ni, Z.-y., Hu, Y., Liang, W.-h., Ou, C.-q., He, J.-x., . . . Xiang, J. (2020). Clinical Characteristics of Coronavirus Disease 2019 in China. New England Journal of Medicine, 1708-1729.

He, F., Deng, Y., & Li, W. (2020). Coronavirus disease 2019: What we know? Journal of Medical Virology, 1-7.

Hrenak, J., & Simko, F. (2020). Renin-Angiotensin System: An Important Player in the Pathogenesis of Acute Respiratory Distress Syndrome. International Journal of Molecular Sciences.

Huppert, L., Matthay, M., & Ware, L. (2019). Pathogenesis of Acute Respiratory Distress Syndrome. Seminars in Respiratory and Critical Care Medicine, 31-39.

Imai, Y., Kuba, K., Rao, S., Huan, Y., Guo, F., Guan, B., . . . Penninger, J. (2005). Angiotensin-converting enzyme 2 protects from severe acute lung failure. Nature, 112-116.

Itoh, N., Yufika, A., Winardi, W., Keam, S., Te, H., Megawati, D., . . . Harapan, H. (2020). Coronavirus disease 2019 (COVID-19): A literature review. Journal of Infection and Public Health, 667-673.

Matthay, M., Folkesson, H., & Clerici, C. (2002). Lung Epithelial Fluid Transport and the Resolution of Pulmonary Edema. Physiological Reviews, 569-600.

Mayo Clinic Staff. (2020, October 20). Pulmonary edema. Retrieved from Mayo Clinic: https://www.mayoclinic.org/diseases-conditions/pulmonary-edema/symptoms-causes/syc-20377009

Melenotte, C., Silvin, A., Goubet, A.-G., Lahmar, I., Dubuisson, A., Zumla, A., . . . Ippolito, G. (2020). Immune responses during COVID-19 infection. Oncoimmunology.

Ottestad, W., Seim, M., & Otto Mæhlen, J. (2020). COVID-19 with silent hypoxemia. Tidsskriftet.

Pneumonia. (n.d.). Retrieved from National Heart, Lung, and Blood Institute: https://www.nhlbi.nih.gov/health-topics/pneumonia

SARS Basics Fact Sheet. (n.d.). Retrieved from Centers for Disease Control and Prevention: https://www.cdc.gov/sars/about/fs-sars.html

Tavazzi, G., Pellegrini, C., Maurelli, M., Belliato, M., Sciutti, F., Bottazzi, A., . . . Antonio Iotti, G. (2020). Myocardial localization of coronavirus in COVID-19 cardiogenic shock. European Journal of Heart Failure, 911-915.

Thygesen, K., Alpert, J., Jaffe, A., Chaitman, B., Bax, J., Morrow, D., & White, H. (2018). Fourth Universal Definition of Myocardial Infarction (2018). Circulation, 618-651.

Uday, J. (2020). Effect of COVID-19 on the Organs. Cureus.

Walls, A., Park, Y.-J., Tortorici, M., Wall, A., McGuire, A., & Veesler, D. (2020). Structure, Function, and Antigenicity of the SARS-CoV-2 Spike Glycoprotein. Cell, 281-292.

Wang, D., Han, Y., Lewis, D., Wu, J., & Nishiga, M. (2020). COVID-19 and cardiovascular disease: from basic mechanisms to clinical perspectives. Nature Public Health Emergency Collection, 1-16.

Wang, D., Hu, B., & Hu, C. (2020). Clinical Characteristics of 138 Hospitalized Patients With 2019 Novel Coronavirus–Infected Pneumonia in Wuhan, China. JAMA, 1061-1069.

Xiong, T.-Y., Redwood, S., Prendergast, B., & Chen, M. (2020). Coronaviruses and the cardiovascular system: acute and long-term implications. European Heart Journal, 1798-1800.

Xu, Z., Shi, L., Wang, Y., Zhang, J., Huang, L., Zhang, C., . . . Wang, F.-S. (2020). Pathological findings of COVID-19 associated with acute respiratory distress syndrome. The Lancet Respiratory Medicine, 420-422.

Yamamoto, K., Ohishi, M., Katsuya, T., Ito, N., Ikushima, M., Kaibe, M., . . . Ogihara, T. (2006). Deletion of Angiotensin-Converting Enzyme 2 Accelerates Pressure Overload-Induced Cardiac Dysfunction by Increasing Local Angiotensin II. Hypertension, 718-726.

Zemans, R., & Matthay, M. (2004). Bench-to-bedside review: The role of the alveolar epithelium in the resolution of pulmonary edema in acute lung injury. Critical Care (London, England), 469-477.

Zheng, Y.-Y., Ma, Y.-T., Zhang, J.-Y., & Xie, X. (2020). COVID-19 and the cardiovascular system. Nature Reviews Cardiology, 259-260.

Zhou, F., Yu, T., Du, R., Fan, G., Liu, Y., Xiang, J., . . . Liu, Z. (2020). Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet, 1054-1062.
The Effects of Anxiety on Reaction to COVID-19 and Prevention Efforts

BY JESSICA MEIKLE '23 AND LEAH JOHNSON '23

Cover: COVID-19 has not only resulted in severe physical detriments for patients, but also severe psychological consequences among unaffected populations. Source: Pixabay
Abstract

The COVID-19 pandemic has sparked feelings of fear as well as strong opinions about COVID-19 prevention efforts. Understanding the underlying causes of responses towards COVID-19 and prevention efforts is crucial for policymakers seeking to successfully mitigate the virus. One theory revolves around anxiety. This study investigates whether anxiety affects how people feel about COVID-19 and how they behave with respect to COVID-19 prevention efforts. Specifically, we examined participants' feelings of anxiety surrounding COVID-19, as well as how comfortable they felt with COVID-19 prevention efforts, based on their level of anxiety (mild, moderate, severe) and which prime (an image) they received. We also examined the association between anxiety levels and social distancing behaviors. Participants (N = 65) were randomly assigned to receive an anxiety-inducing prime or a non-anxiety-inducing prime, followed by questions about how anxious they felt about COVID-19, how comfortable they felt around others wearing masks, and whether they still socially distance while wearing masks. We found no interaction between levels of anxiety or priming type and responses of anxiety about COVID-19 (p > .05), nor with responses about comfort with social distancing (p > .05). There was also no association between level of anxiety and whether or not participants socially distanced while wearing a mask (p > .05). The results suggest that anxiety does not affect how people feel about COVID-19 and COVID-19 prevention efforts or how they behave in response to these efforts.
Introduction

With new COVID-19 cases still being reported every day in the United States, prevention efforts such as wearing masks and social distancing are more important than ever. These prevention efforts were introduced in
the first wave of response to the virus in April 2020. Since then, mask mandates have become a very controversial, political topic among Americans, and this has affected people's behavior and beliefs about COVID-19. Information provided by the CDC and health professionals indicates that masks are important because they block respiratory droplets from the mouth or nose from spreading the virus (Haischer et al., 2020). However, not everything about this virus is known, which is why it is important to enforce mandates that are known to have some impact in mitigating the virus. Evidence has shown that countries such as Taiwan that have universal mask-wearing mandates have been able to successfully control the spread of COVID-19 and reduce COVID-related deaths (Haischer et al., 2020). In addition to wearing masks, the CDC strongly emphasizes that people need to stay 6 feet apart; in other words, masks are not a substitute for social distancing (CDC). Even with this hard evidence, the U.S. still falls short in enforcing mask mandates and social distancing efforts. As of February 19, 2021, the U.S. had recorded over 27 million confirmed cases and over 480,000 deaths (CDC). These numbers reinforce the importance of understanding how people feel about COVID-19 prevention efforts such as wearing masks. These feelings can help policymakers know what efforts to focus on when creating mask mandate proposals.

With the rising number of COVID-19 cases, there has also been a rise in research surrounding people's behavior towards COVID-19 and COVID-19 prevention efforts. A recent frequency study (a study that only gathers descriptive information about a population and reports percentages rather than testing causes) was conducted by Edward Knotek et al. in Cleveland to gather information regarding the American population's mask-wearing behaviors (Knotek et al., 2020).
This study ascertained the percentage of people wearing masks in different settings and also assessed social distancing behaviors, but the researchers did not investigate the possible causes or factors behind these behaviors. Furthermore, a recent study at the University of Cincinnati investigated the effectiveness of signs on people's hand-washing behavior (Kellaris et al., 2020). The researchers conducted an experiment comparing physical signs that provoked mortality salience (the awareness of one's own death, which causes existential anxiety) with neutral control signs, measuring each sign's influence on behavioral intentions. The Kellaris study found that signs evoking mortality salience led individuals to comply more with the signs. However, this study only investigated the effect of fear, or mortality salience specifically, on people's behavior toward COVID-19 prevention efforts, not COVID-19 in general.
These recent studies and findings prompted the theory behind the current study. In the field of psychology, it has been thoroughly shown that a part of the brain called the amygdala is involved in both fear and anxiety. This led us to question the effects of anxiety itself on COVID-19 behaviors. We theorized that behavior and reactions toward COVID-19 should correlate with anxiety levels in general and be affected by anxious priming, like the use of different signs in the Kellaris study. We conducted a replication-plus-extension study in an experimental setting, using an online questionnaire. A replication-plus-extension study involves conducting a similar study to a prior one, but with added questions or elements. In our replication-plus-extension study, we used the Knotek frequency study and added elements of an experiment and extra questions. We investigated behaviors and feelings about COVID-19 and COVID-19 prevention efforts across 3 different anxiety levels, scored on the G.A.D.-7 item scale, by subjecting individuals to different priming images. We hypothesized that people with higher levels of anxiety who received the anxiety-inducing prime would report feeling more anxious about COVID-19 compared to the other two anxiety levels and the non-anxious prime responses overall. We also hypothesized that those with higher levels of anxiety paired with the anxiety-inducing prime would feel more comfortable around others wearing face masks. Finally, we hypothesized that there would be an association between levels of anxiety and the tendency of subjects to socially distance while wearing a mask.
Design

The experiment used a 3x2 factorial approach to assign subjects. The independent variables were participants' individual levels of anxiety and prime type, which was either the anxiety-inducing COVID-related image or the non-anxiety-inducing COVID-related image. The dependent variables were the questionnaire responses: anxiety about COVID-19, comfort with others wearing masks, and social distancing behavior.
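As a hypothetical sketch, the 3x2 structure described above can be laid out in code. Anxiety level is a measured factor (from the G.A.D.-7 described later), while prime type is randomly assigned; the labels and function names below are illustrative, not taken from the study's materials.

```python
import random

# Measured factor: anxiety level from the G.A.D.-7 (not assigned by the experimenter)
ANXIETY_LEVELS = ["mild", "moderate", "severe"]
# Manipulated factor: which priming image a participant receives
PRIMES = ["anxiety-inducing", "non-anxiety-inducing"]

def assign_prime(rng=random):
    """Randomly assign a participant to one of the two prime conditions."""
    return rng.choice(PRIMES)

# The full 3x2 factorial grid of cells
cells = [(level, prime) for level in ANXIETY_LEVELS for prime in PRIMES]
assert len(cells) == 6  # three anxiety levels crossed with two primes
```

Because anxiety level is measured rather than manipulated, only the prime factor is under experimental control; participants fall into one of the six cells after their G.A.D.-7 score is known.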
Participants

There were 22 male participants, 42 female participants, and 1 gender-neutral participant from different areas of the United States. Participants' ages ranged from 18 to 48. There were 11 Pacific Islander/Asian American participants, 21 Native American, 4 African American, 35 Caucasian, and 2 other. There were 31 participants in Survey 1 (the anxiety-inducing prime) and 34 participants in Survey 2 (the non-anxiety-inducing prime). All participants took the survey voluntarily with no reward or personal benefit.
Materials
Our study was conducted and administered using the online research software Qualtrics. We randomly assigned participants to two groups: one group was given Survey 1, the anxiety-inducing prime group, and the other was given Survey 2, the non-anxiety-inducing prime (control) group. Our questionnaire included some questions developed in a prior frequency study of Americans' mask-wearing behaviors, "Consumers and COVID-19: Survey Results on Mask-Wearing Behaviors and Beliefs," conducted by Edward S. Knotek II and Raphael Schnoele with contributing authors to assess the state of mask-wearing in the United States. That study was also administered through Qualtrics and determined percentages of 'yes,' 'no,' and 'not sure' responses to questions such as "When at a store, do you feel more comfortable, less comfortable, or indifferent if other shoppers are wearing masks?" and "Does wearing a mask make you less likely to follow social-distancing guidelines?"

We also used a 7-item anxiety scale, the G.A.D.-7, to measure participants' anxiety levels. This scale was developed by Robert L. Spitzer in 2006 and has a sensitivity of 89% and a specificity of 82% for G.A.D. Note that here sensitivity and specificity refer to the accuracy of correctly identifying people with anxiety and those without, respectively. The questionnaire uses the overarching question "Over the last 2 weeks, how often have you been bothered by the following problems?" and has 7 following items, including "Feeling nervous, anxious, or on edge" and "Feeling afraid as if something awful might happen." Participants answer each item on a 4-point scale (0 = Not at all to 3 = Nearly every day). The points for each question are added up
and "scored for 5, 10, and 15 as the cut-off points for mild, moderate, and severe anxiety, respectively" (Spitzer et al., 2006). This scale has previously been validated by numerous physicians (Spitzer et al., 2006). After participants answered the G.A.D.-7 item scale, they were given a prime: either an anxiety-inducing or non-anxiety-inducing image. We used an image of a crowd of people in which only one person was wearing a mask for the anxiety-inducing prime. We used an image of a plane taking off for the neutral, or non-anxiety-inducing, prime.
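The scoring rule just described can be sketched as a small function. Aside from item wording, this mirrors the sum-and-cut-off rule from Spitzer et al. (2006); the label for sub-threshold scores ("minimal") is our own illustrative choice.

```python
def gad7_score(item_responses):
    """Sum seven G.A.D.-7 items, each rated 0 (not at all) to 3 (nearly every day)."""
    assert len(item_responses) == 7
    assert all(0 <= r <= 3 for r in item_responses)
    return sum(item_responses)

def gad7_severity(score):
    """Cut-off points of 5, 10, and 15 for mild, moderate, and severe anxiety."""
    if score >= 15:
        return "severe"
    if score >= 10:
        return "moderate"
    if score >= 5:
        return "mild"
    return "minimal"  # below the mild cut-off (illustrative label)
```

For example, a participant answering every item with 1 ("several days") totals 7 and falls in the mild band, which is how the three anxiety levels used in this study's design would be derived from raw responses.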
Images

Survey 1 included the anxiety prime image followed by a question about participants' level of anxiety about COVID-19 and another question about their level of anxiousness about the image. The anxiety prime image showed a crowd of people, some wearing masks, to indicate that it was taken during the COVID-19 pandemic; the majority of people in the image were not wearing face coverings. Survey 2 included the non-anxiety prime image of a plane followed by the same two questions, which served as our manipulation check. The manipulation check involved conducting an independent-samples t-test on the question "How anxious does this image make you feel?", rated on a 5-point scale (0 = Not at all anxious to 4 = Very anxious). The anxiety-inducing image was coded as 1 and the non-anxiety-inducing image as 2. We determined that our manipulation was effective: responses to the anxiety-inducing survey demonstrated significantly higher levels of anxiousness about the image, t(63) = -5.99, p < .001. (See Table 1 for the means and standard deviations; Figure 1 displays the pattern of means.)
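The manipulation check's independent-samples t-test can be sketched with made-up ratings; only the pooled-variance formula mirrors the analysis described (the actual data behind the reported t(63) = -5.99 are not reproduced here).

```python
from statistics import mean, variance

def independent_t(group1, group2):
    """Student's two-sample t statistic with pooled variance, df = n1 + n2 - 2."""
    n1, n2 = len(group1), len(group2)
    pooled = ((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2)) / (n1 + n2 - 2)
    t = (mean(group1) - mean(group2)) / (pooled * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2

# Hypothetical 0-4 "How anxious does this image make you feel?" ratings
anxiety_prime = [3, 4, 3, 2, 4, 3]
neutral_prime = [1, 0, 2, 1, 1, 0]
t_stat, df = independent_t(anxiety_prime, neutral_prime)
# A large |t| relative to the df indicates the two primes elicited
# reliably different anxiousness ratings, i.e. the manipulation worked.
```

The sign of the reported t depends only on which group is entered first (the negative t in the paper reflects the coding of the anxiety-inducing image as 1 and the neutral image as 2).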
Table 1 & Figure 1: Mean image scores of Image Anxiousness (0-4) split by Image Prime (Anxiety-inducing and Non-anxiety-inducing, coded Survey 1 and Survey 2 respectively). Source: Figure created by authors in Jamovi
Dependent Variables

After each prime was administered, participants answered 11 questions related to COVID-19 and COVID-19 prevention efforts, with a couple of questions containing multiple parts. A sampling of the questions, specifically those included in our data analysis, is as follows: "How anxious do you feel about COVID-19?", "When visiting an indoor public space like a store, do you feel more comfortable, less comfortable, or indifferent if other people are wearing face coverings?", and "Does wearing a mask make you less likely to adhere to proper social distancing guidelines (6 feet apart)?"

The question regarding participants' anxiousness about COVID-19 was assessed using a 5-point scale (0 = Not anxious to 4 = Very anxious). We asked participants to select the level of anxiousness that most accurately described their feelings about COVID-19 following the image prime. After appropriate recoding, higher scores on this question indicated higher levels of anxiety about COVID-19 and lower scores indicated lower levels. The question regarding participants' comfort when others are wearing face coverings was assessed using a 3-point scale (-1 = Less comfortable to 1 = More comfortable). We asked participants to select the level of
comfortability that most accurately described their feelings when others are wearing face coverings in a public space. After appropriate recoding, higher scores on this question indicated higher comfortability and lower scores indicated lower comfortability when others are wearing face coverings in public spaces. Lastly, the question regarding participants' likelihood to adhere to social distancing guidelines while wearing a face covering was assessed using two response options (Yes or No). We asked participants to select the answer that most accurately described their likelihood to adhere to social distancing guidelines while wearing a face covering. After appropriate recoding, counts of "Yes" indicated a higher likelihood to adhere to social distancing guidelines, and counts of "No" indicated a lower likelihood.
Procedure

We randomly assigned participants to Survey 1 or Survey 2 using a random number generator, and the surveys were distributed via email and link. Participants took the survey online via personal computers or personal smartphones, and every subject was asked the same questions (the only difference, of course, was the anxiety prime image in Survey 1 compared to the non-anxiety prime image in
Figure 2: Mean G.A.D. scores split by priming image type (Anxiety-inducing prime and Non-anxiety-inducing prime, coded Survey 1 and Survey 2 respectively). Source: Figure created by authors in Jamovi
Survey 2). The survey began with a series of demographic questions (gender, age, race, educational level, and state of residence), and before participants saw the priming image, they completed the G.A.D.-7. They were then shown the image and asked how anxious it made them feel, after which they answered the questions about COVID-19 anxiousness, followed by the question about comfortability, and lastly, the questions about their mask-wearing and social distancing behaviors.
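The G.A.D.-7 produces a total score from 0 to 21, and Spitzer et al. (2006) give conventional severity bands. A minimal sketch of binning totals into the mild/moderate/severe codes (1, 2, 3) used in this study follows; whether the authors used exactly these boundaries is an assumption.

```python
def gad7_category(total_score: int) -> int:
    """Bin a G.A.D.-7 total (0-21) into 1=mild, 2=moderate, 3=severe.

    Cutoffs follow the conventional Spitzer et al. (2006) bands, with
    minimal and mild scores collapsed into category 1; it is an
    assumption that this study used these exact boundaries.
    """
    if not 0 <= total_score <= 21:
        raise ValueError("G.A.D.-7 totals range from 0 to 21")
    if total_score < 10:   # 0-9: minimal/mild
        return 1
    if total_score < 15:   # 10-14: moderate
        return 2
    return 3               # 15-21: severe

print(gad7_category(8), gad7_category(12), gad7_category(17))
```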
Results

We first examined the distribution of anxiety levels in both surveys. Figure 2 shows the pattern of G.A.D. score means for each priming type (coded 1 and 2 for the anxiety-inducing prime and the control prime, respectively). Participants who received the anxiety-inducing survey had higher average GAD scores (Mean = 8.58, SD = 5.32) than those who received the non-anxiety-inducing survey (Mean = 7.00, SD = 4.97). We hypothesized that there would be different responses to COVID-19 anxiousness based on anxiety level and the two primes – that those with higher levels of anxiety in the anxiety-
inducing prime would respond feeling more anxious about COVID-19 compared to the other two levels and to the non-anxious prime responses overall. Table 2 shows the means and standard deviations of participants' anxiousness about COVID-19 separated by GAD category (coded 1, 2, 3 for mild, moderate, and severe, respectively) and survey type (coded 1 and 2 for the anxious prime and non-anxious/control prime, respectively). Participants in the mild anxiety category given the anxiety-inducing prime showed higher levels of anxiety about COVID-19 (Mean = 2.55) compared to the mild anxiety group given the control prime (Mean = 1.94). In the non-anxiety-inducing/control prime, the severe anxiety group had the highest levels of anxiety about COVID-19 (Mean = 3.00), followed by the moderate anxiety group and then the mild anxiety group. We also hypothesized that there would be different responses about comfortability around others wearing masks based on anxiety level and the use of the two primes, anticipating that those with higher levels of anxiety paired with the anxiety-inducing prime would feel more comfortable around others wearing face masks. Table 3 shows the
Table 2: Means and standard deviations of participants' anxiousness about COVID-19 separated by GAD category. Source: Table created by authors in Jamovi
Table 3: Means and standard deviations of participants’ comfortability when others are wearing face coverings separated by GAD category. Source: Table created by authors in Jamovi
Table 4: Number of yes or no responses to the question “Does wearing a mask make you less likely to socially distance?” Source: Table created by authors in Jamovi
Figure 3: Mean scores of Covid Anxiousness split by Priming type (Anxiety-inducing prime and Non-anxiety inducing prime, coded Survey 1 and Survey 2 respectively) and Anxiety level (mild, moderate, severe, coded 1,2,3 respectively). Source: Figure created by authors in Jamovi
means and standard deviations of participants' comfortability when others are wearing face coverings, separated by GAD category (coded 1, 2, 3 for mild, moderate, and severe, respectively) and survey type (coded 1 and 2 for the anxious prime and non-anxious/control prime, respectively). Participants in the severe anxiety category given the anxiety-inducing prime showed lower levels of comfortability (Mean = 0.750) than participants in the severe anxiety group given the non-anxiety-inducing prime (Mean = 1.00). Given the statistically insignificant findings for the interaction between anxiety level and anxiety-inducing primes on feelings about COVID-19 and COVID-19 prevention efforts, we further hypothesized that there was an association between anxiety level and following social distancing guidelines while wearing a mask. Table 4 shows the number of yes or no responses to the question "Does wearing a mask make you less likely to socially distance?" Figure 5 displays the counts of yes and no responses split by anxiety level. A two-way ANOVA was conducted to determine the interaction between the effects of anxiety level and priming type on participants' anxiousness about COVID-19. There was no significant main effect of anxiety level, F(2, 59) = 3.04, p = .055, and no significant main effect of priming type, F(1, 59) = 1.85, p = .179. No significant interaction was found between
anxiety level and priming type on participants' anxiousness about COVID-19, F(2, 59) = 1.03, p = .365. Figure 3 shows the pattern of means. These results indicate that responses did not differ significantly based on participants' anxiety level or their priming type (anxiety-inducing or non-anxiety-inducing). A second two-way ANOVA was conducted to determine the interaction between the effects of anxiety level and priming type on participants' comfortability when others are wearing face coverings. There was no significant main effect of anxiety level, F(2, 58) = .153, p = .858, and no significant main effect of priming type, F(1, 58) = .303, p = .584. There was no significant interaction between anxiety level and priming type on participants' comfortability when others are wearing face coverings, F(2, 58) = 1.106, p = .338. Figure 4 shows the pattern of means. These results likewise show no significant difference in responses based on participants' anxiety level or priming type. A chi-square test of independence was conducted to examine the association between anxiety level and social distancing while wearing a mask. There was no significant association between anxiety level and social distancing while wearing a mask, χ²(2, N = 65) = .281, p = .869.
Figure 4: Means of Comfortability around others wearing masks split by Priming type (Anxiety-inducing prime and Non-anxiety-inducing prime, coded Survey 1 and Survey 2 respectively) and Anxiety level (mild, moderate, severe, coded 1, 2, 3 respectively). Source: Figure created by authors in Jamovi
Discussion

We wanted to study whether feelings about COVID-19 and COVID-19 prevention efforts, as well as behaviors towards these efforts, differed based on anxiety level and the use of priming images. We hypothesized that those with higher levels of anxiety given the anxiety-inducing prime would feel more anxious about COVID-19 compared to those given the non-anxiety-inducing prime and to the other levels of anxiety. Our results did not support this hypothesis; there was no difference in anxiousness responses between the three anxiety levels (mild, moderate, or severe) nor between the two priming images. Interestingly, though, our descriptive statistics showed a difference in the mild anxiety group between the two primes. This group was more affected than the other levels by the anxiety-inducing prime, with feelings of anxiousness about COVID-19 having a higher mean than the mild anxiety group given the non-anxiety-inducing prime. While this is an interesting finding, our tests did not detect a statistically significant difference. Therefore, we cannot conclude that anxiety levels and anxious vs. non-anxious priming truly affect people's responses and feelings. We also hypothesized that those with higher levels of anxiety in the anxiety-inducing prime group would feel more comfortable around others wearing masks compared to respondents given the non-anxiety-inducing prime and the other levels of anxiety. Our results did not support this hypothesis either; there was no difference in
how comfortable people felt around others wearing masks between the three anxiety levels (mild, moderate, or severe) nor between the two priming images. We also hypothesized that there was an association between anxiety level and whether or not people follow social distancing guidelines when wearing a mask. Our results did not support this hypothesis.
Our insignificant results could be explained by some limitations in our study. Construct validity was not a major concern for the most part. The manipulation of our primes was checked using an independent-samples t-test, which showed a significant difference in anxiousness about the image (our first independent variable). We also used the G.A.D.-7 item scale to determine anxiety levels, which has good test-retest reliability. The scale also has criterion and procedural validity, which helps to establish construct validity for the second independent variable.
Our measurement of the dependent variable, anxiousness about COVID-19, had good construct validity because we asked the question directly. We also directly asked people to gauge their comfortability about masks; while this seems like good construct validity, it could have been better operationalized, or measured, from the conceptual variable “Feelings about COVID-19 prevention efforts.” It may have been the case that some of the questions were measuring people’s
Figure 5: Number of Counts (Yes or No) split by Anxiety Level (mild, moderate, severe, coded 1,2,3 respectively) Source: Figure created by authors in Jamovi
comfortability around others in general, instead of just masks. Future studies could ensure this validity by asking directly about people's opinions on mandates, such as whether they agree with them or not. Future studies could also ask another re-worded question about being around others in general, to see if the responses differ, to further ensure construct validity of this dependent variable. While we established construct validity, external and internal validity were significant limitations. Lack of internal validity could be a reason why there were no significant differences between anxiety levels. This study was not conducted in a strict laboratory setting, and therefore other factors could have affected the responses. COVID-19 revolves around health, so socioeconomic status and access to affordable healthcare could be third variables. COVID-19 hot spots and high incidence rates could also affect how people feel and act towards COVID-19 prevention efforts. Not having access to healthcare or living in a COVID-19 hot spot could be associated with higher levels of anxiety, which would be a third variable to address. This study was also conducted around the 2020 presidential election, which might have affected respondents' reported anxiety. There also could have been a selection bias: our results showed that more people who were already anxious received the anxiety-inducing survey. A future study could ensure an even representation of anxiety levels in
both priming surveys. External validity was also a limitation because, due to the circumstances, we had to use convenience sampling of college students to gather participants; therefore, there was no simple random sampling to ensure an accurate representation of the population of interest (the American population). Another limitation in establishing external validity was the small sample size. This could explain the insignificant result for the anxiety level main effect on anxiousness about COVID-19, where the p-value approached significance. A future study could address this limitation by using a simple random sampling method of the population of interest and gathering a larger sample to possibly find significant effects, even small ones. Despite these limitations, the results of our study are still important because they pose questions about what causes and motivates people to behave in certain ways towards COVID-19 and COVID-19 prevention efforts. Based on the Knotek study, which gathered the frequencies of Americans' mask-wearing habits, there are people who still choose not to wear masks or social distance; future studies are needed to address the underlying motivators of these behaviors. A replication of the mortality-salience signage study could also be done to further validate how mortality salience and feelings of anxiousness and fear influence people's behaviors. Future studies could address the potential third
factor of socioeconomic status and access to healthcare – it is also possible that people's anxious feelings about obeying COVID-19 prevention efforts become heightened if they cannot afford to recover from severe COVID-19 symptoms. Such studies could also investigate whether empathy towards those without access to proper health care could influence individual compliance with COVID-19 guidelines. It would be relevant to see if the use of pathos, something that evokes sadness or pity, in COVID-19 signs or information prompts people to comply with orders due to potential guilt about impacting the less fortunate. While a COVID-19 vaccine is slowly being distributed, understanding what will motivate people to comply with COVID-19 prevention efforts is still important. Until a 100% effective vaccine is distributed equitably, this area of research is important for policymakers to utilize when trying to mitigate the virus as much as possible for the greater good of the American people.

References

CDC: Centers for Disease Control and Prevention. (2020). Considerations for wearing masks. https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-sick/cloth-face-cover-guidance.html

Kellaris, J. J., Machleit, K. A., & Gaffney, D. R. (2020). Sign evaluation and compliance under mortality salience: Lessons from a pandemic. Interdisciplinary Journal of Signage and Wayfinding, 4(2), 51-66.

Knotek II, E. S., Schoenle, R., Dietrich, A., Muller, G., Myrseth, K. O. R., & Weber, M. (2020). Consumers and COVID-19: Survey results on mask-wearing behaviors and beliefs. Federal Reserve Bank of Cleveland. DOI: 10.26509/frbc-ec-202020

Haischer, M. H., Beilfuss, R., Rose Hart, M., Opielinski, L., Wrucke, D., Zirgaitis, G., Uhrich, T. D., & Hunter, S. K. (2020). Who is wearing a mask? Gender-, age-, and location-related differences during the COVID-19 pandemic. PLOS One. https://doi.org/10.1371/journal.pone.0240785

Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Lowe, B. (2006).
A brief measure for assessing generalized anxiety disorder: The GAD-7. Archives of Internal Medicine, 166(10), 1092-1097. doi:10.1001/archinte.166.10.1092
The Microbiota-Gut-Brain Axis May Be the Missing Link Between Autism Spectrum Disorder and Anorexia Nervosa

BY JILLIAN TROTH '24

Cover: The microbiota-gut-brain axis is a bidirectional pathway of communication between the enteric and central nervous systems. Shared abnormalities in MGB axis signaling may explain the comorbid overlaps between ASD and AN. Source: created by the author via BioRender
Introduction

The etiological paradigm for neurological disorders is currently shifting to one centered less on the brain and more on the gut. High frequencies of gastrointestinal and immune comorbidities in patients with conditions such as depression, autism, and schizophrenia imply a role for the microbiota-gut-brain (MGB) axis (a nervous, immune, and endocrine communication system) in the pathogenesis of various mental disorders. Frequent overlaps amongst neurodevelopmental (autism), mood (anxiety and depression), feeding (anorexia nervosa), autoimmune, social impairment, and sensory processing disorders might be explained by a shared pathogenesis related to MGB axis dysfunction. While the exact role of the MGB axis in such disorders is uncertain, multiple hypotheses have been proposed. Identifying commonalities between MGB axis hypotheses about various neurological,
immune, and psychiatric conditions may lead to the best pathophysiological explanations for these disorders. For example, autism spectrum disorder (ASD) and anorexia nervosa (AN) often co-occur and share multiple comorbid symptoms. This article proposes that this overlap might be explained by shared MGB axis-related etiologies. Furthermore, these possible shared mechanisms for ASD and AN may inform next steps for research efforts pertaining to these disorders.
MGB axis dysfunction as a pathophysiological mechanism of neurological disorder etiology

The MGB axis is a bidirectional communication system of nervous, immune, and endocrine pathways by which the enteric nervous system (ENS) of the gastrointestinal (GI) tract and the central nervous system (CNS) regulate the development and function of one another
Figure 1: The branches of the nervous system. The MGB axis coordinates the activities of the CNS and ENS. Source: Wikimedia Commons
(Liang et al., 2018). Previously known as the gut-brain axis, this network has been recently renamed to reflect the role of the intestinal microbiome, which comprises 90-95% of cells in the gut, as a key mediator of gut-brain communication (Liang et al., 2018). Studies on animals with altered or eliminated gut microbiomes have demonstrated the importance of this microbiome to the functioning of a breadth of physiological systems, including pain perception and response, memory, mood (anxiety and depression), temperament, stress response, appetite, immune development and regulation, metabolism, sensory-motor processing, and social interaction (Liang et al., 2018; Aresti Sanz and El Aidy, 2019; van de Wouw et al., 2017; Oleskin et al., 2016; Fields et al., 2018; Johnson, 2020). Given the overwhelming evidence for the involvement of the gut microbiome in various processes moderated by the MGB axis, it is unsurprising that an altered gut microbiota would lead to various systemic dysfunctions, including those of the brain (Liang et al., 2018). Indeed, several sources confirm the role of the microbiome in the development of various neurological disorders, including depression, autism, and schizophrenia (Liang et al., 2018; Obrenovich, 2018; Rudzki and Szulc, 2018; Mangiola et al., 2016). While the association between an altered gut microbiota (dysbiosis) and neurological and mood disorders is broadly accepted, the precise mechanisms producing this relationship are unclear, especially given the bidirectional nature of the MGB axis. The three predominant theories of dysbiosis pathogenicity are the gut-microbiota
hypothesis, the “old friends”/early immune challenge hypothesis, and the leaky gut theory; each centers on the role of the gut microbiota in neurological, immune, and/or endocrine dysfunction (Liang et al., 2018).

1. The Gut Microbiota Hypothesis

Proposed by Liang and colleagues (2018), the gut-microbiota hypothesis states that modern lifestyle factors (such as a more processed diet, reduced exercise, and increased hygienic practices) alter the gut microbiota to yield one much different from that which was so carefully tailored through evolution. Because the microbiota is so intricately linked to the development and function of the CNS, dysbiosis leads to neurological dysfunction in the form of mental illness, which has risen over time in parallel with the aforementioned lifestyle changes. According to this hypothesis, the underlying cause of many mental illnesses is lifestyle-driven dysbiosis, which can be remedied with dietary or supplementary pre- and probiotics (Liang et al., 2018).

2. The "Old Friends" Hypothesis
Rook and Lowry’s “old friends” hypothesis (2008) is similarly hinged on the concept that certain gut microbes, referred to as “old friends,” have evolved symbiotically with the human race; modern departures from previous lifestyles harm the “friends,” whose presence is essential for proper immune system development, thereby leading to immune dysfunction (Liang et al., 2018). For example, during development, the exposure of dendritic cells to certain 133
Figure 2. A schematic of dysbiosis-induced neuroinflammation and abnormal stress response. In a cycle where the initial cause of dysfunction is still unclear, it is thought that gut dysbiosis leads to intestinal permeability, allowing bacterial metabolites to circulate in the bloodstream. This then induces neuroinflammation, systemic immune response, and altered HPA axis stress response, which perpetuates dysbiosis. Source: Wikimedia Commons
species of the gut microbiota is essential for the maturation of T-lymphocytes; without proper microbiota composition, T-lymphocytes differentiate into effector rather than regulatory T-lymphocytes, which prevents proper regulation of the immune response (Liang et al., 2018). The microbiota is also known to regulate the development of the gut-associated lymphoid tissue (GALT), as well as the complement system, a proteinaceous immune complex involved in various neurodevelopmental processes as well as antigen destruction (Mangiola et al., 2016; Rudzki and Szulc, 2018). The “old friends” hypothesis states that dysbiosis-related dysregulation of the immune system explains the rise in autoimmune disorders, allergies, chronic inflammation, and mental illnesses alongside modernization (Liang et al., 2018). Notably, high levels of immunoglobulins against dietary peptides and/or bacterial antigens have been repeatedly reported in the serum of patients with neurological disorders, including ASD, schizophrenia, and depression, implying that an inappropriate immune response is common to such conditions; this further supports the “old friends” hypothesis (Rudzki and Szulc, 2018).

3. The Leaky Gut Theory

The leaky gut theory departs from the previous two hypotheses by instead focusing on the role of the MGB axis in the development and maintenance of the gastrointestinal-blood barrier (GBB) and blood-brain barrier (BBB)
(Liang et al., 2018). Microbes in the gut produce short-chain fatty acids (SCFAs) which stimulate intestinal epithelial regeneration and mucous production to strengthen the GBB; without proper microbiome composition, SCFA production is reduced, and the integrity of the intestinal barrier is compromised (Mangiola et al., 2016). Moreover, altered SCFA levels are associated with an inflammatory immune response: it has been experimentally demonstrated that excess circulating proinflammatory cytokines (which can be triggered by dysbiosis directly or by SCFA alterations) can further contribute to intestinal and BBB permeability (Rudzki and Szulc, 2018; Garcia-Gutierrez et al., 2020). Dysbiosis can also increase circulation of pro-inflammatory bacterially-derived lipopolysaccharides (LPS), which enter the bloodstream due to leaky gut and cause further systemic immune response including neuroinflammation and cytokine secretion (leading to even more GBB and BBB permeability) (Liang et al., 2018; Mangiola et al., 2016; Rudzki and Szulc, 2018; Garcia-Gutierrez et al., 2020; van de Wouw et al., 2017). Loss of GBB and BBB integrity and neuroinflammation are both associated with various neurological disorders, including depression, schizophrenia, ASD, and AN (Garcia-Gutierrez et al., 2020; Liang et al., 2018; Obrenovich, 2018; Rudzki and Szulc, 2017; Gibson and Mehler, 2018; Lobzhanidze et al., 2019; Seitz et al., 2019; Fields et al., 2018; van de
Wouw et al., 2017; Karakula-Juchnowicz et al., 2017). In addition to causing BBB permeability and neuroinflammation, elevated circulating cytokines and bacterial metabolites have been shown to directly alter brain activity, further supporting the hypothesis that dysbiosis-induced leaky gut can cause brain dysfunction (Mangiola et al., 2016; Obrenovich, 2018; Liang et al., 2018; Liu and Zhu, 2018; Aresti Sanz and El Aidy, 2019).
MGB Axis Dysfunction as a Mechanism of ASD Pathogenesis

One of the fastest growing areas of MGB axis research is focused on its implications for the etiology of autism spectrum disorder (ASD). ASD is a neurodevelopmental disorder marked by stereotyped and repetitive behaviors, impaired social communication, and sensory processing disorders (Israelyan and Margolis, 2019; Galiana-Simal et al., 2017). While the prevalence of ASD in the United States is as high as 1 in 54, the etiology of the disorder is still unclear, in part due to its heterogeneity of presentation (CDC, 2020; Israelyan and Margolis, 2019; Vissoker et al., 2015; Sanctuary et al., 2018; Xu et al., 2019; Azhari et al., 2018). A growing body of neurological, immunological, and gastrointestinal-based theories points to the potential role of the MGB axis and gut dysbiosis in ASD pathogenesis (Azhari et al., 2018; Srikantha and Mohajeri, 2019; Fattorusso et al., 2019; Tye et al., 2018). This is supported by the high frequency of gastrointestinal and immunological comorbidities in children with ASD. The literature reports a four times greater prevalence of GI comorbidities in children with ASD than without, and the common comorbidity of constipation has been found to correlate with symptoms like social impairment, aggression, and compulsivity (Sanctuary et al., 2018; Israelyan and Margolis, 2019; Fattorusso et al., 2019). Additionally, the correlation between ASD severity and immune dysregulation (food allergies, asthma, and systemic inflammation evidenced by upregulated proinflammatory cytokines, natural killer cells, interferons, and autoantibodies) further points to the potential role of MGB axis dysfunction (Tye et al., 2018; Vissoker et al., 2015; Azhari et al., 2018; Fattorusso et al., 2019).
Thus, a complex phenotypic pattern of gastrointestinal, immunological, and neurological dysfunction has been identified, suggesting a connection between ASD and MGB axis dysfunction (although a causal relationship is yet to be established by human studies) (Azhari et al.,
2018; Srikantha and Mohajeri, 2019; Fattorusso et al., 2019; Xu et al., 2019). The induction of ASD-associated traits (such as repetitive behaviors and related social and cognitive dysfunctions) in mice through microbiome manipulation demonstrates that gut dysbiosis indeed plays a role in ASD, and trials of Microbiota Transfer Therapy show promise for improving ASD symptoms and GI comorbidities; thus, it is worthwhile to continue to investigate the patterns, mechanisms, and causal relationships underlying this connection, especially within the context of human ASD patients (Kang et al., 2017; Wang et al., 2019; Liang et al., 2018; Srikantha and Mohajeri, 2019; Rudzki and Szulc, 2018; Mangiola et al., 2016; Sanctuary et al., 2018; Fattorusso et al., 2019). While GI disorders do not present in all cases of ASD, and neither do autoimmune and anxiety disorders, the correlation between these three comorbidities in a subset of cases leads some to believe that there is a particular immune-mediated subtype of ASD distinguished by degree of systemic inflammation (Tye et al., 2018; Fattorusso et al., 2019; Azhari et al., 2018). Given prior research, it seems likely that this subtype might be found in patients with co-occurring ASD, GI disorders, and immunological disorders; this is explained by Azhari and colleagues' gut-immune-brain paradigm (2018).

1. The "Gut-Immune-Brain" Paradigm of ASD
One hypothesis grounded in MGB axis dysfunction is the “gut-immune-brain” paradigm, proposed by Azhari and colleagues (2018). Featuring parts of the gut-microbiota, old friends, and leaky gut theories, the paradigm states that gut dysbiosis leads to the autistic phenotype and its comorbidities through four interacting mechanisms: an increase in intestinal permeability (“leaky gut”), toxin production and exposure, aberrant immune response, and metabolic abnormalities (Azhari et al., 2018). This theory is supported by repeated findings of dysbiosis, abnormal levels of circulating bacterial metabolites, immune dysregulation, and impaired metabolic processes in association with ASD (Fattorusso et al., 2019; Israelyan and Margolis, 2019; Sanctuary et al., 2018; Srikantha and Mohajeri, 2019; Obrenovich, 2018; Rudzki and Szulc, 2018; Mangiola et al., 2016; Dempsey et al., 2019; Shultz et al., 2014; Garcia-Gutierrez et al., 2020; Nankova et al., 2014; MacFabe, 2012; Xu et al., 2019; Fields et al., 2018; Lobzhanidze et
al., 2019). Given differential immune responses between males and females (for example, males exhibiting a more pro-allergic response), an immune-driven paradigm of ASD could also explain the overrepresentation of males compared to females in the autistic population (Fields et al., 2018; Werling and Geschwind, 2014). Furthermore, the explanatory power of the paradigm lies in its application to many of the leading proposed mechanisms of ASD and its ability to account for their interactions such as through positive feedback loops.
It is important to note that stress alone has been demonstrated to alter the gut microbiome as well as gut permeability, and so one might argue that a genetic and/or environmental predisposition to barrier permeability and/or immuno-inflammatory responses could cause enough physiological stress to induce dysbiosis (Liu and Zhu, 2018; Rudzki and Szulc, 2018; Liang et al., 2018). In this case, it is possible that dysbiosis is merely correlated with the immune response but does not cause it in the first place. However, dietary intervention, probiotic, and fecal transplantation trials demonstrating amelioration of stereotyped behaviors and comorbidities in ASD suggest that a causal relationship might indeed exist between dysbiosis and ASD pathogenesis (Rudzki and Szulc, 2018; Mangiola et al., 2016; Sanctuary et al., 2018; Srikantha and Mohajeri, 2019; Fattorusso et al., 2019).

2. Patterns of Dysbiosis in ASD

While studies on ASD patient microbiomes have yielded heterogeneous and at times conflicting results, a number of significant patterns have emerged. Analyses of the fecal microbiota of children with ASD compared with typically developing peers and/or siblings reveal that Bacteroidetes, Clostridium, Desulfovibrio, Faecalibacterium, Proteobacteria (including Sutterella), Lactobacillus, and Actinobacteria are overrepresented in ASD; Bifidobacterium, Prevotella, Coprococcus, and Veillonellaceae are lacking, and overall species diversity is diminished as well (Fattorusso et al., 2019; Tye et al., 2018; Nankova et al., 2014; Azhari et al., 2018; Garcia-Gutierrez et al., 2020; Sanctuary et al., 2018; Fields et al., 2018; Oleskin et al., 2016; Xu et al., 2019; Wang et al., 2019; Dempsey et al., 2019). Additionally, higher Firmicutes-to-Bacteroidetes ratios have been noted in ASD (Srikantha and Mohajeri, 2019; Sanctuary et al., 2018).
While no definite ASD microbiome profile has emerged, the Human Microbiome Project has demonstrated that the microbiomes of two healthy humans can vary greatly from one another, highlighting that perhaps it
is not the presence of an ideal profile, but the balance between competing and symbiotic species that demarcates a healthy gut from dysbiosis (Fattorusso et al., 2019; Israelyan and Margolis, 2019; Fields et al., 2018). As van de Wouw and colleagues note, a less diverse gut microbiome might allow one species to dominate, influencing the host in ways otherwise impossible; additionally, a variety of dysbiotic profiles could disrupt the microbial ecosystem by hampering cross-feeding (the supplying of nutrients for one species by the metabolites of another). This is particularly relevant to SCFA production, a mechanism of interest for gut-related neurological disorder pathogenesis (2017).

a. Bacteroidetes, Clostridium, and Desulfovibrio-produced propionate as a mechanism of ASD

There are several hypotheses pertaining to certain species that are found in elevated or diminished abundance which could explain the correlation between dysbiotic patterns in ASD and the etiology of the disorder. For example, the overrepresentation of Bacteroidetes, Clostridium, and Desulfovibrio in ASD patients lends support to a propionate-related theory of ASD pathogenesis (Garcia-Gutierrez et al., 2020; Nankova et al., 2014; MacFabe, 2012). It is known that the aforementioned bacteria produce propionate (PPA), one of the three primary SCFAs, and the overgrowth of these species in ASD correlates with elevated levels of PPA in stool (Dempsey et al., 2019; Shultz et al., 2014; Lobzhanidze et al., 2019; Nankova et al., 2014; Srikantha and Mohajeri, 2019; Garcia-Gutierrez et al., 2020; Fields et al., 2018). PPA is known to cross the BBB, triggering an immune response; chronic PPA administration in mice is correlated with neuroinflammatory biomarkers (Garcia-Gutierrez et al., 2020; Lobzhanidze et al., 2019).
There is also evidence that excess PPA exposure impacts amygdala development and function; this is pertinent to ASD, as social communication and emotional processing are often impaired in the disorder (particularly gaze, attachment behavior, and social behavior) (Lobzhanidze et al., 2019). PPA exposure in mice decreases social motivation, reduces neuron number in the amygdala, and activates glial cells, causing further inflammation (Lobzhanidze et al., 2019). Although the data are currently inconclusive, there is evidence that structural changes of the amygdala correlate with degree of social impairment in ASD; the impacts of PPA on the amygdala could explain this association (Lobzhanidze et al., 2019). Furthermore, PPA is a rather convincing mechanism of ASD as murine modeling has shown an association between PPA and an array of ASD-related
traits and comorbidities such as reduced food intake, hyperactivity, repetitive behaviors, obsessive/compulsive behaviors, mood disorders, impaired social behavior, altered fatty acid metabolism, altered BBB integrity, reduced gastric motility, developmental delay, GI dysfunction, immune dysfunction, and abnormal gastrointestinal serotonin secretion (Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; Nankova et al., 2014; MacFabe, 2012; Lobzhanidze et al., 2019).

b. Faecalibacterium overgrowth as a cause of ASD-related immune dysfunction

Faecalibacterium is often overrepresented in the gut microbiomes of ASD patients (Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; Xu et al., 2019; Azhari et al., 2018). This overgrowth has been associated with systemic immune dysfunction (a common comorbidity of ASD), thought to result from altered production of butyrate (BA), a known metabolite of Faecalibacterium (Xu et al., 2019; Valles-Colomer et al., 2019). Some studies have measured high BA as correlated with high Faecalibacterium in ASD (Dempsey et al., 2019; Garcia-Gutierrez et al., 2020). This could also explain other comorbidities of ASD associated with high BA such as satiety/reduced intake, indigestion, constipation, altered social behavior, and immune dysfunction (Garcia-Gutierrez et al., 2020; Lin et al., 2012). However, the data and ensuing conclusions on the role of BA in ASD are far from settled: other studies found reduced BA in ASD, which could explain impaired GBB integrity, as the intestinal mucosal barrier depends on SCFAs, including BA, for its maintenance (Lobzhanidze et al., 2019; Srikantha and Mohajeri, 2019; Valles-Colomer et al., 2019). While different studies have found either increased or decreased levels of BA in ASD patients, it seems likely that either way, a BA imbalance contributes to ASD etiology.
As a key SCFA, BA is implicated in other SCFA-related processes relevant to ASD and its comorbidities, such as serotonin production and secretion, inflammatory immune response, intestinal mucus production, vagal signaling, neurodevelopment, appetite regulation (via secretion of the anorexigenic hormones GLP-1 and PYY), metabolism, gene expression (via histone acetylation and methylation), BBB maintenance, and lipid metabolism (Mangiola et al., 2016; Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; Yang et al., 2019; Nankova et al., 2014; MacFabe, 2012; Oleskin et al., 2016; van de Wouw et al., 2017; Fields et al., 2018).
c. Proteobacteria overgrowth as a cause of ASD-related immune dysfunction through LPS and TNF-α

Proteobacteria, specifically Sutterella, have been found in elevated numbers in ASD patients and might contribute to ASD pathogenesis in line with the gut-immune-brain paradigm (Azhari et al., 2018; Fattorusso et al., 2019; Tye et al., 2018; Fields et al., 2018). An increase in Proteobacteria is associated with metabolic endotoxemia (low-grade systemic inflammation) through the overproduction and circulation of lipopolysaccharides (LPS) (van de Wouw et al., 2017; Fields et al., 2018). LPS interacts with toll-like receptor 4 (TLR4) to increase gut permeability, leading to a cascade of effects on CNS function, including altered SCFA circulation, vagal signaling, stress response, and inflammatory immune response (Fields et al., 2018; Garcia-Gutierrez et al., 2020; Xu et al., 2019; Karakula-Juchnowicz et al., 2017). In mice, increased levels of circulating LPS are associated with repetitive behaviors, underscoring the implications of LPS not only for ASD comorbidities but also for hallmark traits (Fields et al., 2018). Furthermore, LPS stimulates the secretion of TNF-α from liver cells, leading to peripheral inflammation, inhibition of IFN-β (which is essential for the intestinal mucosa lining), and impairment of tight junction formation in the GBB and BBB, thereby further perpetuating intestinal and brain permeability and systemic inflammation (Azhari et al., 2018; Karakula-Juchnowicz et al., 2017). Notably, TNF-α correlates with the severity of ASD symptoms in general and GI dysfunction specifically (Garcia-Gutierrez et al., 2020; Srikantha and Mohajeri, 2019). The molecule is thought to influence CNS function by crossing the BBB, inducing neuroinflammation (Garcia-Gutierrez et al., 2020; Srikantha and Mohajeri, 2019).
TNF-α is also thought to act directly on the CNS to cause the release of corticotropin-releasing factor (CRF), an anorexigenic neuropeptide that also affects gastric emptying, immune response, and mood (Holden and Pakula, 1996; Furman, 2007). Moreover, elevated TNF-α results in hypothalamic dysregulation manifesting in reduced insulin and increased cortisol, which further decreases IL-1, IL-2, and 5-HT secretion, and thereby impacts serotonergic signaling (Holden and Pakula, 1996).
Figure 3. The gut-blood barrier (GBB). Dysbiosis appears to compromise the integrity of the intestinal mucosal barrier through a variety of mechanisms, allowing for the circulation of bacterial products in the bloodstream. Source: Wikimedia Commons
d. Underrepresentation of Bifidobacterium as a mechanism of ASD through decreased SCFAs

Bifidobacterium has been repeatedly measured in decreased quantity in the microbiomes of ASD patients (Xu et al., 2019; Oleskin et al., 2016; Fattorusso et al., 2019; Azhari et al., 2018; Garcia-Gutierrez et al., 2020). Given that Bifidobacterium is a producer of SCFAs, low Bifidobacterium in ASD patients might explain decreased SCFAs and dysregulation of the gut mucosal barrier, anti-inflammatory pathways, vagal signaling, appetite, serotonin production, neurodevelopment, and gene expression (Xu et al., 2019; Mangiola et al., 2016; Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; Yang et al., 2019; Nankova et al., 2014; Oleskin et al., 2016; van de Wouw et al., 2017; Fields et al., 2018; Srikantha and Mohajeri, 2019). It is thought that adequate amounts of metabolites from Bifidobacterium in particular are essential for the regulation of the hypothalamus-pituitary-adrenal (HPA) axis; thus, decreased Bifidobacterium levels may contribute to impaired stress response and explain elevated cortisol levels in ASD fecal samples (Oleskin et al., 2016; Rudzki and Szulc, 2018; Obrenovich, 2018). Furthermore, an altered stress response would perpetuate immune dysregulation, contributing to neuroinflammation, GBB and BBB permeability, and autoimmune comorbidities.

e. Underrepresentation of Prevotella as a mechanism of ASD

Finally, it is worth briefly noting decreased levels of Prevotella in ASD, given the bacterium is negatively associated with gut permeability
and positively associated with GI dysfunction (Tye et al., 2018; Fattorusso et al., 2019; Fields et al., 2018; Garcia-Gutierrez et al., 2020). Low Prevotella is also associated with increased leptin and decreased ghrelin, possibly accounting for feeding-related comorbidities in ASD (Seitz et al., 2019).
MGB axis dysfunction as a mechanism of AN pathogenesis

Anorexia nervosa (AN) is another neurological disorder which has recently received attention as a condition possibly linked to MGB axis dysfunction. With MGB axis-related traits and comorbidities such as gut dysbiosis, mood disorders, leaky gut, GI dysfunction, autoimmune disorders, sensory processing disorders, alexithymia, abnormal amygdala activity, impaired social functioning, serotonergic dysregulation, OCD, and ASD, AN is plausibly perpetuated by the immunological, nervous, and endocrine effects of gut dysbiosis (Hommer and Swedo, 2021; Gibson and Mehler, 2019; Bentz et al., 2016; Kerbeshian and Burd, 2009; van de Wouw et al., 2017; Galiana-Simal et al., 2017; Joos et al., 2011; Holden and Pakula, 1996; Karakula-Juchnowicz et al., 2017). Moreover, as AN is a psychiatric disease dealing expressly with gut-related activities (such as eating), it makes sense to consider the disorder in relation to gut-brain communication. It is thought that interactions between the ENS and microbiota are involved in regulating AN-related processes such as food choice-related emotions, food intake, dopaminergic signaling, satiety, and hunger (Oleskin et al., 2016).
Once again, it is important to consider that many of the traits listed above (including dysbiosis and leaky gut) may be the result (and not the cause) of AN behavior, psychological or physiological stress, and/or other possible etiological pathways, and that a causal relationship between gut dysbiosis and AN has yet to be established (Breton et al., 2021; Gibson and Mehler, 2019; Glenny et al., 2017; Karakula-Juchnowicz et al., 2017; Liang et al., 2018). Repeated observations of dysbiosis in AN patients and existing knowledge linking MGB axis-related factors to AN symptomatology suggest that further investigation into a causal relationship between gut dysbiosis and AN is warranted (MacFabe, 2012; Breton et al., 2021; Gibson and Mehler, 2019; Seitz et al., 2019; van de Wouw et al., 2017; Glenny et al., 2017; Holden and Pakula, 1996; Karakula-Juchnowicz et al., 2017; Liang et al., 2018). Additionally, experiments modifying the microbiomes of mice have demonstrated a causal link between microbiome composition and anxious and depressive behaviors (Roubalová et al., 2020). Finally, clinically noted overlaps between AN and ASD, paired with the earlier discussed evidence for a role of the MGB axis in the etiology of some forms of ASD, further support the consideration of AN as a MGB axis-related disorder (Kerbeshian and Burd, 2009; Galiana-Simal et al., 2017; Dattaro, 2020).

1. Applying the gut-immune-brain paradigm to AN

Similar to Azhari and colleagues’ gut-immune-brain paradigm of ASD, an etiological pattern of AN can be derived from various MGB axis-related theories of neurological disorder in the literature. Several sources propose gut dysbiosis as a causal factor of AN behaviors (van de Wouw et al., 2017; Karakula-Juchnowicz et al., 2017; Breton et al., 2021; Seitz et al., 2019).
Both Karakula-Juchnowicz and colleagues and Seitz and colleagues hypothesize gut dysbiosis as a likely mechanism of AN pathogenesis through the induction of intestinal permeability and resulting immune response which ultimately interferes with mood and appetite (2017; 2019). This is supported by reports of a higher chance of autoimmune disorders in patients with AN (Seitz et al., 2019; Hommer and Swedo, 2017). While dysbiosis could occur for many reasons, it is known that malnutrition (for instance, as a result of restrictive feeding behaviors) alters the microbiome—for example, by promoting bacteria suited to a low-energy environment (Karakula-Juchnowicz et al., 2017; Glenny et
al., 2017; Breton et al., 2021). In line with the prevailing thought that stress anticipates restrictive episodes, it is also known that stress alters the microbiome (Liu and Zhu, 2018; Rudzki and Szulc, 2018; Liang et al., 2018). It has been otherwise suggested that AN-related dysbiosis and GI permeability can be the result of infection and/or antibiotic use, which have been noted to commonly precede the onset of restrictive behaviors (Liang et al., 2018; Rudzki and Szulc, 2018; Holden and Pakula, 1996; Fetissov and Hökfelt, 2019). For example, it appears that Salmonella exposure may prompt overgrowth of the Enterobacteriaceae which are known to produce the ClpB mimetic of α-melanocyte-stimulating hormone (α-MSH), inducing satiety and behavior common to AN (Fetissov and Hökfelt, 2019). Regardless of the cause of dysbiosis, it is thought to cause an altered stress response and a leaky gut (Seitz et al., 2019; Karakula-Juchnowicz et al., 2017). Increased intestinal permeability has been reported in murine models of AN, and weight loss has been reported to increase intestinal permeability, although whether this is the result of gut dysbiosis or another mechanism is still unclear (Gibson and Mehler, 2019; Karakula-Juchnowicz et al., 2017). Leaky gut allows LPS to enter the bloodstream and incite systemic and neurological inflammation, creating a feedback loop through which gut dysbiosis, permeability, and inflammation are exacerbated, manifesting in the worsening symptoms of AN through mood dysregulation and immune-related appetite depletion (Karakula-Juchnowicz et al., 2017). It has been proposed that immune dysfunction leads to abnormal appetite, emotion, and weight regulation specifically through the production of autoantibodies against appetite and emotion regulating neuropeptides such as melanocortin, α-MSH, and neuropeptide Y (NPY) (Fetissov et al., 2008).
2. Elevated TNF-α in AN

Physical wasting in AN patients is additionally associated with elevated TNF-α, and while a causal relationship has yet to be determined between these two events, it is known that TNF-α acts on tight junctions to further increase GBB and BBB permeability, perhaps thereby leading to and/or worsening AN symptomatology (Karakula-Juchnowicz et al., 2017). Holden and Pakula similarly point to TNF-α and immune system dysregulation as the underlying mechanism of AN, as an overexpression of
TNF-α leads to reduced β-endorphin and IL-4 levels, which further increases TNF-α; this positive feedback loop could explain the difficulty of terminating gradually worsening behaviors in AN (1996). Furthermore, TNF-α is thought to induce release of the anorexigenic neuropeptide CRF, reduce insulin, IL-1, and IL-2, and increase cortisol, thereby inhibiting 5-HT release and impairing serotonergic signaling, a known pathophysiological trait in AN (Holden and Pakula, 1996).

3. Altered SCFA levels in AN
Gut dysbiosis may further contribute to AN symptoms through decreased SCFA levels (Dempsey et al., 2019; MacFabe, 2012; Gibson and Mehler, 2019; van de Wouw et al., 2017; Lin et al., 2012). Similar to the pathophysiology in ASD, properly balanced SCFA levels seem to be essential for processes also related to AN such as neurodevelopment, social behavior, serotonergic signaling, HPA stress response, GBB and BBB integrity, gastric motility, immune function, appetite and weight regulation, repetitive behaviors, and amygdala activity (Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; Lobzhanidze et al., 2019; Yang et al., 2019; Nankova et al., 2014; MacFabe, 2012; Oleskin et al., 2016; Valles-Colomer et al., 2019; van de Wouw et al., 2017; Lin et al., 2012). In particular, SCFAs regulate gut-derived neuropeptide signaling relating to appetite and metabolism (Oleskin et al., 2016; van de Wouw et al., 2017; Fetissov et al., 2008). Additionally, the effects of abnormal SCFA production as a result of gut dysbiosis may be compounded by the disruption of bacterial cross-feeding in the microbiome, and SCFAs may also contribute to AN through epigenetic gene modification (van de Wouw et al., 2017; Nankova et al., 2014). Furthermore, because SCFAs regulate inflammatory immune pathways, altered SCFA levels may perpetuate a vicious cycle of inflammation, dysbiosis, and abnormal metabolite circulation (van de Wouw et al., 2017). As SCFA-related inflammation can lead to gut permeability, further altering serum SCFA levels, the cycle continues (Fields et al., 2018). The exact effects of high versus low levels of SCFAs on AN-related processes are unclear, but the data demonstrate that abnormal SCFA levels on either end of the spectrum have detrimental effects, and further investigation is warranted (van de Wouw et al., 2017).
4. Patterns of dysbiosis in AN

Given the prevalence of dysbiosis in AN cases, it may be helpful to synthesize patterns of dysbiosis in the literature and analyze their potential impacts on AN symptomatology. Faecalibacterium, Methanobrevibacter, and Actinobacteria have been reported in excess in individuals with AN, while Clostridium, Bacteroidetes, Prevotella, Roseburia, and overall bacterial diversity are reduced (Azhari et al., 2018; Dempsey et al., 2019; Glenny et al., 2017; Karakula-Juchnowicz et al., 2017; Breton et al., 2021; van de Wouw et al., 2017; Tye et al., 2018; Seitz et al., 2019; Roubalová et al., 2020). However, there are studies contradicting some of these findings (Breton et al., 2021). A very recent systematic review reports increased Enterobacteriaceae, Parabacteroidetes, Alistipes, and Akkermansia, and decreased butyrate-producing bacteria such as Roseburia in AN (Lodovico et al., 2021). The lack of consensus on a microbiome profile, in AN as well as in ASD and even healthy persons, should be noted and perhaps indicates that if there exists a causal relationship between dysbiosis and these disorders, it may not be due to the over- or undergrowth of one or another species but rather to a complex system of dependent relationships. Alternatively, the heterogeneity of methodology in measuring microbiome composition could account for the lack of consensus (Lodovico et al., 2021).
MGB axis dysfunction as an explanation for the overlap between ASD and AN

ASD and AN share a high and thus far unexplained comorbidity rate which may be accounted for by a common etiological mechanism rooted in MGB axis dysfunction. People with AN are over 15 times more likely to be autistic than people without the condition, and people with ASD are over five times more likely to have AN (Dattaro, 2020). Within the eating disorder population at large, ASD is overrepresented at about 22.9% (compared to 1 in 54, or 1.85%, in the general population), and a standardized clinical assessment confirmed these findings within a cohort of AN patients, with 23.3% qualifying for ASD (Westwood et al., 2017). Such a remarkable comorbidity rate strongly suggests shared or similar pathophysiological mechanisms. While AN is typically thought of as a purely psychiatric disease, some scientists propose it is better categorized as a neuropsychiatric
developmental disorder, given the overlap of comorbidities between AN and ASD (Kerbeshian and Burd, 2009). In addition to the AN-ASD overlap, the two conditions share many other comorbidities which could be explained by the gut-immune-brain paradigm.

1. GI dysfunction in both ASD and AN

In both AN and ASD, GI dysfunction is common, particularly presenting in the form of constipation (Karakula-Juchnowicz et al., 2017; Garcia-Gutierrez et al., 2020; Glenny et al., 2017; Israelyan and Margolis, 2019; Sanctuary et al., 2018). In AN, it is thought that constipation is the result of the body slowing down digestive processes to optimize nutrient absorption during malnutrition and may be related to gut dysbiosis; increased levels of methanogenic bacteria species have been observed in correlation with constipation and are thought to be associated with nutrient absorption (Karakula-Juchnowicz et al., 2017; Seitz et al., 2019; Glenny et al., 2017). While constipation does not present in all ASD patients and occurs in the absence of a restrictive diet, it is linked to autoimmune dysfunction, anxiety, and sensory processing, suggesting that the MGB axis plays a role (Garcia-Gutierrez et al., 2020; Tye et al., 2018). Thus, the heterogeneity of GI symptoms in ASD patients may be explained by a shared MGB axis-related pathophysiology between AN and an immune-mediated subtype of ASD. More specifically, GI dysfunction in ASD correlates with elevated PPA and TNF-α; the relevance of these molecules to AN etiology as well suggests these pathways may explain the symptomatic overlap between the two conditions (Nankova et al., 2014; Srikantha and Mohajeri, 2019).
It is also possible that the two mechanisms are entirely different and independent; however, given correlations between GI dysfunction and other ASD and AN symptoms such as social impairment and food selectivity, it seems possible that the mechanisms for GI dysfunction are shared, and further research should be conducted on this connection.

2. Explaining shared behavioral comorbidities in ASD and AN

a. Impaired social functioning

While impaired social functioning is better known as a hallmark trait of ASD, it appears to be a stable trait of AN as well (Israelyan and
Margolis, 2019; Bentz et al., 2016; Kerr-Gaffney et al., 2020). Given that social impairment and associated brain abnormalities also tend to correlate with length of AN episodes, it seems plausible that MGB axis-directed neurodevelopment is disrupted in AN, contributing to an ASD-like phenotype; it is also notable that AN commonly appears during the teenage years, which are thought to be a critical window for the development of social perception skills and of the amygdala (Bentz et al., 2016; Lobzhanidze et al., 2019). In ASD, social impairment is correlated with GI dysfunction (particularly constipation), leading researchers to think that simultaneous developmental defects in the brain, intestine, and/or MGB axis underlie ASD pathogenesis (Israelyan and Margolis, 2019). Meanwhile, children with social difficulties are more likely to develop disordered eating; it is hypothesized that such children with ASD or ASD-like traits engage in starvation as a coping mechanism to numb difficult or poorly processed emotions, which could then lead to developmental brain abnormalities exacerbating ASD symptoms (Dattaro, 2020). Regardless of causality, the shared trait of abnormal social functioning has been linked to gut dysbiosis, altered PPA levels, increased GI serotonin secretion, and amygdala hypoactivity and structural abnormalities (Liang et al., 2018; Garcia-Gutierrez et al., 2020; MacFabe, 2012; Herrington et al., 2016; Lobzhanidze et al., 2019; Nankova et al., 2014). Since dysbiosis-induced altered PPA levels are associated with social impairment in addition to abnormal serotonergic signaling, neurodevelopment, and amygdala structure and function, it seems possible that PPA is behind the link between social difficulties in both AN and an immune-mediated subtype of ASD (Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; Yang et al., 2019; van de Wouw et al., 2017; Nankova et al., 2014; Lobzhanidze et al., 2019; MacFabe, 2012).
That being said, the fact that PPA has been measured in excess in ASD and in deficit in AN must be considered (Dempsey et al., 2019; Srikantha and Mohajeri, 2019; Shultz et al., 2014; Garcia-Gutierrez et al., 2020; Nankova et al., 2014). The data on the effects of PPA and other SCFAs on behavior and physiology are conflicting, with both elevated and decreased levels showing similar detrimental effects (van de Wouw et al., 2017). It is possible that SCFA signaling depends on overall balance rather than on absolute levels.
b. Impaired emotional processing and alexithymia
Both ASD and AN patients also tend to display impaired emotional processing and alexithymia, the inability to name one’s own or others’ emotions (Lobzhanidze et al., 2019; Bentz et al., 2016; Holden and Pakula, 1996; Kerr-Gaffney et al., 2020). Once again, PPA levels seem to be connected to these symptoms, as they are abnormal in both ASD and AN individuals, and emotional processing is in part regulated by the amygdala, which is affected by PPA (Lobzhanidze et al., 2019; Dempsey et al., 2019; Shultz et al., 2014; Garcia-Gutierrez et al., 2020; Srikantha and Mohajeri, 2019; Nankova et al., 2014; MacFabe, 2012). Additionally, amygdala activity is also regulated by LPS (Mangiola et al., 2016). Thus, the LPS/TNF-α overproduction in AN and ASD seems to form another piece of the puzzle which may also be worth further investigation (Srikantha and Mohajeri, 2019; Rudzki and Szulc, 2018; Garcia-Gutierrez et al., 2020; van de Wouw et al., 2017; Fields et al., 2018; Karakula-Juchnowicz et al., 2017; Mangiola et al., 2016; Xu et al., 2019; Holden and Pakula, 1996; Azhari et al., 2018).

c. Obsessive/compulsive and repetitive behaviors
Furthermore, ASD and AN individuals are both known for obsessive/compulsive and repetitive behaviors, manifesting in tics in the former and aberrant feeding behaviors in the latter. This also seems to be connected with abnormal PPA levels: altered PPA and serotonin levels are associated with obsessive/compulsive and repetitive behaviors, and serotonin signaling is regulated in part by PPA (Dempsey et al., 2019; MacFabe, 2012; Nankova et al., 2014; Garcia-Gutierrez et al., 2020). LPS, which is elevated in ASD and AN, is also associated with repetitive behaviors and anxiety in mice (Fields et al., 2018; Xu et al., 2019; Srikantha and Mohajeri, 2019; Rudzki and Szulc, 2018; Mangiola et al., 2016; Garcia-Gutierrez et al., 2020; van de Wouw et al., 2017; Karakula-Juchnowicz et al., 2017; Azhari et al., 2018).

d. Impaired sensory processing and stress response

ASD and AN patients also may exhibit sensory hypersensitivity and anxiety disorders (Herrington et al., 2016; Galiana-Simal et al., 2017; Dattaro, 2020; Garcia-Gutierrez et al., 2020; Tye et al., 2018; Fields et al., 2018; Lobzhanidze
et al., 2019). In fact, sensory processing disorder (SPD) presents in about 90% of ASD cases, and it has been suggested that a subset of AN cases are “sensory eating disorders” involving hyperreactivity to certain food traits; the fear of weight gain common in individuals with AN might be linked to hypersensitivity to the sensations of weight gain (Galiana-Simal et al., 2017). Moreover, recovered AN patients show impaired perception of social stimulation, which might be tied to abnormal sensory processing (Bentz et al., 2016). In ASD, sensory processing, anxiety, and GI symptoms are linked, and the appearance of these comorbidities in AN as well suggests a MGB axis-related pathophysiology (Tye et al., 2018). Indeed, it is thought that the microbiota is implicated in sensory-motor processing and filtering (Fields et al., 2018). This is supported by encouraging results from studies showing the potential for probiotics to ameliorate sensory responsiveness in ASD (Garcia-Gutierrez et al., 2020). Underlying the connection between anxiety and the microbiota appear to be abnormal amygdala habituation, altered PPA/SCFA levels, abnormal HPA axis development, altered autoantibodies against mood- and appetite-regulating neuropeptides, and elevated intestinal LPS (Herrington et al., 2016; Lobzhanidze et al., 2019; MacFabe, 2012; Garcia-Gutierrez et al., 2020; Oleskin et al., 2016; Liang et al., 2018; Glenny et al., 2017; Fetissov et al., 2008; Xu et al., 2019). However, one study showed no improvement in anxiety in mice with PPA regulation (Lobzhanidze et al., 2019). It is thought that certain anxiety disorders such as panic disorder occur at higher rates in the ASD population, and panic disorder might be related to sensory hypersensitivity (Herrington et al., 2016). Thus, there seems to be a link between sensory hyper-responsiveness, anxiety disorders, and the MGB axis in ASD and AN driven by PPA/SCFA and/or LPS levels.

e. Abnormal feeding behaviors

Finally, while AN is best known for inappetence and reduced food intake, these symptoms (alongside other feeding abnormalities) are also prevalent in the ASD population; as mentioned earlier, children with social difficulties are more likely to develop disordered eating (Vissoker et al., 2015; Dattaro, 2020). Within the ASD population, food selectivity is correlated with GI dysfunction and is thought to be caused by discomfort, possibly relating to sensory hypersensitivity to foods and/or GI sensations
Figure 4. Mice can be used as models of AN and ASD when testing the effects of MGB axis-related molecules such as SCFAs, LPS, and TNF-α; murine modeling has also demonstrated the potential for probiotic and FMT treatment of neurological disorders. The next step is to conduct high-power human studies for such novel treatments. Source: Wikimedia Commons
(Vissoker et al., 2015; Galiana-Simal et al., 2017). Yet, it is also possible that there is a deeper mechanism at work. Multiple sources report that the MGB axis plays an important role in food choice, appetite, hunger, and satiety, as well as affecting the emotions regarding these factors (Oleskin et al., 2016; van de Wouw et al., 2017; Liang et al., 2018). Broadly, there is a correlation between dysbiosis and inappetence (Seitz et al., 2019). Specifically, SCFAs affect neuropeptides and gut hormones regulating appetite, intake, metabolism, and emotions regarding food choice (Oleskin et al., 2016; Fetissov et al., 2008; Lin et al., 2012; Lobzhanidze et al., 2019; MacFabe, 2012; Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; van de Wouw et al., 2017; Holden and Pakula, 1996). PPA and BA have been shown to reduce intake by stimulating secretion of the anorexigenic hormones PYY and GLP-1 via the vagus nerve; NPY and PYY are involved in appetite and are associated with AN and indigestion (Lin et al., 2012; Lobzhanidze et al., 2019; MacFabe, 2012; Dempsey et al., 2019; Garcia-Gutierrez et al., 2020; van de Wouw et al., 2017; Holden and Pakula, 1996). Increased PPA could explain disordered eating in ASD, but the lack of PPA in AN presents a challenge to the SCFA appetite-regulation hypothesis. However, as noted before, there are contradictory data on the effects of high versus low SCFAs on physiological processes; for example, PPA has been shown to either increase or decrease metabolism and appetite (van de Wouw et al., 2017). Future research should focus on a better understanding of SCFAs' involvement in feeding behavior.
Another hypothesis for the regulation of intake focuses on dysbiosis-related immune-mediated processes. Immune dysfunction is correlated with decreased appetite, and more precisely, infection via the vagus nerve induces inappetence (Karakula-Juchnowicz et al., 2018; Liang et al., 2018). In AN, bacterially induced cross-reactive autoantibodies against α-MSH are linked to anxiety and are thought to produce inappetence through molecular mimicry, activating the appetite-regulating melanocortin complex (Aresti Sanz and El Aidy, 2019; Karakula-Juchnowicz et al., 2017; van de Wouw et al., 2017; Glenny et al., 2017; Fetissov and Hökfelt, 2019). It is thought that production of these cross-reactive antibodies is stimulated by ClpB production in the presence of E. coli overgrowth, and chronic stimulation of the melanocortin type 4 receptor has been pharmacologically shown to induce anorexia and weight loss (Fetissov and Hökfelt, 2019; Karakula-Juchnowicz et al., 2017). While neither of these has been noted in ASD, it may be worth acknowledging the high observed levels of Proteobacteria, the phylum under which E. coli is categorized, in ASD (Azhari et al., 2018; Fattorusso et al., 2019; Tye et al., 2018; Fields et al., 2018). This perhaps signals a mechanism similar to the autoantibody molecular mimicry in AN. Between the SCFA-neuropeptide hypothesis and the molecular mimicry hypothesis, it is plausible that gut dysbiosis contributes to inappetence and reduced intake in both ASD and AN, and further research should be concentrated on this question to better understand the shared pathophysiology.
"Increased PPA could explain disordered eating in ASD, but the lack of PPA in AN presents a challenge to the SCFA appetite-regulation hypothesis."
Conclusion: Altered SCFA levels, LPS, and TNF-α as potential explanations for ASD and AN comorbidities

Both ASD and AN are neurological disorders commonly featuring GI and immune dysfunction. As the nexus between the largest immune organ (the GI tract) and the CNS, the MGB axis has the power to explain the connection between GI, autoimmune, and behavioral comorbidities in ASD and AN, as well as co-occurrences of the two disorders themselves (Rudzki and Szulc, 2018). With both ASD and AN individuals known to present with dysbiosis, scientists have recently investigated the role of the microbiota – one end of the MGB axis – as a diagnostic and therapeutic tool in disorders commonly diagnosed and treated exclusively at the brain – the other end of the axis. In order to develop practical microbiological applications for preventing and treating neurological disorders, it is essential to first understand how changes to the microbiome and the MGB axis contribute to such conditions. For example, by studying ASD and AN from the perspective of the MGB axis, key overlaps in physiological traits could explain symptoms in both disorders independently and in conjunction. Most notably, SCFA, LPS, and TNF-α are all linked to the MGB axis and have been repeatedly noted in abnormal quantities in both ASD and AN. Altered SCFAs are associated with impaired GI functioning, social functioning, emotional processing, sensory processing, and stress response, all of which are found in ASD and AN. Elevated levels of TNF-α in serum correlate with impaired GI function and emotional processing, while elevated LPS correlates with impaired emotional processing, sensory processing, and stress response.
While these correlations do not indicate causality, murine modeling has demonstrated a causal relationship between many of these factors, including between LPS and anxiety, repetitive behaviors, and neuroinflammation; SCFAs and BBB integrity; BA and anorexigenic hormones; PPA and neuroinflammation, anorexigenic hormones, and decreased social motivation; anorexigenic hormones (PYY) and constipation; and TNF-α and intestinal permeability (Rudzki and Szulc, 2018; van de Wouw et al., 2017; Glenny et al., 2017; Lin et al., 2012; Lobzhanidze et al., 2019; Karakula-Juchnowicz et al., 2017). Certainly, it will be worthwhile to further investigate these connections by conducting high-powered human studies. So far, probiotic and fecal
transplantation studies have demonstrated the ability of microbiome remediation to alleviate GI and behavioral symptoms in ASD (Mangiola et al., 2016; Srikantha and Mohajeri, 2019; Israelyan and Margolis, 2019). The current data on causality between dysbiosis and human psychopathology are nascent, and expanding such research should be prioritized in the coming years.

References

Aresti Sanz, J., & El Aidy, S. (2019). Microbiota and gut neuropeptides: A dual action of antimicrobial activity and neuroimmune response. Psychopharmacology, 236(5), 1597–1609. https://doi.org/10.1007/s00213-019-05224-0
Azhari, A., Azizan, F., & Esposito, G. (2019). A systematic review of gut-immune-brain mechanisms in Autism Spectrum Disorder. Developmental Psychobiology, 61(5), 752–771. https://doi.org/10.1002/dev.21803
Bentz, M., Jepsen, J. R. M., Pedersen, T., Bulik, C. M., Pedersen, L., Pagsberg, A. K., & Plessen, K. J. (2017). Impairment of Social Function in Young Females With Recent-Onset Anorexia Nervosa and Recovered Individuals. Journal of Adolescent Health, 60(1), 23–32. https://doi.org/10.1016/j.jadohealth.2016.08.011
Breton, J., Tirelle, P., Hasanat, S., Pernot, A., L'Huillier, C., Rego, J.-C. do, Déchelotte, P., Coëffier, M., Bindels, L. B., & Ribet, D. (2021). Gut microbiota alteration in a mouse model of Anorexia Nervosa. Clinical Nutrition, 40(1), 181–189. https://doi.org/10.1016/j.clnu.2020.05.002
CDC. (2020, September 25). Data and Statistics on Autism Spectrum Disorder. Centers for Disease Control and Prevention. https://www.cdc.gov/ncbddd/autism/data.html
Cerdó, T., Diéguez, E., & Campoy, C. (2019). Early nutrition and gut microbiome: Interrelationship between bacterial metabolism, immune system, brain structure, and neurodevelopment. American Journal of Physiology-Endocrinology and Metabolism, 317(4), E617–E630. https://doi.org/10.1152/ajpendo.00188.2019
Dattaro, L. (2020, December 7). Anorexia's link to autism, explained. Spectrum | Autism Research News. https://www.spectrumnews.org/news/anorexias-link-to-autism-explained/
Dempsey, J. L., Little, M., & Cui, J. Y. (2019). Gut microbiome: An intermediary to neurotoxicity. NeuroToxicology, 75, 41–69. https://doi.org/10.1016/j.neuro.2019.08.005
Di Lodovico, L., Mondot, S., Doré, J., Mack, I., Hanachi, M., & Gorwood, P. (2021). Anorexia nervosa and gut microbiota: A systematic review and quantitative synthesis of pooled microbiological data. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 106, 110114. https://doi.org/10.1016/j.pnpbp.2020.110114
Fattorusso, A., Di Genova, L., Dell'Isola, G. B., Mencaroni, E., & Esposito, S. (2019). Autism Spectrum Disorders and the Gut Microbiota. Nutrients, 11(3), 521. https://doi.org/10.3390/nu11030521
Fetissov, S. O., Hamze Sinno, M., Coquerel, Q., Do Rego, J. C., Coëffier, M., Gilbert, D., Hökfelt, T., & Déchelotte, P. (2008). Emerging role of autoantibodies against appetite-regulating neuropeptides in eating disorders. Nutrition, 24(9), 854–859. https://doi.org/10.1016/j.nut.2008.06.021
Fetissov, S. O., & Hökfelt, T. (2019). On the origin of eating disorders: Altered signaling between gut microbiota, adaptive immunity and the brain melanocortin system regulating feeding behavior. Current Opinion in Pharmacology, 48, 82–91. https://doi.org/10.1016/j.coph.2019.07.004
Fields, C. T., Sampson, T. R., Bruce-Keller, A. J., Kiraly, D. D., Hsiao, E. Y., & de Vries, G. J. (2018). Defining Dysbiosis in Disorders of Movement and Motivation. The Journal of Neuroscience, 38(44), 9414–9422. https://doi.org/10.1523/JNEUROSCI.1672-18.2018
Furman, B. L. (2007). Corticotropin Releasing Factor. In S. J. Enna & D. B. Bylund (Eds.), xPharm: The Comprehensive Pharmacology Reference (pp. 1–4). Elsevier. https://doi.org/10.1016/B978-008055232-3.61510-7
Galiana-Simal, A., Muñoz-Martinez, V., & Beato-Fernandez, L. (2017). Connecting Eating Disorders and Sensory Processing Disorder: A Sensory Eating Disorder Hypothesis. Global Journal of Intellectual and Developmental Disabilities.
Garcia-Gutierrez, E., Narbad, A., & Rodríguez, J. M. (2020). Autism Spectrum Disorder Associated With Gut Microbiota at Immune, Metabolomic, and Neuroactive Level. Frontiers in Neuroscience, 14. https://doi.org/10.3389/fnins.2020.578666
Gibson, D., & Mehler, P. S. (2019). Anorexia Nervosa and the Immune System—A Narrative Review. Journal of Clinical Medicine, 8(11). https://doi.org/10.3390/jcm8111915
Glenny, E. M., Bulik-Sullivan, E. C., Tang, Q., Bulik, C. M., & Carroll, I. M. (2017). Eating disorders and the intestinal microbiota: Mechanisms of energy homeostasis and behavioral influence. Current Psychiatry Reports, 19(8), 51. https://doi.org/10.1007/s11920-017-0797-3
Groves, H. T., Higham, S. L., Moffatt, M. F., Cox, M. J., & Tregoning, J. S. (2020). Respiratory Viral Infection Alters the Gut Microbiota by Inducing Inappetence. MBio, 11(1), e03236-19. https://doi.org/10.1128/mBio.03236-19
Herrington, J. D., Miller, J. S., Pandey, J., & Schultz, R. T. (2016). Anxiety and social deficits have distinct relationships with amygdala function in autism spectrum disorder. Social Cognitive and Affective Neuroscience, 11(6), 907–914. https://doi.org/10.1093/scan/nsw015
Holden, R. J., & Pakula, I. S. (1996). The role of tumor necrosis factor-alpha in the pathogenesis of anorexia and bulimia nervosa, cancer cachexia and obesity. Medical Hypotheses, 47(6), 423–438. https://doi.org/10.1016/s0306-9877(96)90153-x
Hommer, R. E., & Swedo, S. E. (2017). Anorexia and Autoimmunity: Challenging the Etiologic Constructs of Disordered Eating. Pediatrics, 140(6), e20173060. https://doi.org/10.1542/peds.2017-3060
Israelyan, N., & Margolis, K. G. (2019). Reprint of: Serotonin as a link between the gut-brain-microbiome axis in autism spectrum disorders. Pharmacological Research, 140, 115–120. https://doi.org/10.1016/j.phrs.2018.12.023
Kang, D.-W., Adams, J. B., Gregory, A. C., Borody, T., Chittick, L., Fasano, A., Khoruts, A., Geis, E., Maldonado, J., McDonough-Means, S., Pollard, E. L., Roux, S., Sadowsky, M. J., Lipson, K. S., Sullivan, M. B., Caporaso, J. G., & Krajmalnik-Brown, R. (2017). Microbiota Transfer Therapy alters gut ecosystem and improves gastrointestinal and autism symptoms: An open-label study. Microbiome, 5(1). https://doi.org/10.1186/s40168-016-0225-7
Karakuła-Juchnowicz, H., Pankowicz, H., Juchnowicz, D., Valverde Piedra, J., & Małecka-Massalska, T. (2017). Intestinal microbiota – a key to understanding the pathophysiology of anorexia nervosa? Psychiatria Polska, 51(5), 859–870. https://doi.org/10.12740/PP/65308
Kerbeshian, J., & Burd, L. (2009). Is anorexia nervosa a neuropsychiatric developmental disorder? An illustrative case report. The World Journal of Biological Psychiatry, 10(4–2), 648–657. https://doi.org/10.1080/15622970802043117
Kerr-Gaffney, J., Harrison, A., & Tchanturia, K. (2020). Autism spectrum disorder traits are associated with empathic abilities in adults with anorexia nervosa. Journal of Affective Disorders, 266, 273–281. https://doi.org/10.1016/j.jad.2020.01.169
Liang, S., Wu, X., & Jin, F. (2018). Gut-Brain Psychology: Rethinking Psychology From the Microbiota–Gut–Brain Axis. Frontiers in Integrative Neuroscience, 12. https://doi.org/10.3389/fnint.2018.00033
Lin, H. V., Frassetto, A., Kowalik Jr, E. J., Nawrocki, A. R., Lu, M. M., Kosinski, J. R., Hubert, J. A., Szeto, D., Yao, X., Forrest, G., & Marsh, D. J. (2012). Butyrate and Propionate Protect against Diet-Induced Obesity and Regulate Gut Hormones via Free Fatty Acid Receptor 3-Independent Mechanisms. PLoS ONE, 7(4). https://doi.org/10.1371/journal.pone.0035240
Liu, L., & Zhu, G. (2018). Gut–Brain Axis and Mood Disorder. Frontiers in Psychiatry, 9. https://doi.org/10.3389/fpsyt.2018.00223
Lobzhanidze, G., Lordkipanidze, T., Zhvania, M., Japaridze, N., MacFabe, D. F., Pochkidze, N., Gasimov, E., & Rzaev, F. (2019). Effect of propionic acid on the morphology of the amygdala in adolescent male rats and their behavior. Micron, 125, 102732. https://doi.org/10.1016/j.micron.2019.102732
MacFabe, D. F. (2012). Short-chain fatty acid fermentation products of the gut microbiome: Implications in autism spectrum disorders. Microbial Ecology in Health and Disease, 23. https://doi.org/10.3402/mehd.v23i0.19260
Mangiola, F., Ianiro, G., Franceschi, F., Fagiuoli, S., Gasbarrini, G., & Gasbarrini, A. (2016). Gut microbiota in autism and mood disorders. World Journal of Gastroenterology, 22(1), 361–368. https://doi.org/10.3748/wjg.v22.i1.361
Mayer, E. A. (2011). Gut feelings: The emerging biology of gut–brain communication. Nature Reviews Neuroscience, 12(8), 453–466. https://doi.org/10.1038/nrn3071
Obrenovich, M. E. M. (2018). Leaky Gut, Leaky Brain? Microorganisms, 6(4). https://doi.org/10.3390/microorganisms6040107
Oleskin, A. V., El'-Registan, G. I., & Shenderov, B. A. (2016). Role of neuromediators in the functioning of the human microbiota: "Business talks" among microorganisms and the microbiota-host dialogue. Microbiology, 85(1), 1–22. https://doi.org/10.1134/S0026261716010082
Pearson-Leary, J., Zhao, C., Bittinger, K., Eacret, D., Luz, S., Vigderman, A. S., Dayanim, G., & Bhatnagar, S. (2020). The gut microbiome regulates the increases in depressive-type behaviors and in inflammatory processes in the ventral hippocampus of stress vulnerable rats. Molecular Psychiatry, 25(5), 1068–1079. https://doi.org/10.1038/s41380-019-0380-x
Roubalová, R., Procházková, P., Papežová, H., Smitka, K., Bilej, M., & Tlaskalová-Hogenová, H. (2020). Anorexia nervosa: Gut microbiota-immune-brain interactions. Clinical Nutrition, 39(3), 676–684. https://doi.org/10.1016/j.clnu.2019.03.023
Rudzki, L., & Szulc, A. (2018). "Immune Gate" of Psychopathology—The Role of Gut Derived Immune Activation in Major Psychiatric Disorders. Frontiers in Psychiatry, 9. https://doi.org/10.3389/fpsyt.2018.00205
Sanctuary, M. R., Kain, J. N., Angkustsiri, K., & German, J. B. (2018). Dietary Considerations in Autism Spectrum Disorders: The Potential Role of Protein Digestion and Microbial Putrefaction in the Gut-Brain Axis. Frontiers in Nutrition, 5. https://doi.org/10.3389/fnut.2018.00040
Seitz, J., Belheouane, M., Schulz, N., Dempfle, A., Baines, J. F., & Herpertz-Dahlmann, B. (2019). The Impact of Starvation on the Microbiome and Gut-Brain Interaction in Anorexia Nervosa. Frontiers in Endocrinology, 10. https://doi.org/10.3389/fendo.2019.00041
Seitz, J., Trinh, S., & Herpertz-Dahlmann, B. (2019). The Microbiome and Eating Disorders. ClinicalKey. https://www-clinicalkey-com.dartmouth.idm.oclc.org/#!/content/playContent/1-s2.0-S0193953X18311511?returnurl=null&referrer=null
Shultz, S. R., & MacFabe, D. F. (2014). Propionic Acid Animal Model of Autism. In V. B. Patel, V. R. Preedy, & C. R. Martin (Eds.), Comprehensive Guide to Autism (pp. 1755–1778). Springer. https://doi.org/10.1007/978-1-4614-4788-7_106
Srikantha, P., & Mohajeri, M. H. (2019). The Possible Role of the Microbiota-Gut-Brain-Axis in Autism Spectrum Disorder. International Journal of Molecular Sciences, 20(9), 2115. https://doi.org/10.3390/ijms20092115
Tye, C., Runicles, A. K., Whitehouse, A. J. O., & Alvares, G. A. (2019). Characterizing the Interplay Between Autism Spectrum Disorder and Comorbid Medical Conditions: An Integrative Review. Frontiers in Psychiatry, 9. https://doi.org/10.3389/fpsyt.2018.00751
Valles-Colomer, M., Falony, G., Darzi, Y., Tigchelaar, E. F., Wang, J., Tito, R. Y., Schiweck, C., Kurilshikov, A., Joossens, M., Wijmenga, C., Claes, S., Van Oudenhove, L., Zhernakova, A., Vieira-Silva, S., & Raes, J. (2019). The neuroactive potential of the human gut microbiota in quality of life and depression. Nature Microbiology, 4(4), 623–632. https://doi.org/10.1038/s41564-018-0337-x
van de Wouw, M., Schellekens, H., Dinan, T. G., & Cryan, J. F. (2017). Microbiota-Gut-Brain Axis: Modulator of Host Metabolism and Appetite. The Journal of Nutrition, 147(5), 727–745. https://doi.org/10.3945/jn.116.240481
Vissoker, R. E. (2015). Eating and feeding problems and gastrointestinal dysfunction in Autism Spectrum Disorders. Research in Autism Spectrum Disorders, 12.
Wang, M., Wan, J., Rong, H., He, F., Wang, H., Zhou, J., Cai, C., Wang, Y., Xu, R., Yin, Z., & Zhou, W. (2019). Alterations in Gut Glutamate Metabolism Associated with Changes in Gut Microbiota Composition in Children with Autism Spectrum Disorder. MSystems, 4(1). https://doi.org/10.1128/mSystems.00321-18
Werling, D. M., & Geschwind, D. H. (2013). Sex differences in autism spectrum disorders. Current Opinion in Neurology, 26(2), 146–153. https://doi.org/10.1097/WCO.0b013e32835ee548
Westwood, H., Mandy, W., & Tchanturia, K. (2017). Clinical evaluation of autistic symptoms in women with anorexia nervosa. Molecular Autism, 8(1), 12. https://doi.org/10.1186/s13229-017-0128-x
Xu, M., Xu, X., Li, J., & Li, F. (2019). Association Between Gut Microbiota and Autism Spectrum Disorder: A Systematic Review and Meta-Analysis. Frontiers in Psychiatry, 10. https://doi.org/10.3389/fpsyt.2019.00473
Yang, L. L., Millischer, V., Rodin, S., MacFabe, D. F., Villaescusa, J. C., & Lavebratt, C. (2020). Enteric short-chain fatty acids promote proliferation of human neural progenitor cells. Journal of Neurochemistry, 154(6), e14928. https://doi.org/10.1111/jnc.14928
A Call for Efforts to Address Asian American Health Disparities: Fighting Heart Disease, Liver Disease, and Obesity

BY KRISTAL WONG '22

Cover: Asian American health is often subject to oversight within the overall clinical research context; a further look into the specifics of cardiovascular and heart disease, obesity and overweight status, and hepatitis B and liver cancer suggests that a greater effort should be put forth in addressing health disparities and health issues in Asian Americans. Source: dietaryguidelines.gov
Introduction

Although Asian Americans are the fastest growing population in the US, Asian American health disparities and health issues have often been neglected in widespread research (Chen and Dang, 2015). According to the US Department of Health and Human Services' Office of Minority Health, 'Asian American' is defined as a racial group comprising people with origins in the original peoples of the Far East, Southeast Asia, and/or the Indian subcontinent (2019). While Asian Americans statistically suffer lower rates of comorbidities such as heart disease, obesity, and hypertension, the prevalence of these conditions is still increasing among Asian Americans, as it is in the rest of the US population (Office of Minority Health, 2021). Additionally, Asian Americans suffer from disproportionately high rates of chronic diseases such as hepatitis B and the resulting liver cancer (Office of Minority Health, 2020c). Furthermore, conditions such as obesity and overweight status are often hidden by Western (and American) standards and cut-offs.

Moreover, the broad usage of the term "Asian American" categorizes a large and diverse group of people and, as a result, masks health disparities in Asian American subgroups. Historically, aggregations of racialized data grouped Asian Americans within a broader category of "Asian American and Pacific Islander," which further ignored and hid subgroup disparities and variability. US Census data counted only Chinese Americans until Japanese Americans were added in 1870, and an "other Asian races" category (for Filipino, Hindu, and Korean Americans) did not exist until 1910. Only fairly recently, in 2000, were Asian Americans and Pacific Islander Americans separated in US Census data reports (Holland & Palaniappan, 2012). In conjunction with US Census data, not all states are required to report disease and mortality data on Asian American
Figure 1: US Census data showing the growth of Asian American population from 2000-2010; Asian Americans are the fastest growing population in the US. Source: US Census Bureau
subgroups, and it was only in 2003 that the US Department of Health and Human Services added the category of Asian (as opposed to Asian Pacific Islander) and the subcategories Asian Indian, Chinese, Filipino, Japanese, Korean, Vietnamese, and other Asian (specify). Before 2003, only seven states required this further classification (Holland & Palaniappan, 2012). This heterogeneity in reporting impedes accurate studies and statistics on Asian American health, oftentimes masking health issues within subgroups. It is essential to recognize these health issues within the context of the unique cultures, histories, and livelihoods of Asian Americans in order to implement preventative measures and provide patient-centered care for all Americans (Chen and Dang, 2015). Historically, barriers to healthcare access and health outcomes for Asian Americans have stemmed from a range
of socioeconomic and geohistorical factors, including language barriers and isolation, education, differences in cultural and societal norms, histories of struggle in Asia, and geohistorical disease trends. This article will touch upon these social determinants of health, as well as genetic, geographical, and historical trends that affect the Asian American community. These issues and associated disparities will be discussed primarily in the context of physiological health conditions: cardiovascular and general heart disease, obesity and overweight status, and hepatitis B and liver cancer. Second only to cancer, cardiovascular disease leads the number of mortalities in Asian Americans; obesity and overweight status are an increasing and continued problem among Asian American adults and children; and Asian Americans comprise over half of US hepatitis B cases and suffer higher rates of liver cancer and liver cancer mortality than any other racial/ethnic group in the US (Argueza, Sokal-Gutierrez, & Madsen, 2020; CDC, 2020c; Office of Minority Health, 2021c).

"Historically, barriers to healthcare access and health outcomes for Asian Americans have stemmed from a range of socioeconomic and geohistorical factors, including language barriers and isolation, education, differences in cultural and societal norms, histories of struggle in Asia, and geohistorical disease trends."

Figure 2: Asian American population by state. Hawaii and California are the states with the largest percentage of Asian Americans. Source: US Census Bureau
Cardiovascular Disease (CVD) and Heart Disease

"Heart disease is the second leading cause of death for Non-Hispanic Asian Americans and Pacific Islanders, and 25% of overall American deaths are attributed to heart disease."
Heart disease is the second leading cause of death for Non-Hispanic Asian Americans and Pacific Islanders, and 25% of overall American deaths are attributed to heart disease (CDC, 2019a; CDC, 2021a). Heart disease encompasses many heart conditions; in the US, coronary heart disease (CHD), also known as coronary artery disease, is the most common. Often, patients do not know they have CHD until symptoms occur, such as those associated with heart attacks, including angina, weakness, pain and discomfort in the chest, arm, or shoulder, and shortness of breath (CDC, 2019b). Other forms of heart disease include cerebrovascular disease and peripheral artery disease. Manifestations of heart disease include heart attack, arrhythmia – fluttering in the chest or palpitations – and heart failure, which is marked by shortness of breath, fatigue, or swelling of the feet, ankles, abdomen, legs, or neck veins (CDC, 2021a). Luckily, many cardiovascular diseases can be prevented through changes in behavioral risk factors such as diet, physical activity, and tobacco
and alcohol use. People most at risk for CVD have at least one of the following: hypertension, diabetes, or hyperlipidemia. It is essential that these individuals be identified early on, seek physician-recommended treatment options such as medicines and counselling, and work towards eliminating tobacco use, reducing salt intake, increasing fruit and vegetable intake, incorporating regular exercise, and avoiding alcohol in order to reduce their cardiovascular risk. There have been improvements in modern medicine to minimize cardiovascular risk and complications, including the prescription of drugs for diabetes, hypertension, high cholesterol, and blood lipids (WHO, 2017). For these reasons, it is also essential that Asian American populations, especially those at risk, have adequate access to healthcare, including clinicians who are able to communicate effectively with patients, so that those at risk for CVD and its associated complications can receive preemptive treatment. Although the burden of cardiovascular disease (CVD) is less for Asian Americans compared to other racial and ethnic groups, issues surrounding CVD including overweight status, obesity, high cholesterol, hypertension, and tobacco use still pose large threats to the community. In 2018, the CDC reported a 4.4% prevalence of diagnosed cases of coronary heart
disease (CHD), a form of CVD, among Asian Americans aged 18 and older, and a mortality rate of 82 in every 100,000 people from CHD (Office of Minority Health, 2021). Aggregated data from 2017 show that 21.4% of deaths of Asian Americans are caused by heart disease. Additionally, despite overall improvements in CVD (heart and cerebrovascular) outcomes and rates of mortality for non-Hispanic white populations (NHWs), not all Asian American subgroups have benefited as well as their white counterparts (Jose et al., 2015). This disappointing fact suggests that measures of prevention, treatment, and screening may not have been as successful in reaching the Asian American population of the US.
Additionally, it is important to emphasize the variability within the Asian American population and in Asian American and Pacific Islander (AAPI) data. Although the Asian American group overall displays lower rates of CVD and related illnesses, Holland et al. report that oftentimes data for one subgroup of Asian Americans is collected and extrapolated to the whole racial group (2011). They found that the umbrella term "Asian American" masks unique racial variability and levels of comorbidities such as CVD within certain AAPI subgroups. For example, Filipinos and Asian Indians generally diverge from the other Asian subgroups, with both groups showing elevated risk for CHD and diabetes (Holland & Palaniappan, 2012). Jiali et al. found that Filipino Americans suffered from higher rates of hypertension than NHWs, and thus worse outcomes than the rest of the Asian American population (2009). In addition, findings such as these may be clouded by atypical data in the Asian-Indian American population, where lower rates of obesity do not necessarily correlate to lower rates of heart disease. Along the same lines, individuals of Asian-Indian origin display high rates of diabetes mellitus, which is also masked by early onset of diabetes, lower BMI, and lower rates of obesity than whites and people of European origin (Jiali et al., 2009). Similarly, Holland et al. found a significantly greater risk of ischemic stroke (complete obstruction of blood flow in the arteries) for Filipino women, and a greater risk of hemorrhagic stroke (bleeding in the brain) for Vietnamese men and Korean women, in comparison to NHWs (Holland et al., 2011). Additionally, there also exists variability in heart disease treatment methods, diagnoses, and medical and procedural therapies. Manjunath et al. found that Chinese Americans were more likely to undergo stent procedures than NHWs and that Filipino Americans were more likely to undergo bypass procedures compared to NHWs (2020).

It is essential that health policies create environments supportive of easy and affordable access to healthy choices for Asian Americans looking to avoid CVD. One way researchers have attempted to ameliorate CVD risk in Asian Americans is through community outreach programs. For instance, one study notes the linkage between high salt intake, high blood pressure, and CVD incidence, and implemented such a program in Philadelphia, PA with success (2019).

With regard to physical activity, the CDC reports that Asian Americans are at risk for higher levels of physical inactivity compared to the average American (Jiali et al., 2009). In addition, Asian Americans tend to live in densely populated urban areas (Figure A – US Census Bureau, 2010), and adequate physical activity has been linked to access to outdoor spaces and active transportation routes (Sallis et al., 2012; Koohsari et al., 2015). Thus, it is essential that local urban planning create environments conducive to physical activity for all residents. Other risk factors that contribute to CVD development include sociodemographic factors, education, dietary habits, economic status, and linguistic isolation, which may further contribute to variation within Asian American populations (Jiali et al., 2009).
In examining geographic risk among Asian Americans, Pu et al. found that CVD mortality in Asian American subgroups showed geographic variation similar to that of NHWs, suggesting that migration patterns of Asian American subgroups are not a large driver of geographic variability in CVD among Asian Americans (2017). Additionally, the largest populations of Asian Americans are found along the coastlines; all five states bordering the Pacific Ocean are among the top 12 most heavily Asian American populated states (Fig 2). Interestingly, Pu et al. found that 80–90% of cerebrovascular disease mortality among Japanese and Filipino Americans occurred on the West Coast (Pu et al., 2017). The literature suggests three pillars for combating heart disease and CVD in Asian American populations: prevention, health education,
and community outreach, with special efforts to target those at highest risk. Already, community health programs such as Prevention and Awareness for South Asians (PRANA) and Asian American Partnership in Research and Endowment (AsPire) have been implemented to increase awareness of health disparities in CVD. That said, it is also essential that such public health projects include culturally competent practices in order to reach their Asian American communities most effectively (Jose et al., 2015).
Nutrition: Obesity and Diabetes
Asian Americans have largely been understudied in obesity research. Obesity affects 42.4% of Americans and has seen remarkable increases in adults and children over the last half century (CDC, 2021b). Historically, Asian Americans have displayed lower rates of obesity than other racial groups; however, the prevalence of obesity is similarly increasing, which is of concern (Office of Minority Health, 2021). The CDC reports that non-Hispanic Asian Americans have a 17.4% prevalence of obesity (CDC, 2021b). Obesity is defined based on BMI (>30) and waist-hip circumference. However, it is important to note that Asian Americans have been known to develop health conditions at lower BMIs than other racial and ethnic groups (Mui, Hill, and Thorpe, 2018). These variations in Asian-specific measurements and cut points have led some researchers to reevaluate their characterization of health conditions such as obesity. Gordon et al. categorized obesity as a BMI ≥ 27.5 kg/m² and found that obesity prevalence ranged from 14–39% in Asian American women and from 21–45% in Asian American men (2019). Data such as these suggest that influential health and medical resource groups such as the World Health Organization (WHO) and the CDC should implement Asian-specific BMI cut points to properly account for the health effects that can be experienced at these lower BMIs (Virani et al., 2021). Similar to the differences between Asian American subgroups mentioned in the previous section on CVD and heart disease, Mui, Hill, and Thorpe and Gordon et al. also found subgroup variation in disease prevalence within Asian American subgroups. Using adjusted defining criteria, Filipino American men were found to have a higher prevalence of obesity than NHWs. And, as with prevalence of CHD and diabetes,
Asian Indian Americans and Filipino Americans exhibited the highest obesity prevalence among the Asian American subgroups (Mui, Hill, and Thorpe, 2018; Gordon et al., 2019). Again, data such as these suggest that Asian American health data should be disaggregated by subgroup to better assess the needs of individual groups. Obese individuals, compared to those at a normal or healthy weight, are at increased risk for many serious diseases and health conditions, including hypertension, high cholesterol and triglyceride levels, type 2 diabetes mellitus, CHD, stroke, gallbladder disease, osteoarthritis, sleep apnea, mental illness, and body pains and difficulty with physical function, in addition to higher risk of overall mortality (CDC, 2020b). Thus, it is essential that researchers study the factors driving obesity and bring awareness toward the development of risk-minimizing environments and the implementation of community-specific programs. It is likewise essential that the social determinants of health be investigated when studying these disparities: factors such as education, economic stability, neighborhood and environment, health and healthcare, and social and community context are large forces that mold health outcomes. The CDC reports that non-Hispanic Asian American women of higher income were less likely to have obesity than those in the middle- and lower-income groups (CDC, 2021b). Cook et al. found supporting evidence of this phenomenon, highlighting the inverse role of socioeconomic status (SES) in predicting obesity and overweight risk in Asian American adults (2017). With respect to neighborhood and environment, refugee and first-generation status and lack of education can also serve as barriers to better health outcomes and access to care (Sakamoto and Woo, 2007).
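The practical effect of the Asian-specific BMI cut point can be sketched in a few lines of code. This is only an illustration of the arithmetic (the function names are ours, not from any cited study); it uses the standard ≥ 30 kg/m² threshold alongside the ≥ 27.5 kg/m² cut point that Gordon et al. used:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def obesity_class(bmi_value: float, asian_cut_points: bool = False) -> str:
    """Classify obesity with the standard cut point (>= 30 kg/m^2)
    or the lower Asian-specific cut point (>= 27.5 kg/m^2)."""
    cutoff = 27.5 if asian_cut_points else 30.0
    return "obese" if bmi_value >= cutoff else "not obese"

# A BMI of ~28 falls below the standard cut point but above the Asian-specific one.
b = bmi(78.0, 1.67)  # ~27.97 kg/m^2
print(obesity_class(b))                         # not obese
print(obesity_class(b, asian_cut_points=True))  # obese
```

The point is that the same individual is classified differently under the two conventions, which is exactly why aggregated data using only the standard cut point can understate obesity-related risk in Asian American subgroups.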
In addition, other disadvantages facing Asian Americans that may contribute to obesity, overweight status, and diabetes include lack of health knowledge and information, limited encouragement of physical activity, and a history of food insecurity prior to immigration (Cook et al., 2017). One study from 2020 reported that recently immigrated mothers (< 5 years in the US) bore children who were more likely to be obese and to eat less fruit than Asian American children of US-born mothers (Argueza, Sokal-Gutierrez, and Madsen, 2020). Studies such as this one
provide key insights for creating targeted and specific efforts to prevent obesity and promote a healthy diet. In addition, residence in the US, and the increased availability of festival foods which are often high in fat, sugar, carbohydrates, and animal proteins, may lead to overconsumption of such calorie-dense foods and thus to obesity (Cook et al., 2017). Other risk factors include metabolic rate and exercise. Thus, it is also important that Asian Americans, especially those in urban areas, have access to mixed-use spaces and areas for recreation and physical activity such as playgrounds, mediums for active transit (such as paved streets), and, for children, access to schools and playgrounds (Sallis et al., 2012). It is also important to note that traditional Asian cuisine can look very different from a typical American or Asian American diet, and even within the realm of Asian food there exists a large variety. Traditionally, Asian diets are characterized by a low intake of fat and meat and a high intake of vegetables, fruit, and legumes, in stark contrast to Asian American diets, which feature high sodium, carbohydrate, and oil content, markedly influenced by US patterns of consumption (Ma et al., 2019). Community-based efforts such as the "Healthy Chinese Takeout Initiative" in Philadelphia, PA, used educational methods to teach community chefs in low-income and ethnic-minority neighborhoods healthier cooking practices such as reducing sodium levels (Ma et al., 2018). Ma et al. also used a community-based organization, IDEAL-REACH, to improve nutrition in its Asian American target population in Philadelphia (2019). Post-intervention data demonstrated that the program was successful in increasing whole-grain consumption, reducing sodium consumption, and increasing awareness of nutrition and heart health.
Lastly, it is interesting to note that obesity in Asian Americans may have a sizeable genetic component. One element of genetic risk for obesity involves susceptibility genes, part of which can be explained by a phenomenon called the "thrifty phenotype." The "thrifty phenotype" encompasses the idea that early nutritional deprivation (in utero or otherwise during a critical period of life) leads to epigenetic modifications in key genes that carry into obesity later in life. Some of these epigenetic characteristics affect the gametes, allowing modifications predisposing to obesity to be passed on to children in the subsequent generation (Siddiqui, Joy, and Nawaz, 2019).
Figure 3: A closer look into the diets and cultural practices of some Asian Americans demonstrates contributions to risk of diabetes, overweight status, and CVD, including high sodium intake and (history of) food insecurity. Source: Flickr
Hepatitis B
Perhaps the disease with the greatest disparity among Asian Americans is hepatitis B (Tien et al., 2013; Dienstag, 2008). Hepatitis, broadly, is marked by acute inflammation of the liver; hepatitis B is caused by the hepatitis B virus (HBV). The disease's incubation period can last anywhere from 30 to 75 days, with detection typically in the 30-to-60-day range (Dienstag, 2008; WHO, 2020). Common symptoms of HBV infection include fatigue, fever, appetite loss, nausea, vomiting, jaundice (including yellowing of the sclera), dark urine, pain in the right abdomen, and joint pain (Office of Minority Health, 2020c). Symptoms of hepatitis can be induced and worsened by common drugs and by alcohol. Nonviral forms of hepatitis, such as steatohepatitis or fatty liver disease, are closely linked to obesity and often result in chronic liver disease requiring liver transplants. Luckily, due to the liver's great regenerative capacity, liver transplants can be successful with just a portion of a healthy donor liver. Despite this fact, liver transplants in patients with chronic hepatitis B may not fully eradicate the virus from the body (Fung, 2015).
The US Department of Health and Human Services reports that one in twelve Asian Americans is chronically infected with HBV, a statistic corroborated by the fact that Asian Americans and Native Hawaiians/Pacific Islanders make up over half of the hepatitis B cases in the United States despite making up only 5.6% of the population (Office of Minority Health, 2019; Office of Minority Health, 2020; Office of Minority Health, 2021). These statistics are exceptionally concerning since HBV infection can often go unnoticed, leaving the effects much more severe (Kin et al., 2013). In fact, the US Department of Health and Human Services estimates that two-thirds of Asian Americans infected with HBV are not aware that
Figure 4: Incidence of hepatitis B in the world, showing particularly high rates in eastern/southeastern Asia, contributing to the overarching disparity of hepatitis B in Asian Americans despite the presence of an effective vaccine. Source: Wikimedia Commons
they are infected. If left untreated, chronic HBV can lead to cirrhosis (chronic scarring of the liver), liver cancer, liver failure, or even death (Office of Minority Health, 2020a; Chen and Dang, 2015; Huang et al., 2021). Thus, it is essential that Asian Americans, the population most at risk for developing hepatitis B in the US, be knowledgeable about treatments and preventative measures for this disease. One highly regarded prevention technique is the HBV vaccine, typically administered to infants in multiple doses but also proven effective when taken in adulthood. Additionally, hepatitis B is easily testable via a blood sample that checks for HBV DNA. Treatment for hepatitis B positive patients, whether infant, child, or adult, is available in the form of antivirals (Fung, 2015). However, the timeline of antiviral treatment varies by case, and since current antiviral medications will not completely eradicate the virus from previously infected cells, treatment regimens may be long-term to prevent flare-ups (Fung, 2015). To minimize the negative progression of the disease, infected individuals should refrain from excessive alcohol use, lead healthy lifestyles (e.g. diet, exercise, sleep), regularly consult their physician, and get tested for hepatitis A and C. Transmission of HBV can occur via unprotected sex with an infected individual, exchange of bodily fluids (e.g. blood, saliva), and, most
commonly, from an infected mother to child during birth or perinatally (Office of Minority Health, 2020b). Historically, the hepatitis B virus has been circulating and endemic in the eastern and southeastern Asian region of the world. Luckily, efforts to prevent the spread of this disease have increased; the CDC reports that birth-dose vaccination coverage in East Asian countries has increased from 34% to 54% and third-dose coverage from 89% to 91% (hepbtalk, 2020). These vaccination efforts have succeeded in decreasing the proportion of children under five chronically infected with HBV from 5% to 1% and, consequently, prevalence among Asian American immigrants as well (WHO, 2020). Still, these statistics do not take away from the realities and current consequences of HBV infection within Asian American populations. Researchers, physicians, and public health specialists argue that increased screening of Asian Americans, along with wide-range vaccination efforts in children and adults and health education, may be key to ameliorating the burden of hepatitis B in Asian American populations (Fung, 2015; Alber et al., 2018; Chen Jr. and Dang, 2015).
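The scale of the disparity cited above can be made concrete with a quick back-of-the-envelope calculation. The figures are taken from the statistics reported earlier in this section ("over half" of cases, 5.6% of the population); the exact shares vary by source, so treat this strictly as an illustration:

```python
# Overrepresentation of AAPI individuals among US chronic hepatitis B cases,
# using the figures cited in the text (illustrative, not exact).
share_of_cases = 0.50        # "over half" of US hepatitis B cases
share_of_population = 0.056  # AAPI share of the US population

# A ratio of 1.0 would mean cases track population share exactly.
ratio = share_of_cases / share_of_population
print(f"AAPI individuals are ~{ratio:.1f}x overrepresented among cases")  # ~8.9x
```

Even under the conservative "over half" reading, AAPI individuals appear nearly nine times overrepresented among cases relative to their population share, which underlines the case for targeted screening and vaccination.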
Conclusion
Clearly, health disparities among Asian Americans and Asian American subgroups are a topic that requires increased and continued attention in both clinical and public health research. A future with more specialized and effective community outreach
and intervention programs could pave the way toward minimizing the rising risk of CVD and obesity and increasing immunization among Asian Americans, including immigrants. In addition, better classification methods catered to Asian homeostatic norms could improve early intervention in overweight status and obesity, thus curbing the risk of more serious and life-altering conditions such as heart disease, hypertension, and type 2 diabetes. Lastly, greater efforts in testing and vaccination against hepatitis B could drastically reduce liver cancer progression and subsequent mortality.
It is also important to acknowledge that the three health conditions discussed here do not constitute a comprehensive review of Asian American health disparities, but rather serve as an introduction to the health issues facing Asian Americans. Other topics for future discussion include cancers other than liver cancer and the large, and perhaps just as important, problem of mental health and the current barriers that prevent adequate treatment of these illnesses (Yang et al., 2020). Further research into the impact of the COVID-19 pandemic on Asian Americans, including the rate of infection, incidence of vaccine hesitancy, and impact of isolation and stay-at-home orders, will also be worth studying in the foreseeable future.

References
Alber, J. M., Cohen, C., Nguyen, G. T., Ghazvini, S. F., & Tolentino, B. T. (2018). Exploring Communication Strategies for Promoting Hepatitis B Prevention among Young Asian American Adults. Journal of Health Communication, 23(12), 977–983. https://doi.org/10.1080/10810730.2018.1534904

Argueza, B. R., Sokal-Gutierrez, K., & Madsen, K. A. (2020). Obesity and Obesogenic Behaviors in Asian American Children with Immigrant and US-Born Mothers. International Journal of Environmental Research and Public Health, 17(5). https://doi.org/10.3390/ijerph17051786

Asian American—The Office of Minority Health. (n.d.). Retrieved March 2, 2021, from https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=3&lvlid=63

Cancer and Asian Americans—The Office of Minority Health. (n.d.). Retrieved March 2, 2021, from https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=46

CDC. (2019a, September 27). Leading Causes of Death, Asian/Pacific Islander Males 2016. Centers for Disease Control and Prevention. https://www.cdc.gov/healthequity/lcod/men/2016/nonhispanic-asian-or-islander/index.htm

CDC. (2019b, December 9). Coronary Artery Disease. Centers for Disease Control and Prevention. https://www.cdc.gov/heartdisease/coronary_ad.htm

CDC. (2020a, September 8). Heart Disease Facts. Centers for Disease Control and Prevention. https://www.cdc.gov/heartdisease/facts.htm

CDC. (2020b, September 17). The Health Effects of Overweight and Obesity. Centers for Disease Control and Prevention. https://www.cdc.gov/healthyweight/effects/index.html

CDC. (2020c, September 24). People Born Outside of the United States and Viral Hepatitis. Centers for Disease Control and Prevention. https://www.cdc.gov/hepatitis/populations/Born-Outside-United-States.htm

CDC. (2021a, January 13). Heart Disease Resources. Centers for Disease Control and Prevention. https://www.cdc.gov/heartdisease/about.htm

CDC. (2021b, February 11). Obesity is a Common, Serious, and Costly Disease. Centers for Disease Control and Prevention. https://www.cdc.gov/obesity/data/adult.html
Cheah, C. S. L., Wang, C., Ren, H., Zong, X., Cho, H. S., & Xue, X. (2020). COVID-19 Racism and Mental Health in Chinese American Families. Pediatrics, 146(5). https://doi.org/10.1542/peds.2020-021816

Chen Jr, M. S., & Dang, J. (2015). Hepatitis B among Asian Americans: Prevalence, progress, and prospects for control. World Journal of Gastroenterology, 21(42), 11924–11930. https://doi.org/10.3748/wjg.v21.i42.11924

Cook, W. K., Tseng, W., Tam, C., John, I., & Lui, C. (2017). Ethnic-group socioeconomic status as an indicator of community-level disadvantage: A study of overweight/obesity in Asian American adolescents. Social Science & Medicine, 184, 15–22. https://doi.org/10.1016/j.socscimed.2017.04.027

Dietary Guidelines for Americans, 2020-2025. (n.d.).

Fung, J. (2015). Management of chronic hepatitis B before and after liver transplantation. World Journal of Hepatology, 7(10), 1421–1426. https://doi.org/10.4254/wjh.v7.i10.1421

Fung, S. K., & Lok, A. S. F. (2004). Treatment of chronic hepatitis B: Who to treat, what to use, and for how long? Clinical Gastroenterology and Hepatology, 2(10), 839–848. https://doi.org/10.1016/s1542-3565(04)00386-6

Gordon, N. P., Lin, T. Y., Rau, J., & Lo, J. C. (2019). Aggregation of Asian-American subgroups masks meaningful differences in health and health risks among Asian ethnicities: An electronic health record based cohort study. BMC Public Health, 19(1), 1551. https://doi.org/10.1186/s12889-019-7683-3

hepbtalk. (2020, December 23). Hepatitis B in Asian Populations. Hepatitis B Foundation. https://www.hepb.org/blog/hepatitis-b-asian-populations/

Holland, A. T., & Palaniappan, L. P. (2012). Problems With the Collection and Interpretation of Asian-American Health Data: Omission, Aggregation, and Extrapolation. Annals of Epidemiology, 22(6), 397–405. https://doi.org/10.1016/j.annepidem.2012.04.001

Holland, A. T., Wong, E. C., Lauderdale, D. S., & Palaniappan,
L. P. (2011). Spectrum of Cardiovascular Diseases in Asian-American Racial/Ethnic Subgroups. Annals of Epidemiology, 21(8), 608–614. https://doi.org/10.1016/j.annepidem.2011.04.004

Huang, D. Q., Li, X., Le, M. H., Le, A. K., Yeo, Y. H., Trinh, H. N., Zhang, J., Li, J., Wong, C., Wong, C., Cheung, R. C., Yang, H.-I., & Nguyen, M. H. (2021). Natural History and Hepatocellular Carcinoma Risk in Untreated Chronic Hepatitis B Patients With Indeterminate Phase. Clinical Gastroenterology and Hepatology. https://doi.org/10.1016/j.cgh.2021.01.019

Huang, K.-Y., Calzada, E., Cheng, S., Barajas-Gonzalez, R. G., & Brotman, L. M. (2017). Cultural Adaptation, Parenting and Child Mental Health Among English Speaking Asian American Immigrant Families. Child Psychiatry and Human Development, 48(4), 572–583. https://doi.org/10.1007/s10578-016-0683-y

Jiali, Y., Rust, G., Baltrus, P., & Daniels, E. (2009). Cardiovascular Risk Factors among Asian Americans: Results from a National Health Survey. Annals of Epidemiology, 19(10), 718–723. https://doi.org/10.1016/j.annepidem.2009.03.022

Jose, P. O., Frank, A. T. H., Kapphahn, K. I., Goldstein, B. A., Eggleston, K., Hastings, K. G., Cullen, M. R., & Palaniappan, L. P. (2014). Cardiovascular Disease Mortality in Asian Americans. Journal of the American College of Cardiology, 64(23), 2486–2494. https://doi.org/10.1016/j.jacc.2014.08.048

Kin, K., Lin, B., Ha, N., Chaung, K., Trinity, H. N., Garcia, R., Nguyen, K., Nguyen, H. A., da Silveira, E., Levitt, B., & Nguyen, M. (n.d.). High Proportion of Hepatitis C Virus in Community Asian Amer... Journal of Clinical Gastroenterology. Retrieved February 27, 2021, from https://journals.lww.com/jcge/Fulltext/2013/04000/High_Proportion_of_Hepatitis_C_Virus_in_Community.15.aspx

Kirshner, L., Yi, S. S., Wylie-Rosett, J., Matthan, N. R., & Beasley, J. M. (2020). Acculturation and Diet Among Chinese American Immigrants in New York City. Current Developments in Nutrition, 4(1), nzz124. https://doi.org/10.1093/cdn/nzz124

Koohsari, M. J., Mavoa, S., Villanueva, K., Sugiyama, T., Badland, H., Kaczynski, A. T., Owen, N., & Giles-Corti, B. (2015). Public open space, physical activity, urban design and public health: Concepts, methods and research agenda. Health & Place, 33, 75–82. https://doi.org/10.1016/j.healthplace.2015.02.009
Mui, P., Hill, S. E., & Thorpe, R. J. (2018). Overweight and Obesity Differences Across Ethnically Diverse Subgroups of Asian American Men. American Journal of Men's Health, 12(6), 1958–1965. https://doi.org/10.1177/1557988318793259

Office of Minority Health. (2019, August 22). Asian American—The Office of Minority Health. https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=3&lvlid=63

Office of Minority Health. (2020a, February 28). Cancer and Asian Americans—The Office of Minority Health. https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=46

Office of Minority Health. (2020b). Asian Americans and Hepatitis B. FDA. https://www.fda.gov/consumers/minority-health-and-health-equity/asian-americans-and-hepatitis-b

Office of Minority Health. (2020c, December 31). Hepatitis and Asian Americans—The Office of Minority Health. https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=50

Office of Minority Health. (2021, February 11). Heart Disease and Asian Americans—The Office of Minority Health. https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=49

Pu, J., Hastings, K., Boothroyd, D., Powell, J., Chung, S., Shah, J., Cullen, M., Palaniappan, L., & Rehkopf, D. (n.d.). Geographic Variations in Cardiovascular Disease Mortality Among Asian American Subgroups, 2003–2011. https://doi.org/10.1161/JAHA.117.005597

Sallis, J. F., Floyd, M. F., Rodríguez, D. A., & Saelens, B. E. (2012). Role of Built Environments in Physical Activity, Obesity, and Cardiovascular Disease. Circulation, 125(5), 729–737. https://doi.org/10.1161/CIRCULATIONAHA.110.969022

Siddiqui, K., Joy, S. S., & Nawaz, S. S. (2019). Impact of Early Life or Intrauterine Factors and Socio-Economic Interaction on Diabetes—An Evidence on Thrifty Hypothesis. Journal of Lifestyle Medicine, 9(2), 92–101. https://doi.org/10.15280/jlm.2019.9.2.92
Latkin, C. A., Dayton, L., Yi, G., Konstantopoulos, A., & Boodram, B. (2021). Trust in a COVID-19 vaccine in the U.S.: A social-ecological perspective. Social Science & Medicine, 270, 113684. https://doi.org/10.1016/j.socscimed.2021.113684
Skinner, A. C., Ravanbakht, S. N., Skelton, J. A., Perrin, E. M., & Armstrong, S. C. (2018). Prevalence of Obesity and Severe Obesity in US Children, 1999–2016. Pediatrics, 141(3), e20173459. https://doi.org/10.1542/peds.2018-1916
Ma, G. X., Shive, S. E., Zhang, G., Aquilante, J., Tan, Y., Pharis, M., Bettigole, C., Lawman, H., Wagner, A., Zhu, L., Zeng, Q., & Wang, M. Q. (2018). Evaluation of a Healthy Chinese Take-Out Sodium-Reduction Initiative in Philadelphia Low-Income Communities and Neighborhoods. Public Health Reports, 133(4), 472–480. https://doi.org/10.1177/0033354918773747
Tummala-Narra, P., Li, Z., Chang, J., Yang, E. J., Jiang, J., Sagherian, M., Phan, J., & Alfonso, A. (2018). Developmental and contextual correlates of mental health and help-seeking among Asian American college students. American Journal of Orthopsychiatry, 88(6), 636–649. https://doi.org/10.1037/ort0000317
Ma, G. X., Zhu, L., Shive, S. E., Zhang, G., Senter, Y. R., Topete, P., Seals, B., Zhai, S., Wang, M., & Tan, Y. (2019). The Evaluation of IDEAL-REACH Program to Improve Nutrition among Asian American Community Members in the Philadelphia Metropolitan Area. International Journal of Environmental Research and Public Health, 16(17). https://doi.org/10.3390/ijerph16173054
Virani, S. S., Alonso, A., Aparicio, H. J., Benjamin, E. J., Bittencourt, M. S., Callaway, C. W., Carson, A. P., Chamberlain, A. M., Cheng, S., Delling, F. N., Elkind, M. S. V., Evenson, K. R., Ferguson, J. F., Gupta, D. K., Khan, S. S., Kissela, B. M., Knutson, K. L., Lee, C. D., Lewis, T. T., … Tsao, C. W. (2021). Heart Disease and Stroke Statistics—2021 Update. Circulation, 143(8), e254–e743. https://doi.org/10.1161/CIR.0000000000000950
Manjunath, L., Chung, S., Li, J., Shah, H., Palaniappan, L., & Yong, C. M. (2020). Heterogeneity of Treatment and Outcomes Among Asians With Coronary Artery Disease in the United States. Journal of the American Heart Association, 9(10), e014362. https://doi.org/10.1161/JAHA.119.014362
What is hepatitis B? (2020). Asian Liver Center. Retrieved February 14, 2021, from http://med.stanford.edu/liver/education/whatishepb.html
WHO. (2020). Hepatitis B. Retrieved February 27, 2021, from https://www.who.int/news-room/fact-sheets/detail/hepatitis-b

World Hepatitis Day: Fast-tracking the elimination of hepatitis B among mothers and children. (n.d.). Retrieved February 15, 2021, from https://www.who.int/news/item/27-07-2020-world-hepatitis-day-fast-tracking-the-elimination-of-hepatitis-b-among-mothers-and-children

Wu, C., Qian, Y., & Wilkes, R. (2020). Anti-Asian discrimination and the Asian-white mental health gap during COVID-19. Ethnic and Racial Studies, 1–17. https://doi.org/10.1080/01419870.2020.1851739

Yang, K. G., Rodgers, C. R. R., Lee, E., & Le Cook, B. (2020). Disparities in Mental Health Care Utilization and Perceived Need Among Asian Americans: 2012-2016. Psychiatric Services (Washington, D.C.), 71(1), 21–27. https://doi.org/10.1176/appi.ps.201900126

Yoo, H. C., Gee, G. C., & Takeuchi, D. (2009). Discrimination and health among Asian American immigrants: Disentangling racial from language discrimination. Social Science & Medicine, 68(4), 726–732. https://doi.org/10.1016/j.socscimed.2008.11.013
COVID Symptom Severity and the Immune Response
BY LAUREN FERRIDGE '23
Cover: Ultrastructural morphology of coronavirus SARS-CoV-2. The protruding red structures are the spike proteins which allow the virus to attach to and infect host cells. Source: Unsplash
Overview
The novel coronavirus outbreak, originating in Wuhan, China in December 2019, has led to a global pandemic and widespread disease (Röltgen et al., 2020). The disease resulting from infection by the SARS-CoV-2 virus is COVID-19, a respiratory condition in which infected persons can present mild to severe symptoms such as fever, cough, chronic respiratory disease, fatigue, and shortness of breath. The special danger of this disease is that it is possible to be infected but not exhibit symptoms, yet still spread the disease to others asymptomatically. Additionally, observations of infection trends, morbidity rates, and hospitalizations have shown that older people, especially those with underlying health conditions, are more likely to develop and experience severe symptoms (WHO, n.d.). It has also been observed that men are more susceptible to severe disease than women; however, women are more likely to experience 'long COVID,' or long-lasting
effects of COVID-19-related complications (Brodin, 2021). This article will review the biology of SARS-CoV-2 and the characteristics that allow it to infect human cells. The article will then analyze the human immune response to understand why patients experience different symptom severity. The discussion of the immune response will focus on cytokines and the so-called cytokine storm, antibodies, and immunoglobulin-G.

Coronaviruses are a family of viruses named for their characteristic crown shape. They are enveloped, positive-sense, single-stranded RNA viruses, meaning their genetic information is contained in one single strand of RNA. After entering host cells, the RNA is translated and an RNA-dependent RNA polymerase is created to synthesize additional mRNAs (Payne, 2017). Coronaviruses infect host cells using their spike (S) protein, which binds to cellular entry receptors on the
host cell. The viruses attach to angiotensin-converting enzyme 2 (ACE2) receptors in human cells. ACE2 receptors are significant because receptors regulate what can bind to and enter the cell. After binding, coronaviruses make copies of their RNA, which then become incorporated into new viral particles. Coronavirus accessory proteins, proteins that assist in viral function, are highly variable and virus-specific and are thought to modulate host responses to infection and determine viral pathogenicity. SARS-CoV-2 has some accessory proteins thought to interfere with antiviral host responses; these may dysregulate the immune system, thereby increasing pro-inflammatory responses and provoking further tissue damage in the lungs. Understanding the inflammatory response is crucial to understanding the immunological determinants of disease severity (V'kovski et al., 2020).
Immune Response
A knowledge of the immune system is necessary to understand the intricacies of the immune response to COVID-19. Broadly, antibody interaction with antigens is the basis of the immune response to the virus and is characterized by interactions between macrophages, T lymphocytes, and B lymphocytes (NIH, n.d.). The immune system has two different types of responses: innate and adaptive. The innate response provides immediate host defense and includes macrophages and cytokines. Macrophages detect and destroy bacteria and
initiate inflammation by releasing cytokines to signal the immune system to fight off infection. Adaptive immunity is comprised of T and B lymphocytes, which recognize antigen proteins on foreign viruses or bacteria and regulate the immune response. The adaptive immune system is more precise but takes several days to become active, as it relies on memory of past infection.

Figure 1: The human antibody response to SARS-CoV-2 infection. 1) The SARS-CoV-2 virus enters the host cell when its viral spike protein (S) interacts with angiotensin-converting enzyme 2 (ACE2). 2) and 3) After replication and release from the host cell, viruses are engulfed and digested by antigen-presenting cells (APCs), specifically macrophages or dendritic cells. 4) Fragmented SARS-CoV-2 antigens are presented to helper T cells, which interact with and activate B cells. 5) B cells proliferate and differentiate into plasma cells with high affinity for the original SARS-CoV-2 antigen. Plasma cells secrete SARS-CoV-2-specific antibodies in the form of immunoglobulin-M, immunoglobulin-G, or immunoglobulin-A. 6) Antibody-mediated neutralization occurs when SARS-CoV-2-specific antibodies bind to viral antigens and prevent virus interaction with, and subsequent entry into, host cells. Source: Wikimedia Commons
"Cytokines are small messengers that are secreted from one cell to another to alter the behavior of subsequent cells. They act as intercellular signals, and by binding to specific cell-surface receptors, they affect cell activation, division, apoptosis, or movement."

Cytokines are small messengers secreted from one cell to another to alter the behavior of subsequent cells. They act as intercellular signals: by binding to specific cell-surface receptors, they affect cell activation, division, apoptosis, or movement. Although cytokines are crucial to the immune response, the so-called "cytokine storm" can be detrimental to the host. This is a phenomenon whereby the immune system is overactivated and cytokines direct attacks not only against virus-infected cells but also against healthy host cells (Parkin & Cohen, 2001).

Analysis of the Cytokine Response

Scientists have observed a relationship between disease severity and levels of proinflammatory cytokines, suggesting that SARS-CoV-2 leads to immune dysregulation and increased concentrations of proinflammatory cytokines, which worsen tissue damage. Tissue injury as a result of the virus can also induce exaggerated cytokine production (the aforementioned cytokine storm) and macrophage activation syndrome (MAS), which involves uncontrolled activation and proliferation of macrophages and T lymphocytes, leading to further tissue damage. Retrospective analyses have supported this, showing that initial cytokine
levels are increased in patients with COVID-19 infection (compared to those not infected) and that cytokine concentrations are higher in ICU-admitted patients than in non-ICU-admitted patients (Tufan et al., 2020). A similar trend is seen in patients with severe infection versus those with non-severe infection. In particular, the cytokine IL-6 is a major contributor to the cytokine storm (Tufan et al., 2020). From these trends it can be surmised that excessive inflammatory responses, such as the cytokine storm, are directly correlated with a worse COVID-19 prognosis. In addition to increased levels of cytokines, there also tend to be increased levels of inflammatory macrophages, T lymphocytes, cytotoxic T cells, and neutrophils. When present in moderation, these help the immune system identify and fight invading viruses and harmful bacteria, but when unregulated they begin attacking healthy host cells (V'kovski et al., 2020).
"Patients with more severe illness had higher amounts of antibodies and increased viral loads when compared to those with milder infections, suggesting that larger initial amounts of viral antigen contribute to immune responses."
An analysis of cytokine, antibody, and immunoglobulin levels and activity in symptomatic versus asymptomatic patients is required to begin the investigation into the immunological and biological determinants of immune response and disease severity. For the purposes of this report, symptomatic patients are defined by a positive, laboratory-confirmed infection and the presence of clinical symptoms, while asymptomatic patients showed a positive nucleic acid test but no clinical symptoms (Long, Tang, et al., 2020).
Analysis of the Antibody Response

SARS-CoV-2-specific antibodies, especially those preventing interaction between the viral spike receptor binding domain (RBD) and the host ACE2 receptor, are effective in neutralizing the virus. Scientists are still unsure whether a serological response (antibody presence in blood), mediated by immunoglobulin-G (IgG), the most common antibody in blood, also affects clinical outcomes. In a recent report, spike 1 (S1) and nucleocapsid (N) protein-specific antibodies and RBD-ACE2 blocking antibodies were observed in asymptomatic and symptomatic patients (Röltgen et al., 2020). Another study indicated that roughly 40-45% of all COVID-19 infections remain asymptomatic. These asymptomatic patients presented a more rapid decline in RBD antibodies over time. Additionally, it was observed that antibodies in outpatient and asymptomatic patients decreased during observation; however, patients who died and patients who required ICU care showed the highest levels of antibodies. These patients
maintained a high level with no noticeable decrease while in the hospital (Röltgen et al., 2020). Furthermore, neutralizing antibody activity was higher in inpatients and in individuals who developed life-threatening COVID-19 compared to outpatients and those with mild symptoms. Additionally, a decrease in neutralizing antibodies was observed about a month after symptom onset (Brodin, 2021). Upon further investigation, relative antibody targeting was examined in SARS-CoV-2-infected patients in an attempt to determine whether the antibody response in the initial weeks of infection dictates disease severity. Patients with mild illness had lower levels of SARS-CoV-2 antibodies targeting RBD, N, and S1 compared to those with more severe illness, who had much higher levels of antibodies. This suggests that antibody response could serve as an indicator of disease severity (Röltgen et al., 2020). Another notable trend was that patients who died had a lower proportion of spike-targeting antibodies, relative to N-targeting antibodies, than patients who recovered. Outpatients with mild symptoms had higher levels of S1 and RBD antibodies than N antibodies when they first started to experience symptoms. This trend suggests that the early immune response targets spike proteins in an attempt to thwart and constrain infection. Recovered patients' relatively higher amounts of spike-targeting antibodies further suggest that those who died either failed to mount an effective immune attack against the spike (S) protein or that the S protein itself is connected to disease severity. In line with previous claims about overactive immune responses relating to severe disease expression, patients with more severe illness had higher amounts of antibodies and increased viral loads when compared to those with milder infections, suggesting again that larger initial amounts of viral antigen contribute to stronger immune responses.
It is important to understand that antibody analysis alone cannot accurately predict patient outcomes and experiences, as patients with moderate antibody production were present across the entire disease severity spectrum. Nonetheless, the fact that those who died presented the highest levels of RBD-ACE2 blocking antibodies suggests an overactive response can be detrimental (Röltgen et al., 2020).
Figure 2: Comparison of mild and severe immune responses to SARS-CoV-2 infection. In mild and moderate cases, the cytokine signature shows a controlled response with higher expression of cytokines and interferons, whereas in a severe response, a cytokine-induced immunopathological mechanism is observed, along with an increase of cytokines and delayed release of interferons. Source: Wikimedia Commons
Analysis of Viral Shedding and Immunoglobulin Levels

Another way to analyze the immune response is by observing viral shedding and levels of immunoglobulin. Viral shedding is a phenomenon in which a virus replicates inside the host and releases replicated viruses into the host environment. Immunoglobulin-G (IgG) and immunoglobulin-M (IgM) are immunoglobulins found in blood that also fight SARS-CoV-2. Interestingly, longer periods of viral shedding were observed in asymptomatic patients (median of 19 days) compared to the symptomatic group. Conversely, immunoglobulin-G (IgG) levels were significantly lower in asymptomatic patients, and these patients also had lower levels of inflammatory cytokines. This suggests that the amount of virus within the host, or the speed at which it was released into the environment, may affect the severity of the immune response and symptoms. These trends also suggest that asymptomatic individuals had a weaker or slower response to SARS-CoV-2 infection (Long, Tang, et al., 2020). A further investigation of the seroconversion of immunoglobulin shows that 19 days after symptom onset, 100% of patients tested positive for antiviral immunoglobulin-G. Seroconversion is the timespan within which a specific antibody becomes detectable in blood: before seroconversion, antibodies to an infection are not detectable; during seroconversion, antibodies are present but not yet detectable; and after seroconversion, the antibodies are
detectable, indicating disease and infection ("Seroconversion," 2021). There were higher recorded levels of IgG and IgM in the severely infected group versus the non-severely infected group. However, no association could be made between seroconversion, the process of changing from seronegative to seropositive, and clinical characteristics. From all these findings, one takeaway is that serological testing can be useful in identifying asymptomatic patients (Long, Liu, et al., 2020).
"Data shows that people older than 65 are significantly more likely to have a severe infection, whereas those younger than 65 are less likely to experience severe infection."
Symptom Severity with Respect to Age, Gender, and Health

Another way to analyze potential connections between clinical symptoms and immune response is to look at severity differences across gender and age. Data show that people older than 65 are significantly more likely to have a severe infection, whereas those younger than 65 are less likely to experience severe infection (Brodin, 2021). Possible reasons for this are again related to the immune response. Researchers posit that children are at low risk of severe COVID-19 infection in part because they mount robust type 1 IFN responses. In addition, their immune systems are younger and more accustomed to facing novel challenges, whereas older patients have a higher risk of severe COVID-19 infection because their immune systems rely much more heavily on memory responses. Additionally, people with pre-existing health issues, especially smokers, are more likely to experience severe symptoms. Smoking is
particularly significant because it induces ACE2 expression, facilitating SARS-CoV-2 entry into cells. The increased number of ACE2 receptors makes it easier for SARS-CoV-2 to bind to and enter the cell, and ultimately to replicate and produce more viral particles.
"Interestingly, gender is another factor that has been found to determine response to COVID-19. Men are more susceptible to infection and severe symptoms than women; however, women are more likely to experience ‘long COVID’ and its long-lasting effects."
Interestingly, gender is another factor that has been found to determine the response to COVID-19. Men are more susceptible to infection and severe symptoms than women; however, women are more likely to experience 'long COVID' and its long-lasting effects. The path to understanding the biology of these observed trends starts with NLRP3. While there is no direct or concrete connection between NLRP3 and gender, its activation (or lack thereof) is important for downstream effects within the immune response that are related to gender. NLRP3 is an inflammasome, an innate immune system receptor that induces inflammation, the release of proinflammatory molecules, and pyroptotic cell death. Pyroptotic cell death, or pyroptosis, is a form of cell death set off by proinflammatory signals. It is most commonly seen in macrophages (inflammatory cells) and is commonly triggered by infection (Ryter & Choi, 2014). NLRP3 activity is correlated with disease severity because pyroptotic cell death involves release of the enzyme lactate dehydrogenase (LDH). High levels of LDH were found in blood samples of COVID-19-infected patients, which suggests that inflammasome activation is important to the immune response and the regulation of SARS-CoV-2 (Brodin, 2021). Inflammasome activation can inhibit type 1 IFN responses in infected cells, allowing the virus to replicate; this induces further tissue damage and, as a result, a more dramatic immune response as immune cells struggle to hinder viral replication. This becomes more severe as inflammatory cells flow into the lungs and continue to produce large amounts of proinflammatory cytokines, causing intense damage to lung tissue. The type 1 IFN response is in part responsible for the imbalance in the immune response and is the most likely determinant of severity in COVID-19 infections.
These findings are significant in conjunction with recent results from the COVID Human Genetic Effort, which identified inborn errors in type 1 IFN pathways and the presence of neutralizing autoantibodies to type 1 IFN. This means that individuals with particular genotypes may have less robust IFN pathways or a greater tendency
to express neutralizing autoantibodies, which consequently makes them more susceptible to severe COVID-19. This observation suggests that there may be a genetic explanation for why some people experience more severe symptoms. Both findings were essential, but most notably, the autoantibodies against type 1 IFN were found in patients with life-threatening COVID-19 infection. This connects to gender disparities in symptom expression because women elicit stronger type 1 IFN responses and have better vaccine responses, which can explain why more women experience milder symptoms compared to men. It is also important to note that social factors, such as access to healthcare, living in a low-income area, and not being able to work from home, could potentially influence the gender trends of exposure and infection (Brodin, 2021).
Emerging Ideas

As SARS-CoV-2 emerged very suddenly and recently, there has simply not been enough time to gather an extensive amount of data and knowledge about the disease. That being said, researchers have made plenty of progress toward understanding the immune response to SARS-CoV-2 infection. First, as referenced in the previous section, autoantibodies are beginning to be studied for their impact on the immune response. Autoimmunity occurs when the immune system mistakenly turns against the body. Thus, autoantibodies are rogue antibodies that attack either elements of the immune system or proteins in specific organs. Autoantibodies can result in long-term damage and seem to affect the severity and long-term complications of COVID-19. Autoantibodies are more common in women, which may explain why women are more likely to experience 'long COVID'. It is possible that SARS-CoV-2 could cause the body to generate autoantibodies that attack its own tissues, but this is still uncertain. Additionally, autoantibodies that target phospholipids are dangerous because phospholipids have a role in controlling blood clotting; this is potentially hazardous when it comes to different treatments. With respect to COVID-19, it is important to have more autoantibody blood tests, particularly to test for pre-existing interferon-targeting autoantibodies (Khamsi, 2021). One proposed treatment for COVID-19 involves interferon as an immune system booster to improve clinical symptoms and to support the weakened immune system (Khamsi, 2021).
A second proposed connection to disease severity is the relevance of pre-existing immunity to common-cold coronaviruses. T cell reactivity linked to prior common-cold coronavirus exposure has been found in individuals never exposed to SARS-CoV-2. IgG specific to the SARS-CoV-2 spike protein has also been found in unexposed individuals, and this IgG seems to have some neutralizing activity against SARS-CoV-2. This suggests that IgG-mediated adaptive immunity is potentially protective in individuals who have previously been infected with other coronaviruses (Brodin, 2021). Lastly, because of the deadly nature of cytokine storms, they have become a target for COVID-19 treatment. Researchers are considering immunotherapy to control or regulate the cytokine storm (Wang et al., 2020). This is relevant not just for preventing death and reducing morbidity rates, but also for preventing acute respiratory distress syndrome (ARDS) and other long-lasting illnesses or conditions (Ragab et al., 2020).
Conclusion

Just as the COVID-19 pandemic is a rapidly evolving situation, so are the research efforts to understand the virus and its effects on the body. While significant advancements have been made in learning about COVID-19, the whole picture is still not completely understood. Especially with the introduction of new strains, it becomes harder to identify which aspects of the immune system are major players in infection symptoms and progression. Specific differences between individuals and their immune systems (including factors such as inflammatory mediators, host cell and tissue vulnerability, and T cell response) are just as likely to influence disease expression and patient experiences as cytokines, antibodies, and immunoglobulin levels.

References

Brodin, P. (2021). Immune determinants of COVID-19 disease presentation and severity. Nature Medicine, 27(1), 28–33. https://doi.org/10.1038/s41591-020-01202-8

Khamsi, R. (2021). Rogue antibodies could be driving severe COVID-19. Nature, 590(7844), 29–31. https://doi.org/10.1038/d41586-021-00149-1
Long, Q.-X., Liu, B.-Z., Deng, H.-J., Wu, G.-C., Deng, K., Chen, Y.-K., Liao, P., Qiu, J.-F., Lin, Y., Cai, X.-F., Wang, D.-Q., Hu, Y., Ren, J.-H., Tang, N., Xu, Y.-Y., Yu, L.-H., Mo, Z., Gong, F., Zhang, X.-L., … Huang, A.-L. (2020). Antibody responses to SARS-CoV-2 in patients with COVID-19. Nature Medicine, 26(6), 845–848. https://doi.org/10.1038/s41591-020-0897-1

Long, Q.-X., Tang, X.-J., Shi, Q.-L., Li, Q., Deng, H.-J., Yuan, J., Hu, J.-L., Xu, W., Zhang, Y., Lv, F.-J., Su, K., Zhang, F., Gong, J., Wu, B., Liu, X.-M., Li, J.-J., Qiu, J.-F., Chen, J., & Huang, A.-L. (2020). Clinical and immunological assessment of asymptomatic SARS-CoV-2 infections. Nature Medicine, 26(8), 1200–1204. https://doi.org/10.1038/s41591-020-0965-6

NIH. (n.d.). Immune response: MedlinePlus Medical Encyclopedia. MedlinePlus: U.S. National Library of Medicine. Retrieved February 9, 2021, from https://medlineplus.gov/ency/article/000821.htm

Parkin, J., & Cohen, B. (2001). An overview of the immune system. The Lancet, 357(9270), 1777–1789. https://doi.org/10.1016/S0140-6736(00)04904-7

Payne, S. (2017). RNA viruses: An overview. ScienceDirect. https://www.sciencedirect.com/topics/neuroscience/rna-viruses

Ragab, D., Salah Eldin, H., Taeimah, M., Khattab, R., & Salem, R. (2020). The COVID-19 cytokine storm; what we know so far. Frontiers in Immunology, 11. https://doi.org/10.3389/fimmu.2020.01446

Röltgen, K., Powell, A. E., Wirz, O. F., Stevens, B. A., Hogan, C. A., Najeeb, J., Hunter, M., Wang, H., Sahoo, M. K., Huang, C., Yamamoto, F., Manohar, M., Manalac, J., Otrelo-Cardoso, A. R., Pham, T. D., Rustagi, A., Rogers, A. J., Shah, N. H., Blish, C. A., … Boyd, S. D. (2020). Defining the features and duration of antibody responses to SARS-CoV-2 infection associated with disease severity and outcome. Science Immunology, 5(54). https://doi.org/10.1126/sciimmunol.abe0240

Ryter, S. W., & Choi, A. M. K. (2014). Pyroptosis: An overview. ScienceDirect. https://www.sciencedirect.com/topics/neuroscience/pyroptosis

Seroconversion. (2021). In Wikipedia. https://en.wikipedia.org/w/index.php?title=Seroconversion&oldid=1005181140

Tufan, A., Avanoğlu Güler, A., & Matucci-Cerinic, M. (2020). COVID-19, immune system response, hyperinflammation and repurposing antirheumatic drugs. Turkish Journal of Medical Sciences, 50(3), 620–632. https://doi.org/10.3906/sag-2004-168

V'kovski, P., Kratzel, A., Steiner, S., Stalder, H., & Thiel, V. (2020). Coronavirus biology and replication: Implications for SARS-CoV-2. Nature Reviews Microbiology, 1–16. https://doi.org/10.1038/s41579-020-00468-6

Wang, J., Jiang, M., Chen, X., & Montaner, L. J. (2020). Cytokine storm and leukocyte changes in mild versus severe SARS-CoV-2 infection: Review of 3939 COVID-19 patients in China and emerging pathogenesis and therapy concepts. Journal of Leukocyte Biology. https://doi.org/10.1002/JLB.3COVR0520-272R

WHO. (n.d.). Coronavirus. World Health Organization. Retrieved February 9, 2021, from https://www.who.int/westernpacific/health-topics/coronavirus
Gallbladder Cancer—Emerging Treatment Methods Offer Hope for Better Prognosis

BY LUCY FU '22

Cover: Microscopic view of a gallbladder carcinoma. Source: Wikimedia Commons
Introduction to Gallbladder Cancer
The story of gallbladder cancer presents itself as a tragedy more often than not. An organ located in the gastrointestinal tract, the gallbladder has the main function of storing bile acids secreted from the liver (Shukla et al., 2018). An unknowing patient might make a visit to the hospital with mild complications, only to find themselves diagnosed with late-stage gallbladder cancer. Symptoms associated with this condition may include abdominal pain, abdominal bloating, sudden weight loss, and jaundice, though it is not uncommon for symptoms to be absent until the disease has advanced significantly. As such, 4 in 5 cases of gallbladder cancer are discovered in their late stage because of the tendency for symptoms to be absent until the tumor is widespread (Rawla et al., 2019). With an overall 5-year survival rate of less than 5% and a mean survival time of a mere 6 months, gallbladder cancer is undeniably aggressive, insidious, and deadly (Goetze, 2015).
Gallbladder cancer is the most common malignant cancer of the biliary tract (Shukla et al., 2018). Although gallbladder cancer accounts for 1.2% of all global cancer diagnoses, it is responsible for 1.7% of all cancer deaths (Bray et al., 2018). This dismal prognosis stems from the lack of effective treatments for later-stage gallbladder carcinoma. Furthermore, many of the symptoms associated with this cancer are nonspecific, making it difficult to correctly diagnose patients who do initially experience symptoms. Due to the obscurity of gallbladder cancer symptoms, the most common method of detection for the malignancy is incidental finding on unrelated imaging before or after a surgical procedure (Recio-Boiles et al., 2020). If the cancer is detected in its early stages, the 5-year survival rate shoots up to 75% with proper treatment, compared to under 5% for
late-stage gallbladder cancer (Goetze, 2015). Thus, efforts to increase early detection and develop new treatment methods are crucial to improving the prognosis for gallbladder cancer patients.

Figure 1: Diagram of the gallbladder and surrounding structures. The gallbladder is located adjacent to the liver. Source: Wikimedia Commons
Stages and Risk Factors for Gallbladder Cancer

The stage of gallbladder cancer in a patient is a major determinant in deciding a particular course of treatment. The most widely used and recommended staging system for gallbladder cancer is the tumor-node-metastasis (TNM) system developed by the International Union Against Cancer (UICC) and the American Joint Committee on Cancer (AJCC) (Stavros et al., 2008). This system determines cancer stage based on the depth of tumor invasion relative to the anatomy of the gallbladder wall, which is made up of several layers. Briefly, the innermost layer is the epithelium, a thin sheet of cells that lines the inside wall of the gallbladder and is specialized for fluid absorption. The next layer is the lamina propria, a thin layer of loose connective tissue, which together with the epithelium forms the mucosa. The lamina propria is followed by a layer of muscular tissue, called the muscularis, that helps the gallbladder contract and excrete bile. Surrounding the muscularis is the perimuscular fibrous tissue. Finally, the serosa composes the outer covering of the gallbladder. The lymph nodes
are nearby structures that help to control infection by removing foreign substances from the lymphatic fluid. According to the UICC and AJCC's joint TNM system, Stage 0 refers to carcinoma in situ, meaning that the cancer has not spread beyond the place of original formation. Stages I and II are each subdivided into two more nuanced stages, depending on the location of the spread. Stage IA is characterized by the tumor invading the lamina propria or muscle layer; Stage IB describes a tumor that has invaded the perimuscular connective tissue. A Stage IIA tumor has perforated the serosa and/or directly invaded the liver and/or one adjacent organ. Tumors that have metastasized into any of the regional lymph nodes are considered Stage IIB. These stages are considered early in the progression of the tumor, while the stages that follow are considered later stages of malignancy (Fong et al., 2006). Stage III encompasses tumors that have invaded the main portal vein, the common hepatic artery, or multiple extrahepatic organs. Finally, Stage IV refers to tumors with distant metastases, including metastases in lymph nodes at the pancreatic body and tail (Stavros et al., 2008).
"Gallbladder cancer is the only digestive system cancer that is more common in women than men, with the female incidence ranging from three to six times higher than males."
Gallbladder cancer is the only digestive system cancer that is more common in women than men, with the female incidence ranging from three to six times higher than that of males (Rawla
Figure 2: The gallbladder is split into three sections, which are the fundus, body, and neck. The majority of gallbladder cancers originate in the fundus (Shaffer, 2008). Source: Wikimedia Commons
"Hardened deposits of digestive fluid formed in the gallbladder are called gallstones or ‘cholelithiasis,’ which is one of the most widely reported risk factors associated with the development of gallbladder cancer."
et al., 2005; Schmidt et al., 2019). Interestingly, the cancer also displays a strongly nonuniform geographical distribution. For example, northern India and southern Karachi, Pakistan report higher incidences of the cancer compared to Western countries such as the United States and Canada (Sharma et al., 2017). Likewise, some Eastern European countries such as Poland and Slovakia, as well as places in Asia like Japan and Shanghai, China, have higher rates of gallbladder cancer. It is likely that this geographic variation arises from different environmental factors and regional genetic predispositions. Another likely contributing factor is the dietary patterns associated with these regions (Kanthan et al., 2015). However, the specifics of these regional influences have not been widely studied, and the exact etiology of gallbladder cancer remains unknown (Schmidt et al., 2019). Several risk factors have been identified as highly correlated with gallbladder cancer. Hardened deposits of digestive fluid formed in the gallbladder, called gallstones or 'cholelithiasis,' are one of the most widely reported risk factors associated with the development of gallbladder cancer (Shaffer, 2008; Miller & Jarnagin, 2008; Muszynska et al., 2017). A history of cholelithiasis is associated with 70-90% of gallbladder cancer cases, but only 3% or fewer cases of cholelithiasis lead to
the development of gallbladder cancer (Hsing et al., 2007). Although a strong correlation has been observed, the mechanism by which cholelithiasis puts an individual at risk for gallbladder cancer is not thoroughly understood. Another prominent risk factor is the presence of an abnormal junction between the pancreatic and bile ducts known as an 'anomalous pancreaticobiliary duct junction' (Roukounakis et al., 2000). This rare congenital condition is associated with around 10% of all gallbladder cancers (Schmidt et al., 2019). To explain the disproportionate number of women with gallbladder cancer, researchers have proposed reproductive and menstrual factors as contributors to its etiology (Shin et al., 2011). Findings from several studies support this idea: irregular, longer menstrual cycles and exposure to female hormonal factors were found to play a significant role in the development of gallbladder carcinoma (Makiuchi et al., 2017).
Pathogenesis and Aggressive Carcinogenesis

There are three main sections of the gallbladder: the fundus, the large end that stores the bile; the body, the middle portion; and the neck, which tapers from the body and connects to the cystic duct leaving the gallbladder. 60%
of carcinomas of the gallbladder originate in the fundus, 30% originate in the body of the organ, and 10% originate in the neck (Shaffer, 2008). Nearly all gallbladder cancers are adenocarcinomas, meaning they arise from the secretory cells that line the inner gallbladder walls (Rawla et al., 2019). The aggressive spread of gallbladder cancer is largely due to lymphatic metastases that spread widely from their origin in the gallbladder. Additionally, the poor prognosis can in part be attributed to the anatomical structures involved; the invasion of the carcinoma into the liver and adjacent areas relates to how the gallbladder drains into a segment of the liver through short, directly connected veins. The gallbladder lacks a serosal layer adjacent to the liver, so its perimuscular connective tissue is directly continuous with the hepatic connective tissue of the liver (Hundal et al., 2005; Qadan & Kingham, 2016). This structural feature enables easy metastatic progression to the liver, leading to quicker advancement through the stages. The molecular pathogenesis of gallbladder cancer involves several identified gene mutations. Abnormalities of the p53 tumor-suppressor gene are seen in 35-92% of gallbladder cancers. Though the p53 gene has been suggested to contribute to gallbladder cancer development, this wide percentage range is suggestive of a lack of precise knowledge about the genetic basis of the disease (Stavros et al., 2008). Studies also identify mutations in K-ras, which encodes a protein that regulates cell growth, in 20-59% of gallbladder cancers (Sharma et al., 2017; Shukla et al., 2020). When K-ras is mutated, it stays abnormally active and consequently promotes uncontrolled cell growth (McCormick, 2015). Uncovering the genetic mutations that lead to gallbladder cancer is of great interest to researchers today, as it may pave the way for the development of targeted gene therapy.
Established Treatments

The only potentially curative treatment for gallbladder cancer is surgical resection, specifically a ‘cholecystectomy.’ Cholecystectomy is the surgical removal of the gallbladder and is a common procedure also performed for non-cancer-related reasons such as inflammation (Comitalo, 2012). In fact, 27-40% of cases of gallbladder cancer are found incidentally during or after a simple cholecystectomy performed for a
benign biliary disease (Rathanaswamy et al., 2012; Stavros et al., 2008). In these cases, the malignancy is often at the earliest stages, ‘T1’ and ‘T2,’ such that cholecystectomy alone can be curative. T1a tumors are removed by a simple cholecystectomy, which is sufficiently curative for over 85% of cases. Tumors in the T1b, T2, or T3 stage are also treated with simple cholecystectomy, but this surgery must be followed up with radical resection (Toyonaga et al., 2003). Radical resection of the gallbladder involves the removal of tissue around the tumor margins, which could include parts of the liver and lymph nodes that the cancer may have spread to in later stages (Abramson et al., 2009). For a T4 tumor, surgical resection is not considered to be an effective treatment (Qadan & Kingham, 2016). If the tumor is deemed unresectable, other forms of treatment include adjuvant chemotherapy and radiotherapy, although these have not produced sufficient evidence of significantly altering the prognosis of gallbladder cancer patients (Gourgiotis et al., 2008; Chao et al., 1995). This is due to a lack of comprehensive research on these treatments for gallbladder cancer, especially since many studies analyze a mix of biliary tract cancers (Yin et al., 2018). In some cases, adjuvant therapy can marginally improve survival, but other studies indicate that adjuvant chemotherapy does not improve overall survival time (Dutta, 2012; Memon et al., 2005). If the tumor is at a point of advanced malignancy, a patient is often directed to palliative care, which aims to relieve the pain and symptoms associated with gallbladder cancer. One such symptom is jaundice, caused by tumors that grow to obstruct the flow of bile from the gallbladder into the small intestine, causing a backup of bile into the liver and the bloodstream (Baiu & Visser, 2018). 
Implementing a biliary stent is a common palliative action that relieves symptoms relating to jaundice and improves the overall quality of life of patients (Yoshida et al., 2006). A biliary stent is a small tube that is placed through a duct blockage, allowing bile to drain into the small intestine.
Emerging Therapies

As detailed earlier, the majority of gallbladder cancers are discovered in later stages, which drastically decreases survival rates. Currently, this cancer is often discovered incidentally, and there are no reliable methods for primary prevention and detection. Recent
research regarding the genetic and molecular basis of gallbladder cancer may pave the way towards earlier detection and prevention (Wistuba & Gazdar, 2004). Modern pathology research has suggested that molecular markers of gallbladder cancer mainly involve 15 proteins, which could be useful for cancer detection and might also be able to serve as therapeutic targets in the near future (Andrén-Sandberg, 2012). Two particularly promising markers include promyelocytic leukemia protein (PML) and p53; patients with normal expression of these two proteins displayed more favorable outcomes compared to those with abnormal expression of one or both. PML is a tumor suppressor gene that plays a role in many cell processes such as apoptosis, cell cycle regulation, and DNA damage repair (Chan et al., 1997). Thus, PML and p53 could be used as clinical molecular prognostic markers and possible therapeutic targets of gallbladder cancer (Chang et al., 2007). Another growing field of research involves RNA molecules that control oncogene or tumor suppressor gene expression. Contrary to the traditional belief that non-coding RNAs do not play a role in regulating gene expression, newer studies show that microRNAs (miRNAs) and long non-coding RNAs play a crucial role in the gene expression underlying the development of gallbladder cancer. This discovery, along with advances in nanotechnology that allow more efficient delivery of RNA molecules to cells, makes this a promising field for further investigation (Song et al., 2020).
Conclusion

Gallbladder cancer is a devastating disease, with a stealthy pathogenesis and poor outcomes. For a relatively rare disease, this malignancy stands out as one of the most lethal cancers (Rakić et al., 2014). Because most gallbladder cancers are diagnosed in the later stages, many patients are left with few treatment options and demoralizing survival rates. While conventional therapies have not succeeded in curing gallbladder cancer, researchers are beginning to better understand its molecular and genetic basis, which may lead to the development of new treatment options. This advancing knowledge will inform better prevention protocols and more effective treatments, so that one day a gallbladder cancer diagnosis will no longer incite complete doom but instead bring pockets of hope.
References

Abramson, M. A., Pandharipande, P., Ruan, D., Gold, J. S., & Whang, E. E. (2009). Radical resection for T1b gallbladder cancer: A decision analysis. HPB: The Official Journal of the International Hepato Pancreato Biliary Association, 11(8), 656–663. https://doi.org/10.1111/j.1477-2574.2009.00108.x

Andrén-Sandberg, Å. (2012). Molecular biology of gallbladder cancer: Potential clinical implications. North American Journal of Medical Sciences, 4(10), 435. https://doi.org/10.4103/1947-2714.101979

Baiu, I., & Visser, B. (2018). Gallbladder cancer. JAMA, 320(12), 1294–1294. https://doi.org/10.1001/jama.2018.11815

Bray, F., Ferlay, J., Soerjomataram, I., Siegel, R. L., Torre, L. A., & Jemal, A. (2018). Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA: A Cancer Journal for Clinicians, 68(6), 394–424. https://doi.org/10.3322/caac.21492

Chan, J. Y. H., Li, L., Fan, Y.-H., Mu, Z.-M., Zhang, W.-W., & Chang, K.-S. (1997). Cell-cycle regulation of DNA damage-induced expression of the suppressor gene PML. Biochemical and Biophysical Research Communications, 240(3), 640–646. https://doi.org/10.1006/bbrc.1997.7692

Chang, H. J., Yoo, B. C., Kim, S. W., Lee, B. L., & Kim, W. H. (2007). Significance of PML and p53 protein as molecular prognostic markers of gallbladder carcinomas. Pathology & Oncology Research, 13(4), 326–335. https://doi.org/10.1007/BF02940312

Chao, T. C., Jan, Y. Y., & Chen, M. F. (1995). Primary carcinoma of the gallbladder associated with anomalous pancreaticobiliary ductal junction. Journal of Clinical Gastroenterology, 21(4), 306–308. https://doi.org/10.1097/00004836-199512000-00012

Comitalo, J. B. (2012). Laparoscopic cholecystectomy and newer techniques of gallbladder removal. JSLS: Journal of the Society of Laparoendoscopic Surgeons, 16(3), 406–412. https://doi.org/10.4293/108680812X13427982377184

Dutta, U. (2012). Gallbladder cancer: Can newer insights improve the outcome? Journal of Gastroenterology and Hepatology, 27(4), 642–653. https://doi.org/10.1111/j.1440-1746.2011.07048.x

Fong, Y., Wagman, L., Gonen, M., Crawford, J., Reed, W., Swanson, R., Pan, C., Ritchey, J., Stewart, A., & Choti, M. (2006). Evidence-based gallbladder cancer staging. Annals of Surgery, 243(6), 767–774. https://doi.org/10.1097/01.sla.0000219737.81943.4e

Goetze, T. O. (2015). Gallbladder carcinoma: Prognostic factors and therapeutic options. World Journal of Gastroenterology, 21(43), 12211–12217. https://doi.org/10.3748/wjg.v21.i43.12211

Gourgiotis, S., Kocher, H. M., Solaini, L., Yarollahi, A., Tsiambas, E., & Salemis, N. S. (2008). Gallbladder cancer. The American Journal of Surgery, 196(2), 252–264. https://doi.org/10.1016/j.amjsurg.2007.11.011

Hsing, A. W., Bai, Y., Andreotti, G., Rashid, A., Deng, J., Chen, J., Goldstein, A. M., Han, T.-Q., Shen, M.-C., Fraumeni, J. F., & Gao, Y.-T. (2007). Family history of gallstones and the risk of
biliary tract cancer and gallstones: A population-based study in Shanghai, China. International Journal of Cancer, 121(4), 832–838. https://doi.org/10.1002/ijc.22756

Hundal, R., & Shaffer, E. A. (2014). Gallbladder cancer: Epidemiology and outcome. Clinical Epidemiology, 6, 99–109. https://doi.org/10.2147/CLEP.S37357

Kanthan, R., Senger, J.-L., Ahmed, S., & Kanthan, S. C. (2015). Gallbladder cancer in the 21st century. Journal of Oncology, 2015. https://doi.org/10.1155/2015/967472

Makiuchi, T., Sobue, T., Kitamura, T., Sawada, N., Iwasaki, M., Sasazuki, S., Yamaji, T., Shimazu, T., & Tsugane, S. (2017). Reproductive factors and gallbladder/bile duct cancer: A population-based cohort study in Japan. European Journal of Cancer Prevention, 26(4), 292–300. https://doi.org/10.1097/CEJ.0000000000000260

McCormick, F. (2015). KRAS as a therapeutic target. Clinical Cancer Research, 21(8), 1797–1801. https://doi.org/10.1158/1078-0432.CCR-14-2662

Memon, M. A., Anwar, S., Shiwani, M. H., & Memon, B. (2005). Gallbladder carcinoma: A retrospective analysis of twenty-two years experience of a single teaching hospital. International Seminars in Surgical Oncology, 2(1), 6. https://doi.org/10.1186/1477-7800-2-6

Miller, G., & Jarnagin, W. R. (2008). Gallbladder carcinoma. European Journal of Surgical Oncology (EJSO), 34(3), 306–312. https://doi.org/10.1016/j.ejso.2007.07.206

Muszynska, C., Lundgren, L., Lindell, G., Andersson, R., Nilsson, J., Sandström, P., & Andersson, B. (2017). Predictors of incidental gallbladder cancer in patients undergoing cholecystectomy for benign gallbladder disease: Results from a population-based gallstone surgery registry. Surgery, 162(2), 256–263. https://doi.org/10.1016/j.surg.2017.02.009

Qadan, M., & Kingham, T. P. (2016). Technical aspects of gallbladder cancer surgery. The Surgical Clinics of North America, 96(2), 229–245. https://doi.org/10.1016/j.suc.2015.12.007

Rakić, M., Patrlj, L., Kopljar, M., Kliček, R., Kolovrat, M., Loncar, B., & Busic, Z. (2014). Gallbladder cancer. Hepatobiliary Surgery and Nutrition, 3(5), 221–226. https://doi.org/10.3978/j.issn.2304-3881.2014.09.03

Rathanaswamy, S., Misra, S., Kumar, V., Chintamani, Pogal, J., Agarwal, A., & Gupta, S. (2012). Incidentally detected gallbladder cancer: The controversies and algorithmic approach to management. The Indian Journal of Surgery, 74(3), 248–254. https://doi.org/10.1007/s12262-012-0592-7

Rawla, P., Sunkara, T., Thandra, K. C., & Barsouk, A. (2019). Epidemiology of gallbladder cancer. Clinical and Experimental Hepatology, 5(2), 93–102. https://doi.org/10.5114/ceh.2019.85166

Recio-Boiles, A., Kashyap, S., & Babiker, H. M. (2021). Gallbladder cancer. In StatPearls. StatPearls Publishing. http://www.ncbi.nlm.nih.gov/books/NBK442002/

Roukounakis, N. E., Kuhn, J. A., & McCarty, T. M. (2000). Association of an abnormal pancreaticobiliary junction with biliary tract cancers. Proceedings (Baylor University Medical Center), 13(1), 11–13.
Schmidt, M. A., Marcano-Bonilla, L., & Roberts, L. R. (2019). Gallbladder cancer: Epidemiology and genetic risk associations. Chinese Clinical Oncology, 8(4), 2–2. https://doi.org/10.21037/cco.v8i4.28517

Shaffer, E. A. (2008). Gallbladder cancer. Gastroenterology & Hepatology, 4(10), 737–741.

Sharma, A., Kumar, A., Kumari, N., Krishnani, N., & Rastogi, N. (2017). Mutational frequency of KRAS, NRAS, IDH2, PIK3CA, and EGFR in North Indian gallbladder cancer patients. Ecancermedicalscience, 11. https://doi.org/10.3332/ecancer.2017.757

Sharma, A., Sharma, K. L., Gupta, A., Yadav, A., & Kumar, A. (2017). Gallbladder cancer epidemiology, pathogenesis and molecular genetics: Recent update. World Journal of Gastroenterology, 23(22), 3978–3998. https://doi.org/10.3748/wjg.v23.i22.3978

Shin, A., Song, Y.-M., Yoo, K.-Y., & Sung, J. (2011). Menstrual factors and cancer risk among Korean women. International Journal of Epidemiology, 40(5), 1261–1268. https://doi.org/10.1093/ije/dyr121

Shukla, S. K., Singh, G., Shahi, K. S., Bhuvan, & Pant, P. (2020). Genetic changes of p53 and KRAS in gallbladder carcinoma in Kumaon region of Uttarakhand. Journal of Gastrointestinal Cancer, 51(2), 552–559. https://doi.org/10.1007/s12029-019-00283-0

Shukla, S. K., Singh, G., Shahi, K. S., Bhuvan, & Pant, P. (2018). Staging, treatment, and future approaches of gallbladder carcinoma. Journal of Gastrointestinal Cancer, 49(1), 9–15. https://doi.org/10.1007/s12029-017-0036-5

Song, X., Hu, Y., Li, Y., Shao, R., Liu, F., & Liu, Y. (2020). Overview of current targeted therapy in gallbladder cancer. Signal Transduction and Targeted Therapy, 5(1), 1–19. https://doi.org/10.1038/s41392-020-00324-2

Toyonaga, T., Chijiiwa, K., Nakano, K., Noshiro, H., Yamaguchi, K., Sada, M., Terasaka, R., Konomi, K., Nishikata, F., & Tanaka, M. (2003). Completion radical surgery after cholecystectomy for accidentally undiagnosed gallbladder carcinoma. World Journal of Surgery, 27(3), 266–271. https://doi.org/10.1007/s00268-002-6609-9

Wistuba, I. I., & Gazdar, A. F. (2004). Gallbladder cancer: Lessons from a rare tumour. Nature Reviews Cancer, 4(9), 695–706. https://doi.org/10.1038/nrc1429

Yin, L., Xu, Q., Li, J., Wei, Q., & Ying, J. (2018). The efficiency and regimen choice of adjuvant chemotherapy in biliary tract cancer. Medicine, 97(50). https://doi.org/10.1097/MD.0000000000013570

Yoshida, H., Mamada, Y., Taniai, N., Mizuguchi, Y., Shimizu, T., Yokomuro, S., Aimoto, T., Nakamura, Y., Uchida, E., Arima, Y., Watanabe, M., Uchida, E., & Tajiri, T. (2006). One-step palliative treatment method for obstructive jaundice caused by unresectable malignancies by percutaneous transhepatic insertion of an expandable metallic stent. World Journal of Gastroenterology: WJG, 12(15), 2423–2426. https://doi.org/10.3748/wjg.v12.i15.2423

Z’graggen, K., Birrer, S., Maurer, C. A., Wehrli, H., Klaiber, C., & Baer, H. U. (1998). Incidence of port site recurrence after laparoscopic cholecystectomy for preoperatively unsuspected gallbladder carcinoma. Surgery, 124(5), 831–838. https://doi.org/10.1016/S0039-6060(98)70005-4
Compost Tea: A Review of the Experimental Data and Its Potential Implications BY MADDIE BROWN '22, IAN HSU '23, BASILE MONTAGNESE '22, AND JULIA DRAVES '23 Cover: A field of tea leaves. Source: Wikimedia Commons
Introduction In developing countries around the world, many rural communities face frequent food shortages, which cause undernourishment, malnutrition, and increased mortality (DeRose et al., 1999). The goal of the Dartmouth Humanitarian Engineering (DHE) club is to counter these humanitarian problems through practical engineering solutions. Within DHE, the goal of the Compost Tea Project is to combat food insecurity in regions with poor soil quality by providing a simple method of fertilizing community gardens, thus improving soil quality and increasing crop yield and nutritional content.
What is Compost Tea, and What are the Benefits?

Compost tea is a microbial soil treatment: a nutrient-rich solution brewed by steeping a bag of compostable material in water and
allowing the water to absorb the nutrients as organic material is broken down. Application of compost tea to soil increases the nutrients available, thus improving the health and yield of several crop plants. Empirical evidence includes increased nitrogen levels, shoot growth, and root growth in red leaf lettuce and sweet corn (Kim et al., 2015), as well as improved commercial yield and nutritional content (by as much as 30%) in lettuce and kohlrabi plants following application of compost tea (Pane et al., 2014). In a study on strawberries, foliar compost tea applications increased the concentration of sodium in the fruit (Hargreaves et al., 2009). Compost tea improves plant health and yield primarily through improvements in soil quality. The delivery of organic content into soil by compost tea
increases soil cation exchange capacity, replenishes nonrenewable nutrients, improves soil porosity, enhances the chemical stabilization of the soil, and mitigates soil erosion. This was demonstrated by Hargreaves et al. (2009), who particularly noted improvements in nitrogen concentration in the soil. Finally, research on compost tea has also found it to be an effective pesticide against certain fungi and other pests. A study by Scheuerell et al. (2004) concluded that aerated compost tea suppressed the growth of Botrytis cinerea (a common fungal pathogen) on bean leaflets. This does not require continuous spraying of the plants, as Scheuerell et al. (2004) found that pathogen infection was prevented even with decreased spray frequency.
Why is Compost Tea Better Than its Alternatives?

Compost tea provides a promising alternative to other fertilization approaches, such as hydroponic agricultural systems, manure fertilizers, and synthetic fertilizers. Hydroponic agricultural systems (which grow plants with nutrients in sand, gravel, or liquid but without soil) do not require arable land and can grow a variety of crops with high nutritional value (Anda & Shear, 2017). However, these systems require complex mechanisms with high capital and operational costs and need additional training to maintain, making widespread, universal application (particularly in developing countries) infeasible (Palande et al., 2018). On the other
hand, natural fertilizers (such as manure) are easy to apply and increase the nutrients available for plant growth, but can also contaminate groundwater and spread disease (Manyi-Loh et al., 2016). Synthetic fertilizers are attractive for their efficacy, but are expensive, contaminate groundwater, and disqualify growers from organic markets (Hallberg, 1987).
Figure 1: First Prototype. This is the first prototype made for the project. It is made of two buckets fused together and an interior pump which pumps water through the exterior PVC pipes to create a vortex.
Creating Our Compost Tea Prototype
To tackle this problem of plant nutrient deficiency, we combined existing solutions. We produced an affordable, effective, and feasible hydroponics system to produce organic fertilizer and compared two hydroponic nutrient solutions: one utilized compost tea (a liquid solution made from the decomposition of plant biomass in water) and the other utilized livestock manure. Ultimately, the compost tea solution was chosen despite its increased complexity and cost, as it had a lower risk of harboring pathogenic microbes and was simpler to process (Ingham, 2005). Our composting device is designed to create a nutrient-rich solution for farmers through the breakdown of compostable waste. To make compost tea solution, fresh plant biomass is decomposed in an aerated liquid medium, allowing for nutrient mobilization. After 2 days, the solution becomes nutrient-rich and usable as organic fertilizer for hydroponic gardens (Ingham, 2005). Aeration is artificially induced via a vortex, as it has been demonstrated that aeration provides oxygen that enables more
Source: Image taken by authors
Figure 2: Second Prototype. This is the second prototype made for the project. It is made of two buckets fused together and an interior pump which pumps water through the interior PVC pipes in crosscurrent fashion to create a vortex. Source: Image created by authors
Table 1: Comparing Prototypes. Comparison of oxygenation levels, cost, compost tea volume, and build time for all prototypes.

Prototype   Oxygenation   Cost   Volume   Build time
1           80 ppm        $70    5 gal    over 5 hours
2           155 ppm       $75    5 gal    2 hours
3           80 ppm        $85    25 gal   20 minutes
Figure 3: Third Prototype. This is the third prototype made for the project. It is made of one bucket and an interior pump which pumps water through an interior PVC pipe that splits in a T to produce a vortex as liquid is pumped. Source: Image taken by authors
efficient bacterial decomposition of organic material (Shaban et al., 2015). Thus, the nutrient-rich compost tea solution is ready in 48 hours, as opposed to the longer times needed to break down plant material in non-aerated compost tea brews. Our first prototype used a waterfall vortex method to aerate the water, as this is known to be a superior method of aerating a medium compared to mechanical aeration (Butcher et al., 2017). This design employs two buckets that are fused together, with three pipes at the top of the bucket aimed diagonally downwards. When water is pumped through the pipes, a vortex is produced. The prototype had an oxygenation of 80 ppm, a materials cost of $70, a volume of 5 gallons, and a build time of over 5 hours. However, this prototype had issues with leaking and overheating due to insufficient sealing of the two fused buckets.
The second prototype used a similar framework, but the water was pumped through two opposing pipes at the top of the bucket complex. This version of the device created oxygen levels of 155 ppm, cost $75 to build, held 5 gallons of compost tea, and required 2 hours to build. However, there were difficulties sourcing the venturi system pipes used, as their volumes proved too small. The third and final prototype utilizes a single bucket design with a simple aquarium pump at the bottom of the bucket. The pump brought water into a single pipe with two exit points angled 30 degrees below the horizontal. This final prototype had an oxygenation of 80 ppm, a cost of $85, a volume of 25 gallons, and a 20-minute build time.
Spring 2019 Experiment: NPK Soil Nutrient Level

Next, the prototype was tested in several experimental phases. The first experiment, conducted in the spring of 2019, aimed to examine the prototype’s effect on NPK (Nitrogen, Phosphorous, and Potassium) levels in the soil. During a trip in the summer of 2018, Dartmouth students tested the NPK levels of the soil on five urban orchards in Quito, Ecuador. Four of the five orchards had nitrogen levels below ideal amounts, and all five orchards had phosphorus and potassium levels below ideal amounts. These are vital components of plant growth, and therefore restoring them was a high priority. Twelve individually potted plants were grown in the Dartmouth Greenhouse for three months. Six of the plants were corn and six
Figure 4: These graphs show the potassium levels in the soil of Corn 1-3 compared to Corn 4-6 over time. A linear trendline was added in Excel.
Figure 5: This figure graphs the relative mass of the plants over time. 1:2 is shown in red, 1:1 in blue, and water in yellow.
Figure 6: This graphs the number of diseased leaves per mass for each group.
were tomatoes. These plants were chosen for their short growth times and high nutrient demands. Pots 1-3 of both the corn and tomato plants were designated water-only; pots 4-6 of each received the compost tea solution twice a week to ensure the soil was never deprived of nutrients. For this experiment, the compost tea was delivered in a 1:1 ratio with water, for a total of 125 ml of solution per watering. Group members used colorimetric assays from the LaMotte NPK test kit for weekly tests.
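The dilution arithmetic behind these tea-to-water ratios can be sketched as a small helper function. This is purely illustrative and not part of the project's actual tooling; the function name is our own.

```python
# Illustrative helper: split a total watering volume into compost tea and
# water portions for a given tea:water ratio (e.g., 1:1 or 1:2).

def dilution_volumes(ratio_tea, ratio_water, total_ml):
    """Return (tea_ml, water_ml) for a tea:water ratio and total volume."""
    parts = ratio_tea + ratio_water
    return (total_ml * ratio_tea / parts, total_ml * ratio_water / parts)

# Spring 2019: 125 ml of 1:1 solution per watering
print(dilution_volumes(1, 1, 125))  # (62.5, 62.5)
# The later Fall 2019 experiment also used a 1:2 tea:water group
print(dilution_volumes(1, 2, 125))
```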
The results of the experiment were mixed and sparse. Comparisons between the average potassium levels over time suggest that compost tea was able to prevent potassium loss: Corn pots 1-3 saw a decrease in potassium over time while Corn pots 4-6 saw an increase (Figure 4). In contrast, the phosphorus levels remained constant for the duration of the experiment in both the treatment and control groups. Nitrogen results were inconclusive due to a lack of data.

However, it is worth noting several inconsistencies in the methodology for determining NPK levels. Since the tests were based on colorimetric assays, the readings were subject to individual differences in distinguishing colors. As such, we performed further experiments to validate and expand upon the data.

Fall 2019 Experiment: Plant Mass and Health
The next experiment focused on plant mass and health, and on establishing an ideal ratio of compost tea to water. In order to mimic a more natural environment, the experiment took place at the Dartmouth Organic Farm, where group members planted kale and divided the plants into one control and two treatment groups. The control group received only water. The first treatment group received compost tea solution in a 1:1 ratio with water, and the second treatment group received compost tea in a 1:2 ratio. The team hypothesized that the higher ratio would provide more nutrients to the plants and therefore cause them to grow larger compared to the 1:1 ratio. We were also interested in whether varying the ratio of compost tea to water would improve
Figure 7: This figure graphs the increase in plant height and number of diseased leaves. Data for biological replicate 1 was not collected.
compost tea’s ability to prevent disease. For this experiment, the relative mass of the plants was measured every week alongside the number of diseased leaves. The results clearly demonstrate that plants that received compost tea had a larger mass than the control. The 1:2 ratio plants had the highest relative mass over the course of the experiment; however, the 1:1 ratio plants were the only group to grow at a constant rate throughout. This experiment also noted the relative health of the plants, measured by the number of diseased leaves per mass. The plants in the 1:1 group had significantly fewer diseased leaves than the water or 1:2 groups. Additionally, while the 1:2 group had more diseased leaves than the 1:1 group, it had far fewer diseased leaves per mass than the control. These results suggest compost tea may be beneficial to overall plant health. Unlike the previous experiment, this one did not encounter as many difficulties. While deer nibbled on some of the kale plants in the last two weeks of the experiment, the plants remained mostly intact, and only ‘non-nibbled’ leaves were collected for weighing. However, the soil at the Organic Farm is of much higher quality than that on urban farms. As such, the next goal for the project was to mimic a more realistic soil quality.
Summer 2020 Experiment: Plant Height and Health

In our third experiment, we looked at efficacy by testing plant height and the number of diseased leaves. Plants were treated either with compost tea in a 1:1 ratio with water or with water alone. To prevent soil quality from confounding our results, we performed this experiment as biological replicates at four different sites: two in Massachusetts, one in New York, and one in Tennessee. Additionally, experiments were performed in technical replicates of N=10.
Plant height and the number of diseased leaves were tracked over a 7-week period by each participant. A preliminary examination of the data seems promising: plants treated with compost tea were taller than plants treated with water, and they also had fewer diseased leaves. However, a more in-depth statistical analysis is needed to confirm that these differences are statistically significant. It is also important to note that the soil conditions in this experiment were not necessarily reflective of the ideal environment for the compost tea prototype. All the participants had prior experience in gardening, and most had experience in composting as well. As a result, the baseline soil quality in this experiment was already good, meaning that the compost tea may not have had as great an impact.
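One way such a follow-up analysis could be run is a two-sample Welch's t-test comparing final plant heights between the compost-tea and water-only groups. The sketch below implements the test from scratch; the height values are invented placeholders for illustration, not the experiment's actual measurements.

```python
# Sketch of a Welch's t-test (unequal variances) on hypothetical data.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) for two independent samples."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# Placeholder final heights (cm) for N=10 technical replicates per group.
tea = [41.2, 39.8, 44.0, 42.5, 40.1, 43.3, 41.9, 42.8, 40.6, 43.7]
water = [37.5, 36.1, 39.0, 38.2, 35.8, 38.9, 37.0, 36.6, 38.4, 37.9]

t, df = welch_t(tea, water)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A t statistic well above the critical value for the computed degrees of freedom would support the preliminary impression that the height difference is real rather than noise.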
Implications of the Experiments
These experiments, as well as the existing literature, point to promising future applications of compost tea. In addition to being an organic fertilizer, compost tea has the potential to mitigate plant disease. A study published in Biological Control used fertilized compost tea from agro-waste to suppress Choanephora cucurbitarum, the causal pathogen of wet rot disease in okra (Siddiqui, 2008). Another study found compost tea decreased soil-borne diseases, such as wilts (caused by Fusarium oxysporum and Verticillium dahliae) as well as root rots and damping off (caused by Pythium ultimum, Rhizoctonia solani, and Phytophthora spp.) (St. Martin, 2012). Our Fall 2019 experiment found similar results, in that plants treated with compost tea had fewer diseased leaves, although that experiment centered on plant height and mass. Unlike traditional compost, which can be bulky and difficult to use, compost tea can be easily applied to large tracts of land. Therefore, compost tea is also gaining more
References

Anda, J. D., & Shear, H. (2017). Potential of vertical hydroponic agriculture in Mexico. Sustainability, 9(1), 140. https://doi.org/10.3390/su9010140

Butcher, J. D., Laubscher, C. P., & Coetzee, J. C. (2017). A study of oxygenation techniques and the chlorophyll responses of Pelargonium tomentosum grown in deep water culture hydroponics. HortScience, 52(7), 952-957. https://doi.org/10.21273/hortsci11707-16

Hallberg, G. R. (1987). The impacts of agricultural chemicals on ground water quality. GeoJournal, 15, 283–295. https://doi.org/10.1007/BF00213456

Hargreaves, J. C., Adl, M. S., & Warman, P. R. (2009). The effects of municipal solid waste compost and compost tea on mineral element uptake and fruit quality of strawberries. Compost Science & Utilization, 17(2), 85-94. https://doi.org/10.1080/1065657X.2009.10702406

Ingham, E. (2005). The compost tea brewing manual. Corvallis, OR: Soil Foodweb.

Kim, M. J., Shim, C. K., Kim, Y. K., Hong, S. J., Park, J. H., Han, E. J., ... & Kim, S. C. (2015). Effect of aerated compost tea on the growth promotion of lettuce, soybean, and sweet corn in organic cultivation. The Plant Pathology Journal, 31(3), 259.

Manyi-Loh, C. E., Mamphweli, S. N., Meyer, E. L., Makaka, G., Simon, M., & Okoh, A. I. (2016). An overview of the control of bacterial pathogens in cattle manure. International Journal of Environmental Research and Public Health, 13(9), 843. https://doi.org/10.3390/ijerph13090843

Palande, V., Zaheer, A., & George, K. (2018). Fully automated hydroponic system for indoor plant growth. Procedia Computer Science, 129, 482-488. https://doi.org/10.1016/j.procs.2018.03.028

Pane, C., Palese, A. M., Celano, G., & Zaccardelli, M. (2014). Effects of compost tea treatments on productivity of lettuce and kohlrabi systems under organic cropping management. Italian Journal of Agronomy, 9(3), 153. https://doi.org/10.4081/ija.2014.596

Scheuerell, S. (2002). Compost tea: Principles and prospects for plant disease ... https://faculty.washington.edu/elizaw/compost_TEA_review.pdf

Shaban, H., Fazeli-Nasab, B., Alahyari, H., Alizadeh, G., & Shahpesandi, S. (2015). An overview of the benefits of compost tea on plant and soil structure. Advances in Bioresearch, 6(1), 154-158. https://doi.org/10.15515/abr.0976-4585.6.1.154158

Siddiqui, Y., Meon, S., Ismail, R., & Rahmani, M. (2008). Bio-potential of compost tea from agro-waste to suppress Choanephora cucurbitarum L. the causal pathogen of wet rot of okra. Biological Control. https://www.sciencedirect.com/science/article/pii/S1049964408003034

St. Martin, C. C. G., & Brathwaite, R. A. I. (2012). Compost and compost tea: Principles and prospects as substrates and soil-borne disease management strategies in soil-less vegetable production. Biological Agriculture & Horticulture, 28(1), 1-33. https://doi.org/10.1080/01448765.2012.671516
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
WINTER 2021
The Health and Environmental Impacts of Mass Incarceration BY MAEEN ARSLAN '23 Cover: This map highlights the Superfund sites, areas so concentrated with contaminants that a long-term response is required to clean the site, within the United States as of 2013. The red dots show active Superfund sites, the green dots show cleaned-up sites, and the yellow dots show potential sites. Prisons are often located near these Superfund sites due to the low cost of construction and maintenance. Source: Wikimedia Commons
Introduction

Ending mass incarceration and ending environmental degradation are two seemingly separate initiatives, but as environmental justice rises to the forefront of American policy, it is clear that the two are closely intertwined. Mass incarceration has been a pressing issue in the United States, with organizations such as Decarcerate PA, The Sentencing Project, The Last Prisoner Project, and the NAACP's Criminal Justice Program calling attention to the phenomenon. Likewise, environmental preservation has support from organizations such as Greenpeace, Friends of the Earth, and the Sierra Club. Research has linked mass incarceration and environmental degradation as joint violations of environmental and criminal justice (Tsolkas, 2016). Mass incarceration can accelerate the deterioration of the health of prisoners, staff, and the surrounding community while irreparably damaging the environment (Tsolkas, 2016).
Origin and Consequences of Mass Incarceration

Mass incarceration refers to the extreme rate of growth of imprisonment, and more specifically to the disproportionate imprisonment of certain demographics. Black Americans have disproportionately suffered at the hands of a "carceral state," a state that scrutinizes individuals before and after their interactions with the criminal justice system. This likely stems from mass incarceration's historical roots in white officers policing slaves in the 1800s (Manion, 2019). The beginnings of this carceral state lay in treating the freedom of Black Americans as illegal. During the Jim Crow era, the carceral state was the government's reaction to the gains made in the Civil Rights movement (Manion, 2019). By 2020, Black Americans made up 56% of the incarcerated population despite constituting only 13.4% of the general population (NAACP, 2020).

Although Black Americans were actively targeted after the rise of the Civil Rights movement, there are additional factors behind this rapid growth. During the 1970s, political agendas such as Lyndon B. Johnson's "War on Poverty" and Richard Nixon's "War on Drugs" perpetuated the growth of incarceration by directing law enforcement toward areas that are disproportionately Black (Thompson, 2010). In the early 1970s, New York began passing drug legislation that imposed harsher punishments rather than offering rehabilitative approaches, and this type of legislation spread across the nation. For example, in 1978 Michigan passed the "650 lifer law," which mandated a life sentence for any individual convicted of intent to deliver 650 or more grams of cocaine. Under this law, an individual could be convicted without possessing the substance, simply by being in proximity to it. This initiated the over-policing of urban spaces and the targeting of communities of color. However, studies by The National Household Survey on Drug Abuse showed that people of color were no more likely to commit drug-related offenses or misuse substances than their white counterparts (Thompson, 2010). During the 1970s, petty offenses such as talking too loudly during school also became penalized, and inner-city schools saw their students imprisoned from a young age. Many children also had at least one parent in federal or state prison: by 2008, 52% of state and 63% of federal inmates reported having a combined 1,706,600 minor children (Thompson, 2010). Additionally, former convicts had difficulty finding work, since employers would not hire an ex-convict and the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 limited the federal support that ex-convicts could receive (Thompson, 2010). To make conditions worse, prisons were built near low-income communities, which were often located by polluting facilities (McHale, 2020).

Prison Locations
Figure 1. This image shows the California Men's Colony (CMC), which is located in San Luis Obispo, CA, and is the site of raw sewage spillage by wastewater systems. Source: Wikimedia Commons
Alongside the general trend of prison construction near low-income communities, the environmental guidelines that must be taken into consideration prior to construction have often been ignored. Under the National Environmental Policy Act (NEPA), federal agencies must consider the environmental impact of any of their proposed actions. However, the Bureau of Prisons has not considered its population as part of NEPA's environmental justice guidance (Bradshaw, 2020). Prisons are often located near Superfund sites, which are government-designated areas with high pollution emissions that require a long-term response to clean up toxic materials. Using the Northwest Detention Center (NWDC) as an example, economic prioritization, in which the prisoners' health was considered a lower priority than costs and profits, was the driving factor in the location of the prison (McHale, 2020). In this study, the Correctional Services Corporation (CSC) chose a toxic site over a port site to avoid any conflict with local government officials. Economic prioritization was also evident in the siting of ADX Florence and SCI Fayette. The lack of consideration for the health of inmates is further demonstrated by the fact that these prisons sit on areas that were declared unfit for residents and designated Superfund sites.
Health of the Incarcerated Population

With prisons located on toxic hotbeds, the health of the incarcerated population is compromised. To make matters worse, inadequate resources for mental health and women's health further endanger inmates (Health and Incarceration: A Workshop Summary, 2013). In the general prison setting, the environment within the building can deteriorate inmates' health. For one, the food has minimal nutritional value, since it is high-fat and high-calorie. Additionally, smoking, poor ventilation, overcrowding, and stress can worsen chronic health conditions. Outside the prison, toxic areas such as old coal mines can introduce illnesses such as sinus and respiratory infections, gastrointestinal tract issues, and skin irritation (Health and Incarceration: A Workshop Summary, 2013). The Safe Drinking Water Act was passed in 1974 with the intention of ensuring safe drinking water for the public (US EPA, 2015). However, this has not been enough for the prison population, as inmates have drunk water contaminated with arsenic and lead, elements that are toxic if sufficient amounts are ingested (Bradshaw, 2020). Once inmates are released, they must re-enroll in Medicaid and Medicare, since their enrollment is revoked in 90% of states (Morrissey et al., 2006). Without insurance, former inmates face greater barriers and can no longer receive treatment for their chronic conditions or for new illnesses developed in prison.

Figure 2. This image shows the overcrowding present in prisons that leads to many illness outbreaks as well as the deterioration of inmates' physical and mental health. Source: Wikimedia Commons

Figure 3. This image shows people protesting solitary confinement. Solitary confinement is linked to deteriorating mental health and is a source of inadequate health within the prison population. Source: Wikimedia Commons

Furthermore, a large portion of the prison population does not receive treatment for mental illness due to a lack of mental health professionals available to prisoners and a shrinking budget for mental health support (Gonzalez & Connell, 2014). As of 2017, about 37% of the prison population suffered from mental health issues (Stringer, 2019). Being incarcerated and untreated for mental health conditions can cause poor adjustment to life in prison (Hills et al., 2004). This, coupled with crowded living
conditions, increased risk of victimization, and solitary confinement correlates strongly with self-harm and adaptation challenges (Olley et al., 2009; Kaba et al., 2014). In a study done across state and federal prisons, fewer than 50% of those who had been taking medication for their mental health conditions continued to receive medication during their time in prison. Those with more severe illnesses, such as schizophrenia, where outward symptoms may be present, received medication more often than those without any outward presentation of symptoms. This may be because erratic behavior is considered a safety threat. Of those taking medication, 61% did not receive any other form of treatment (Gonzalez & Connell, 2014). However, many argue that a multidimensional approach involving medication and varied therapy may be more effective (Stringer, 2019).

Women are another marginalized group within the prison system. Women are considered a minority in the incarcerated population but are its fastest-growing segment, with a 700% increase from 1980 to 2019, going from a total of 26,378 to 222,455 (Fettig, 2020). The current health model does not cater to the specific needs of women, such as testing for chronic hepatitis C virus (HCV) infection, STDs, and cervical cancer, leaving them with inadequate health care. In a study done at the Rhode Island Department of Corrections (RIDOC) in Cranston, Rhode Island, 70% of incarcerated women were tested for HCV, and
40% tested positive (Nijhawan, 2010). This rate is 20 times greater than in the general population (Hammett et al., 2002). Although not stated, there may have been minimal treatment due to a lack of HCV treatment in prisons (Maru et al., 2008). More progress can be made by detecting chronic HCV infection and initiating treatment for all inmates who meet the criteria, and then connecting inmates to treatment providers once they are released. Additionally, female prisoners have a higher prevalence of HIV and other sexually transmitted diseases (STDs) than their male counterparts. Moreover, cervical cancer, which is caused by human papillomavirus (HPV), is more prevalent within the incarcerated population (Proca et al., 2006). Proca and colleagues also found that only 62% of the sample received a Pap smear as a preventative screening; of those, 40% had an abnormal Pap smear, a rate six times greater than that reported in the general community. Preventative screening should be offered to all inmates to detect and treat cancers earlier and prevent unnecessary deaths. Oftentimes, these women return to high-risk behavior, so it is equally important to connect them with medical professionals in their communities to retain the progress made during the incarceration period.
Impact of Incarceration on Community Health

When thinking of incarceration, the public may not consider the harmful effects prisons have on the environment and on the communities living near these toxic facilities. One study of SCI Fayette reports that residents of LaBelle, Pennsylvania, the small rural community near which the prison was built, complained about coal ash. No action was taken beyond issuing small fines, despite the harm coal ash can do, including respiratory problems, hypertension, heart problems, brain and nervous system damage, liver damage, stomach and intestinal ulcers, and many forms of cancer, including skin, stomach, lung, urinary tract, and kidney cancers (Bernd et al., 2017). Prisons also leave the surrounding environment devoid of resources, generate thousands of miles of commuting pollution, and displace wildlife due to their size (Decarcerate PA, 2013).
Future Reforms

The goal of prison reform is not an easy one, as the prison system originated in, and continues to profit from, incarcerating people of color for the financial benefit of the government and the private owners of facilities. Ultimately, to reduce the toxicity of prisons, prisons should offer alternatives to imprisonment such as parole, improve sanitation, and augment medical resources so that they are sufficient for all demographics. Prisons themselves are constructed in ways that can be physically and psychologically unhealthy for inmates, with higher-security facilities offering little to no access to the natural world (Fight Toxic Prisons, 2019). In addition, community support would help reduce recidivism, since former convicts would have an opportunity for increased economic stability and a structure that reduces their risk of committing criminal acts. The damage prisons do to the environment intersects with environmental justice, which makes it an even more pressing issue. Programs such as the Prison Ecology Project (PEP) work toward better understanding and addressing the intersecting nature of mass incarceration and environmental degradation. The PEP addresses issues including the damage from sewage and industrial waste flowing from overpopulated and under-regulated prisons into waterways; threats to listed species from the ongoing construction and operation of prisons in remote, environmentally sensitive rural areas; and environmental justice concerns surrounding prisoners, staff, and surrounding communities (PEP, 2016). It is evident that the prison system threatens the health of its inmates through its geographic proximity to toxins; lawmakers must take drastic criminal and environmental justice actions to protect prisoners across the country.

References

Bernd, C., Loftus-Farren, Z., & Mitra, M. N. (2017). America's Toxic Prisons. Earth Island Journal: Earth Island Institute. Retrieved January 6, 2021, from https://earthisland.org/journal/americas-toxic-prisons/

Bradshaw, E. A. (2018). Tombstone Towns and Toxic Prisons: Prison Ecology and the Necessity of an Anti-prison Environmental Movement. Critical Criminology, 26(3), 407-422. doi:10.1007/s10612-018-9399-6

Criminal Justice Fact Sheet. (2020). Retrieved January 5, 2021, from https://www.naacp.org/criminal-justice-factsheet/

Decarcerate PA. (2013). The Environmental Costs of Mass Incarceration [Fact Sheet]. https://decarceratepa.info/sites/default/files/environment_factsheet.pdf

Fettig, A. (2020). Incarcerated Women and Girls. Retrieved January 5, 2021, from https://www.sentencingproject.org/publications/incarcerated-women-and-girls/
Fight Toxic Prisons. (2019). Prison as a Process of Toxification. Retrieved January 6, 2021, from https://fight-toxic-prisons.org/2019/03/28/prison-as-a-process-of-toxification/

Hills, H., Sigfried, C., & Ickowitz, A. (2004). Effective Prison Mental Health Services: Guidelines to Expand and Improve Treatment. Mental Health Law & Policy Faculty Publications, 91.

Kaba, F., Lewis, A., Glowa-Kollisch, S., Hadler, J., Lee, D., Alper, H., Selling, D., MacDonald, R., Solimo, A., Parsons, A., & Venters, H. (2014). Solitary confinement and risk of self-harm among jail inmates. American Journal of Public Health, 104(3), 442–447. https://doi.org/10.2105/AJPH.2013.301742

Manion, J. (2019). Carceral History in the Era of Mass Incarceration. The Pennsylvania Magazine of History and Biography, 143(3), 233-246. doi:10.5215/pennmaghistbio.143.3.0233

Maru, D. S., Bruce, R. D., Basu, S., & Altice, F. L. (2008). Clinical outcomes of hepatitis C treatment in a prison setting: feasibility and effectiveness for challenging treatment populations. Clinical Infectious Diseases, 47(7), 952–961. https://doi.org/10.1086/591707

McHale, C. J. (2020). Toxic Prisons: An Exploration of the Connection Between Prisons and Superfund Sites [Honors thesis in Sociology, Whitman College].

Morrissey, J. P., Steadman, H. J., Dalton, K. M., Cuellar, A., Stiles, P., & Cuddeback, G. S. (2006). Medicaid enrollment and mental health service use following release of jail detainees with severe mental illness. Psychiatric Services, 57(6), 809–815. https://doi.org/10.1176/ps.2006.57.6.809

Nijhawan, A. E., Salloway, R., Nunn, A. S., Poshkus, M., & Clarke, J. G. (2010). Preventive healthcare for underserved women: results of a prison survey. Journal of Women's Health, 19(1), 17–22. https://doi.org/10.1089/jwh.2009.1469

Olley, M. C., Nicholls, T. L., & Brink, J. (2009). Mentally ill individuals in limbo: obstacles and opportunities for providing psychiatric services to corrections inmates with mental illness. Behavioral Sciences & the Law, 27(5), 811–831. https://doi.org/10.1002/bsl.899

Prison Ecology Project. (2016). Retrieved January 6, 2021, from https://nationinside.org/campaign/prison-ecology/facts/

Proca, D. M., Rofagha, S., & Keyhani-Rofagha, S. (2006). High grade squamous intraepithelial lesion in inmates from Ohio: cervical screening and biopsy follow-up. CytoJournal, 3, 15. https://doi.org/10.1186/1742-6413-3-15

Reingle Gonzalez, J. M., & Connell, N. M. (2014). Mental health of prisoners: identifying barriers to mental health treatment and medication continuity. American Journal of Public Health, 104(12), 2328–2333. https://doi.org/10.2105/AJPH.2014.302043

Smith, A. (2013). Health and Incarceration: A Workshop Summary. National Academies Press.

Stringer, H. (2019, March). Improving mental health for inmates. Retrieved January 6, 2021, from https://www.apa.org/monitor/2019/03/mental-heath-inmates

Thompson, H. (2010). Why Mass Incarceration Matters: Rethinking Crisis, Decline, and Transformation in Postwar American History. The Journal of American History, 97(3), 703-734. Retrieved January 6, 2021, from http://www.jstor.org/stable/40959940

Tsolkas, P. (2016). Incarceration, Justice and the Planet: How the Fight Against Toxic Prisons May Shape the Future of Environmentalism. Retrieved January 6, 2021, from https://www.prisonlegalnews.org/news/2016/jun/3/ncarceration-justice-and-planet-how-fight-against-toxic-prisons-may-shape-future-environmentalism/

US EPA. (2015, March 25). Safe Drinking Water Act (SDWA). https://www.epa.gov/sdwa
From Microneedles to Nanogels: The Diverse Mechanisms of Drug Delivery into the Body BY MIRANDA YU '24 Cover: Application of a microneedle patch. After insertion into the skin, the microneedles dissolve within minutes to release the encapsulated drug or vaccine. Source: Flickr NIH Image Gallery
What are Drug Delivery Systems?

Drug delivery systems allow for the targeted delivery of drugs or therapeutic agents with a broad range of medical functions. It is often difficult to perfectly match the chemical structures of drugs with their desired biological targets, which can cause side effects for the patient. The effects of a drug are strongly influenced by the route and method of administration. Delivery systems often need to work against bodily defenses like the blood-brain barrier and processes that remove foreign substances (NIBIB, 2016). For a specific medical goal, some drug delivery systems provide more effective or efficient treatment and better mitigate side effects. The FDA defines drugs as agents used to diagnose, treat, or prevent disease by influencing a body structure or function (FDA, 2017). Not only have there been improvements in the types of drugs, but the routes and
methods through which a drug is transported to a desired location in the body have also been enhanced over time. The oral and transdermal routes are the oldest and simplest methods. While there is typically little control over the drug's actions in the body with oral and transdermal drug delivery, these drugs can be made more specific when they are held within a vehicle that encapsulates the agent and exerts spatiotemporal control over its release (NIBIB, 2016). These special vehicles vary from macroscale drug delivery (MDD) devices to nanoparticles (Kearney & Mooney, 2013). Although small-molecule drugs, hormones, and vaccines are the standard cargo delivered by these systems, recent research has also involved biological macromolecules such as proteins and genes. Each method of delivery has advantages and disadvantages, and it is up to researchers and scientists to sculpt the best system possible for a particular medical objective. The variety of drug delivery systems helps tailor treatment to specific diseases and illnesses, from influenza to cancer.

Destinations and Delivery Routes

The delivery route of a drug goes hand-in-hand with its destination, which may be an entire organ, a type of tissue, a disease-specific structure like a tumor, or an intracellular process. Employing a reverse engineering method, scientists frequently work backward from the location of interest. There are several routes of delivery that give scientists a choice in how they target a body part or location. The main methods of delivery include oral, intravenous, and transdermal routes (NIBIB, 2016). Oral administration occurs by swallowing the drug and is the most common form of drug delivery. It does not require significant expertise, nor does it feel invasive (Brown et al., 2020). Although oral administration works well for smaller drug molecules, it is not as capable of delivering larger macromolecules like proteins and peptides. Biological barriers, such as high acidity and digestive enzymes in the gastrointestinal tract, degrade most proteins and peptides into their amino acid subunits before they reach their targets. In particular, hydrochloric acid in digestive fluid creates a highly acidic environment of pH 1-2 in the stomach that optimizes the performance of pepsin, a protein-digesting enzyme (Brown et al., 2020). Pepsin contributes to the degradation of drugs in the stomach before they can be absorbed into the bloodstream. Furthermore, mucus physically limits the diffusion of drugs into the inner epithelial layers of the gastrointestinal tract. Given all of these barriers, purely oral deliveries of more complex cargo suffer from a great loss of material and low absorption (Brown et al., 2020). Only 1% of macromolecules – even fewer than 0.1% in some cases – are left intact for absorption into the blood (Zhou & Li Wan Po, 1991; Fjellestad-Paulsen et al., 1993).
In comparison, intravenous injections and transdermal absorption circumvent the digestive system by entering the bloodstream directly. However, unlike swallowing a pill, injections are invasive and can sometimes be painful (NIBIB, 2016). Recently, researchers have attempted to reduce discomfort through the development of extremely thin, painless microneedle arrays. Contained in patches, the microneedles are engineered to deliver drugs such as vaccines and hormones. They are so minuscule (<1 mm) that they do not reach nerve endings and blood vessels when they penetrate the outer layers of skin (Kusama et al., 2021). After creating microscopic pores, they allow the drug to be absorbed at lower dermal layers. Thus, microneedle arrays can pierce the outermost epidermis layer while remaining a minimally invasive method. Microneedle patch deliveries are both effective and convenient (Hirobe et al., 2015). The Emory University Department of Medicine conducted a partly-blind, randomized, placebo-controlled trial and found that microneedles could potently deliver influenza vaccines (Rouphael et al., 2017). One hundred participants between the ages of 18 and 49 were randomly assigned (1:1:1:1) to four groups: three received a dose of the inactivated influenza vaccine, Fluvirin, by a microneedle patch, an intramuscular injection, or a self-administered microneedle patch, while the fourth group received a placebo through a microneedle patch. Dr. Rouphael and colleagues discovered that the vaccines delivered through microneedles spurred significant antibody responses after
Figure 1: Oral drug delivery passes through the liver and digestive system before entering the bloodstream. Intravenous drugs are administered into the bloodstream directly. Source: Flickr NIH Image Gallery
Figure 2: Microneedle array compared with a hypodermic needle. Source: Wikimedia Commons
day 28, demonstrating that micro-patches were a valid vaccination method. They observed if and when seroconversion – the point at which antibodies produced by the immune system become detectable in the blood – occurred to determine the success of the vaccination. At day 28, the seroconversion percentages of the microneedle patch vaccination were significantly higher than those of the placebo and were similar to those of intramuscular delivery, signifying that the vaccine had successfully taken effect (Rouphael et al., 2017).
Microneedle-assisted vaccine delivery is accessible to under-resourced communities with insufficient healthcare providers or storage facilities because the patches are easy to use and can be self-administered at home. Furthermore, they do not require special refrigeration or disposal (NIBIB, 2016). Although some microneedles made of metal, stainless steel, or silicon need to be discarded as sharps, dissolvable microneedle patches like MicroHyala (MH) require no additional disposal. These microneedles can be made from hyaluronic acid, a natural component of skin tissue, and can thus dissolve in the body (Hirobe et al., 2015). The advantages of microneedle arrays make them an attractive choice for several applications. Microneedle patches can deliver hormonal contraceptives just as effectively as conventional methods without requiring frequent or professional administration (Li et al., 2019). Previous microneedle delivery of levonorgestrel (LNG), used for emergency contraception, was not able to provide proper sustained release of the drug, but a team at the Georgia Institute of Technology designed a microneedle patch that did. These microneedles contained an air bubble between them and the patch backing, which enabled separation once the needles penetrated the skin. The detached microneedles would then stay in the skin and slowly biodegrade over the course of more than a month. This lengthy release period extended the time the contraceptive remained effective (Li et al., 2019). A major drawback of these delivery routes is the lack of control over delivery mechanisms. Although microneedles exert some influence over the location and time of delivery, they primarily deliver vaccines and hormones without target specificity. For greater control, researchers have implemented several types of vehicles that transfer their drug cargo with greater precision (Kearney & Mooney, 2013).
Vehicles of Delivery Allow for Control

Vehicles of delivery primarily fall into two categories that differ by size: macroscale drug-delivery (MDD) devices have at least one dimension (length, width, or height) over 1 mm, whereas micro- and nanocarriers are much smaller, with micrometer and nanometer dimensions, respectively (Kearney & Mooney, 2013). The agents that these vehicles can deliver range from small-molecule drugs and antibodies to proteins, oligonucleotides, silencing RNA, and plasmid DNA. Nanoparticles have been developed as delivery vehicles only recently; MDD devices are the more traditional option. MDD devices are typically delivered by injection or implantation and employ a carrier material that controls the drug or agent by virtue of its physical or chemical properties (Kearney & Mooney, 2013). Materials involved in constructing both types of vehicles include liposomes and hydrogels (Li & Mooney, 2016). Liposomes are vesicles composed of one or more phospholipid layers. They provide a barrier between the external environment and the drug, which is held in the interior of the vesicle. Because of their similarity to cell membranes, liposomes are effective in transferring agents into the body. Although they can be rapidly absorbed by phagocytic cells that engulf foreign particles as part of the immune response, liposomes with surface modifications have enhanced circulation times and effectiveness (Mitchell et al., 2020). Liposomes have previously been shown to deliver cancer treatments effectively (Tiwari et al., 2012). After liposomes enter tumor tissue, they stay there until they are subject to enzymatic degradation or phagocytic attack, which releases the drug among the tumor cells. Furthermore, liposomes can respond to external triggers that control when a drug is released and, consequently, where it is absorbed.
One such trigger is heat, as investigated in a study on the heat-mediated delivery of a thermosensitive liposomal taxol formulation as an anticancer treatment. Experiments in a murine (mouse) melanoma model showed a significant decrease in tumor volume in mice treated with hyperthermia and thermosensitive liposome-encapsulated taxol, compared to those treated with free taxol with or without hyperthermia (Mukhopadhyay et al., 1995). Novel release triggers such as light, ultrasound, electrical and magnetic fields, and the presence of specific molecules are still being investigated (Wang & Kohane, 2017).
Hydrogels are another vehicle of drug delivery and rely on three-dimensional networks of crosslinked polymer chains. They can effectively encapsulate hydrophilic and labile (chemically reactive) agents, such as recombinant proteins and monoclonal antibodies. Hydrogels can also carry a variety of small-molecule drugs, macromolecular drugs, and even cells. They have unique properties that give them spatial and temporal control over the release of the drug (Kearney & Mooney, 2013). Their advantages include controllable physical properties, vehicle degradability, and the ability to protect drugs from degradation (Li & Mooney, 2016). Hydrogels contain a large amount of water (around 70–99%) within a crosslinked polymer network, which makes them highly biocompatible. The aqueous environment of the hydrogel helps prevent organic solvents in the body from denaturing and aggregating the agents: because aqueous and organic solutions are immiscible, agents in the aqueous hydrogel interior are protected from mixing with organic solvents. The crosslinked polymer network gives the hydrogel characteristics reminiscent of a solid. The stiffness of the network can be adjusted from 0.5 kPa to 5 MPa, which enables hydrogels' physical properties (rigidity, durability, etc.) to
match their intended function. In addition, the crosslinked polymer network prevents other proteins from penetrating, thus protecting the bioactive agents from premature degradation. Hydrogels come in a range of sizes and shapes that govern their functions. Some crosslinked polymer networks contain micropores, which allow for convective drug delivery, while others have non-porous structures. The mesh size of the network – referring to its open spaces that are much smaller than the pores – controls the diffusion process through the network. Furthermore, interactions between the drug and hydrogel can take place on a molecular or atomistic scale through polymer chains attached to the hydrogel that contain binding sites (Li & Mooney, 2016). These hydrogel-drug interactions control the release of the drug.
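Since the mesh, not the pores, sets the diffusion rate for many gels, drug release is commonly approximated with a Fickian diffusion model. A minimal sketch using the standard early-time solution for a thin slab – the diffusivity and thickness below are illustrative assumptions, not values from the article:

```python
import math

def fractional_release(D, L, t):
    """Early-time Fickian approximation for release from a thin hydrogel
    slab of thickness L (m): M_t / M_inf = 4 * sqrt(D * t / (pi * L**2)).
    Standard textbook result, valid roughly while the fraction < 0.6;
    capped at 1.0 here for convenience. D in m^2/s, t in seconds."""
    return min(4.0 * math.sqrt(D * t / (math.pi * L**2)), 1.0)

# Assumed numbers: D = 1e-12 m^2/s (a macromolecule in a dense mesh),
# slab thickness 1 mm.
D, L = 1e-12, 1e-3
for hours in (1, 6, 24):
    frac = fractional_release(D, L, hours * 3600)
    print(f"{hours:>2} h: {frac:.0%} released")
```

Because release scales with the square root of time in this regime, a tighter mesh (a smaller effective D) stretches delivery out substantially – the kind of control over diffusion the paragraph describes.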
Figure 3: Liposomes with a double phospholipid layer and an interior
Hydrogel delivery systems are shown to be therapeutically beneficial and clinically practical for medical treatments. They can be divided into three categories based on vehicle size: macroscopic hydrogels (millimeters to centimeters), microgels (micrometers), and nanogels (nanometers) (Li & Mooney, 2016). Macroscopic hydrogels include in situ-gelling hydrogels that can be activated through irradiation or self-assembly. Other macroscopic hydrogels include compressible and shape-memory macro-porous gels and shear-thinning gels that can reversibly change volume. Each type of hydrogel is matched with the most suitable delivery routes. Most macroscopic hydrogels utilize transepithelial or transdermal delivery. They can also be implanted inside the body, though in situ-gelling gels can be injected locally through a needle and syringe. After the in situ-gelling gels are injected as a liquid, they undergo a sol-gel transformation to return to a solid once inside the body. The transition can be carried out through several strategies, the simplest of which is to initiate gelation outside the body in a slow-gelling system. The macroscopic hydrogels are unable to penetrate
Source: Wikimedia Commons
Figure 4: Crosslinked ultrashort peptide hydrogel (left) and nanofibrillar cellulose hydrogel (right): two types of hydrogels used to deliver drugs Source: Wikimedia Commons
WINTER 2021
Figure 5: Nanoparticle drug delivery to tumor cells of diffuse intrinsic pontine glioma (DIPG) Source: Wikimedia Commons
epithelial barriers, but agents released from the hydrogels can. In addition, hydrogels may also be applied as wound dressings when they are created from natural polymers such as alginate, collagen, and chitosan. They are also used to deliver important proteins like insulin and calcitonin.
Other mechanisms for hydrogel delivery have been employed too, such as charge interactions (attraction between oppositely charged objects; repulsion between like-charged objects) or stereo-complexation (specific interactions between polymers with complementary configurations). A group of researchers, led by Dr. Zhang of MIT, developed a type of hydrogel that took advantage of charge properties to treat inflammatory bowel disease (IBD) (Zhang et al., 2015). Dr. Zhang and his team intended for their hydrogels to bind preferentially to inflamed areas of the large intestine and release anti-inflammatory compounds (Zhang et al., 2015). In order to achieve binding, they created a negatively charged hydrogel containing self-assembled ascorbyl palmitate molecules that attach to positively charged proteins in the colonic mucus layer. In a mouse model of colitis, which can be caused by IBD, they found that the hydrogels significantly reduced inflammation (Zhang et al., 2015). Another type of gel – the shear-thinning gel – is injected in its gel form using shear stress, which causes it to flow like a low-viscosity (free-flowing) liquid. Once the stress is removed, the hydrogel returns to its initial stiffness inside the body. This reversibility comes from the physical crosslinks' balance between pro-assembly forces (hydrophobic interactions, hydrogen bonding, electrostatic interactions) and anti-assembly forces (solvation, electrostatic
repulsion) (Li & Mooney, 2016). While microgel delivery tends to occur through oral or pulmonary routes or via injections, nanogels are often systemically administered to the entire circulatory system (Li & Mooney, 2016). Microgels are easily needle-injectable, but particles smaller than 5 μm, such as nanogels, are unsuited for local injection because of their quick clearance into circulation. Systemic drug administration that affects the whole body works best for nanogels, which are able to seep into tissues after leaving small blood vessels through endothelial fenestrations (windows or pores in the vessels themselves). Nanogels are especially effective as vehicles for gene therapy because they are good at delivering nucleotide-based agents such as plasmid DNA. The agents can also include oligonucleotides that inhibit cell proliferation, DNAzymes that inhibit cell migration, and aptamers that target cancer cells. In particular, researchers have focused on using nanotechnology-mediated drug delivery systems to target cancer (Patra et al., 2018). Nanoparticle drug carriers may be more effective than traditional chemotherapeutics and can better select for only tumor tissues (Senapati et al., 2018). Additionally, though most oligonucleotide therapeutics have focused on gene silencing, researchers have been working on methods for splice modulation and gene activation too (Roberts et al., 2020). Gene therapy has potential in treating cancers, haemophilia (a genetic disorder that impairs blood clotting), and viral infections, making nanogel applications particularly meaningful because of their higher uptake and circulation times. Cationic nanogels containing
PEO and poly(ethylenimine) were even able to increase oligonucleotide transport across the gastrointestinal epithelium and blood–brain barrier (BBB), both of which are obstacles for several other drug delivery systems (Li & Mooney, 2016).
Conclusion

The diverse modes of drug delivery, each with their own benefits and drawbacks, enable suitable treatments for a variety of diseases and purposes. Drug delivery systems are powerful in their capability to treat specific structures or cells with great precision. However, there is still room for improvement. To enhance the convenience and effectiveness of drug delivery, researchers tackle these systems from two main angles – their routes and their vehicles – in addition to improving the drugs they contain. Creating better drug delivery systems that allow the BBB to be crossed would enable scientists to directly deliver drugs that treat diseases in the brain like Alzheimer's disease and Parkinson's disease. Current research focused on improving drug delivery systems indicates potential for even better treatments in the future.

References

Brown, T. D., Whitehead, K. A., & Mitragotri, S. (2020). Materials for oral delivery of proteins and peptides. Nature Reviews Materials, 5(2), 127–148. https://doi.org/10.1038/s41578-019-0156-6

Drug Delivery Systems. (n.d.). Retrieved February 11, 2021, from https://www.nibib.nih.gov/science-education/science-topics/drug-delivery-systems-getting-drugs-their-targets-controlled-manner

Fjellestad‐Paulsen, A., Höglund, P., Lundin, S., & Paulsen, O. (1993). Pharmacokinetics of 1-deamino-8-d-arginine vasopressin after various routes of administration in healthy volunteers. Clinical Endocrinology, 38(2), 177–182. https://doi.org/10.1111/j.1365-2265.1993.tb00990.x

Hirobe, S., Azukizawa, H., Hanafusa, T., Matsuo, K., Quan, Y.-S., Kamiyama, F., Katayama, I., Okada, N., & Nakagawa, S. (2015). Clinical study and stability assessment of a novel transcutaneous influenza vaccination using a dissolving microneedle patch. Biomaterials, 57, 50–58. https://doi.org/10.1016/j.biomaterials.2015.04.007

Kearney, C. J., & Mooney, D. J. (2013). Macroscale delivery systems for molecular and cellular payloads. Nature Materials, 12(11), 1004–1017. https://doi.org/10.1038/nmat3758

Kusama, S., Sato, K., Matsui, Y., Kimura, N., Abe, H., Yoshida, S., & Nishizawa, M. (2021). Transdermal electroosmotic flow generated by a porous microneedle array patch. Nature Communications, 12(1), 658. https://doi.org/10.1038/s41467-021-20948-4

Li, J., & Mooney, D. J. (2016). Designing hydrogels for controlled drug delivery. Nature Reviews Materials, 1(12), 1–17. https://doi.org/10.1038/natrevmats.2016.71

Li, W., Terry, R. N., Tang, J., Feng, M. R., Schwendeman, S. P., & Prausnitz, M. R. (2019). Rapidly separable microneedle patch for the sustained release of a contraceptive. Nature Biomedical Engineering, 3(3), 220–229. https://doi.org/10.1038/s41551-018-0337-4

Mitchell, M. J., Billingsley, M. M., Haley, R. M., Wechsler, M. E., Peppas, N. A., & Langer, R. (2020). Engineering precision nanoparticles for drug delivery. Nature Reviews Drug Discovery, 1–24. https://doi.org/10.1038/s41573-020-0090-8

Mukhopadhyay, A., Mukhopadhyay, B., & Basu, S. K. (1995). Circumvention of multidrug resistance in neoplastic cells through scavenger receptor mediated drug delivery. FEBS Letters, 376(1–2), 95–98. https://doi.org/10.1016/0014-5793(95)01250-6

Patra, J. K., Das, G., Fraceto, L. F., Campos, E. V. R., Rodriguez-Torres, M. del P., Acosta-Torres, L. S., Diaz-Torres, L. A., Grillo, R., Swamy, M. K., Sharma, S., Habtemariam, S., & Shin, H.-S. (2018). Nano based drug delivery systems: recent developments and future prospects. Journal of Nanobiotechnology, 16(1), 71. https://doi.org/10.1186/s12951-018-0392-8

Research, C. for D. E. and. (2018). Transcript: Definition of a Drug (April 2017). FDA. https://www.fda.gov/drugs/information-health-care-professionals-drugs/transcript-definition-drug-april-2017

Roberts, T. C., Langer, R., & Wood, M. J. A. (2020). Advances in oligonucleotide drug delivery. Nature Reviews Drug Discovery, 19(10), 673–694. https://doi.org/10.1038/s41573-020-0075-7

Rouphael, N. G., Paine, M., Mosley, R., Henry, S., McAllister, D. V., Kalluri, H., Pewin, W., Frew, P. M., Yu, T., Thornburg, N. J., Kabbani, S., Lai, L., Vassilieva, E. V., Skountzou, I., Compans, R. W., Mulligan, M. J., Prausnitz, M. R., Beck, A., Edupuganti, S., … Nesheim, W. (2017). The safety, immunogenicity, and acceptability of inactivated influenza vaccine delivered by microneedle patch (TIV-MNP 2015): a randomised, partly blinded, placebo-controlled, phase 1 trial. The Lancet, 390(10095), 649–658. https://doi.org/10.1016/S0140-6736(17)30575-5

Senapati, S., Mahanta, A. K., Kumar, S., & Maiti, P. (2018). Controlled drug delivery vehicles for cancer treatment and their performance. Signal Transduction and Targeted Therapy, 3(1), 1–19. https://doi.org/10.1038/s41392-017-0004-3

Tiwari, G., Tiwari, R., Sriwastawa, B., Bhati, L., Pandey, S., Pandey, P., & Bannerjee, S. K. (2012). Drug delivery systems: An updated review. International Journal of Pharmaceutical Investigation, 2(1), 2–11. https://doi.org/10.4103/2230-973X.96920

Wang, Y., & Kohane, D. S. (2017). External triggering and triggered targeting strategies for drug delivery. Nature Reviews Materials, 2(6), 1–14. https://doi.org/10.1038/natrevmats.2017.20

Zhang, S., Ermann, J., Succi, M. D., Zhou, A., Hamilton, M. J., Cao, B., Korzenik, J. R., Glickman, J. N., Vemula, P. K., Glimcher, L. H., Traverso, G., Langer, R., & Karp, J. M. (2015). An inflammation-targeting hydrogel for local drug delivery in inflammatory bowel disease. Science Translational Medicine, 7(300), 300ra128. https://doi.org/10.1126/scitranslmed.aaa5657
Sublimable Adhesives: The New Way to Stick BY NEPHI SEO '23 Cover: Many adhesives are common household items used for repairs, crafts, and other purposes. Sublimable adhesives are a new kind of adhesive with a variety of potential applications. Source: Pixabay
The following is the transcript of an interview conducted with Carly Tymm, a researcher in Professor Katherine Mirica's chemistry lab at Dartmouth
Introduction

Everyone uses adhesives in their daily lives. From sticky notes to glue to Teflon tape, adhesives are used for a variety of purposes such as crafts, repair, and more. While there are many kinds of adhesives, the Mirica Group, headed by Professor Katherine Mirica of Dartmouth College, has been focused on a unique kind of adhesive – one that stops sticking when heat is applied. Coined a "sublimable adhesive," it was found to be very strong – strong enough to withstand the weight of a full-grown man. The following article is an interview with Carly Tymm '20, an undergraduate researcher who worked as part of the Mirica Group for four years. She describes this new development and
the potential implications of their findings. At the end of the article is a brief list of terminology.
How long have you been working with Professor Mirica on this project?

I started working in Professor Mirica's lab my freshman winter at Dartmouth and continued up until the end of my senior year. I was initially matched with her through the Women in Science Project, a freshman research program. She was also a great professor I had my freshman fall. I really liked her lab environment, and I was excited about this project of using molecular solids as temporary adhesives that use sublimation to release. So, I ended up diving into this project. When I started, the lab had published its first paper looking at polycyclic aromatic hydrocarbons as sublimable adhesives. Still, the project was in its early stages, and I ended up spending four years at Dartmouth in the lab, and this new paper was published soon after I graduated. I pretty much worked in the lab almost every term, and then some off terms, at Dartmouth.

You mentioned that the idea of temporary adhesives was very new. Did Professor Mirica's group come up with that idea or was it something that was present already?

I believe it was part of the work that Professor Mirica started at MIT, where she began exploring this idea of temporary adhesives that could sublime. But she didn't explore the project to the extent that she did at Dartmouth. I'll clarify that sublimable temporary adhesives are what I'm talking about here, because temporary adhesives are used in many things already. For example, double-sided tape is used every day.

It's fascinating that you made temporary adhesives that can sublime. Can you explain how that works?

Yes! The temporary adhesives that we studied are molecular solids, which are solids held together by different intermolecular interactions like hydrogen bonding and van der Waals interactions. Many of the molecular solids that we tested as sublimable adhesives are common compounds that you may have heard of, like ibuprofen and menthol.

Phase transitions are integral to the functioning of sublimable adhesives. Initially, these compounds are essentially solid powders that we melt between two substrates. During this time, the compound moves from the solid phase to the liquid phase. For example, camphor is one of the molecular solids that we examined, and it has a melting temperature of about 175° Celsius. So, we keep our heating block at 190° Celsius or so, and then we watch the camphor powder melt between those two substrates at this elevated temperature. Next, we remove the system from the heat and allow it to cool. The molecular solid then solidifies and crystallizes between the substrates. In summary, the molecular solid starts as a solid powder, transitions to a liquid in between the substrates, spreads out over the substrates, and then goes back to a solid phase once it is cooled down. So, temperature is a big factor
Figure 1: Camphor is an organic compound that is extracted from the wood of camphor trees. It was also found to be the strongest temporary adhesive out of the eight compounds that were tested. Source: Wikimedia Commons
Figure 2: The strong adhesive camphor is capable of holding up a 50-pound dumbbell. Source: Blelloch et al., 2020, reused with permission from the author
Figure 3: Ibuprofen, a common medication that relieves pain, was one of the eight compounds that were tested.
here because that is what’s driving it to initially melt. When you add heat, it melts. When you take the heat away, it solidifies.
Source: Flickr
In order to release the substrates, we can put the whole bonded sample in a vacuum apparatus with reduced pressure, sitting on top of a heat block. The reduced pressure and elevated temperature enable sublimation, during which the solid bonded between the two substrates transitions directly to the gas phase.
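The vacuum step works because a solid sublimes appreciably once the surrounding pressure falls below its equilibrium vapor pressure, and that vapor pressure climbs steeply with temperature. A rough sketch via the Clausius–Clapeyron relation – the reference point and sublimation enthalpy below are assumed, literature-style values for camphor, not figures from the interview:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def vapor_pressure(T, T_ref, p_ref, dH_sub):
    """Clausius-Clapeyron estimate of sublimation vapor pressure (Pa)
    at temperature T (K), from a reference point (T_ref, p_ref) and a
    temperature-independent sublimation enthalpy dH_sub (J/mol)."""
    return p_ref * math.exp(-dH_sub / R * (1.0 / T - 1.0 / T_ref))

# Assumed reference: ~27 Pa at 25 C, dH_sub ~ 51 kJ/mol (illustrative).
for t_c in (25, 50, 80):
    p = vapor_pressure(t_c + 273.15, 298.15, 27.0, 51e3)
    print(f"{t_c} C: ~{p:.0f} Pa")
```

Under these assumptions, warming from 25 °C to 80 °C raises the equilibrium vapor pressure by more than an order of magnitude, which is why a modest vacuum plus a heat block releases the joint quickly.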
When you say two substrates, are you speaking about two different surfaces?
You can think of the substrates as the bread of a sandwich, with the molecular solid in between. We generally bonded our molecular solids between two glass slides. Glass is the main substrate that we used, but the term substrate could refer to anything that you are going to bond the molecular solid in between.
Are there different types of molecular solids that can act as temporary adhesives?

Yes. Our group has tested a whole bunch of molecular solids over the course of this project so far, but for this paper we focused on eight different molecular solids that have been well characterized in other applications. Two of the molecular solids we examined in the paper were ibuprofen, which is used as a pain reliever, and menthol, which can be derived from mint.
When you tested these molecular solids, how strong was the adhesion?

The strongest compound overall was camphor, which could reach up to 1240 kPa on average. That is drastically stronger than double-sided Scotch tape, which demonstrates a strength of 210 kPa on average. (kPa is a unit of pressure that expresses how much force per bonded area the bonded molecular solid can withstand.)

Figure 4: Menthol is an organic compound that comes from oil of mint leaves. It was also one of the eight compounds that were tested. Source: Flickr
For context, a tiny square of camphor that was melted between a glass area of about two and a half centimeters, an area that you could basically draw out just using your finger, could hold up one of our lab mates from the ceiling. But not all our molecular solids were that strong. Some of them are a lot weaker. If you handled these weaker bonded molecular solids too roughly, the substrates would separate. In other words, we had a wide range of strength.
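Those figures can be sanity-checked with a quick force-balance calculation. This is only a sketch: the exact bonded area in the lab-mate anecdote is not stated, so a 2.5 cm × 2.5 cm square (6.25 cm²) is assumed.

```python
G = 9.81  # standard gravity, m/s^2

def max_load_kg(strength_kpa, area_cm2):
    """Mass (kg) a bonded joint could hold in pure tension if it fails
    at the quoted adhesive strength (pressure = force / area)."""
    force_n = strength_kpa * 1e3 * area_cm2 * 1e-4  # Pa * m^2 = N
    return force_n / G

# Average strengths quoted in the interview, over an assumed 6.25 cm^2
print(f"camphor: ~{max_load_kg(1240, 6.25):.0f} kg")
print(f"double-sided tape: ~{max_load_kg(210, 6.25):.0f} kg")
```

With those assumptions, camphor supports roughly 79 kg – consistent with both the 50-pound dumbbell in Figure 2 and the lab-mate anecdote – while tape of the quoted strength manages only about 13 kg.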
Do you see any limitations that these compounds could have? I think one of the key components that makes sublimable adhesives so cool, but also so challenging is that we have them sublime to release. In order to get it to sublime, you're going to need a vacuum or some way to reduce the pressure greatly in the environment. To do that in daily life would be quite challenging. So, it probably wouldn't be used in a daily context like home-use tape because it requires sophisticated releasing technologies. Another weakness is that some compounds are really weak. You might have a compound that you want to use for certain reasons, maybe you're interested in a biological effect in some respect, but it may just be a really weak compound. So, you have to be quite strategic about the strength that some of these molecular solids can handle and the context that you want to use them in.
In your recently published journal article, it says that this work could be applied to "microelectronics, medicine, construction and manufacturing and chemical protection" (Blelloch et al., 2020). Can you give specific examples of how these could be applied?

The work that we're doing is really focused on answering questions about how the structure and molecular interactions of a molecule and its crystal structure influence its mechanical properties. In general, much is unknown about how you can predict the mechanical properties of something based on its molecular structure or crystal structure. The molecular structure is just the structure of a single molecule, and a crystal structure reflects how this molecule will interact with other molecules of its own kind. Being able to predict how a molecule builds from a single molecule to multiple molecules in a crystal, and the mechanical properties that we observe afterwards, is valuable in many areas outside of just adhesion. In medicine and pharmaceuticals, for example, the mechanical properties of the drug combination are important because you will likely want to process it and make it into a tablet form. So, understanding how the structure of small molecules influences their mechanical properties could be helpful in terms of using certain molecules in pharmaceuticals. In addition, knowing how molecular properties depend on the crystal structures of the molecules is important in energetic materials. The crystal structure can influence things like how charge is transported, which is relevant to organic electronics.
What do you think the next step might be?
This project really introduced the promise of using these molecular solids as sublimable adhesives, so I think testing other important molecules would continue to provide fundamental knowledge about how structure and surface interactions influence mechanical properties. I think another future goal of this project would be to use the knowledge that we've gained to design an adhesive with a certain strength and a certain property that we desire. We know what properties we want to see. In this recently published study, we really started with the molecular solids and then tested their mechanical properties. Could we eventually go into reverse and design molecules that give us a desired set of mechanical properties? That may be a very exciting future direction.

Terminology

Adhesive - a substance used to stick things together
Camphor - an organic compound that is extracted from the wood of camphor trees
Ibuprofen - an anti-inflammatory drug that relieves pain
Menthol - an organic compound that comes from oil of mint leaves
Sublimation - a process in which a solid substance changes directly into vapor
Substrate - a surface or layer on which a reaction occurs

References

Blelloch, N. D., Mitchell, H. T., Tymm, C. C., Van Citters, D. W., & Mirica, K. A. (2020). Crystal Engineering of Molecular Solids as Temporary Adhesives. Chemistry of Materials, 32(23), 9882–9896. https://doi.org/10.1021/acs.chemmater.0c01401

Mitchell, H. T., Smith, M. K., Blelloch, N. D., Van Citters, D. W., & Mirica, K. A. (2017). Polycyclic Aromatic Hydrocarbons as Sublimable Adhesives. Chemistry of Materials, 29(7), 2788–2793. https://doi.org/10.1021/acs.chemmater.6b04641

New Glue Sticks Easily, Holds Strongly, and is a Gas to Pull Apart. (2020, December 1). Dartmouth College - Press Releases. https://www.dartmouth.edu/press-releases/new-glue-sticks-easily-holds-strongly-gas-pull-apart.html

Research—Mirica Group. (n.d.). Mirica Group. Retrieved February 26, 2021, from http://www.miricagroup.com/research.html

Tymm, C. (2020, December 17). Interview on Sublimable Adhesives [Personal communication].
Plant Environmental Perception Signaling Pathways & Epigenetic Memory BY OWEN SEINER '24 Cover: A sunflower is a common example of plant perception, particularly heliotropism, since it's known to turn in the direction of the sun Source: Wikimedia Commons
Introduction

Given that plants are unable to relocate, they must be able to detect and respond to environmental threats through advanced techniques, often involving changes in growth and phenotypic variety. While plants obviously do not have a brain, they do have certain abilities that resemble intelligence: the ability to sense external stimuli as well as the ability to learn or memorize certain threats and experiences. This phenomenon of plant intelligence has been the subject of fierce debate, with arguments primarily arising over whether plants are capable of intelligence or of adaptation alone (Alpi et al., 2007). Regardless of whether plants demonstrate abilities similar to intelligence, plants do demonstrate a variety of neurobiological processes that closely resemble those in animals, particularly the ability to generate action potentials, variation potentials, local electrical potentials, and systemic potentials that transduce external
stimuli into biochemical responses within the plant (Fromm & Lautner, 2007). Additionally, plants demonstrate at least some learning abilities through epigenetic programming.
Touch Sensing

Plants demonstrate an ability to respond to touch in a variety of ways, including changing growth patterns, moving in the direction of the stimulus (towards light, touch, etc.), and moving in a direction independent of the stimulus. In responding to touch stimuli, plants utilize mechanosensing to detect and perceive the stimuli in their environment. Evidence suggests that plant plasma membranes contain a large variety of mechanically gated ion channels which open in response to force, though none of these channels have been characterized molecularly (Basu & Haswell, 2017). However, there are two models that currently dominate the discussion of plant mechanosensing.
The simpler model is based on experimental evidence in pollen and root hairs that shows an increase in cell wall pH and reactive oxygen species in response to Ca2+ influx (the ion most often implicated in producing mechanoresponses in plants) through mechanosensitive channels (Yahraus et al., 1995; Messerli et al., 1999). The model suggests that force on the membrane leads to the opening of a Ca2+ ion channel, which increases the concentration of cytosolic Ca2+ and activates a proton pump and an NADPH oxidase. This leads to acidification of the cytoplasm, alkalization of the cell wall, and the production of reactive oxygen species (Takeda et al., 2008). Decreased cytosolic pH, the presence of reactive oxygen species, and the influx of Ca2+ are thought to contribute to signaling events leading to touch responses, while increased cell wall pH and reactive oxygen species are thought to impact growth since they are known to affect cell wall rigidity (Monshausen & Gilroy, 2009). Since no mechanically gated ion channels that may be involved in this model for mechanosensation have been identified in plants, genome-wide sequencing has been used to identify homologs from a variety of organisms, most notably bacteria. In bacteria, small (MscS) and large (MscL) mechanosensitive ion channels release solutes extracellularly under high osmotic pressure when the organism is in a hypotonic environment (Corry & Martinac, 2008). There are multiple homologs of these channels in a variety of plant species, with ten MscS-like (MSL) homologs identified in Arabidopsis (Haswell & Meyerowitz, 2006). In Arabidopsis, a knockout of MSL 4-6, MSL
9, and MSL 10 resulted in no mechanically gated ion channel response, though it cannot be definitively concluded whether the MSL family encodes mechanically gated ion channels themselves or proteins otherwise responsible for mechanosensitive responses (Haswell et al., 2008). It is also possible that the MSL knockouts resulted in unintended downstream effects that inhibited mechanosensitive responses. MCA1 is another gene potentially associated with mechanosensation, identified through work with yeast. MCA1 was identified in Arabidopsis and was incorporated into yeast as an integral membrane protein (Nakagawa et al., 2007). MCA1 is thought to be involved in the cytoplasmic Ca2+ influx system since it complemented mid1 mutants, which affect a stretch-activated Ca2+ ion channel in yeast (Nakagawa et al., 2007). Constitutive MCA1 expression led to increased cytosolic Ca2+ concentrations as well as increased expression of TCH3, which codes for a Ca2+-dependent protein often implicated in discussions of responses to touch stimuli in plants (Braam & Davis, 1990).
Figure 1: A common example of a thigmo-response is the tendril, which is able to wrap around surrounding objects due to touch-stimulated differential auxin gradients
The second model explains potential mechanosensing from the cell wall rather than from the plasma membrane, though both mechanisms are generally thought to be present in plants exhibiting touch responses. The model focuses on specific receptor-like kinases (RLKs) called wall-associated kinases (WAKs), which communicate cell wall changes through kinase-dependent signal transduction cascades initiated from their cytoplasmic kinase domains (Anderson et al., 2001). The model suggests that differences in cell wall pectin structure cause differential WAK1 binding, leading to the kinase cascade resulting in touch responses, though pectin binds far more effectively when Ca2+ is also present (Decreux & Messiaen, 2005). Certain RLKs have been identified as potential cell wall mechanosensors, such as THESEUS1 (THE1) and RLKs containing a lectin-like domain that have yet to be fully characterized, though they are presumed to be active upon lectin binding (Hematy & Hofte, 2008; Gouget et al., 2006).
Source: Wikimedia Commons
Gravity Sensing

In addition to demonstrating touch responses, plants also show the ability to sense gravity (also known as gravisensing). Roots typically grow in the direction of gravitational force and stems grow in the direction opposite to gravitational force—this phenomenon is known as gravitropism. The prevailing model of gravity sensing is called the starch-statolith hypothesis, which states that starch-dense
Figure 2: An example of gravitropism in a fallen tree. Notice that the stem continues to grow upward despite its position. Source: Wikimedia Commons
amyloplasts within specific cells fall in the direction of gravity, leading to physiological changes that cause differential growth mediated by the phytohormone auxin. In response to the gravity signal, auxin accumulates at different concentrations on opposite sides of the root. Studies making use of a new auxin sensor, called DII-Venus, and mathematical approaches recently discovered that the differential auxin gradients can occur just minutes after gravisensing (Band et al., 2012). Gravity sensing takes place in columella cells located in the root caps, and gravitropic responses occur in the elongation zone cells. It is thought that columella cells include starch-filled amyloplasts, which sediment as a result of gravity, leading to a signal that initiates the gravitropic response (Sato et al., 2015). A variety of possible explanations exist to elucidate the mechanism by which the physical stimulus of amyloplast sedimentation is converted into a biochemical growth response. One possibility is that amyloplast sedimentation leads to interactions with plasma membrane or endoplasmic reticulum (ER) proteins. In this model, amyloplast sedimentation as a result of gravity allows the amyloplasts to interact with membrane-bound receptors and produce the biochemical response (Braun & Limbach, 2006). Evidence for this model emerges from experiments demonstrating that mutations in the translocon of the outer membrane of chloroplasts (TOC) complex resulted in enhanced gravitropic defects of starchless amyloplasts,
implying that the TOC complex is involved in gravitropic signaling pathways (Strohm et al., 2014). Another model suggests that amyloplast sedimentation opens mechanosensitive ion channels in the plasma membrane, or opens ER mechanosensitive channels by locally deforming the ER (Leitz et al., 2009). Additionally, mechanosensitive ion channel inhibitors prevent gravitropic responses, though genetic studies have failed to identify the responsible ion channels (Caldwell et al., 1998). The role of actin in gravisensing is highly contested. Proteins that play major roles in gravity sensing, like ARG1 and a homolog of MAR2 (which is part of the TOC complex), are able to bind actin, demonstrating a potential connection between gravity sensing and the actin cytoskeleton (Boonsirichai et al., 2003; Jouhet & Gray, 2009). Furthermore, certain chloroplast membrane proteins (e.g., CHUP1) can bind actin as well, leading to speculation that similar proteins on amyloplasts may act as ligands for receptors that stimulate gravity signaling, providing a possible link between gravity sensation/response and actin (von Braun & Schleiff, 2008; Sato et al., 2015).
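The differential-growth logic described above can be sketched numerically. The following toy model is not taken from any cited study: the function, parameter values, and auxin concentrations are invented for illustration, under the standard assumption that higher auxin concentration slows elongation on the root's lower flank.

```python
# Toy sketch of root gravitropic bending via a lateral auxin gradient.
# Assumption (illustrative, not from the article's sources): in roots,
# elongation rate decreases as auxin concentration rises.

def elongation_rate(auxin, base_rate=1.0, sensitivity=0.5):
    """Hypothetical growth response: more auxin -> slower elongation."""
    return base_rate / (1.0 + sensitivity * auxin)

# After gravistimulation, auxin accumulates on the lower side of the root.
auxin_upper, auxin_lower = 0.4, 1.6        # invented concentrations (a.u.)

growth_upper = elongation_rate(auxin_upper)
growth_lower = elongation_rate(auxin_lower)

# The flank that elongates faster bends the tip toward the slower flank,
# so the root curves downward, in the direction of gravity.
bends_toward_gravity = growth_upper > growth_lower
print(growth_upper, growth_lower, bends_toward_gravity)
```

Under these assumptions the upper flank elongates faster than the lower one, reproducing the downward curvature that gravitropism describes.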
Plant Memory: Epigenetic Storage and Recall

Beyond an ability to perceive their environment through signaling pathways, plants also demonstrate a form of memory powered by epigenetics. In response to environmental stressors, plants undergo chromatin
modifications that allow them to better adapt to the given stressor when it is encountered again. Additionally, these epigenetic memories sometimes persist through cell division, making them heritable memories that allow the plant to better respond to environmental stressors faced by its ancestors. Epigenetic modifications include a variety of chemical additions that result in differential gene expression. Modifications that encourage gene expression include histone acetylation and histone H3 lysine-4 trimethylation (H3K4me3), while histone deacetylation and histone H3 lysine-27 trimethylation (H3K27me3) discourage gene expression (Avramova, 2015). Histone acetylation and deacetylation are catalyzed by histone acetyltransferases and histone deacetylases, respectively, while H3K4me3 is catalyzed by a COMPASS-like complex and H3K27me3 is catalyzed by Polycomb repressive complex 2 (PRC2) (Shen et al., 2015; Jiang et al., 2011; Mozgova & Hennig, 2015). A well-studied environmental stressor that induces an epigenetic response is cold weather. Oftentimes, plants must be exposed to prolonged cold for a flowering period to commence—a process known as vernalization, which involves a variety of epigenetic changes that allow for flower growth in the spring (Bouché et al., 2017). Vernalization occurs through Polycomb-group protein-facilitated chromatin modifications that suppress FLOWERING LOCUS C (FLC) expression, thus delaying flowering until the spring (Helliwell et al., 2015). In plants not yet exposed to winter cold, FLC chromatin contains a variety of active
marks, including histone acetylation and H3K4me3, which result in high levels of FLC gene expression (Kim & Sung, 2014). Winter cold exposure leads to the expression of VERNALIZATION INSENSITIVE 3 (VIN3), whose product—a plant homeodomain (PHD) protein—then associates with its homolog VIL1/VRN5 and PRC2 to form PHD-PRC2. PHD-PRC2 then adds H3K27me3 to the FLC chromatin, shutting down FLC expression (De Lucia et al., 2008). These H3K27me3 additions are committed to the plant's memory, as repression of FLC continues through cell division during development even after the return of warmer weather (Angel et al., 2011). Though cell division erases some H3K27me3 histone additions, FLC repression continues due to a protein called CAF-1 (Jiang & Berger, 2017). CAF-1 recruits PRC2 to induce H3K27me3 marks at various locations within FLC, spreading them throughout the locus (Jiang & Berger, 2017). This epigenetic state is erased in gametes, allowing each new generation to adjust to its own environment, since FLC repression will again require overwintering (Amasino, 2010).
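The vernalization logic above can be summarized as a small state machine. The sketch below is purely illustrative—the class and method names are invented, and real FLC regulation is quantitative and chromatin-wide rather than a single Boolean flag:

```python
# Illustrative state machine for FLC silencing during vernalization.
# Cold -> VIN3 -> PHD-PRC2 -> H3K27me3 at FLC (silenced);
# CAF-1/PRC2 maintain the mark through divisions; gametes reset it.

class FLCLocus:
    def __init__(self):
        self.h3k27me3 = False          # no repressive mark before winter

    @property
    def expressed(self):
        return not self.h3k27me3       # FLC is active unless marked

    def prolonged_cold(self):
        # VIN3's PHD product joins VIL1/VRN5 and PRC2 (PHD-PRC2),
        # depositing H3K27me3 on FLC chromatin.
        self.h3k27me3 = True

    def cell_division(self):
        # CAF-1 recruits PRC2 to re-deposit and spread H3K27me3,
        # so the silenced state survives replication.
        pass

    def gamete_formation(self):
        # The mark is erased so the next generation must overwinter again.
        self.h3k27me3 = False

flc = FLCLocus()
assert flc.expressed        # before winter: FLC is expressed
flc.prolonged_cold()
flc.cell_division()
assert not flc.expressed    # silencing is remembered through divisions
flc.gamete_formation()
assert flc.expressed        # reset for the next generation
```

The point of the sketch is the memory step: `cell_division` leaves the mark in place, which is exactly what makes the silenced state a heritable "memory" rather than a transient response.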
Figure 3: A nucleosome, considered the fundamental unit of DNA packaging in eukaryotic cells. Nucleosomes are composed of 8 histone proteins (2 each of H2A, H2B, H3, and H4) as well as the DNA. Epigenetic modification largely occurs on the lysine-rich H3 tails. Source: Wikimedia Commons
Discussion and Further Questions

Currently, many questions about plant perception physiology remain unanswered, with a number of experts devoted to studying plant mechanosensitive ion channels, actin, and epigenetics. The question of plant intelligence has fostered a variety of philosophical debates within and outside of the scientific community and challenged our dominant evolutionary ideas about plants and animals (Baldauf & Palmer, 1993). The neurobiological similarities between plants and animals seem to conflict with the widely accepted idea that animals and fungi are more closely related phylogenetically, perhaps guiding us towards a better understanding of the complex evolutionary history of multicellular organisms (Stiller, 2007).

References

Alpi, A., Amrhein, N., Bertl, A., Blatt, M. R., Blumwald, E., Cervone, F., Dainty, J., Michelis, M. I. D., Epstein, E., Galston, A. W., Goldsmith, M. H. M., Hawes, C., Hell, R., Hetherington, A., Hofte, H., Juergens, G., Leaver, C. J., Moroni, A., Murphy, A., … Wagner, R. (2007). Plant neurobiology: No brain, no gain? Trends in Plant Science, 12(4), 135–136. https://doi.org/10.1016/j.tplants.2007.03.002

Amasino, R. (2010). Seasonal and developmental timing of flowering. The Plant Journal, 61(6), 1001–1013. https://doi.org/10.1111/j.1365-313X.2010.04148.x

Anderson, C. M., Wagner, T. A., Perret, M., He, Z., He, D., & Kohorn, B. D. (2001). WAKs: Cell wall-associated kinases linking the cytoplasm to the extracellular matrix. Plant Molecular Biology, 47(1–2), 197–206. https://doi.org/10.1023/A:1010691701578

Avramova, Z. (2015). Transcriptional 'memory' of a stress: Transient chromatin and memory (epigenetic) marks at stress-response genes. The Plant Journal, 83(1), 149–159. https://doi.org/10.1111/tpj.12832

Baldauf, S. L., & Palmer, J. D. (1993). Animals and fungi are each other's closest relatives: Congruent evidence from multiple proteins. Proceedings of the National Academy of Sciences of the United States of America, 90(24), 11558–11562. https://doi.org/10.1073/pnas.90.24.11558

Band, L. R., Wells, D. M., Larrieu, A., Sun, J., Middleton, A. M., French, A. P., Brunoud, G., Sato, E. M., Wilson, M. H., Péret, B., Oliva, M., Swarup, R., Sairanen, I., Parry, G., Ljung, K., Beeckman, T., Garibaldi, J. M., Estelle, M., Owen, M. R., … Bennett, M. J. (2012). Root gravitropism is regulated by a transient lateral auxin gradient controlled by a tipping-point mechanism. Proceedings of the National Academy of Sciences of the United States of America, 109(12), 4668–4673.

Basu, D., & Haswell, E. S. (2017). Plant mechanosensitive ion channels: An ocean of possibilities. Current Opinion in Plant Biology, 40, 43–48. https://doi.org/10.1016/j.pbi.2017.07.002

Boonsirichai, K., Sedbrook, J. C., Chen, R., Gilroy, S., & Masson, P. H. (2003). ALTERED RESPONSE TO GRAVITY is a peripheral membrane protein that modulates gravity-induced cytoplasmic alkalinization and lateral auxin transport in plant statocytes. The Plant Cell, 15(11), 2612–2625. https://doi.org/10.1105/tpc.015560

Bouché, F., Woods, D. P., & Amasino, R. M. (2017). Winter memory throughout the plant kingdom: Different paths to flowering. Plant Physiology, 173(1), 27–35. https://doi.org/10.1104/pp.16.01322

Braam, J., & Davis, R. W. (1990). Rain-, wind-, and touch-induced expression of calmodulin and calmodulin-related genes in Arabidopsis. Cell, 60(3), 357–364. https://doi.org/10.1016/0092-8674(90)90587-5

Braun, M., & Limbach, C. (2006). Rhizoids and protonemata of characean algae: Model cells for research on polarized growth and plant gravity sensing. Protoplasma, 229(2–4), 133–142. https://doi.org/10.1007/s00709-006-0208-9

Caldwell, R. A., Clemo, H. F., & Baumgarten, C. M. (1998). Using gadolinium to identify stretch-activated channels: Technical considerations. American Journal of Physiology-Cell Physiology, 275(2), C619–C621. https://doi.org/10.1152/ajpcell.1998.275.2.C619

Corry, B., & Martinac, B. (2008). Bacterial mechanosensitive channels: Experiment and theory. Biochimica et Biophysica Acta (BBA) - Biomembranes, 1778(9), 1859–1870. https://doi.org/10.1016/j.bbamem.2007.06.022

De Lucia, F., Crevillen, P., Jones, A. M. E., Greb, T., & Dean, C. (2008). A PHD-Polycomb Repressive Complex 2 triggers the epigenetic silencing of FLC during vernalization. Proceedings of the National Academy of Sciences of the United States of America, 105(44), 16831–16836.

Decreux, A., & Messiaen, J. (2005). Wall-associated kinase WAK1 interacts with cell wall pectins in a calcium-induced conformation. Plant and Cell Physiology, 46(2), 268–278. https://doi.org/10.1093/pcp/pci026

Fromm, J., & Lautner, S. (2007). Electrical signals and their physiological significance in plants. Plant, Cell & Environment, 30(3), 249–257. https://doi.org/10.1111/j.1365-3040.2006.01614.x

Gouget, A., Senchou, V., Govers, F., Sanson, A., Barre, A., Rougé, P., Pont-Lezica, R., & Canut, H. (2006). Lectin receptor kinases participate in protein-protein interactions to mediate plasma membrane-cell wall adhesions in Arabidopsis. Plant Physiology, 140(1), 81–90. https://doi.org/10.1104/pp.105.066464

Haswell, E. S., & Meyerowitz, E. M. (2006). MscS-like proteins control plastid size and shape in Arabidopsis thaliana. Current Biology, 16(1), 1–11. https://doi.org/10.1016/j.cub.2005.11.044

Haswell, E. S., Peyronnet, R., Barbier-Brygoo, H., Meyerowitz, E. M., & Frachisse, J.-M. (2008). Two MscS homologs provide mechanosensitive channel activities in the Arabidopsis root. Current Biology, 18(10), 730–734. https://doi.org/10.1016/j.cub.2008.04.039

Helliwell, C. A., Anderssen, R. S., Robertson, M., & Finnegan, E. J. (2015). How is FLC repression initiated by cold? Trends in Plant Science, 20(2), 76–82. https://doi.org/10.1016/j.tplants.2014.12.004

Hematy, K., & Hofte, H. (2008). Novel receptor kinases involved in growth regulation. Current Opinion in Plant Biology, 11(3), 321–328. https://doi.org/10.1016/j.pbi.2008.02.008

Jiang, D., & Berger, F. (2017). DNA replication-coupled histone modification maintains Polycomb gene silencing in plants. Science, 357(6356), 1146–1149. https://doi.org/10.1126/science.aan4965

Jiang, D., Kong, N. C., Gu, X., Li, Z., & He, Y. (2011). Arabidopsis COMPASS-like complexes mediate histone H3 lysine-4 trimethylation to control floral transition and plant development. PLoS Genetics, 7(3), e1001330. https://doi.org/10.1371/journal.pgen.1001330

Jouhet, J., & Gray, J. C. (2009). Interaction of actin and the chloroplast protein import apparatus. Journal of Biological Chemistry, 284(28), 19132–19141. https://doi.org/10.1074/jbc.M109.012831

Kim, D.-H., & Sung, S. (2014). Genetic and epigenetic mechanisms underlying vernalization. The Arabidopsis Book / American Society of Plant Biologists, 12. https://doi.org/10.1199/tab.0171

Leitz, G., Kang, B.-H., Schoenwaelder, M. E. A., & Staehelin, L. A. (2009). Statolith sedimentation kinetics and force transduction to the cortical endoplasmic reticulum in gravity-sensing Arabidopsis columella cells. Plant Cell, 21(3), 843–860.

Messerli, M. A., Danuser, G., & Robinson, K. R. (1999). Pulsatile influxes of H+, K+ and Ca2+ lag growth pulses of Lilium longiflorum pollen tubes. Journal of Cell Science, 112(Pt 10), 1497–1509.

Monshausen, G. B., & Gilroy, S. (2009). Feeling green: Mechanosensing in plants. Trends in Cell Biology, 19(5), 228–235. https://doi.org/10.1016/j.tcb.2009.02.005

Mozgova, I., & Hennig, L. (2015). The Polycomb group protein regulatory network. Annual Review of Plant Biology, 66(1), 269–296. https://doi.org/10.1146/annurev-arplant-043014-115627

Nakagawa, Y., Katagiri, T., Shinozaki, K., Qi, Z., Tatsumi, H., Furuichi, T., Kishigami, A., Sokabe, M., Kojima, I., Sato, S., Kato, T., Tabata, S., Iida, K., Terashima, A., Nakano, M., Ikeda, M., Yamanaka, T., & Iida, H. (2007). Arabidopsis plasma membrane protein crucial for Ca2+ influx and touch sensing in roots. Proceedings of the National Academy of Sciences of the United States of America, 104(9), 3639–3644.

Sato, E. M., Hijazi, H., Bennett, M. J., Vissenberg, K., & Swarup, R. (2015). New insights into root gravitropic signalling. Journal of Experimental Botany, 66(8), 2155–2165. https://doi.org/10.1093/jxb/eru515

Shen, Y., Wei, W., & Zhou, D.-X. (2015). Histone acetylation enzymes coordinate metabolism and gene expression. Trends in Plant Science, 20(10), 614–621. https://doi.org/10.1016/j.tplants.2015.07.005

Stiller, J. W. (2007). Plastid endosymbiosis, genome evolution and the origin of green plants. Trends in Plant Science, 12(9), 391–396. https://doi.org/10.1016/j.tplants.2007.08.002

Strohm, A. K., Barrett-Wilt, G. A., & Masson, P. H. (2014). A functional TOC complex contributes to gravity signal transduction in Arabidopsis. Frontiers in Plant Science, 5. https://doi.org/10.3389/fpls.2014.00148

Takeda, S., Gapper, C., Kaya, H., Bell, E., Kuchitsu, K., & Dolan, L. (2008). Local positive feedback regulation determines cell shape in root hair cells. Science, 319(5867), 1241–1244. https://doi.org/10.1126/science.1152505

von Braun, S. S., & Schleiff, E. (2008). The chloroplast outer membrane protein CHUP1 interacts with actin and profilin. Planta, 227(5), 1151–1159.

Yahraus, T., Chandra, S., Legendre, L., & Low, P. S. (1995). Evidence for a mechanically induced oxidative burst. Plant Physiology, 109(4), 1259–1266. https://doi.org/10.1104/pp.109.4.1259
The Impact of Stories as Therapy
BY RACHEL MATTHEW '24
Cover: Two people share a story with one another, with a vast collection of additional stories waiting on the shelves. Source: Pexels
The Therapeutic Potential of Stories

Stories—whether told through the written word, passed down orally, or displayed on the big screen—can be meaningful to those who experience them. Inspired by this promise, people have investigated a variety of story-based techniques in therapeutic applications. Further research needs to be done, but many preliminary studies demonstrate encouraging results. Iranian researchers found storytelling therapy to be effective against depression; adolescent cancer patients decreased their depression scores, as indicated by the Beck Depression Inventory, after twelve biweekly hour-long sessions of story therapy (Estahbanati & Ghasemian, 2019). Similarly, storytelling interventions for students aged 8 through 11 decreased their hopelessness scores as measured by the Kazdin hopelessness scale (Shafieyan et al., 2017). Other studies, also conducted in Iran, identified promising effects of story-based therapies on aggression
in young boys. But all of these results come with limitations. Not all of the Iranian studies referenced above were controlled, and all were conducted with sample sizes of 34 or fewer. In terms of accessibility to English-speaking scientists, the full-text publications of these studies lack official English translations, and the online publication of Shafieyan et al.'s paper incorrectly translated the key word "hopelessness" as "hopefulness," leading to a confusing read for anyone unfamiliar with Kazdin's scale. Given all of these experimental limitations, the associated findings can only reach a limited level of significance; however, the results are still meaningful. They indicate potential for story-based therapy. Research elsewhere has supported the basic idea that stories can impact how people think. For instance, Corrigan et al. (2013) determined that adult subjects' stigmas towards mental illness changed in response to their reading
Figure 1: A mother and daughter bond through shared consumption of a written story. Source: Unsplash
of an assigned journalism article: a positive article about recovery left them with more affirming attitudes, while a negative article about problems in public mental health systems increased their stigma. Their personal attitudes shifted to reflect the presented narrative. Likewise, Cook et al. (2004) found that an intervention they called "Therapeutic Storytelling Technique" (TST) could motivate children aged 5 through 6 to adopt a particular outlook. They identified the technique's ability to help children want to change for the better as a major strength, as the children treated with TST demonstrated little resistance to therapy. While Cook et al.'s findings suffer from limitations similar to those of the Iranian studies—small sample sizes and no control group—they detail their process and its rationale in the hopes of inspiring more in-depth clinical studies. Aside from nonfiction self-help books, therapists lack any consensus on the criteria a story must meet to be utilized in therapy. Bibliotherapy utilizes books chosen by the individual therapist, cinematherapy does the same with film, and other techniques involve only an oral telling or retelling. These various techniques have been applied to depression, anxiety, conduct disorders, post-traumatic stress disorder, and more. Either as a result of inconsistent intervention methods or due to a lack of interest, the exact potential of stories in therapy remains unknown. Nonetheless, a multitude of preliminary investigations are underway to determine their value.
Possible Mechanisms

Cook et al. (2004) proposed several theories to explain how stories may aid in the therapeutic process. These include "cognitive restructuring," in which the story's content leads the listener to reevaluate their automatic thoughts and beliefs in order to develop healthier ones, and "vicarious experiential learning," in which the listener can engage with potential courses of action and consider the associated consequences in a purely story-mediated environment. Fortunately, modern investigations into the workings of the brain can provide insight into which of these mechanisms actually occur, allowing research to move beyond the realm of pure speculation. For example, a 2016 review of cognitive psychological studies and brain-imaging tests reports that people develop a stronger sense of empathy and improve their social understanding through engagement with written fiction (Oatley, 2016). Since empathy and social understanding are important in developing successful and comfortable interactions in any social environment, mutual consumption of stories by a therapist and their patient may aid the two's ability to interact with one another. As Cook et al. (2004) suggest, story-based techniques could connect therapists to the children with whom they work, enabling the therapy to be more collaborative. A smoother relationship between therapist and patient could enhance the rest of the therapeutic process, working not as treatment on its own but as an assistant to other treatment techniques. Additionally, it could increase
"A review of cognitive psychological studies and brain imaging tests written in 2016 reports that people develop a stronger sense of empathy and improve their social understanding through engagement in written fiction."
the efficacy of any interventions which utilize the placebo effect, as research has found that patients with increased trust in their physicians experience better health outcomes due to an increase in their compliance with the treatment and an enhanced placebo effect (Lee & Lin, 2009). The human mind has some capability to heal itself; even open-label placebos—treatments that are presented openly as placebos—can improve a patient's self-reported symptoms, such as pain, nausea, and fatigue (Marshall, 2016). Therefore, it is possible that mood-based disorders improve under story-based treatment through mechanisms similar to the workings of a placebo.
"When a person tells themself the story of their past trauma, they may both undergo exposure therapy and attempt to “rewrite” the experience, changing it from a source of constant anxiety to something that happened and was resolved in the past."
Due to the way the brain responds to narrative, stories may also provide a mechanism of therapy on their own, acting not as an aid, but as an independent source of recovery. When a person reads a story, their brain activity reflects the actions taking place within the story. For instance, as a character holds an object, the areas of the reader’s brain associated with holding an object increase in activation. This sort of mirroring was observed in many areas of the brain, including motor regions, visual processing centers, and the areas which track time, all activating when a person read about a scenario involving related concepts (Speer et al., 2009). The observed pattern of activity lends support to the idea that the process of imagining situations described in text occurs via mental simulation, a mechanism similar to that of remembering situations or stimuli from the past (Speer et al., 2009). In imagining a scenario which is both described by an outside source and recognized as fiction, a person’s brain still behaves as if it were remembering events which truly did occur. Any improvements in a person’s condition which could be brought on by particular therapeutic experiences might also be evoked by fiction. The therapy could potentially manifest in several ways; moments of warmth, affection, or unbridled joy in fiction can provide moments of happiness in real life; a fictional protagonist’s determined motivation, even in the face of horrific circumstances, could inspire a patient; or revelations within the narrative could trigger self-reflection and allow the patient to explore different possibilities for their present and future, all with the added benefit that stories are far cheaper than reality in most situations.
Personal Narrative

When a person tells themself the story of their past trauma, they may both undergo exposure therapy and attempt to "rewrite" the experience, changing it from a source of constant anxiety to something that happened and was resolved in the past. Exposure therapies of this nature have already been developed and tested. Narrative Exposure Therapy (NET), for example, guides a patient through their life story in the form of an ongoing narrative. The technique follows the assumption that a person's self-narrative reflects their state of mind, based on "numerous studies [which] indicate that disarranged, unassimilated narratives of traumatic experiences lead to PTSD. Hence, the ability to construct healthy narratives of traumatic experience corresponds to a healthy recovery process" (Gwozdziewycz & Mehl-Madrona, 2013). By retelling and altering the narrative, a person can recover from traumatic experiences and return to a healthy psychological place. As an added benefit, NET requires lower levels of professional counselor training and less overall time than other forms of cognitive behavioral therapy. Gwozdziewycz and Mehl-Madrona (2013) conducted a meta-analysis of seven studies on refugee populations and found that NET demonstrated an efficacy on par with established treatments, such as interpersonal therapy, while being easier to apply than most alternatives. Notably, NET, when utilized to treat refugees, is generally more effective if conducted by a fellow refugee rather than a foreign professional. While scientists do not yet know the reason for this trend, it creates the possibility that narrative-based therapy techniques could be taught as tools to people who remain constant in a patient's life, allowing them to maintain treatment even when they cannot access a professional therapist. The power of stories and storytelling does not necessarily lie in the extreme and overwhelming responses they elicit but in their universal accessibility.
Creative Bibliotherapy and Cinematherapy

This accessibility goes in both directions; anyone can tell a story, and anyone can listen to one. Oftentimes, people do more than just listen; they may find themselves relating to themes and empathizing with characters in another's story, even connecting to others due to shared interest in a particular work of
Figure 2: A library with multiple shelves packed full of books. Source: Burst
fiction. Creative bibliotherapy, although its specific implementation differs, is a story-based technique which utilizes guided reading of works of fiction for therapeutic means, including treatment of PTSD. Glavin and Montgomery (2017) conducted a systematic review to determine whether creative bibliotherapy reduces PTSD symptoms and found evidence in low-quality and qualitative studies indicating that it does; however, they were unable to find any controlled studies investigating the technique’s efficacy. Creative bibliotherapy shows potential in PTSD treatment, but any definitive conclusions require further research. Fortunately, studies provide more substantial support for creative bibliotherapy’s capability to improve children’s mental conditions and behavior. A 2015 systematic review determined creative bibliotherapy has “a small to moderate positive effect on child behavior,” helping to reduce “internalizing behaviors” indicative of depression or anxiety and “externalizing behaviors,” such as aggression and attention deficits, while simultaneously encouraging “voluntary behavior intended to benefit another” like “moral reasoning, social competence, and self-regulation” (Maunders and Montgomery, 2015). As with NET, the promise of the results doesn’t come from the intensity of the treatment’s effect but from the ease of application. Creative bibliotherapy, being simply a guided reading, requires very little cost and can be implemented relatively easily. In analyzing how
"Stories are not restricted to characters on a page. Cinematherapy is a proposed form of therapy in which the patient views films chosen by their therapist then undergoes a guided reflection to identify and explore themes relevant to their mental state, all for the ultimate purpose Stories are not restricted to characters on a page. of treating their Cinematherapy is a proposed form of therapy in which the patient views films chosen by their current condition." creative bibliotherapy, despite its simplicity, affects children’s behaviors, Maunders and Montgomery consider a mechanism like that employed in cognitive-behavioral therapy (CBT), a technique in which the therapist aids the patient in recognizing their thoughts and feelings in order to rework them to improve their mental state. This is similar to Cook et al.’s cognitive restructuring, wherein the story facilitates the process of identifying unhelpful beliefs and behaviors, challenging them, and developing new ones in their place. The two authors found this process to be consistently present in the studies they reviewed, despite the lack of a cohesive method of intervention delivery amongst the studies (Maunders and Montgomery, 2015).
therapist then undergoes a guided reflection to identify and explore themes relevant to their mental state, all for the ultimate purpose of treating their current condition. The treatment functions in essentially the same way as creative bibliotherapy, but it utilizes a different story medium. Despite the need for further studies on the practice, the current research provides an empirical foundation for cinematherapy as a therapeutic technique (Powell, 2008). The rationales for story-based therapy apply to literature and films equally; both present narratives concerning human emotion, and a viewer’s brain activity mirrors the content
of films just as it does while people read, with Speer et al. (2009) observing brain activity increases corresponding to a film's content in the somatosensory cortex, motor cortex, and regions involved in the remembrance of audiovisual information. Overall, explorations into both creative bibliotherapy and cinematherapy echo the findings of the previously referenced literature: story-based therapy has promise, though the studies conducted so far are commonly low in quality, inconsistent in their methods of intervention delivery, and lacking a clear, evidence-supported mechanism to explain the results.
and, with a dedicated effort to collect empirical evidence and a focus on the therapeutic mechanism, stories have great potential as a form of treatment.
The Promise of Stories
Estahbanati, M.A.E., & Ghasemian K. (2019). THE EFFECTIVENESS OF STORYTELLING ON REDUCING DEPRESSION IN CANCER PATIENTS. Universidade, pressões e adoecimento 2, 6(9), 278-284. https://revista.unitins.br/index. php/humanidadeseinovacao/article/view/1464
The research, as it presently stands, is incomplete. However, all the pieces required to further our understanding about the power of stories are in place. Papers propose theories behind their therapies, from Maunders and Montgomery’s consideration of CBT mechanisms, to the argument for empathy, which literary fiction encourages, to Cook et al.’s ideas of motivation. Literature reviews speak to the efficacy storytelling techniques can achieve. Altogether, the research implies a potential that scientists have yet to truly investigate. Stories are worth investigating because, as supported by over twenty years of research, scientists know that people experience profound mental changes when they tell their own stories about emotional events. While it is clear that stories can have therapeutic capabilities and that narrative can heal, scientists have yet to determine the mechanisms through which stories act (Niederhoffer and Pennebaker, 2009). Investigating why, how, and when stories make for effective therapy could lead to significant progress in the scientific understanding of the relationship between stories and mental health and open up a new branch of therapy with low cost and universal application. From telling one’s own story to reading books to watching movies, stories have the ability to activate a yet-to-be-known mechanism which alters a person’s state of mind, to change the direction of thoughts in a person’s head, and to improve mental health. Stories demonstrate this ability even without a consistent method of therapeutic intervention and even though scientists still lack the knowledge of how stories work to impact mental health. However, this knowledge doesn’t need to be unattainable. Much of our understanding of stories comes from small, low-quality, and preliminary studies,
204
References Cook, J.W., Taylor, L.A., & Silverman, P. (2004). The application of therapeutic storytelling techniques with preadolescent children: A clinical description with illustrative case study, Cognitive and Behavioral Practice, 11(2), 243-248, ISSN 10777229, https://doi.org/10.1016/S1077-7229(04)80035-X Corrigan, P.W., Powell, K.J., & Michaels, P.J. (2013). The Effects of News Stories on the Stigma of Mental Illness, The Journal of Nervous and Mental Disease: March 2013, 201(3), 179-182. doi: 10.1097/NMD.0b013e3182848c24
Glavin, C.E.Y., & Montgomery, P. (2017). Creative bibliotherapy for post-traumatic stress disorder (PTSD): a systematic review, Journal of Poetry Therapy, 30(2), 95-107. Dos: 10.1080/08893675.2017.1266190 Gwozdziewycz N., & Mehl-Madrona L. (2013). Meta-analysis of the use of narrative exposure therapy for the effects of trauma among refugee populations. Perm J, 7(1), 70-6. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3627789/ Lee, YY., Lin, J.L. (2009). Trust but Verify: The Interactive Effects of Trust and Autonomy Preferences on Health Outcomes. Health Care Anal 17, 244–260. https://doi.org/10.1007/ s10728-008-0100-1 Marshall, M. (2020). A placebo can work even when you know it's a placebo. https://www.health.harvard.edu/blog/placebocan-work-even-know-placebo-201607079926. Maunders, K., & Montgomery, P. (2015). The effectiveness of creative bibliotherapy for internalizing, externalizing, and prosocial behaviors in children: A systematic review, Children and Youth Services Review, 55, 37-47, ISSN 0190-7409, https:// doi.org/10.1016/j.childyouth.2015.05.010 Niederhoffer, K.J., & Pennebaker, J.W. (2009). Sharing One's Story: On the Benefits of Writing or Talking About Emotional Experience. The Oxford Handbook of Positive Psychology (2nd edn), doi: 10.1093/oxfordhb/9780195187243.013.0059. https://www.oxfordhandbooks.com/view/10.1093/ oxfordhb/9780195187243.001.0001/oxfordhb9780195187243-e-059 Oatley, K. (2016). Fiction: Simulation of Social Worlds. Trends in Cognitive Sciences, 20(8), 618-628. ISSN 1364-6613, https:// doi.org/10.1016/j.tics.2016.06.002 Powell, M. (2008). Cinematherapy as a Clinical Intervention: Theoretical Rationale and Empirical Credibility. Theses and Dissertations Retrieved from https://scholarworks.uark.edu/ etd/2984 Shafieyan, S., Soleymani, M. R., Samouei, R., & Afshar, M. (2017). Effect of storytelling on hopefulness in girl students. Journal of education and health promotion, 6, 101. https:// doi.org/10.4103/jehp.jehp_59_16
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
WINTER 2021
Passing Through Barriers: Quantum Tunneling and its Applications BY RUJUTA PUROHIT '24 Cover Image: A visual representation of quantum tunneling depicting a particle passing through a barrier. Quantum tunneling is possible due to the wave nature of matter, and it has a host of applications. Source: Wikimedia Commons
Introduction and an Overview of Quantum Mechanics The term quantum mechanics was coined by three physicists, Max Born, Werner Heisenberg, and Wolfgang Pauli, in the 1920s. The history of quantum mechanics stretches over many years and includes discoveries and theories concerning the photoelectric effect, cathode rays, various atomic models, and the wave nature of light and matter. In 1905, Einstein established the wave-particle duality of light with his equations for the photoelectric effect, using a relationship between a photon's energy and its frequency. French physicist Louis de Broglie used this relationship to argue that all matter, not just light, exhibits wave-particle duality, which was a groundbreaking claim in 1924. De Broglie used Einstein's energy-frequency relation to derive the equation for "matter waves," an equation that relates the wavelength of the matter wave to its
momentum using Planck's constant h (de Broglie, 1924). This equation – called the de Broglie hypothesis – is now known to hold for all types of matter (McEvoy & Zarate, 2004), including macroscopic particles, even though their wave properties are not easily detected because of their extremely short wavelengths (Eisberg & Resnick, 1985). This wave–particle duality is deeply embedded in the foundations of quantum mechanics and dictates many important observations. In the formalism of the theory, all the information about a particle is encoded in its wave function. A wave function is a mathematical description of the superpositions associated with a quantum entity at any particular moment. It is used to represent the particle itself in space and time, and is written Ψ(x,t). The information in this wave function includes position, phase, frequency, and momentum.
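The claim that macroscopic wave properties are "not easily detected" can be made concrete by evaluating the de Broglie relation λ = h/p for a microscopic and a macroscopic object. The following sketch uses illustrative masses and speeds that are assumed for this example, not taken from the article:

```python
# Sketch (values assumed): de Broglie wavelengths lambda = h / (m*v)
h = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie wavelength lambda = h / p, with momentum p = m*v."""
    return h / (mass_kg * speed_m_s)

# an electron moving at 10^6 m/s (assumed, lab-typical speed)
lambda_electron = de_broglie_wavelength(9.109e-31, 1.0e6)
# a 145 g baseball thrown at 40 m/s (assumed macroscopic example)
lambda_baseball = de_broglie_wavelength(0.145, 40.0)

print(f"electron: {lambda_electron:.2e} m")  # ~7e-10 m, atomic scale
print(f"baseball: {lambda_baseball:.2e} m")  # ~1e-34 m, undetectably small
```

The electron's wavelength is comparable to atomic spacings, which is why electron diffraction is observable; the baseball's is dozens of orders of magnitude below anything measurable.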
In 1927, German physicist Werner Heisenberg introduced the uncertainty principle. Heisenberg stated that even with the most precise measurement devices and methods, it is impossible to determine both the position, x, and the momentum, p, of a particle simultaneously (Heisenberg, 1927):

∆x ∆px ≥ ħ/2

In the above inequality, ∆x is the uncertainty in position, ∆px is the uncertainty in the x-component of the particle's momentum, and ħ (h-bar) is the reduced Planck's constant. For reference, the reduced Planck's constant has a value of 1.05 × 10^-34 J·s. This has two important consequences: first, the position and momentum of a particle cannot both be known accurately at the same time, and second, the more accurately one quantity is known, the less accurately the other can be known. That is, in gaining information about one of these quantities, one loses information about the other. This is a classical paradox in physics: there are no physical precautions that can be taken to avoid the uncertainty principle. The uncertainty does not arise from the methods or the process of the measurement but rather lies in the measurement itself (Eisberg & Resnick, 1985). The uncertainty principle is not readily apparent on the macroscopic scales of everyday experience for the simple reason that it does not affect everyday experiences (Jaeger, 2014). For large particles, the uncertainty becomes incredibly small (relative to common measurements of inches, centimeters, miles per hour, etc.), so much so that it no longer affects calculations in any practical sense. Decoherence can be used to explain this apparent paradox. A perfect wave function is coherent. Macroscopic objects do not have perfect wave functions because they are constantly interacting with other macroscopic objects and are affected by natural phenomena like gravity, electricity, and other forces. Thus, a macroscopic object becomes decoherent through its many interactions with its environment and with other objects. This leads to the result that a macroscopic object does not exist as a superposition state (Schlosshauer, 2005). The superposition principle states that solutions of a linear equation can be added together to give another solution that satisfies the same equation.
Because it does not exist in a superposition state, the
macroscopic object cannot be described by a single coherent wave function. Decoherence explains why people do not routinely see quantum superpositions in the world. It is not because quantum mechanics intrinsically stops working for objects larger than a threshold size. Instead, macroscopic objects such as cats, buildings, and planets are almost impossible to keep isolated to the extent needed to prevent decoherence (Zurek, 2003). Microscopic objects, in contrast, are more easily isolated from their surroundings so that they retain their quantum behavior.
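The scale separation described above can be sketched numerically. Taking the uncertainty relation in the form Δx ≥ ħ/(2Δp), the minimum position uncertainty can be compared for an electron and for a small dust grain; the masses and velocity uncertainties below are assumed purely for illustration:

```python
# Sketch (values assumed): minimum position uncertainty dx >= hbar/(2*dp)
hbar = 1.055e-34  # reduced Planck's constant, J*s

def min_position_uncertainty(mass_kg, dv_m_s):
    """Minimum dx allowed by the uncertainty principle, with dp = m*dv."""
    return hbar / (2.0 * mass_kg * dv_m_s)

# an electron whose velocity is known to within +/- 1 m/s (assumed)
dx_electron = min_position_uncertainty(9.109e-31, 1.0)
# a 1 microgram dust grain, velocity known to +/- 1 mm/s (assumed)
dx_grain = min_position_uncertainty(1e-9, 1e-3)

print(f"electron:   {dx_electron:.2e} m")  # ~6e-5 m, a very noticeable blur
print(f"dust grain: {dx_grain:.2e} m")     # ~5e-23 m, utterly negligible
```

Even for a barely macroscopic grain, the mandated position uncertainty is billions of times smaller than an atomic nucleus, which is why the principle never intrudes on everyday measurements.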
Schrodinger's Equation and Quantum Tunneling In 1925, Erwin Schrödinger formulated the wave equation for matter now known as the Schrödinger Equation – a linear partial differential equation that is used to determine the state of a wave function (Fleisch & Kinnaman, 2015). This state refers to the general evolution of the function: where is it, what is it doing, and how is it doing it? Schrödinger won the Nobel Prize in Physics in 1933 for this equation and his contributions to quantum mechanics. The general Schrödinger equation, also called the "Time-Dependent" Schrödinger equation (TDSE), determines the state of the particle system, which is associated with its potential energy, and is given as:

iħ ∂Ψ/∂t = −(ħ²/2m) ∂²Ψ/∂x² + UΨ
"The Schrödinger equation determines how wave functions evolve over time and how a wave function behaves qualitatively like other waves, such as sound waves or waves representing ripples in still water."
Here, i is the imaginary number, ħ is the reduced Planck's constant, Ψ is the wave function of the particle, m is the mass of the particle, and U is the potential energy. The potential energy can be expressed as a function of position and time or just position. When the potential energy depends only on position, TDSE can be separated into two linear, ordinary differential equations that can be solved independently. The solutions of these equations give us functions for time t and position x (Schrödinger, 1926). The Schrödinger equation determines how wave functions evolve over time and how a wave function behaves qualitatively like other waves, such as sound waves or waves representing ripples in still water. The wave function describes a physical phenomenon (Born, 1927). The equation details the behavior of Ψ but says nothing about its nature; that is, Schrödinger could never fully explain what Ψ actually represented beyond the wave function itself. In 1926, Max Born developed
Figure 1: The one-dimensional rectangular barrier depicting different potentials in the three regions. This barrier style is most commonly used to explain quantum tunneling. Source: Wikimedia Commons
the probabilistic theory and interpreted Ψ as the probability amplitude. The squared magnitude of the probability amplitude gives the likelihood of finding the particle at a particular point in space. This likelihood is influenced by several factors like the momentum, energy, and the quantum process the particle is undergoing.
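Born's interpretation can be illustrated numerically: squaring the amplitude's magnitude gives a probability density, and integrating that density over a region gives the probability of finding the particle there. The Gaussian wave function below is an assumed example chosen for this sketch, not one from the article:

```python
# Sketch (wave function assumed): Born-rule probability from |psi|^2
import numpy as np

sigma = 1.0  # width of an assumed Gaussian wave function
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
# example amplitude: psi(x) = (pi*sigma^2)^(-1/4) * exp(-x^2/(2*sigma^2))
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-x**2 / (2.0 * sigma**2))
density = np.abs(psi) ** 2  # Born rule: probability density |psi|^2

# probability of finding the particle within one width of the origin
inside = np.abs(x) <= sigma
prob = float(np.sum(density[inside]) * dx)
print(f"P(-sigma < x < sigma) = {prob:.4f}")  # erf(1), about 0.8427
```

The particle is most likely found near the peak of the amplitude, but there is always a finite probability of finding it elsewhere – the key fact that tunneling exploits.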
"Quantum tunneling happens when a particle passes through a barrier. In classical mechanics, this is physically impossible."
The two ordinary differential equations derived from TDSE are expressed as follows: i. For the time function equation, TDSE is resolved to get:

φ(t) = e^(−iEt/ħ)

The time function is not unique to any one system and is the same for all systems when the potential energy is a constant U = U0. ii. For the time-independent Schrödinger equation (TISE), TDSE is resolved to get:

−(ħ²/2m) d²ψ/dx² + U(x)ψ = Eψ
In these two equations, E is the total energy of the particle and is used to relate the two equations with each other. E is the energy associated with motion along the x-axis and is sometimes measured relative to the potential energy U. These equations can be further used to define Ψ as a wave function that is a solution to the Schrödinger equation only for specific values of E (Schrödinger, 1928). Quantum tunneling happens when a particle passes through a barrier. In classical mechanics, this is physically impossible. For example, an
electron with total energy E less than the potential energy U required to climb over a barrier will simply be reflected off it and bounce back. But in quantum mechanics, the particle doesn't act like a particle when it encounters a barrier – it acts like a wave. Tunneling is a direct consequence of the wave nature of matter. In the wave picture, tunneling is the quantum mechanical phenomenon in which a wave function passes through a potential barrier (Lerner, 1991). Even if the electron's total energy is less than the potential of the barrier, it can pass through the barrier. This happens through a transmission of the electron's wave function. Thus, there is some finite probability of finding the electron on the other side of the potential barrier. After tunneling through the barrier, the wave function has the same initial energy E. The difference between the classical and quantum treatments of the same situation can be explained by the wave nature of matter and the uncertainty principle. In another macroscopic example – one that falls under the jurisdiction of classical physics – a ball without enough energy to roll up a hill will simply roll back down. A ball that is unable to pass through a wall will bounce back. However, quantum mechanically speaking, the ball can tunnel through the hill or the wall and be transmitted to the other side (Davies, 2005). The quantum treatment of the ball-particle implies that the probability of finding the particle somewhere is 1. The particle exists somewhere – on one side of the barrier or the other. There is a finite probability of the ball existing on either side of the barrier, but these probabilities are not equal. Moreover, the wave function does not disappear on one side and reappear on the other side. Although it may seem that way,
the actual process of quantum tunneling is not instantaneous (Ramos et al., 2020). During their research, a team of scientists measured the time delay – if any – involved in the tunneling of rubidium atoms. This delay is on the order of 1 millisecond, putting to rest previous claims of instant quantum tunneling. The "tunneling problem" arises due to the Heisenberg uncertainty principle. When a particle's wave function collides with a potential barrier it hopes to pass through, the barrier forces the wave function to become taller and narrower. This can be visualized as an elastic ball striking a wall. The wave becomes much more delocalized – it is now on both sides of the barrier. Essentially, the wave is squeezed across the barrier as it passes through (Sen, 2014). A simple model of a tunneling barrier, such as the rectangular barrier, can be analyzed and solved algebraically using the Schrödinger equation to determine the probability density within the space (Bjorken & Drell, 1965). The tunneling is described by a wave function which has a non-zero potential energy function inside the tunnel and zero outside the tunnel. In the example of the ball rolling up the hill, it is assumed that the ball only exists in a closed system within the universe, where it has a finite potential energy function inside the system and zero potential outside the system. This has real physical consequences because, by applying these constraints, the wave function is localized as much as possible to reduce the impact of the uncertainty principle on some calculations. To derive quantum tunneling using TISE, consider a particle of mass m that needs to cross a barrier. The potential energy of the particle outside the barrier is 0 and the potential energy inside the barrier is some constant U0, such that U0 is greater than the total energy of the particle E. In the two zero-potential regions I and III (as seen in the figure), the wave functions are:

ψ(x) = s e^(ikx) + t e^(−ikx)
For these wave functions, k = √(2mE)/ħ, x is the position of the particle, and s and t are parameters. In the region with a positive U0, the wave function is a superposition of two terms, each decaying from one side of the barrier, as follows:

ψ(x) = ξ e^(−κx) + ς e^(κx)
For this wave function, κ = √(2m(U0 − E))/ħ, x is the position, and ξ and ς are parameters. These wave functions are obtained by simply solving the time-independent Schrödinger equation (TISE) in each region as shown in the figure and by applying appropriate boundary conditions. The boundary conditions are met by solving the wave functions for normalization constants. For the quantum tunneling functions, s, t, ξ and ς are normalization constants. They can be evaluated by solving for the probability density of the wave function and setting it equal to 1. This follows from the fact that the regions on either side of the barrier make up the universe of the problem, and the particle must exist somewhere; therefore, the chance of finding it somewhere is always 100%, leading to a total probability of 1. The probability density for wave functions is calculated by multiplying the wave function (ψ) by its own complex conjugate (ψ*), obtained by negating the imaginary component, and integrating that product over all space (Born, 1954):

∫ ψ*(x) ψ(x) dx = 1
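Carrying the boundary-condition matching through for the rectangular barrier yields a standard closed-form transmission probability for E < U0, T = [1 + U0² sinh²(κa) / (4E(U0 − E))]^(−1), where a is the barrier width. This textbook result does not appear in the article; the sketch below evaluates it with assumed energies and widths:

```python
# Sketch (values assumed): exact rectangular-barrier transmission, E < U0
import math

hbar = 1.055e-34  # reduced Planck's constant, J*s
eV = 1.602e-19    # one electronvolt in joules
m_e = 9.109e-31   # electron mass, kg

def transmission(mass_kg, E_J, U0_J, a_m):
    """Exact transmission probability for a rectangular barrier (E < U0)."""
    kappa = math.sqrt(2.0 * mass_kg * (U0_J - E_J)) / hbar
    s = math.sinh(kappa * a_m)
    return 1.0 / (1.0 + (U0_J**2 * s**2) / (4.0 * E_J * (U0_J - E_J)))

# an electron with E = 5 eV meeting a 10 eV barrier (assumed numbers)
T_wide = transmission(m_e, 5 * eV, 10 * eV, 0.5e-9)   # 0.5 nm barrier
T_thin = transmission(m_e, 5 * eV, 10 * eV, 0.1e-9)   # 0.1 nm barrier

print(f"0.5 nm barrier: T = {T_wide:.3e}")
print(f"0.1 nm barrier: T = {T_thin:.3e}")  # much larger for a thinner wall
```

The transmission probability is finite in both cases even though the particle lacks the energy to go over the barrier – the defining signature of tunneling.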
"Because of the uncertainty principle, the position of a wave function can only be predicted and never accurately calculated. In other words, the uncertainty in the exact location of light particles allows these particles to break rules of classical mechanics and move in space without passing over the potential energy barrier."
This is the reason the quantum theory is called a probabilistic theory. Because of the uncertainty principle, the position of a wave function can only be predicted probabilistically and never calculated exactly. In other words, the uncertainty in the exact location of light particles allows these particles to break the rules of classical mechanics and move in space without passing over the potential energy barrier. At any point in time, the wave exists in Region I, Region II, or Region III, but it is not known exactly which: only the probability can be calculated. The probability of the particle passing the barrier can also be calculated using the energy function and TISE. The probability P of the particle tunneling through a barrier is given by the equation:

P ≈ e^(−2κa)
Figure 2. The islands floating in a chaotic sea, representing chaos-assisted tunneling. The warmer colors depict the islands, where the probability of finding the wave function is the highest. Source: Wikimedia Commons
"Chaos in physics refers to systems or types of systems whose states cannot be predicted by their general characteristics. These systems are similar to integrable systems and are said to have a unique evolution – one that is not, in any way, affected by random natural or artificial elements."
Here, a is the thickness of the barrier with other symbols the same as defined before. From the equation it is apparent that the probability of an object passing through a barrier depends greatly on the particle’s mass and the width of the potential barrier. A wider barrier makes it more difficult for the particle to tunnel through and the probability decreases. Similarly, the probability decreases for a particle with a large mass (Libretexts, 2020). Generally, quantum tunneling occurs with barriers of width around 1–3 nm and smaller (Lerner, 1991). This is a part of the broader mathematical explanation for why macroscopic objects and systems cannot tunnel through barriers and why dogs cannot walk through walls.
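The mass and width dependence described above can be sketched with the standard approximation P ≈ exp(−2κa), with κ = √(2m(U0 − E))/ħ – a textbook form stated here as an assumption, since the article's own equation shows only the same qualitative behavior. The barrier height, energy, and widths below are likewise assumed:

```python
# Sketch (values assumed): approximate tunneling probability exp(-2*kappa*a)
import math

hbar = 1.055e-34  # reduced Planck's constant, J*s
eV = 1.602e-19    # one electronvolt in joules
m_e = 9.109e-31   # electron mass, kg

def tunneling_probability(mass_kg, U0_J, E_J, width_m):
    """Approximate tunneling probability P ~ exp(-2*kappa*a)."""
    kappa = math.sqrt(2.0 * mass_kg * (U0_J - E_J)) / hbar
    return math.exp(-2.0 * kappa * width_m)

# an electron 1 eV below a 10 eV barrier, for two widths (assumed numbers)
p_1nm = tunneling_probability(m_e, 10 * eV, 9 * eV, 1e-9)
p_3nm = tunneling_probability(m_e, 10 * eV, 9 * eV, 3e-9)

print(f"1 nm barrier: P = {p_1nm:.3e}")
print(f"3 nm barrier: P = {p_3nm:.3e}")  # many orders of magnitude smaller
```

Tripling the width does not triple the suppression – it cubes the exponential factor, which is why tunneling effectively vanishes beyond a few nanometers, and why a macroscopic mass in the exponent rules out dogs walking through walls.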
Types of Tunneling Tunneling, as described so far, falls under quantum tunneling and concerns potential barriers and wave functions. However, scientists believe that these quantum concepts can be extended to include instances of tunneling even when no associated potential barriers are present (Davis & Heller, 1981). The regions through which tunneling occurs may be connected or not, but a barrier is not crucial. This is known as dynamical tunneling. The concept of dynamical tunneling is applied to many situations, especially those in which more than one dimension is involved in tunneling. These multi-dimensional systems are called integrable systems. An integrable system is a type of dynamical system in which many quantities are classically conserved, with conservation extended to the system's degrees of freedom (in this context, the degrees of freedom represent the different states the system can
be in), which are fewer in number than the dimensionality of its phase space (Schilling, 2006). The evolution of this integrable system over time is confined to a subset of its phase space (phase space, specifically, is the mathematical "space" where systems can be represented). For integrable systems, the classical trajectories of particles are confined to surfaces called tori (singular torus; a torus, a surface of a complete 360° revolution, looks like a donut). In this type of tunneling, where bounded classical trajectories are confined onto tori in phase space, tunneling "can be understood as the quantum transport between semi-classical states built on two distinct but symmetric tori" (Wilkinson, 1986). Another type of dynamical tunneling is chaos-assisted tunneling. Chaos in physics refers to systems, or types of systems, whose states cannot be predicted from their general characteristics (Werndl, 2009). Like integrable systems, chaotic systems are said to have a unique, deterministic evolution – one that is not, in any way, affected by random natural or artificial elements. For imperfect real-life systems, chaos theory is used to define tunneling through "islands." These islands represent the probability density functions of the wave function and float in a "chaotic sea" – the scientific metaphor employed to refer to the region where tunneling is classically allowed. The chaotic sea looks like a contour plot of a three-dimensional function. The islands are shaded in warm colors and the sea is shaded in cool colors. There are two islands, and when compared with the rectangular barrier model, each island seemingly represents the region around the barrier (Tomsovic & Ullmo, 1994).
Figure 3. The working of a scanning tunneling microscope (STM), showing its various components such as the tunneling current, voltage, and tip. The scanning tunneling microscope is used to image surfaces at atomic levels and uses the concepts of quantum tunneling to create and tunnel through potential barriers. Source: Wikimedia Commons
Resonance-assisted tunneling happens when the value of Planck's constant h is very small compared to the size of the islands. As a result, the nuances in the shape and the structure of the phase space greatly influence tunneling. In particular, the two symmetric tori are coupled "via a succession of classically forbidden transitions across nonlinear resonances" surrounding the two islands (Brodier et al., 2002). Nonlinear resonances in the classical phase space lead to a sort of amplification of tunneling and sometimes give rise to a complicated tunneling peak structure. Such resonances have been observed by a team of scientists in Hamiltonian systems with an at least four-dimensional phase space. To explain the tunneling peak structure, they used the universal descriptions of single and double resonances given by four-dimensional normal-form Hamiltonians and compared them to the rectangular barrier models (Firmbach et al., 2019). A special type of quantum tunneling is the tunneling of water, which was discovered in 2016 (Kolesnikov et al., 2016). The quantum tunneling of water is the phenomenon of water
molecules tunneling out of nanotubes, resulting in disturbances in the positions of the hydrogen atoms in a molecule of water (Schirber, 2016). When this happens, a water molecule changes its normal bent structure and takes the shape of an unusual double-top. Such shape changes and tunneling of molecules are strictly forbidden in classical physics and cannot be explained using the electronic configuration laws of chemistry (Pugliano, 1992). The reason the quantum tunneling of water is so unique and so different from other types of tunneling is that it involves the breaking of two hydrogen bonds (Richardson et al., 2016). It is observed in the water trapped in small rocks, fine soil, and cell walls in plants. The quantum tunneling of water is now being used to design the transport of water through carbon nanotubes and to better understand the processes of diffusion and osmosis in nature (Walli, 2016).
"A special type of quantum tunneling is the tunneling of water which was discovered in 2016 (Kolesnikov et al, 2016). The quantum tunneling of water is the phenomenon of water molecules tunneling out of nanotubes, resulting in disturbances in the positions of the hydrogen atoms in a molecule of water."
Applications of Quantum Tunneling Although tunneling is studied in the quantum world, it has many applications in the macroscopic world. The first application of quantum tunneling was in radioactive decay, which is also how tunneling itself was
discovered (Razavy, 2003). In 1928, Russian physicist George Gamow discovered that tunneling is responsible for the radioactive decay of atomic nuclei. Radioactive decay is the spontaneous emission of particles and energy from the nuclei of unstable atoms like uranium and thorium. The products of a radioactive decay, however, are more stable than the parent nucleus. Decay occurs via the tunneling of a particle out of the nucleus – an electron tunneling into the nucleus is electron capture (Gurney & Condon, 1928). Gamow observed that isotopes of thorium, uranium, and bismuth disintegrated by emitting α-particles. This is called alpha decay and is one of three types of radioactive decay, the other two being beta decay and gamma decay. He realized that to escape from the nucleus, the α-particle has to 'tunnel' through the Coulomb barrier formed by the electrostatic forces within the atom. This understanding now informs the storage of radioactive materials, where appropriate potential barriers help contain the products of decay while in storage.
"The scanning tunneling microscope (STM) was developed in 1981 by Gerd Binnig and Heinrich Rohrer (who then went on to win the Nobel Prize in Physics in 1986). STM is used to view surfaces at the atomic levels."
Quantum tunneling is essential for nuclear fusion. Two protons exert a strong repulsive Coulomb force on each other; the arrangement very closely resembles a potential barrier and is called the Coulomb barrier. However, when the protons get very close to each other inside the cores of stars like the Sun – within a range of about 10^-15 m – the strong nuclear force takes over and the protons actually attract each other, and tunneling is what brings them into that range. The temperature in stars' cores is usually insufficient to allow atomic nuclei to overcome the Coulomb barrier and achieve nuclear fusion. Quantum tunneling helps increase the probability of penetrating the barrier. Though this probability is still low, the extremely large number of nuclei in the core of a star is sufficient to sustain a steady fusion reaction – a necessary requirement for the evolution of life in isolated habitable zones (Trixler, 2013). In other words, tunneling is the reason nuclear fusion is possible and the reason stars shine. This same shining light and heat from the sun is how life on Earth is possible. The scanning tunneling microscope (STM) was developed in 1981 by Gerd Binnig and Heinrich Rohrer (who went on to win the Nobel Prize in Physics in 1986). STM is used to view surfaces at the atomic level. It is made up of a conducting tip that can resolve features smaller than 0.01 nm (Binnig & Rohrer, 1986). This tip is brought close to a surface with a voltage applied to the tip. The voltage provides electric potential energy to electrons in the atoms and
then allows them to tunnel through the space between them. The electric potential energy is determined as a function of the position of the tip. Thus, the TISE can be used to find appropriate wave functions for the potential energy. This potential energy is called the tunneling potential, and the electric current that is set up because of the voltage is called the tunneling current. When the tip moves across a surface, the corresponding changes in surface height result in changes in the tunneling current. Due to the extreme sensitivity of the tunneling current to the separation of the electrodes, proper vibration isolation or a rigid STM body is imperative for obtaining usable results. In the first STM designed by Binnig and Rohrer, magnetic levitation was used to keep the STM free from vibrations; now mechanical spring or gas spring systems are often employed (Chen, 1993). The technology of a scanning tunneling microscope is widely used to gain more information about the electronic structure at a particular location. The measurement of this electronic structure using a microscope is called scanning tunneling spectroscopy (STS) (Voigtländer, 2015). STS can be thought of as a refinement of the techniques used for microscopes. The information about electronic structures is gained by sweeping the bias voltage and measuring the corresponding current change at that specific location (Bai, 2000). STS is used to compare the density of states at an impure site to the densities of states around that site and anywhere else on the surface (Pan et al., 2000). Another essential application of quantum tunneling is the creation of tunnel diodes. Tunnel diodes are semiconductor devices in which current flows by electrons tunneling across a thin junction. First manufactured commercially by the company Sony in 1957, tunnel diodes utilize electron quantum tunneling to create a distinctive relationship between current and voltage across the junction's potential barrier.
In these diodes, the current decreases rapidly as the voltage increases. This is useful in many electronics where rapid current changes are required – like amplifiers, oscillators, and switching circuits (Fink, 1975). Tunnel diodes have a characteristically low output and are not used very readily. However, new developments in tunneling have led to the manufacture of resonant-tunneling diodes, which have a very high frequency range for the change of current (Brown et al., 1991). This allows the same technology behind tunnel diodes to be used to develop
other electronic devices like tunnel field-effect transistors and tunnel junctions (Ionescu & Riel, 2011). Furthermore, quantum tunneling has numerous applications in biology. Proton tunneling is central to the concept of spontaneous DNA mutation, while electron tunneling plays a key role in the many redox reactions that take place inside cells (Trixler, 2013). Quantum tunneling-induced mutations are also believed to be a cause of natural bodily ageing and are involved in the growth of cancerous cells (Cooper, 1993).
Conclusion In quantum mechanics, tunneling is the evanescent wave phenomenon in which a particle is able to pass through a barrier by means of its effective wave function. This only happens when considering the wave nature of the particle and treating the wave function, not the particle in matter form, as the quantity that tunnels. Tunneling is possible even when the potential of the barrier is greater than the total energy of the particle – something that is not allowed in classical physics. The Schrödinger equation can be used to find the wave functions for a tunneling particle by applying appropriate boundary conditions and by substituting in values of the potential and total energies. Moreover, tunneling – like many other quantum phenomena – is supplemented by other quantum theories like the uncertainty principle, de Broglie's hypothesis about matter waves, and probabilistic estimates. For a quantum particle to appreciably tunnel through a barrier, three conditions must be met: i. The height of the barrier must be finite, and the width of the barrier should be small. ii. The potential energy of the barrier should be greater than the total energy of the particle (E < U). iii. The particle must be evaluated using the quantum theory that regards the particle as a wave and only considers its wave function. The particle is able to tunnel through the barrier because it acts as a wave, and it is the wave function that actually tunnels through the barrier. This is a continuation of the wave-particle duality and the de Broglie hypothesis.
There are many types of tunneling, from dynamical tunneling in phase space to chaos-assisted tunneling and resonance-assisted tunneling. Quantum tunneling has also been discovered for water in nanotubes. Due to its versatility, quantum tunneling has many applications. It is used to design and develop the scanning tunneling microscope and is used in scanning tunneling spectroscopy. It also has a host of applications in the world of electronics, as in tunnel diodes, junctions, and transistors. These components make up a wide variety of daily-use appliances like televisions, computers, microwaves, and music systems. The most celebrated use of tunneling lies in the design of scanning tunneling microscopes, which have taken surface imaging to a whole new level. Tunneling as a phenomenon was discovered through radioactivity and is an essential component of nuclear fusion. Nuclear fusion is what powers the sun and creates sunlight. This sunlight is essential for life on Earth and for maintaining the physical (and chemical) balance of the universe. Moreover, tunneling makes possible DNA mutation and many essential processes in our cells which dictate our very existence. In sum, without quantum tunneling, life, from its nuanced instances of mutations to solar temperatures, would not be possible.

References

Bai C. (2000) "Scanning tunneling microscopy and its applications". New York: Springer Verlag. ISBN 978-3-540-65715-6. Benioff, Paul (1980). "The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines". Journal of Statistical Physics. 22 (5): 563–591. Bibcode:1980JSP....22..563B. doi:10.1007/bf01011339. S2CID 122949592. Binnig G., Rohrer H. (1986) "Scanning tunneling microscopy". IBM Journal of Research and Development. 30 (4): 355–69. doi:10.1016/0039-6028(83)90716-1 Bjorken; Drell (1965) "Relativistic Quantum Mechanics", page 2. McGraw-Hill College. Born, M. (1927) "Physical aspects of quantum mechanics". Nature.
119 (2992): 354–357. Bibcode:1927Natur.119..354B. doi:10.1038/119354a0 Born, M. (1954) "The statistical interpretation of quantum mechanics". www.nobelprize.org. Nobel Lecture, Nobel Foundation. 122 (3172): 675–9. doi:10.1126/science.122.3172.675. PMID 17798674. Retrieved 19 December 2020. Bratteli, O.; Robinson, D. W. (1987). "Operator Algebras and Quantum Statistical Mechanics 1". 2nd edition. Springer. ISBN 978-3-540-17093-8. Brodier, O.; Schlagheck, P.; Ullmo, D. (2002). "Resonance-
Assisted Tunneling". Annals of Physics. 300 (1): 88–136. arXiv:nlin/0205054. Bibcode:2002AnPhy.300...88B. doi:10.1006/aphy.2002.6281. ISSN 0003-4916 Brown, E.R.; Söderström, J.R.; Parker, C.D.; Mahoney, L.J.; Molvar, K.M.; McGill, T.C. (1991). "Oscillations up to 712 GHz in InAs/AlSb resonant-tunneling diodes". Applied Physics Letters. 58 (20): 2291. Bibcode:1991ApPhL..58.2291B. doi:10.1063/1.104902. ISSN 0003-6951 Chen C.J. (1993) "Introduction to Scanning Tunneling Microscopy". Oxford University Press. ISBN 978-0-19-507150-4 Cooper, W.G. (1993). "Roles of Evolution, Quantum Mechanics and Point Mutations in Origins of Cancer". Cancer Biochemistry Biophysics. 13 (3): 147–70. PMID 8111728 Davies, P. C. W. (2005). "Quantum tunneling time". American Journal of Physics. 73 (1): 23–27. arXiv:quant-ph/0403010. Bibcode:2005AmJPh..73...23D. doi:10.1119/1.1810153. S2CID 119099861. Davis, Michael J.; Heller, Eric J. (1 July 1981). "Quantum dynamical tunneling in bound states". The Journal of Chemical Physics. 75 (1): 246–254. Bibcode:1981JChPh..75..246D. doi:10.1063/1.441832. ISSN 0021-9606 de Broglie, L. (1970) "The reinterpretation of wave mechanics". Foundations of Physics. 1 (1): 5–15. Bibcode:1970FoPh....1....5D. doi:10.1007/BF00708650 Dirac, P.A.M. (1967). "The Principles of Quantum Mechanics" 4th Edition. Oxford University Press. p. 3. ISBN 9780198520115. Eisberg, R., Resnick, R. (1985). "Quantum physics of atoms, molecules, solids, nuclei, and particles" (2nd ed.) John Wiley & Sons, New York. 3-3, 78. Esaki, Leo (1958). "New Phenomenon in Narrow Germanium p−n Junctions". Physical Review. 109 (2): 603–604. Bibcode:1958PhRv..109..603E. doi:10.1103/PhysRev.109.603 Fink, D.G. (1975) "Electronic Engineers Handbook". New York, NY: McGraw Hill. ISBN 0-07-020980-4. Firmbach M., Fritzsch F., Ketzmerick R., and Bäcker A. (2019). "Resonance-assisted tunneling in four-dimensional normal-form Hamiltonians". Phys. Rev. E 99, 042213.
doi:10.1103/ PhysRevE.99.042213 Fleisch, D., Kinnaman, L. (2015). “A Student's Guide to Waves (Student's Guides)”. Cambridge: Cambridge University Press. doi:10.1017/CBO9781107294929 Gurney, R. W.; Condon, E. U. (1928). "Quantum Mechanics and Radioactive Disintegration". Nature. 122 (3073): 439. Bibcode:1928Natur.122..439G. doi:10.1038/122439a0. S2CID 4090561 Heisenberg, W. (1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik. 43 (3–4): 172–198, Bibcode:1927ZPhy...43..172H, doi:10.1007/BF01397280 Ionescu, A. M.; Riel, H. (2011). "Tunnel field-effect transistors as energy-efficient electronic switches". Nature. 479 (7373): 329– 337. Bibcode:2011Natur.479..329I. doi:10.1038/nature10679. PMID 22094693. S2CID 4322368 Jaeger, G. (2014). "What in the (quantum) world is macroscopic?". American Journal of Physics. 82 (9): 896–905.
214
Bibcode:2014AmJPh..82..896J. doi:10.1119/1.4878358 Kolesnikov, A. I.; Reiter, G. F.; Choudhury N.; Timothy R. P.; Mamontov E., Podlesnyak, A., Ehlers, G., Seel, A. G., Wesolowski, D. J. (2016). "Quantum Tunneling of Water in Beryl: A New State of the Water Molecule". Physical Review Letters. 116 (16): 167802. Bibcode:2016PhRvL.116p7802K. doi:10.1103/PhysRevLett.116.167802 Lerner; T. (1991). “Encyclopedia of Physics” (2nd ed.). New York: VCH. p. 1308. ISBN 978-0-89573-752-6. Libretexts. (2020). 4.9: Quantum-Mechanical Tunneling. Retrieved December 18, 2020, from https://chem. libretexts.org/Courses/University_of_California_Davis/ UCD_Chem_107B:_Physical_Chemistry_for_Life_Scientists/ Chapters/4:_Quantum_Theory/4.09:_Quantum-Mechanical_ Tunneling McEvoy, J. P., Zarate, O. (2004). “Introducing Quantum Theory”. Totem Books. pp. 110–114. ISBN 978-1-84046-577-8 Pan S.H., Hudson E.W., Lang K.M., Eisaki H., Uchida S., Davis J.C. (February 2000) "Imaging the effects of individual zinc impurity atoms on superconductivity in Bi2 Sr2 Ca Cu2O8 + delta". Nature. 403 (6771): 746–50. arXiv:cond-mat/9909365. doi:10.1038/35001534 Pugliano, N. (1992) “Vibration-Rotation-Tunneling Dynamics in Small Water Clusters”. p 6. Lawrence Berkeley Laboratory. Ramos, R., Spierings, D., Racicot, I., Steinberg, A.M. (2020) “Measurement of the time spent by a tunneling atom within the barrier region”. Nature 583, 529–532. https://doi. org/10.1038/s41586-020-2490-7 Razavy, M. (2003) “Quantum Theory of Tunneling”. World Scientific. pp. 4, 462. ISBN 978-9812564887 Richardson, J.O., Pérez C., Lobsiger S., Reid A. A., Temelso B., Shields G.C. (2016). "Concerted hydrogen-bond breaking by quantum tunneling in the water hexamer prism". Science. 351 (6279): 1310–1313. Bibcode:2016Sci...351.1310R. doi:10.1126/ science.aae0012 Schirber M., (2016) "Focus: Water Molecule Spreads Out When Caged". Physics. 9. Retrieved 31 December 2020. Schilling, L. (2006). 
“Direct dynamical tunneling in systems with a mixed phase space”. Schlosshauer, M. (2005). "Decoherence, the measurement problem, and interpretations of quantum mechanics". Reviews of Modern Physics. 76 (4): 1267–1305. arXiv:quantph/0312059. Bibcode:2004RvMP...76.1267S. doi:10.1103/ RevModPhys.76.1267 Schrödinger, E. (1926) "An Undulatory Theory of the Mechanics of Atoms and Molecules" (PDF). Physical Review. 28 (6): 1049–1070. Bibcode:1926PhRv...28.1049S. doi:10.1103/ PhysRev.28.1049 Schrödinger, E. (1928) “Wave mechanics”. pp. 185–206 of “Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique, tenu à Bruxelles du 24 au 29 Octobre 1927”, sous les Auspices de l'Institut International de Physique Solvay, Gauthier-Villars, Paris, pp. 185–186; this translation at p. 447 of Bacciagaluppi, G., Valentini, A. (2009), Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference. Cambridge University Press, Cambridge UK. ISBN 978-0-521-81421-8.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Schrödinger, E. (1995) “The interpretation of quantum mechanics: Dublin seminars (1949–1955) and other unpublished essays”. Ox Bow Press. ISBN 9781881987086. Sen, D. (2014). "The Uncertainty relations in quantum mechanics". Current Science. 107 (2): 203–218. doi: 10.13140/2.1.5183.0406 Tomsovic, S.; Ullmo, D. (1994) "Chaos-assisted tunneling". Physical Review E. 50 (1): 145–162. Bibcode:1994PhRvE..50..145T. doi:10.1103/PhysRevE.50.145. PMID 9961952. Trixler, F. (2013) "Quantum tunnelling to the origin and evolution of life". Current Organic Chemistry. 17 (16): 1758– 1770. doi:10.2174/13852728113179990083. PMC 3768233. PMID 24039543 Voigtländer, B. (2015) "Scanning Tunneling Spectroscopy (STS)". Scanning Probe Microscopy: Atomic Force Microscopy and Scanning Tunneling Microscopy, Nanoscience and Technology, Berlin, Heidelberg: Springer. 309–334. doi:10.1007/978-3-662-45240-0_21, ISBN 978-3-662-45240-0 Walli, R. (2016) "New state of water molecule discovered". Phys.org. Retrieved 31 December 2020. Werndl, C. (2009). "What are the New Implications of Chaos for Unpredictability?". The British Journal for the Philosophy of Science. 60 (1): 195–220. arXiv:1310.1576. doi:10.1093/bjps/ axn053. S2CID 354849 Wilkinson, M. (1986). "Tunnelling between tori in phase space". Physica D: Nonlinear Phenomena. 21 (2): 341–354. Bibcode:1986PhyD...21..341W. doi:10.1016/01672789(86)90009-6. ISSN 0167-2789. Zurek, Wojciech H. (2003). "Decoherence, einselection, and the quantum origins of the classical". Reviews of Modern Physics. 75 (3): 715. arXiv:quant-ph/0105127. Bibcode:2003RvMP...75..715Z. doi:10.1103/ revmodphys.75.715. S2CID 14759237
WINTER 2021
215
Treatment and Early Detection of Cardiogenic Pulmonary Edema
BY SOYEON (SOPHIE) CHO '24
Cover: A micrograph of the alveoli of a patient with pulmonary edema. Source: Wikimedia Commons
Introduction
Pulmonary edema is a condition in which fluids accumulate in the parenchyma of the lung and the pulmonary alveoli. For every hundred thousand people with heart failure, around 75,000 to 83,000 have pulmonary edema (Platz et al., 2015). This life-threatening condition has a low one-year survival rate of about 50%, which has prompted research into the topic (Crane, 2002). Depending on its cause(s), pulmonary edema can be divided into two categories: non-cardiogenic and cardiogenic. Non-cardiogenic pulmonary edema (non-CPE) is caused by direct or indirect injury to the lung tissue or the alveolar capillaries, which can result from pneumonia, sepsis (harmful microbes in the blood), toxic gas inhalation, and other types of lung injury. Specifically, increased permeability of the capillary endothelium allows plasma proteins and fluids to enter the interstitial spaces, and the disrupted epithelial
barriers allow protein-rich fluids to fill the alveoli (Murphy and Roubinian, 2015). On the other hand, cardiogenic pulmonary edema (CPE) stems from heart problems, and it conventionally refers to increased pressure on the left ventricle (Alwi, 2010). CPE is a major clinical manifestation of congestive heart failure and other heart problems, including coronary artery disease (plaque buildup in the arteries), chronic decline in the heart's ability to pump blood, cardiac arrhythmias (irregular heart rhythms), valvular heart disease (damage to the heart's valves), and myocardial disease (disease of the heart muscle). These types of heart disease increase pulmonary venous pressure and, in response, pulmonary capillary pressure. This increases fluid filtration from the pulmonary capillaries into the spaces between the pulmonary capillaries and the alveolar sacs. As the fluids in the interstitial spaces accumulate
Figure 1: An x-ray image of a patient with interstitial and alveolar edema in the circled region. The region shown by the arrow is where excessive fluids in the lungs, or pleural effusion, have accumulated. Source: Wikimedia Commons
over time, they enter the alveoli through the epithelial barrier via bulk flow (Murphy and Roubinian, 2015). Because CPE involves increased hydrostatic pressure in the capillaries rather than increased permeability, CPE patients have protein-poor fluids in the alveoli (Murphy and Roubinian, 2015). This paper will focus on cardiogenic pulmonary edema, given that it is connected to heart problems and offers broad research opportunities for novel preventive treatments (Platz et al., 2015). CPE can be further categorized into acute and chronic forms, each of which has characteristic symptoms used for diagnosis. Acute cardiogenic pulmonary edema (ACPE) usually progresses over hours to days, whereas chronic cardiogenic pulmonary edema (CCPE) has a longer time frame of days to weeks. A common symptom of ACPE and CCPE is shortness of breath, although it is more severe in ACPE given the condition's rapid onset. Depending on its cause, ACPE is associated with additional symptoms such as chest pain (indicative of myocardial infarction), dizziness, and increased heart rate (Guntupalli, 1984).
Treatment of CPE
This section reviews various treatment strategies for cardiogenic pulmonary edema. First, a CPE patient needs to undergo initial airway clearance through airway assessment and, if necessary, ventilatory support. Throughout these procedures, it is important that supplemental oxygen is provided to prevent hypoxemia, which occurs when SpO2 (peripheral capillary oxygen saturation) falls below 90% (Iqbal and Gupta, 2020). Following these airway procedures, both non-invasive and invasive strategies can be used to treat and manage CPE. Non-invasive treatment includes loop diuretics, nitroglycerin, and non-invasive ventilation. Invasive treatment includes the intra-aortic balloon pump (IABP) and ultrafiltration (UF).
"Pulmonary edema is a condition in which fluids accumulate in the parenchyma of the lung and the pulmonary alveoli. For every hundred thousand people with heart failure, around 75,000 to 83,000 have pulmonary edema."
Loop diuretics are substances that increase diuresis, or the excretion of fluids, by inhibiting the reabsorption of sodium chloride (NaCl) in the kidney's loop of Henle (Ellison and Felker, 2017). The most common loop diuretic is furosemide; others include torsemide and bumetanide. Loop diuretics reduce NaCl reabsorption by inhibiting the sodium-potassium-chloride transporter in the loop of Henle, a part of the kidney that recovers water and reabsorbs NaCl
Figure 2: Chemical structure of furosemide, a high-ceiling diuretic commonly used to treat the accumulation of fluids, such as pulmonary edema. Source: Wikimedia Commons
"One invasive treatment strategy is ultrafiltration (UF). UF passes blood through a hemofilter, which consists of a semipermeable membrane that forms a pressure gradient."
(Huxel et al., 2020). Thus, diuresis reduces fluid levels in the body and can act as a treatment for the fluid accumulation in CPE (Wittner et al., 1991). Furosemide is commonly used for treating CPE, and it can reduce in-hospital mortality rates for patients hospitalized with acute heart failure (Matsue et al., 2017). Other drugs include nitrates such as nitroglycerin and isosorbide dinitrate. Nitrates are added to reduce venous pressure by dilating veins more than arteries. Because nitroglycerin has a shorter half-life than isosorbide dinitrate, it allows physicians to monitor hypotension and reduce doses if necessary (Iqbal and Gupta, 2020). Non-invasive ventilation (NIV) is another treatment implemented by physicians when a CPE patient remains hypoxemic during treatment. NIV falls under the larger category of oxygen therapy, which includes the nasal cannula, face mask, NIV, and mechanical ventilation with endotracheal intubation, or the insertion of a tube into the windpipe to assist ventilation (Doyle, 2015). Previously, mechanical ventilation was the principal means of supplying higher concentrations of oxygen to CPE patients, but newer types of oxygen therapy such as non-invasive ventilation have decreased the need for endotracheal intubation (Bello et al., 2018). A clinical trial of ACPE patients compared patient outcomes for non-invasive ventilation and standard, invasive procedures for supplementing oxygen (Gray et al., 2008). On average, non-invasive ventilation improved patient-reported dyspnea, or labored breathing, one hour after treatment. Other
outcomes such as heart rate and hypercapnia (the accumulation of carbon dioxide in the bloodstream) also improved, demonstrating that this non-invasive strategy is important for patient outcomes (Gray et al., 2008). To clarify, this trial included continuous positive airway pressure (CPAP), a related type of non-invasive treatment, into the category of NIV. Both NIV and CPAP improve outcomes for patients with CPE, although CPAP differs from NIV in that it keeps a single positive airway pressure and does not aid inhalation (Bello et al., 2018). One invasive treatment strategy is ultrafiltration (UF). UF passes blood through a hemofilter, which consists of a semipermeable membrane that forms a pressure gradient. This process removes sodium and water from blood plasma fluids, similar to the function of loop diuretics. UF is an effective treatment method for ACPE patients who are unresponsive to non-invasive drug treatment, as it clears alveolar edema without changes in the plasma osmolarity and reduces the need for invasive mechanical ventilation (Susini et al., 1990). It is also known to decrease pulmonary capillary wedge pressure (PCWP), which is larger than 20 mmHg for ACPE patients (Iqbal and Gupta, 2020). Furthermore, not only is it a treatment method for patients with ACPE, but it is also used to improve the clinical outcomes of heart failure patients by restoring diuresis and improving cardiac output and congestion (Marenzi et al., 1993). Another invasive treatment strategy is the intra-aortic balloon pump (IABP). IABP involves inserting a balloon pump in the patient’s aorta
that improves coronary flow by inflating and deflating in time with the patient's pulse. This strategy addresses the hypotension caused by myocardial infarction (MI), and it is not a first-line therapy due to its invasive nature. Because MI is often accompanied by CPE, it is included here as a strategy to improve clinical outcomes for patients with both MI and CPE. IABP improves cardiac function temporarily, giving physicians time to implement other treatment strategies such as heart transplants (Iqbal and Gupta, 2020). Clinical research on pulmonary edema patients under VA-ECMO (venoarterial extracorporeal membrane oxygenation, or temporary mechanical support of the heart and lungs through tubes called cannulae) showed that patients treated with IABP had a lower risk of pulmonary edema and mortality (Bréchot et al., 2017). Bréchot et al. also indicated that IABP implementation was associated with more days without mechanical ventilation, demonstrating that IABP can decrease pressure on the left ventricle and reduce negative outcomes related to ECMO and CPE (2017).
Limitations in Treatment Strategies
Limitations in current treatment strategies include reliance on constantly changing evidence and the high mortality rates of CPE. First of all, certain treatment strategies may not be effective in improving patient outcomes in the emergency department (ED), which can lead to life-threatening situations.
For example, a literature review by Graham discusses the limitations of some non-invasive treatments (excluding non-invasive ventilation) (Graham, 2004). Several studies in this review demonstrate that drugs commonly used in EDs do not decrease intensive care unit (ICU) admission. Sacchetti et al. reviewed the case files of 181 patients with severe pulmonary edema, evaluating the use of morphine, furosemide, nitrates, and other drugs used in the ED. Negative outcomes in the ED were represented by the rates of ICU admission and endotracheal intubation following drug administration in emergency situations. The study included patients admitted to the ICU for either acute pulmonary edema or congestive heart failure with respiratory failure. It demonstrated that the immediate administration of a loop diuretic like furosemide did not significantly affect ICU admission or endotracheal intubation. The study presents a possible explanation: a loop diuretic's diuretic effects in the kidney increase after the "respiratory crisis is resolved" and more blood flows to the kidney (Sacchetti et al., 1999).
Figure 3: An illustration of the IABP, located in the aorta in the image. Source: Wikimedia Commons
The same study also demonstrated that morphine administration increased the chances of ICU admission and endotracheal intubation, signifying that morphine's negative effects on the nervous system outweigh its vasodilatory effects and that it may need to be replaced by other vasodilators such as nitrates (Sacchetti et al., 1999). One limitation of this study is that the standards by which its cases of severe pulmonary edema were selected are unclear (Sacchetti et al., 1999). Nevertheless, it provides compelling evidence and corroborates the findings of the studies in Graham's review (Graham, 2004).
"While ineffective in late CPE, loop diuretics can be effective for early treatment of CPE due to their vasodilatory effect rather than their diuretic effect, meaning large doses of loop diuretics would not be effective for many patients in the ED."
While ineffective in late CPE, loop diuretics can be effective for early treatment of CPE due to their vasodilatory effect rather than their diuretic effect, meaning large doses of loop diuretics would not be effective for many patients in the ED (Figueras et al., 1978). For example, low doses of furosemide with frequent administrations of high-dose nitrates produced better results than high doses of furosemide with frequent administrations of low-dose nitrates, supporting the efficacy of nitrates as a first-line drug for the emergency treatment of ACPE (Cotter et al., 1998).
"Given that CPE is closely related to heart problems such as coronary artery disease and congestive heart failure, it is important for physicians to inform patients of the risk factors for heart disease. These include abstaining from alcohol and smoking, blood pressure control, following a low cholesterol and salt diet, exercise, and many more."
Another limitation of treating CPE is that its severe outcomes make it more difficult to treat. For example, a clinical trial by Wiener et al. studied the mortality rates over different periods of time for a patient population diagnosed with ACPE and coronary artery disease. The mortality rate after one year was around 30% for all patients. About half of them also had acute myocardial infarction, but this factor was accounted for and did not affect mortality rates. Rather, a history of congestive heart failure was the most important factor in increasing mortality risk. A follow-up after six years showed a mortality rate of 85% for patients who had a history of congestive heart failure and ACPE before the clinical trial, compared to 45% for patients experiencing their first episode of heart failure and ACPE (Wiener et al., 1987). A similar trial demonstrated that acute pulmonary edema (APE) has a high in-hospital mortality rate of 12% and noted that most APE patients in the study were elderly patients with hypertension, diabetes, ischemic heart disease (IHD), or a history of APE (Roguin et al., 2000). These studies support the claim that ACPE is especially life-threatening due to its acute nature and rapid progression; the condition often compounds the existing threats of underlying diseases. Furthermore, the severe outcomes of CPE are tied to the limitations of current treatment methods. Early diagnosis of ACPE is especially difficult because clinical signs such as shortness of breath manifest in the later stages and progress rapidly over hours to days. Depending on underlying diseases such as hypertension, long-term survival for CPE patients can be difficult (Iqbal and Gupta, 2020). Therefore, early diagnosis and treatment are vital to decreasing mortality rates and the need for emergency treatment of CPE.
It should be noted that many clinical trials have observed patient outcomes and mortality rates specifically for ACPE patients (often those with heart failure). A relative gap in clinical trials for CCPE suggests that more research is needed to characterize the differences between acute and chronic care for CPE. Nevertheless, similar symptoms are expected to occur in CCPE patients to different degrees, and there is much potential for improvement in CPE treatments.
Early Detection of CPE Given that CPE is closely related to heart problems such as coronary artery disease and congestive heart failure, it is important for physicians to inform patients of the risk factors for heart disease. These include abstaining
from alcohol and smoking, controlling blood pressure, following a low-cholesterol, low-salt diet, and exercising, among others (Iqbal and Gupta, 2020). Furthermore, current treatment strategies limit physicians' ability to address CPE across the patient population, because many are implemented only after the clinical signs of advanced stages appear. In response to these limitations, novel preventive measures specifically for CPE have emerged, allowing for early detection. One of these measures focuses on internal thoracic impedance (ITI), which reflects lung impedance, a clinical indicator of CPE. Impedance is defined as the electrical resistance to a weak current. CPE causes fluids to accumulate in the lung spaces, lowering the electrical resistance of the lungs and thereby lowering impedance. Clinical trials have demonstrated that patients whose ITI decreased by more than 12% from baseline levels developed CPE (Shochat et al., 2006). ACPE is therefore easier to treat before ITI falls by more than 12%; beyond that point, patients enter a cycle of worsening physical symptoms. Declining arterial oxygen saturation damages the heart muscle, which weakens the heart's ability to maintain blood flow around the body; this can worsen CPE and decrease oxygen saturation even further (Shochat et al., 2012). Because ITI is not commonly tested, it is difficult to diagnose ACPE before its advanced stages. Nevertheless, recent studies on ITI sensors show promise, since these sensors monitor ITI continuously and may diagnose CPE before significant changes in common clinical signs such as heart rate, blood pressure, and breathing difficulties are observed. In a 2012 study, wearable sensors monitoring ITI were tested for their effectiveness in the early diagnosis of CPE (Charach et al., 2012).
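The 12% threshold rule described above can be sketched in a few lines of Python. This is only an illustration of the decision criterion; the impedance values are hypothetical, not clinical data:

```python
def iti_alert(baseline_iti, current_iti, threshold=0.12):
    """Flag possible CPE onset when internal thoracic impedance (ITI)
    has fallen more than `threshold` (12%) below its baseline value."""
    if baseline_iti <= 0:
        raise ValueError("baseline ITI must be positive")
    relative_drop = (baseline_iti - current_iti) / baseline_iti
    return relative_drop > threshold

# Hypothetical readings: a 14% drop triggers an alert, a 6% drop does not
print(iti_alert(50.0, 43.0))  # True
print(iti_alert(50.0, 47.0))  # False
```

In practice a monitor such as the EGM samples ITI repeatedly during hospitalization, so a rule like this would be evaluated against each new reading.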
This study used the non-invasive Edema Guard Monitor (EGM), which focuses on the right lung (since the left lung overlaps with the heart, which contains its own fluids) and yields accurate ITI measurements (Charach et al., 2015). First, the patient population, consisting of patients hospitalized after acute myocardial infarction, was separated into two groups. Physicians gave standard treatment only to patients in the first group who showed clinical signs of CPE (Group 1). For the second group, ITI was measured by the EGM at 30-minute intervals during hospitalization. Then, patients whose ITI decreased by more than 12% were subject
to early preventive treatment consisting of furosemide, additional oxygen, and an infusion of nitrates (Group 2). Results showed that CPE developed in all patients in Group 1 but in only 8% of patients in Group 2 (Charach et al., 2012). This study demonstrated that early diagnosis based on ITI measurements can improve patient outcomes by allowing treatments to be administered before the advanced stages of CPE. Other studies on pulmonary edema sensing devices have focused on radiofrequency (RF). One study proposed a different type of wearable monitoring sensor for pulmonary edema (Salman et al., 2014). Instead of monitoring lung impedance, the sensor excites its active port with a 40 MHz RF signal, which propagates electric fields through the surrounding deep tissue. Fifteen passive ports collect data about these electric fields, called fringing fields due to the distance between the active port and the deep tissue, in the form of amplitudes of scattering parameters (S-parameters). The S-parameters are then used to calculate lung permittivity. To test the efficacy of the sensor, simulated lung permittivities for normal and pulmonary edema states were employed. The sensor was tested on a human phantom, which modeled a human torso and contained a porcine lung; no human subjects were used, and the phantom simulated in-vivo tissues using gels. The design also reduced the effect of the tissue layers between the lung and the sensor (i.e., skin, heart, muscle) on the calculated values by using 17 electrodes attached to the torso, allowing for deep-tissue detection. Comparison with directly measured permittivity values showed that the values calculated from the wearable sensor varied by only 11%, and the study demonstrated that the sensors could diagnose pulmonary edema (Salman et al., 2014). Furthermore, different types of wearable sensors may help expand CPE prevention. The 2014 study by Salman et al.
tested the feasibility of integrating its wearable RF sensor with a medical sensing body-area network (MS-BAN). MS-BAN is a system that allows for real-time monitoring and transfer of health data, and this study demonstrated that a wireless MS-BAN could transfer real-time measurements of lung irregularities (Salman et al., 2014). While this study did not address the costs of distributing the sensor for wider use, improvements in the sensitivity
and size of the sensor could bring MS-BAN into telehealth. If retail clinics rented out these wearable sensors, people would have easier access to a preventive technique and physicians would be able to monitor lung irregularities more accurately. This application of telehealth could also reduce the costs of emergency care for ACPE. These two studies provide insight into new detection and prevention methods for CPE, since current diagnosis is limited to monitoring clinical signs like heart rate, temperature, and breathing. RF wearable sensors and ITI sensors are two applications of deep-tissue detection that can enable earlier diagnosis and prevention of CPE, complementing existing non-invasive treatments and reducing cases of severe CPE.
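The validation step in the RF-sensor study above compares sensor-derived permittivity against direct measurement. The following sketch shows that comparison as a relative-deviation calculation; the permittivity values are hypothetical, chosen only to illustrate a deviation on the order of the roughly 11% reported by Salman et al.:

```python
def relative_deviation(calculated, measured):
    """Fractional deviation of a sensor-derived value from a direct measurement."""
    return abs(calculated - measured) / abs(measured)

# Hypothetical lung permittivity values (dimensionless relative permittivity)
calculated_permittivity = 44.5   # value inferred from S-parameter amplitudes
measured_permittivity = 50.0     # value measured directly on the phantom
print(round(relative_deviation(calculated_permittivity, measured_permittivity), 2))  # 0.11
```

A small relative deviation like this is what supports the claim that the wearable sensor can stand in for direct permittivity measurement.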
Conclusion
Treatments for CPE range from loop diuretics and nitroglycerin to non-invasive ventilation, ultrafiltration, and IABP. Non-invasive ventilation is a newer method compared to standard oxygen therapies. However, changing evidence on various treatments and the severe patient outcomes of CPE pose limitations on its treatment. Recent studies on preventive measures for cardiogenic pulmonary edema include exciting advancements and technological innovations. One measure is the ITI sensor, which measures ITI to indicate lung impedance, allowing for early detection. Another preventive measure is the wearable RF sensor, which measures the permittivity of the lung using an RF signal. A third strategy is the MS-BAN system, which extends the RF sensor by transferring real-time measurements through a wireless system. Beyond these preventive measures, it should be remembered that CPE is a life-threatening condition in its own right, and future work could further isolate CPE's effects on patients and develop CPE-specific treatments.
References
Alwi, I. (2010). Diagnosis and management of cardiogenic pulmonary edema. Acta Medica Indonesiana, 42(3), 176–184.
Bello, G., De Santis, P., & Antonelli, M. (2018). Non-invasive ventilation in cardiogenic pulmonary edema. Annals of Translational Medicine, 6(18), 355. https://doi.org/10.21037/atm.2018.04.39
Bréchot, N., Demondion, P., Santi, F., Lebreton, G., Pham, T., Dalakidis, A., Gambotti, L., Luyt, C.-E., Schmidt, M., Hekimian, G., Cluzel, P., Chastre, J., Leprince, P., & Combes, A. (2018). Intra-aortic balloon pump protects against hydrostatic pulmonary oedema during peripheral venoarterial-extracorporeal membrane oxygenation. European Heart Journal: Acute Cardiovascular Care, 7(1), 62–69. https://doi.org/10.1177/2048872617711169
Charach, G., Rabinovich, P., Grosskopf, I., & Weintraub, M. (2001). Transthoracic monitoring of the impedance of the right lung in patients with cardiogenic pulmonary edema. Critical Care Medicine, 29(6), 1137–1144. https://doi.org/10.1097/00003246-200106000-00008
Charach, G., Rubalsky, O., Charach, L., Rabinovich, A., Argov, O., Rogowski, O., & George, J. (2015). Internal Thoracic Impedance—A Useful Method for Expedient Detection and Convenient Monitoring of Pleural Effusion. PLOS ONE, 10(4), e0122576. https://doi.org/10.1371/journal.pone.0122576
Cleland, J. G. F., Yassin, A. S., & Khadjooi, K. (2010). Acute heart failure: Focusing on acute cardiogenic pulmonary oedema. Clinical Medicine (London, England), 10(1), 59–64. https://doi.org/10.7861/clinmedicine.10-1-59
Cotter, G., Metzkor, E., Kaluski, E., Faigenberg, Z., Miller, R., Simovitz, A., Shaham, O., Marghitay, D., Koren, M., Blatt, A., Moshkovitz, Y., Zaidenstein, R., & Golik, A. (1998). Randomised trial of high-dose isosorbide dinitrate plus low-dose furosemide versus high-dose furosemide plus low-dose isosorbide dinitrate in severe pulmonary oedema. The Lancet, 351(9100), 389–393. https://doi.org/10.1016/S0140-6736(97)08417-1
Crane, S. D. (2002). Epidemiology, treatment and outcome of acidotic, acute, cardiogenic pulmonary oedema presenting to an emergency department. European Journal of Emergency Medicine, 320–324. https://doi.org/10.1097/00063110-200212000-00005
Doyle, G. R. (2015, November 23). 5.5 Oxygen Therapy Systems – Clinical Procedures for Safer Patient Care. Pressbooks. https://opentextbc.ca/clinicalskills/chapter/5-5oxygen-therapy-systems/
Ellingsrud, C., & Agewall, S. (2016). Morphine in the treatment of acute pulmonary oedema – Why? International Journal of Cardiology, 202, 870–873. https://doi.org/10.1016/j.ijcard.2015.10.014
Ellison, D. H., & Felker, G. M. (2017). Diuretic Treatment in Heart Failure. New England Journal of Medicine, 377(20), 1964–1975. https://doi.org/10.1056/NEJMra1703100
Figueras, J., & Weil, M. H. (1978). Blood volume prior to and following treatment of acute cardiogenic pulmonary edema. Circulation, 57(2), 349–355. https://doi.org/10.1161/01.CIR.57.2.349
Graham, C. A. (2004). Pharmacological therapy of acute cardiogenic pulmonary oedema in the emergency department. Emergency Medicine Australasia, 16(1), 47–54. https://doi.org/10.1111/j.1742-6723.2004.00534.x
Gray, A., Goodacre, S., Newby, D. E., Masson, M., Sampson, F., & Nicholl, J. (2008). Noninvasive Ventilation in Acute Cardiogenic Pulmonary Edema. New England Journal of Medicine, 359(2), 142–151. https://doi.org/10.1056/NEJMoa0707992
Guntupalli, K. K. (1984). Acute pulmonary edema. Cardiology Clinics, 2(2), 183–200.
Huxel, C., Raja, A., & Ollivierre-Lawrence, M. D. (2020, May 28). Loop Diuretics. Retrieved February 24, 2021, from https://www.ncbi.nlm.nih.gov/books/NBK546656/
Iqbal, M. A., & Gupta, M. (2020, December 16). Cardiogenic Pulmonary Edema. Retrieved February 24, 2021, from https://www.ncbi.nlm.nih.gov/books/NBK544260/
Marenzi, G., Grazi, S., Giraldi, F., Lauri, G., Perego, G., Guazzi, M., Salvioni, A., & Guazzi, M. D. (1993). Interrelation of humoral factors, hemodynamics, and fluid and salt metabolism in congestive heart failure: effects of extracorporeal ultrafiltration. The American Journal of Medicine, 94(1), 49–56. https://doi.org/10.1016/0002-9343(93)90119-a
Matsue, Y., Damman, K., Voors, A. A., Kagiyama, N., Yamaguchi, T., Kuroda, S., Okumura, T., Kida, K., Mizuno, A., Oishi, S., Inuzuka, Y., Akiyama, E., Matsukawa, R., Kato, K., Suzuki, S., Naruke, T., Yoshioka, K., Miyoshi, T., Baba, Y., … Kitai, T. (2017). Time-to-Furosemide Treatment and Mortality in Patients Hospitalized With Acute Heart Failure. Journal of the American College of Cardiology, 69(25), 3042–3051. https://doi.org/10.1016/j.jacc.2017.04.042
Murphy, E., & Roubinian, N. (2015). Transfusion-associated circulatory overload (TACO): Prevention, management, and patient outcomes. International Journal of Clinical Transfusion Medicine, 17. https://doi.org/10.2147/IJCTM.S77343
Platz, E., Jhund, P. S., Campbell, R. T., & McMurray, J. J. (2015). Assessment and prevalence of pulmonary oedema in contemporary acute heart failure trials: A systematic review. European Journal of Heart Failure, 17(9), 906–916. https://doi.org/10.1002/ejhf.321
Roguin, A., Behar, D. M., Ami, H. B., Reisner, S. A., Edelstein, S., Linn, S., & Edoute, Y. (2000). Long-term prognosis of acute pulmonary oedema – An ominous outcome. European Journal of Heart Failure, 2(2), 137–144. https://doi.org/10.1016/S1388-9842(00)00069-6
Sacchetti, A., Ramoska, E., Moakes, M. E., McDermott, P., & Moyer, V. (1999). Effect of ED management on ICU use in acute pulmonary edema. The American Journal of Emergency Medicine, 17(6), 571–574. https://doi.org/10.1016/S0735-6757(99)90198-5
Salman, S., Wang, Z., Colebeck, E., Kiourti, A., Topsakal, E., & Volakis, J. L. (2014). Pulmonary Edema Monitoring Sensor With Integrated Body-Area Network for Remote Medical Sensing. IEEE Transactions on Antennas and Propagation, 62(5), 2787–2794. https://doi.org/10.1109/TAP.2014.2309132
Shochat, M., Charach, G., Meyler, S., Meisel, S., Weintraub, M., Mengeritsky, G., Mosseri, M., & Rabinovich, P. (2006). Prediction of cardiogenic pulmonary edema onset by monitoring right lung impedance. Intensive Care Medicine, 32(8), 1214–1221. https://doi.org/10.1007/s00134-006-0237-z
Shochat, M., Shotan, A., Blondheim, D. S., Kazatsker, M., Dahan, I., Asif, A., Shochat, I., Rabinovich, P., Rozenman, Y., & Meisel, S. R. (2012). Usefulness of Lung Impedance-Guided Pre-Emptive Therapy to Prevent Pulmonary Edema During ST-Elevation Myocardial Infarction and to Improve Long-Term Outcomes. The American Journal of Cardiology, 110(2), 190–196. https://doi.org/10.1016/j.amjcard.2012.03.009
Susini, G., Zucchetti, M., Bortone, F., Salvi, L., Cipolla, C. M., Rimondini, A., & Sisillo, E. (1990). Isolated ultrafiltration in cardiogenic pulmonary edema. Critical Care Medicine, 18(1), 14–17. https://doi.org/10.1097/00003246-199001000-00004
Vismara, L. A., Leaman, D. M., & Zelis, R. (1976). The effects of morphine on venous tone in patients with acute pulmonary edema. Circulation, 54(2), 335–337. https://doi.org/10.1161/01.cir.54.2.335
Wiener, R. S., Moses, H. W., Richeson, J. F., & Gatewood, R. P. (1987). Hospital and long-term survival of patients with acute pulmonary edema associated with coronary artery disease. The American Journal of Cardiology, 60(1), 33–35. https://doi.org/10.1016/0002-9149(87)90979-9
Wittner, M., Di Stefano, A., Wangemann, P., & Greger, R. (1991). How do loop diuretics act? Drugs, 41(Suppl 3), 1–13. https://doi.org/10.2165/00003495-199100413-00003
WINTER 2021
223
Carbon Recycling: A Novel Pathway to Renewable Fuel Production
BY SPRIHA PANDEY '24
Cover: Emissions from a Factory
Source: Unsplash
Introduction
Carbon Recycling and Its Necessity
Ever since the industrial revolution, fossil fuels have been the major drivers of human systems – powering engines, machines, and heating alike. However, the burning of coal, petroleum, and natural gas, along with the consistent rise of anthropogenic carbon dioxide (CO2) emissions that outpaces the natural carbon cycle, confronts the world with two issues: fossil fuel scarcity and global warming driven by heightened greenhouse gas emissions (Goeppert et al., 2014). In fact, the atmospheric CO2 concentration increased by an average of 2.17 ppm per year from 2000 to 2017, despite emission mitigation policies, compared to 1.31 ppm per year from 1960 to 2000. This indicates that present efforts do not sufficiently satisfy our climate goals (Goeppert et al., 2014).
In recent years, therefore, carbon recycling – especially in the form of CO2 capture and utilization (CCU) – has been identified as an increasingly important technology for mitigating CO2 emissions. CCU focuses on utilizing the large available resource of CO2 as a raw material for extracting carbon and recycling it. It also works to reduce CO2 emissions by converting captured CO2 directly or indirectly into other chemicals and fuel sources such as methanol (Wang et al., 2020). In this manner, CCU (also called carbon conversion) tackles fuel scarcity as well. Hence, carbon dioxide conversion is vital to building a carbon-based economy, primarily because CO2 is a non-toxic, abundant C1 feedstock that can be converted into a variety of value-added chemicals. There is also value in using carbon dioxide in its raw form, where it can be commercially utilized in beverages, food protection, water treatment, enhanced oil recovery, and chemical (including
urea and polymer) production (Wang et al., 2020). The first step in the CCU process is to capture carbon emissions from point sources such as industrial plants or power stations (Hu et al., 2013). One method of conversion is chemical, wherein CO2 is chemically transformed into high-demand fuels and urea. For example, the conversion of CO2 into methanol is carried out through hydrogenation. To produce methanol, CO2 and H2 are first converted into water gas (a mixture of CO and H2); the CO formed then undergoes further hydrogenation to produce methanol (Wang et al., 2020). This is called methanol synthesis.
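The two-step route just described corresponds to a standard pair of reactions – shown here as a schematic summary for the reader, not reproduced from Wang et al.: the reverse water-gas shift followed by CO hydrogenation, giving an overall conversion of CO2 and hydrogen to methanol and water.

```latex
\begin{align*}
\mathrm{CO_2} + \mathrm{H_2} &\rightleftharpoons \mathrm{CO} + \mathrm{H_2O}
  &&\text{(reverse water-gas shift)}\\
\mathrm{CO} + 2\,\mathrm{H_2} &\longrightarrow \mathrm{CH_3OH}
  &&\text{(CO hydrogenation)}\\
\mathrm{CO_2} + 3\,\mathrm{H_2} &\longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O}
  &&\text{(net methanol synthesis)}
\end{align*}
```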
Methods of Carbon Conversion

Catalysts play an important role in all hydrogenation reactions, controlling both yield and conversion rate. Copper is the most commonly used catalyst, while zinc, titanium, zirconium, aluminium, silicon, chromium, and gallium are usually used as additives to further improve the reaction performance. For example, a Cu/Zn catalyst can enhance reaction activity and methanol selectivity by providing a larger surface for the reactants to react on; Zn is used as a promoter to increase the dispersion of copper (Lim et al., 2009). The conversion to ethanol is carried out in a similar fashion. Urea is another important industrial and agricultural compound obtained through hydrogenation. Urea is non-toxic, and over 90% of synthesized urea is used as fertilizer. Urea is commonly produced by the reaction between
ammonia (NH3) and CO2. In the reaction, NH3 is generally synthesized by reacting coal or natural gas with air and water; CO2 is produced as a by-product and can then be further used to synthesize urea with NH3 (Wang et al., 2020). The industrial production of urea began as early as the 1920s, and the process has been economically refined over decades. To produce urea, it is ideal to capture CO2 from power plants whose emissions (flue gas) consist primarily of nitrogen, CO2, and water vapor. The capture of CO2 and the removal of moisture from the combustion gases can produce N2 with high purity (Koohestanian et al., 2018). The reaction of N2 with H2 produces more NH3 for the subsequent reactions, keeping the process running. Hence, the process possesses environmental and economic incentives.

Ways to productively utilize carbon, however,
are not limited to chemical conversions. In fact, other economically viable options to convert carbon emissions principally include biological fixation, mineralization, and electrochemical and photocatalytic reduction. Biological CO2 fixation is a natural process through which atmospheric carbon is converted to biofuels. In nature, the process is carried out by microalgae via photosynthesis. Due to evolving technologies like artificial microalgae cultivation, this process can now be repeated in laboratories at large scales. Moreover, if the CO2 released from bioenergy sources can be recycled and reused through the cultivation of microalgae, net CO2 emissions could be approximately zero. In fact, additional energy is not required, and there is no secondary pollution generated, making microalgae
Figure 2: A diagram of the carbon capture and sequestration process Source: Wikimedia Commons
cultivation an extremely eco-friendly method. The reasons for choosing microalgae over other plants include its relatively high photosynthetic conversion of the captured CO2, fast conversion rate, and high capacity for the production of various biofuels (Huang & Tan, 2014).

Another feasible method to reduce emissions is mineralization, in which carbon dioxide reacts with mineral resources or industrial wastes containing metal ions to produce carbonates. The suitable materials include natural minerals such as serpentine and wollastonite, alkaline solid wastes such as steelmaking slag, and the fly ash from incinerated urban solid waste (Saito & Murata, 2004). Ca and Mg are typically the most abundant metals used in this process. However, since these are alkaline earth metals, and thus extremely reactive, they occur naturally in the form of minerals – giving the mineralization process its name. The carbonates are then used in various industries to produce detergents, fertilizers, explosives, and more. The cost of this process can be reduced by locating mineral sources near power plants, minimizing transportation costs. Furthermore, a great amount of heat is released in the reaction, which requires an outlet or use; at the same time, the reaction occurs at high temperatures, demanding high energy input. If the heat can be collected and recycled, the net energy demand is lowered, improving overall performance (Wang et al., 2020).
Finally, electrochemical and photocatalytic reduction processes are used extensively for small-scale CO2 conversion. The former goes through three steps: (1) CO2 adsorption on the electrocatalyst; (2) C–H bond formation; (3) product desorption from the electrocatalyst (Yu et al., 2019). The electrocatalyst here is usually a transition metal which has several active sites to accommodate electrons. In this manner, several carbon-based products are formed from the captured emissions, including methanol, formic acid, and acetone. Photocatalytic reduction forms similar products; however, the reaction takes place by exciting electrons with the help of light. As a semiconductor material, TiO2 is the most thoroughly investigated catalyst in photocatalysis for reducing CO2. The electrons in this catalyst can only be activated by ultraviolet light, making the process easily controllable in comparison to mineralization (Wang et al., 2020). In this manner, carbon dioxide – which is present in excess – can act as an effective raw material for several industrial processes.
Effects of Impurities on Carbon Capture and Utilization

The merits of CCU necessitate the question: if the technology is available, why isn't it being widely used? As a process, CCU is still under development, making its cost a topic of contention. Studies have compared the costs of preparing methanol through CCU to the costs of production of traditional fuels. To make
Figure 3: Marine Carbon Cycle Source: Wikimedia Commons
CCU competitive, the price of methanol would have to nearly double (Pérez-Fortes et al., 2016). Hence, CCU must be further refined to increase its cost efficiency. However, given continually depleting fuel resources, switching to CCU may be both urgent and inevitable. The key component affecting the economic and environmental efficiency of carbon recycling processes is the presence of impurities. These include traces of other gases in the captured carbon dioxide stream and the presence of other metals in powdered catalysts. Most CO2 conversion processes require a catalyst, and impurities may affect its performance. For example, sulfur and chlorine have toxic effects on the catalyst in methanol synthesis, and the presence of H2O may deactivate the catalyst and thus slow or prevent the reaction (Wang et al., 2020). Sulfur can also cause catalyst deactivation in dehydrogenation processes. Additionally, the influence of impurities in flue gas on microalgae has been studied. Flue gas usually contains CO2 along with NOx and SOx. SOx greatly inhibits the growth of microalgae, and growth ceases entirely when the SOx concentration reaches 50 ppm (Yanagi et al., 1995). Measurements of the ranges at which other impurities hinder catalyst performance, however, are limited. Impurity effects change with CO2 concentration, catalyst concentration, pH, and temperature, among other parameters. Further research into this
arena is necessary to refine these methods and reduce their respective costs. Another aspect which hinders widespread use of carbon recycling is transportation. Industries are usually concentrated in certain areas, and the transportation of any gas over large distances is difficult because it needs to be stored at specific temperatures and pressures. A proposed alternative source of carbon for synthetic fuel production is marine CO2. The process involves a combination of existing technologies that use solar energy to recycle atmospheric CO2 into a liquid fuel (Patterson et al., 2019). Marine extraction is attractive due to its proximity to densely populated coastal cities, coupled with the low cost of ship-based transportation. The present fractional concentration of CO2 in the atmosphere is approximately 400 ppm, while the concentration in seawater is 125 times that in air. Thus, for the same amount of carbon, a substantially larger volume of air must be captured than of water (Sadiq et al., 2020). Hence, carbon extraction from seawater for renewable fuel production is lucrative in more ways than one.
Organic Carbon Recycling in Marine Environments

The pH-dependent chemistry of dissolved CO2 suggests that extraction may be accomplished by making the seawater more acidic via photovoltaic reduction processes. This is done by applying an electrical potential which
drives OH− and Cl− anions toward the anode and moves H+ cations from the “base” channels to accumulate in the “acid” channels. The acid channel pH is thus reduced. Below pH 5, CO2 comes out of solution and is collected (Patterson et al., 2019). The extracted carbon dioxide is then converted to value-based fuels like methanol using a process similar to the photovoltaic reduction described previously. Solar panels are used to harness light energy, and researchers suggest that large-scale conversion could be carried out by solar panels floating atop ocean waters. In fact, organic carbon present in sediments has been recycled in a similar fashion in the Baltic Sea for decades (Nilsson et al., 2019). An example of carbon in marine environments can also be seen in figure 1.

However, marine carbon recycling, too, is limited in scope. It is only feasible when the energy will be employed in coastal regions, and it is hugely dependent on the hours of sunlight a place receives per day. Using a method of carbon conversion not based on solar energy can become too resource-intensive. Still, the process holds significant environmental promise. The average CO2 emission, or “energy intensity,” of typical fossil fuels is ∼70 g CO2/MJ, or 19 g C/MJ, and the corresponding value for methanol is 30.3 g C/MJ. This implies that the conversion by a solar methanol island facility of 5,700 tC per year of carbon would effectively avoid the emission of 3,600 tC per year from fossil fuels (Patterson et al., 2019). In their study, Patterson et al. estimate that if 1.5% of the total ocean area (about 3.62 × 10⁸ km²) were occupied by floating solar panel facilities, with each facility sized at 1 km × 1 km and placed with an edge-to-edge separation of 300 meters, the maximum possible number of facilities would be 3.2 million. The corresponding avoided emission of 12 GtC/y (gigatons of carbon per year) would then exceed the total global emission from fossil fuels (Patterson et al., 2019).
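The panel-count arithmetic can be checked directly. The short calculation below is a back-of-the-envelope sketch using only the figures quoted from Patterson et al. above (ocean area, 1.5% coverage, 1 km facilities with 300 m spacing, and 3,600 tC avoided per facility per year):

```python
# Back-of-the-envelope check of the solar methanol island estimate
# quoted from Patterson et al. (2019).

OCEAN_AREA_KM2 = 3.62e8         # total ocean area, km^2
FRACTION_USED = 0.015           # 1.5% of the ocean surface
PANEL_SIDE_KM = 1.0             # each facility is 1 km x 1 km
SEPARATION_KM = 0.3             # 300 m edge-to-edge spacing
AVOIDED_TC_PER_FACILITY = 3600  # tonnes of fossil carbon avoided per year

# With edge-to-edge spacing, each facility effectively occupies a
# (1 + 0.3) km x (1 + 0.3) km = 1.69 km^2 cell of ocean surface.
footprint_km2 = (PANEL_SIDE_KM + SEPARATION_KM) ** 2
n_facilities = FRACTION_USED * OCEAN_AREA_KM2 / footprint_km2
avoided_gtc_per_year = n_facilities * AVOIDED_TC_PER_FACILITY / 1e9

print(f"{n_facilities / 1e6:.1f} million facilities")  # ~3.2 million
print(f"{avoided_gtc_per_year:.0f} GtC/y avoided")     # ~12 GtC/y
```

Both of the paper's headline numbers (3.2 million facilities, ~12 GtC/y avoided) fall out of the quoted inputs.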
Scientists suggest that when compared to conventional fuel, renewable methanol offers carbon reduction benefits ranging from 65% to 95% – that is, the carbon output is reduced by this percentage in comparison to other methods. These greenhouse gas benefits were among the highest for alternative fuels that can displace gasoline and diesel (Roode‐Gutzmer et al., 2019). Hence, there is tremendous scope in carbon recycling through various methods, although their viability differs by geographical location and demographics. However, more research
is required: given our current rising rate of emissions, it is essential that carbon recycling be incorporated into fuel production at a global scale. Research is also necessary to assess and improve its economic viability, especially in developing countries. Carbon recycling is an alternative process to confront unavoidable or already existing emissions. Given the rate at which the climate is changing, however, recycling becomes fundamentally important to the ways in which we tackle emissions in future years.

References

Goeppert, A., Czaun, M., Jones, J.-P., Surya Prakash, G. K., & Olah, G. A. (2014). Recycling of carbon dioxide to methanol and derived products – closing the loop. Chem. Soc. Rev., 43(23), 7995–8048. https://doi.org/10.1039/C4CS00122B

Hu, B., Guild, C., & Suib, S. L. (2013). Thermal, electrochemical, and photochemical conversion of CO2 to fuels and value-added products. Journal of CO2 Utilization, 1, 18–27. https://doi.org/10.1016/j.jcou.2013.03.004

Huang, C.-H., & Tan, C.-S. (2014). A Review: CO2 Utilization. Aerosol and Air Quality Research, 14(2), 480–499. https://doi.org/10.4209/aaqr.2013.10.0326

Koohestanian, E., Sadeghi, J., Mohebbi-Kalhori, D., Shahraki, F., & Samimi, A. (2018). A novel process for CO2 capture from the flue gases to produce urea and ammonia. Energy, 144, 279–285. https://doi.org/10.1016/j.energy.2017.12.034

Lim, H.-W., Park, M.-J., Kang, S.-H., Chae, H.-J., Bae, J. W., & Jun, K.-W. (2009). Modeling of the Kinetics for Methanol Synthesis using Cu/ZnO/Al2O3/ZrO2 Catalyst: Influence of Carbon Dioxide during Hydrogenation. Industrial & Engineering Chemistry Research, 48(23), 10448–10455. https://doi.org/10.1021/ie901081f

Nilsson, M. M., Kononets, M., Ekeroth, N., Viktorsson, L., Hylén, A., Sommer, S., Pfannkuche, O., Almroth-Rosell, E., Atamanchuk, D., Andersson, J. H., Roos, P., Tengberg, A., & Hall, P. O. J. (2019).
Organic carbon recycling in Baltic Sea sediments – An integrated estimate on the system scale based on in situ measurements. Marine Chemistry, 209, 81–93. https://doi.org/10.1016/j.marchem.2018.11.004

Patterson, B. D., Mo, F., Borgschulte, A., Hillestad, M., Joos, F., Kristiansen, T., Sunde, S., & van Bokhoven, J. A. (2019). Renewable CO2 recycling and synthetic fuel production in a marine environment. Proceedings of the National Academy of Sciences, 116(25), 12212–12219. https://doi.org/10.1073/pnas.1902335116

Pérez-Fortes, M., Schöneberger, J. C., Boulamanti, A., & Tzimas, E. (2016). Methanol synthesis using captured CO2 as raw material: Techno-economic and environmental assessment. Applied Energy, 161, 718–732. https://doi.org/10.1016/j.apenergy.2015.07.067

Roode‐Gutzmer, Q. I., Kaiser, D., & Bertau, M. (2019). Renewable Methanol Synthesis. ChemBioEng Reviews, 6(6), 209–236. https://doi.org/10.1002/cben.201900012

Sadiq, M. M., Batten, M. P., Mulet, X., Freeman, C., Konstas, K., Mardel, J. I., Tanner, J., Ng, D., Wang, X., Howard, S., Hill, M.
R., & Thornton, A. W. (2020). A Pilot-Scale Demonstration of Mobile Direct Air Capture Using Metal-Organic Frameworks. Advanced Sustainable Systems, 4(12), 2000101. https://doi.org/10.1002/adsu.202000101

Saito, M., & Murata, K. (2004). Development of high performance Cu/ZnO-based catalysts for methanol synthesis and the water-gas shift reaction. Catalysis Surveys from Asia, 8(4), 285–294. https://doi.org/10.1007/s10563-004-9119-y

Wang, H., Liu, Y., Laaksonen, A., Krook-Riekkola, A., Yang, Z., Lu, X., & Ji, X. (2020). Carbon recycling – An immense resource and key to a smart climate engineering: A survey of technologies, cost and impurity impact. Renewable and Sustainable Energy Reviews, 131, 110010. https://doi.org/10.1016/j.rser.2020.110010

Yanagi, M., Watanabe, Y., & Saiki, H. (1995). CO2 fixation by Chlorella sp. HA-1 and its utilization. Energy Conversion and Management, 36(6), 713–716. https://doi.org/10.1016/0196-8904(95)00104-L

Yu, F., Wei, P., Yang, Y., Chen, Y., Guo, L., & Peng, Z. (2019). Material design at nano and atomic scale for electrocatalytic CO2 reduction. Nano Materials Science, 1(1), 60–69. https://doi.org/10.1016/j.nanoms.2019.03.006
The Arctic’s ‘Zombie’ Wildfires: Burning Peat Could Prove Disastrous in the Climate Fight
BY SPRIHA PANDEY '24
Cover Image: Ice Melting in the Arctic
Source: Wikimedia Commons
Introduction

A new phenomenon has begun to plague the northernmost parts of the planet over the past two years—one that perhaps presents one of the most visible consequences of climate change: wildfires in the Arctic. Unimaginable as recently as two decades ago, these fires burn millions of hectares of land and cloak supposedly frozen Siberian cities with smoke every summer. In fact, by the time the fire season ended in 2020, the blazes had emitted a record 244 megatons of carbon dioxide—35% more than in 2019, which set records of its own (Witze, 2020).
The Origin of Ignition

So, what is causing the coldest regions of the planet to burn? Scientists say that the answer lies in one form of vegetation: peat. Peatlands are carbon-rich soils that accumulate over thousands of years as waterlogged plant
and animal remains gradually decay. As the ecosystem with the highest carbon density, peatlands are extremely prone to fires. A typical northern peatland contains nearly ten times as much carbon as a boreal forest (Witze, 2020). The problem in the Arctic is that below the layers of permafrost—ground that remains completely frozen for two or more years—stand millions of acres of peatland. As the climate warms, this permafrost is thawing, giving way to the carbon-rich soils safely tucked below. Hence, climate change has given rise to a positive feedback loop: as peatlands release more carbon, global warming increases, which thaws more peat and causes more wildfires (Hugelius et al., 2020). In 2020, these fires started blazing as early as May. They scourged the land to the north of the Siberian tree line—a phenomenon that typically does not occur until July. One reason for the early fires was the unusually warm temperatures in
Figure 1: A burnt region of the Arctic Tundra from 2015. Source: Wikimedia Commons
winter and spring, which primed the landscape to burn. However, scientists believe that the other, more significant reason was “zombie fires.” It is possible that peat fires from previous years had been smoldering beneath the ice and snow throughout the winter and subsequently emerged, zombie-like, as the snow melted in the spring (Witze, 2020). This phenomenon is not uncommon: small ignition sources can smolder without flame in low-temperature underground peat environments for decades and have often been spotted in coal mines. Given the amount of carbon trapped within the Arctic soil, these fires could prove disastrous. Northern peatlands are an important and dynamic component of the climate system. They hold over 80% of the global stocks of organic carbon and nitrogen and have been a persistent long-term sink of atmospheric carbon dioxide.
The Sources and Drivers of Peat Wildfires

What starts these fires in the Arctic? A common ignition source of smoldering fires in tropical peatlands is intentional flaming fires used to clear surface vegetation. In the north, wildfires are a common, natural process (Purnomo et al., 2020). The cause for concern is that their intensity, land coverage, and duration have drastically increased in recent years. Extremely dry ground and higher-than-average temperatures, combined with heat, lightning, and strong winds, have caused the fires to spread aggressively (BBC, 2019).
Global warming has been statistically linked to tundra fires. Climate variability has minimal effects on tundra burning when the weather conditions are below the temperature threshold and above the precipitation threshold, but these thresholds have been crossed increasingly often as of late (Hu et al., 2015). The high level of fire activity suggests that fuel availability is not a limiting factor for fire occurrence, possibly because of the rapid post-fire recovery of tundra vegetation (Racine et al., 1987). The recovery of total ecosystem carbon stocks, however, lags behind vegetation recovery because soil carbon forms from several years of vegetation productivity. The disappearance or shrinkage of ponds and wetlands in some Arctic regions may also enhance fuel connectivity, facilitating tundra-fire spread (Myers-Smith et al., 2011). The exposed peat further catalyzes the fires. Since these wildfires do not directly impact human life, research on them has been limited thus far.

Figure 2: The fires are extending further inwards each year, so the land coverage of the fires moves southward from the Arctic, affecting cities in Sweden and Russia. This is an aerial view of the region where the white spots represent clouds and snow. Source: Wikimedia Commons

Recently, Purnomo and colleagues used Cellular Automata models to study and simulate the factors that aggravate these fires (2020). Cellular Automata are discrete computational models that use simple rules to simulate complex emergent behavior. These models
Figure 3: The thickness of the sea ice in the Arctic region is predicted to drastically reduce given the current trends – reducing to only 54% of its original volume in 100 years. Source: Wikimedia Commons
use a finite m-by-n grid of cells, each of which can exist in one of k discrete states. At each time interval, every cell in the grid updates its state based on a set of rules involving nearby cells. This model has been extremely useful in studying wildfires: the cells are connected to each other, and these connections represent the flammability of the surrounding fuels as they are consumed by the fire. Hence, an unburned cell will update itself to a burning cell with a probability P if there are other burning cells nearby (Purnomo et al., 2020). Through simulation of peat fires in this model, it was determined that in fires with multiple ignition points, smoldering hotspots merge over time, meaning the evolution of burnt area is non-linear. Furthermore, the burnt area could be reduced 150-fold by increasing the moisture content of the soil above 100%, suggesting that any human-made fires should take place in conditions of high moisture, ideally during the wet season. Artificially increasing the moisture content can reduce the area burnt as well (Purnomo et al., 2020). Thus, it is extremely necessary that steps be taken to do so.
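To make the update rule concrete, the toy simulation below implements a minimal cellular automaton of this kind in Python. It is an illustrative sketch only: the grid size, spread probabilities, and single central ignition point are assumptions for demonstration, not the calibrated field-scale model of Purnomo et al.

```python
import random

# Cell states for a minimal wildfire cellular automaton.
UNBURNED, BURNING, BURNED = 0, 1, 2

def simulate(size=30, p_spread=0.5, steps=60, seed=0):
    """Run a toy fire on a size x size grid and return the burned fraction.

    Each step, every burning cell tries to ignite each of its four
    unburned neighbours with probability p_spread, then burns out.
    A lower p_spread stands in for wetter (less flammable) peat.
    """
    rng = random.Random(seed)
    grid = [[UNBURNED] * size for _ in range(size)]
    grid[size // 2][size // 2] = BURNING  # single central ignition point

    for _ in range(steps):
        ignitions = []
        for r in range(size):
            for c in range(size):
                if grid[r][c] != BURNING:
                    continue
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if (0 <= nr < size and 0 <= nc < size
                            and grid[nr][nc] == UNBURNED
                            and rng.random() < p_spread):
                        ignitions.append((nr, nc))
                grid[r][c] = BURNED  # this cell has burned out
        for nr, nc in ignitions:
            grid[nr][nc] = BURNING

    return sum(cell == BURNED for row in grid for cell in row) / size ** 2

# Dry peat (high spread probability) versus wet peat (low probability).
dry_fraction = simulate(p_spread=0.9)
wet_fraction = simulate(p_spread=0.1)
print(f"dry: {dry_fraction:.2f}, wet: {wet_fraction:.2f}")
```

With a fixed seed the run is deterministic; qualitatively, the wet run dies out after a handful of cells while the dry run sweeps most of the grid, echoing the finding that raising moisture content sharply limits burnt area.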
Vulnerabilities to Permafrost Thaw

Historically, the Arctic—like the Amazonian Rainforest—has been a carbon sink: a region which absorbs more carbon than it emits. These sinks decrease the amount of carbon in the atmosphere and are vital to moderating temperatures on the planet (Pugh et al., 2019). In fact, peatlands have been
helping to cool the climate for thousands of years by storing carbon. As peatlands accumulate and are ignited, they can become a net source of carbon, releasing more carbon into the atmosphere than they absorb. Scientists predict that this shift from Arctic carbon sink to carbon source may occur before the end of the century (Witze, 2020). Estimates of the total northern peatland carbon stocks remain variable and uncertain due to a lack of studies. Based on an equilibrium model, it is estimated that the preindustrial extent of permafrost in peatlands was about 2 million km², with a present-day coverage of 1.7 million km². This area is projected to decrease to 1 million km² at 2°C above preindustrial temperatures. In the northernmost region of the Arctic, the permafrost is more frozen yet shallow, and thus more susceptible to thaw. At a 6°C increase in global temperature, studies predict that no peatland permafrost would remain (Hugelius et al., 2020).

Fires release carbon dioxide from the ground because, following fire events, there is an increase in soil active-layer thickness and moisture. This leads to the formation of thermokarst, which develops when permafrost thaws and soils collapse under their own mass (Bowden et al., 2019). In sloping terrain, saturated, warm soils can be carried down slopes by gravity, resulting in active-layer detachments or “thaw slumps” (Hu et al., 2015). Thermokarst exposes deep soils that are rich in ancient carbon to ambient temperatures. Once exposed, this
carbon becomes vulnerable to photochemical and microbial degradation, releasing trapped greenhouse gases into the atmosphere (Schuur et al., 2009). Mack and colleagues estimated that an Arctic tundra wildfire in 2007 resulted in a loss of 2.016 ± 0.435 kilograms of carbon per square meter (2011). This amount equals approximately 25 years of carbon accumulation and 50–60% of the average annual carbon sequestration in the entire Arctic tundra biome. These levels of carbon dioxide could prove disastrous for global climate efforts. Furthermore, the positive feedback loops for this process are set forth in several ways. One ignition source of these fires is lightning, which is predicted to increase as a result of Arctic warming due to an increase in convective energy in the atmosphere (Hu et al., 2015). There also exist factors which could minimize the fires, such as increased precipitation in the North due to greater temperatures and retreating sea ice. However, researchers believe these are not sufficient to inhibit the fires (Hu et al., 2015). Globally, increased carbon and melting permafrost would equate to faster-rising sea levels, and cities would submerge much earlier than predicted.

The Arctic’s ‘zombie’ wildfires have thus become a topic of grave concern—one which requires immediate international efforts to effectively curb, because if left unaddressed, humans will ultimately lose the climate fight. While artificially increasing moisture content and dousing fires (as was recently done in northern Russia) are quick fixes, they are temporary and ineffective. Fire management practices need to be improved to deprive the flames of fuel. Furthermore, the Arctic warms twice as fast as the rest of the world (Blunden & Arndt, 2013). Hence, curbing global warming and carbon emissions is the most crucial and fundamental way to curb the Arctic fires.
Meeting these challenges is at the forefront of climate action in the 21st century, as the international community seeks to reverse the calamitous effects of anthropogenic activities already at play.

References

Arctic wildfires: How bad are they and what caused them? (2019, August 2). BBC News. https://www.bbc.com/news/world-europe-49125391

Blunden, J., & Arndt, D. S. (2013). State of the Climate in 2012. Bulletin of the American Meteorological Society, 94(8), S1–S258. https://doi.org/10.1175/2013BAMSStateoftheClimate.1

Bowden, R. D., Wurzbacher, S. J., Washko, S. E., Wind, L., Rice, A. M., Coble, A. E., Baldauf, N., Johnson, B., Wang, J., Simpson, M., & Lajtha, K. (2019). Long‐term Nitrogen Addition Decreases Organic Matter Decomposition and Increases Forest Soil Carbon. Soil Science Society of America Journal, 83(S1). https://doi.org/10.2136/sssaj2018.08.0293

Hu, F. S., Higuera, P. E., Duffy, P., Chipman, M. L., Rocha, A. V., Young, A. M., Kelly, R., & Dietze, M. C. (2015). Arctic tundra fires: Natural variability and responses to climate change. Frontiers in Ecology and the Environment, 13(7), 369–377. https://doi.org/10.1890/150063

Hugelius, G., Loisel, J., Chadburn, S., Jackson, R. B., Jones, M., MacDonald, G., Marushchak, M., Olefeldt, D., Packalen, M., Siewert, M. B., Treat, C., Turetsky, M., Voigt, C., & Yu, Z. (2020). Large stocks of peatland carbon and nitrogen are vulnerable to permafrost thaw. Proceedings of the National Academy of Sciences, 117(34), 20438–20446. https://doi.org/10.1073/pnas.1916387117

Mack, M., Bret-Harte, M., Hollingsworth, T., Jandt, R., Schuur, E., Shaver, G., & Verbyla, D. (2011). Carbon loss from an unprecedented Arctic tundra wildfire. Nature, 475, 489–492. https://doi.org/10.1038/nature10283

Myers-Smith, I. H., Forbes, B. C., Wilmking, M., Hallinger, M., Lantz, T., Blok, D., Tape, K. D., Macias-Fauria, M., Sass-Klaassen, U., Lévesque, E., Boudreau, S., Ropars, P., Hermanutz, L., Trant, A., Collier, L. S., Weijers, S., Rozema, J., Rayback, S. A., Schmidt, N. M., … Hik, D. S. (2011). Shrub expansion in tundra ecosystems: Dynamics, impacts and research priorities. Environmental Research Letters, 6(4), 045509. https://doi.org/10.1088/1748-9326/6/4/045509

Perkins, S. (2013, August 6). ScienceShot: Arctic Warming Twice as Fast as Rest of World. Science | AAAS. https://www.sciencemag.org/news/2013/08/scienceshot-arctic-warming-twice-fast-rest-world

Pugh, T. A. M., Lindeskog, M., Smith, B., Poulter, B., Arneth, A., Haverd, V., & Calle, L. (2019). Role of forest regrowth in global carbon sink dynamics. Proceedings of the National Academy of Sciences, 116(10), 4382–4387. https://doi.org/10.1073/pnas.1810512116

Purnomo, D. M. J., Bonner, M., Moafi, S., & Rein, G. (2020). Using cellular automata to simulate field-scale flaming and smouldering wildfires in tropical peatlands. Proceedings of the Combustion Institute. https://doi.org/10.1016/j.proci.2020.08.052

Racine, C. H., Johnson, L. A., & Viereck, L. A. (1987). Patterns of Vegetation Recovery after Tundra Fires in Northwestern Alaska, U.S.A. Arctic and Alpine Research, 19(4). https://doi.org/10.2307/1551412

Schuur, E., Vogel, J., Crummer, K., Lee, H., Sickman, J., & Osterkamp, T. (2009). The effect of permafrost thaw on old C release and net C exchange from tundra. Nature, 459, 556–559. https://doi.org/10.1038/nature08031

Witze, A. (2020). The Arctic is burning like never before—And that’s bad news for climate change. Nature, 585(7825), 336–337. https://doi.org/10.1038/d41586-020-02568-y
Fundamentals of Language Learning and Acquisition BY TYLER CHEN '24 Cover: A proliferation of dictionaries and language textbooks reflects the globalized nature of today’s world, where learning languages is more crucial than ever. Source: Pixabay
Introduction Spoken languages are a uniquely human form of communication. With over 7 billion people worldwide speaking thousands of different languages, it is becoming increasingly important for individuals to learn additional languages to engage with this globalized world. But despite language’s longstanding importance to everyday life, the processes of language learning and acquisition remain uncertain and contentious among cognitive scientists, psychologists, and neuroscientists alike. Nevertheless, with more information about language being uncovered every day, we are getting closer to unlocking the essence of how human beings learn to communicate with one another.
Language Learning vs. Language Acquisition Before delving into the neurological processes of picking up a new language, it is important to distinguish between language learning and language acquisition. Language learning is the purposeful process in which lingual elements are studied and committed to memory through conscious behavior. This process primarily involves intensive study of the target language’s unique linguistic patterns in order to achieve mastery. Across the world, language learning is the most common method of language education for students. In this process, an instructor, highly skilled in the language being taught, will teach the definitions of vocabulary terms and the grammar systems that form the basis of the target language, while helping reinforce good language production along the way by correcting errors when needed. It is a heavily guided
process, either through structured personal study or instructor assistance methodically working through a series of lessons, that eventually leads to the ability to create and perceive the target language (Hussain, 2017). Language acquisition, on the other hand, is the process in which language is absorbed subconsciously, without any explicit knowledge of grammatical rules. A great deal of trial and error comes with language acquisition, since unlike language learning, this process is largely unstructured. The mind attempts to make sense of the target language, which prompts the speaker to mimic it and try to produce some sort of meaningful thought within that language. This is then either reinforced or discouraged depending on whether the language produced elicits the desired reaction; these cycles of reinforcement and rejection eventually result in gradual mastery of the target language. The most prominent example of language acquisition is how people learn their native languages. Beginning at birth, young children pick up on the different words their parents speak, and eventually learn to associate words and phrases with objects and actions – all through a largely subconscious process. And unlike language learning, there is no direct instruction in language acquisition – it is a largely independent process (Hoque, 2017). Beyond these basic concepts, there are also key differences in how each of these processes is carried out and experienced. As mentioned before, a key difference is the consciousness of each process. Language learning is a very structured process that results from conscious decision making in adopting the target language and requires corrective guidance to make sure that the target language is being learned appropriately. Language acquisition, on the other hand, is a very subconscious process.
The student seeks to create meaning and worries less about making errors, devoting more time to imitating others to see what is correct. Additionally, there are many other technical differences between the two processes. Language learning focuses significantly more on the written context of language – that is, reading and writing; students routinely fill out vocabulary and grammar worksheets to supplement their language learning. On the other hand, language acquisition focuses heavily on the spoken context of language, as the primal roots of language acquisition stem
from the need to communicate orally. Rather than filling out worksheets, speaking and listening activities reinforce the process of language acquisition. Alongside the fundamental pedagogical and technical differences between how language learning and language acquisition are implemented, there is also a difference in how both of these processes are perceived. Language learning relies on deductive reasoning – reasoning that moves from general rules to specific conclusions – to obtain language mastery. Students are taught language and grammatical theory, which they then apply to situations in which they produce language. Language acquisition, on the other hand, relies on inductive reasoning, the process in which general paradigms are drawn from experiencing a variety of different and specific situations. Essentially, language acquirers first observe in order to form generalizations that subsequently result in language theory – while language learners apply generalizations in linguistic structure to specific instances (Hussain, 2017). In addition, a student’s journey towards mastering a language is also distinct for each of these processes. Language learning requires significantly more mental endurance and work due to its deductive nature, as students need to exert energy and effort to learn the theory and structure behind the target language. Thus, student motivation is a very important factor in whether or not language learning succeeds in developing language proficiency (McLaughlin, 1992). Language acquisition, however, is a far more passive process which doesn’t require nearly as much effort. This process is also rooted in the primal instincts of mankind, as learning how to communicate is fundamental to survival (Hussain, 2017).
This is why students who are unsuccessful in learning a second language are far more common than children who never acquire a language at all.
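The deductive/inductive contrast can be caricatured in code (a toy sketch; the verbs, the third-person "-s" rule, and both "learner" functions are illustrative inventions, not a cognitive model):

```python
# Toy contrast between deductive and inductive routes to the same rule.
# Everything here (the verbs, the -s rule) is illustrative only.

def deductive_conjugate(verb: str) -> str:
    """Language learning: apply an explicitly taught rule
    (third-person singular present adds -s)."""
    return verb + "s"

def inductive_rule(examples: list[tuple[str, str]]):
    """Language acquisition: infer the pattern from observed
    (base, conjugated) pairs instead of being told the rule."""
    # Collect the suffix implied by each observed pair.
    suffixes = {conj[len(base):] for base, conj in examples}
    assert len(suffixes) == 1, "toy learner only handles one pattern"
    suffix = suffixes.pop()
    return lambda verb: verb + suffix

observed = [("walk", "walks"), ("eat", "eats"), ("read", "reads")]
acquired = inductive_rule(observed)

print(deductive_conjugate("sing"))  # rule applied top-down: "sings"
print(acquired("sing"))             # same output, generalized bottom-up
```

Both routes produce the same surface behavior; the difference, as in the text, is whether the generalization was supplied up front or extracted from exposure.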
"Language learning is a very structured process that results from a lot of conscious decision making in adopting the target language and requires corrective guidance to make sure that the target language is also being learned appropriately. Language acquisition, on the other hand, is a very subconscious process."
Though these two processes are seemingly distinct, it is important to recognize that they can coexist in the journey to obtain language proficiency. Language acquisition is crucial to being able to produce and perceive the target language effectively, but it can be a very time-consuming process. Although young children can become very proficient in their native languages through language acquisition, this is largely due to their constant exposure to language – a convenience that many older students trying to learn languages on their own do not have (Hoque, 2017). Though the childhood experience can’t be entirely compared to the adult experience of language acquisition, similar conclusions can be drawn – many people have neither the time nor the resources to immerse themselves in constant exposure to a foreign language. Thus, language learning can be very useful in shortening that time frame, as a strict framework helps direct the path to language proficiency. With a combination of structured language learning and immersion in a setting where the target language is spoken to promote language acquisition, students begin to use and adopt languages at a much quicker rate than if they stuck to either approach alone.
Figure 1: The Wernicke-Geschwind Model of language, which posits that Broca’s area is crucial in speech production while Wernicke’s area is responsible for language comprehension. The model explains that communication between these two regions is accomplished through a major fiber bundle called the arcuate fasciculus, which allows meaningful and comprehensible speech to be produced. This model is now considered obsolete and oversimplified, but it is still widely taught as an introductory model for language processing. Source: Wikimedia Commons
How the Brain Learns Language
Overall, the process of language learning is very similar to the process of learning in general. The four-stage model of competence can be used to characterize the stages that students proceed through when learning a new language. The four steps of this “conscious competence” learning model are: unconscious incompetence, conscious incompetence, conscious competence, and unconscious competence. Each of these phases represents a different stage of mastery within the language learning process (Cannon et al., 2010). The first stage, unconscious incompetence, is when people don’t know what they don’t know (Cannon et al., 2010). Currently, more than 7,000 known languages are being spoken. Outside of linguists who devote entire careers to identifying, researching, and exploring these languages, people generally don’t know many of them – upon hearing them, most people would perceive the language as mere sounds that hold no significance at all. In essence, they aren’t aware of what they don’t know in those languages. This stage is also a “honeymoon” period of sorts, where students are highly motivated and excited to learn a new language, yet don’t exactly know the length and difficulty of the process that lies ahead of them. Another critical component of this stage, especially in the context of language learning, is that learners don’t know what they are doing incorrectly. As a result, in a process known as disconfirmation, accurate feedback is needed to guide students back onto the right track, or at least to tell them that they have been making errors (Cannon et al., 2010).
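The progression through these four stages can be sketched as a simple ordered state machine (a toy sketch; the stage names come from the model above, but the enum values and the transition function are illustrative, not taken from Cannon et al.):

```python
from enum import Enum

class Competence(Enum):
    """The four stages of the 'conscious competence' learning model."""
    UNCONSCIOUS_INCOMPETENCE = 1  # unaware of what one doesn't know
    CONSCIOUS_INCOMPETENCE = 2    # aware of the gap, via disconfirmation
    CONSCIOUS_COMPETENCE = 3      # capable, but only with deliberate effort
    UNCONSCIOUS_COMPETENCE = 4    # tacit knowledge; processing is automatic

def advance(stage: Competence) -> Competence:
    """Move to the next stage; the final stage is absorbing."""
    return Competence(min(stage.value + 1, 4))

stage = Competence.UNCONSCIOUS_INCOMPETENCE
stage = advance(stage)  # disconfirmation: feedback reveals the errors
print(stage.name)       # CONSCIOUS_INCOMPETENCE
```

The one-way `advance` step mirrors the model's claim that progress comes from feedback and practice, while the absorbing final state reflects internalized, tacit mastery.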
Upon getting corrected through disconfirmation and learning their first few words, phrases, and grammatical points, language learners begin to understand how much they have to learn before reaching competency, and thus reach the “conscious incompetence” stage. This is also when much of the allure and excitement begins to fade, as students begin to truly understand how much work is left before they reach a desired level of proficiency. However, motivated students do make it through this phase. To progress to the next stage, students must not only recognize where they were wrong through feedback, but should also see how they can specifically rectify their mistakes and improve (Cannon et al., 2010). After spending some time in the conscious incompetence phase and slowly improving, learners will gradually progress into the conscious competence stage (Cannon et al., 2010). Here, students are able to communicate and perceive a good amount of the target language – though it often takes a considerable amount of effort. This is because language learners at this stage still rely heavily on energy-consuming deductive processes to create intelligible language. But even though using the target language is still relatively difficult at this stage, students now have the critical ability to use it to a certain degree. During this stage, many students also evaluate whether they have reached an acceptable level of proficiency. In other words, if students feel that they have learned enough of the target language to be satisfied, they may discontinue their learning efforts. Those who do wish to continue their language learning might seek to reach the unconscious competence phase. At
this point, all knowledge about the language is “tacit knowledge”: language processing and production are understood and analyzed below the conscious level through internalized logic that has been set through repeated practice (Cannon et al., 2010). For example, both non-native and native English speakers can tell that the sentence “you am eating dinner” is incorrectly composed when hearing it for the first time. However, the difference between English speakers in the conscious competence phase and the unconscious competence phase lies in how they detect this error. The former will know that the “you am” portion is incorrect by deducing that the pronoun “you” should be followed by the verb “are” instead of “am.” The latter, on the other hand, will hear “you am” and immediately detect that it doesn’t sound quite right. The logical process that the consciously competent speaker carries out also occurs within the unconsciously competent English speaker, but it is internalized and happens implicitly due to repeated exposure to correct English composition – in contrast to the purposeful and explicit reasoning that the consciously competent speaker undergoes to reach the same conclusion. From a neurological point of view, the Wernicke-Geschwind Model was the original model for language processing and production. This model stated that Wernicke’s area, a region of the brain (BA22p) located in the superior temporal gyrus (STG), was involved in speech processing, and that Broca’s area (the region composed of BA44 and BA45), located in the inferior frontal gyrus (IFG), was responsible for speech production. The model originally claimed that the brain receives either audio signals from spoken language that are processed in the primary auditory cortex (BA22a), or visual signals (from reading words and phrases) that are processed in the visual cortex.
Wernicke’s area would then match those signals with vocabulary and other linguistic properties stored in the hippocampus – the part of the brain responsible for memory
– and assemble the corresponding definitions and lingual associations together to create meaning from the perceived language. Then, the signals would be sent down the arcuate fasciculus – a bundle of nerve fibers connecting Wernicke’s area to Broca’s area (this mechanism is under scrutiny, but remains the most satisfactory connection that has been put forth between the two areas). Broca’s area then ultimately relays signals to the motor cortex to actually produce the language (Tremblay and Dick, 2016). However, the Wernicke-Geschwind Model is now being challenged, as new neuroimaging studies indicate that this model is far too simplistic for the cognitive processes that occur in language processing. Most recently, it has been shown that the inferior parietal lobule (IPL), which is composed of the angular gyrus (BA39) and the supramarginal gyrus (BA40), also plays an important role in language perception. This area of the brain sits above Wernicke’s area, and recent fMRI studies (studies which measure blood flow through specific parts of the brain that are subjected to certain stimuli) have shown that this area becomes heavily activated when processing language. This has led researchers to believe that the IPL works in tandem with Wernicke’s area to process language (Barbeau et al., 2017). Similarly, it is now thought that Broca’s area may also be involved in language perception and comprehension processes that were previously attributed solely to Wernicke’s area (Ruschemeyer et al., 2005), and Wernicke’s area has likewise been implicated in speech production. Despite these shortcomings, the Wernicke-Geschwind Model was still a monumental neurologic discovery in terms of how the brain processes language, as it first established language comprehension and language production as two separate functions of the brain – a property that has since shaped the study of language processing.
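As a data-flow summary, the classical model amounts to a linear relay. The sketch below is purely a mnemonic for the routing just described – every function and string transform is an invented placeholder standing in for a brain region named in the text, not a claim about neural computation:

```python
# The classical (now oversimplified) Wernicke-Geschwind routing as a
# linear pipeline. Each step is a placeholder for a brain region named
# in the text; the string transforms are purely illustrative.

def auditory_cortex(sound: str) -> str:          # BA22a: raw speech signal
    return f"phonemes({sound})"

def wernickes_area(phonemes: str) -> str:        # BA22p: comprehension
    return f"meaning({phonemes})"

def arcuate_fasciculus(meaning: str) -> str:     # fiber bundle: relay only
    return meaning

def brocas_area(meaning: str) -> str:            # BA44/BA45: speech plan
    return f"articulation_plan({meaning})"

def motor_cortex(plan: str) -> str:              # execution
    return f"speech({plan})"

stages = [auditory_cortex, wernickes_area, arcuate_fasciculus,
          brocas_area, motor_cortex]

signal = "hello"
for stage in stages:
    signal = stage(signal)
print(signal)
# speech(articulation_plan(meaning(phonemes(hello))))
```

The strictly serial chain is exactly what the newer findings below argue against: regions like the IPL participate in comprehension, and Broca's and Wernicke's areas each contribute to both directions.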
Figure 2. Abbreviations such as “BA44” and “BA22p” refer to specific Brodmann areas within the brain – regions characterized by their cell architecture. The highlighted portions refer to the corresponding Brodmann areas, with the first two showing Brodmann areas 44 and 45 (BA44 and BA45), which form Broca’s area. BA39 and BA40 mark the locations of the angular gyrus and supramarginal gyrus, respectively, and together form the IPL. BA22 serves several different purposes. The anterior portion of BA22, or BA22a, serves as the primary auditory cortex, while the posterior portion of BA22, or BA22p, functions as Wernicke’s area. However, one highly contested topic in the field of neurolinguistics is what fully constitutes Wernicke’s area – some argue that BA39, BA40, and BA22p together form Wernicke’s area, while others hold that these are separate regions with separate purposes. Source: Wikimedia Commons
"From a neurological point of view, the Wernicke-Geschwind Model was the original model for language processing and production. This model stated that Wernicke’s area, a region of the brain (BA22p) located in the superior temporal gyrus (STG), was involved in speech processing and Broca’s area (the region composed of BA44 and BA45), located in the inferior frontal gyrus (IFG), was responsible for speech production."
Currently, a more accepted neurolinguistic model is the Hickok-Poeppel two-stream pathway model, which posits that there are two auditory streams originating in the auditory cortex and ending in the IFG. Like the Wernicke-Geschwind Model, which separated the processes of comprehension and production, this model also separates these processes – through the two pathways. The auditory ventral stream is the processing stream that primarily concerns itself with speech recognition and sentence comprehension. It first exits the STG, where it then projects towards the middle portion of the temporal lobe before projecting towards the IFG. Along this pathway, the ventral stream maps the auditory representations of speech retrieved from the auditory cortex to conceptual representations, resulting in comprehension of speech. Speech recognition first occurs within the anterior superior temporal gyrus (aSTG), where words are perceived and recognized. Downstream from the aSTG on the ventral stream, the MTG-TP region (middle temporal gyrus and temporal pole) is thought to form a semantic lexicon – a library that stores words and their audio-visual representations, sorted through semantic relationships. This lexicon would theoretically match the perceived speech with its associated concepts, thus creating meaning from the speech. Lesioning studies, which examine the functionalities of different parts of the brain by observing the behavioral effects of damage, have reinforced this part of the theory, as individuals with damage to the MTG-TP region had a tendency to make semantic errors (Dronkers et al., 2004).
Figure 3. A visual representation of the two pathways that constitute speech processing, the auditory dorsal stream and the auditory ventral stream. The red arrows represent the auditory ventral stream, which originates in the anterior superior temporal gyrus (aSTG), which lies within the auditory cortex, and then projects towards the anterior superior temporal sulcus (aSTS), and then the middle temporal gyrus (MTG) and the temporal pole (TP). These two regions form the MTG-TP region, which connects to the inferior frontal gyrus (IFG), concluding the auditory ventral stream. The blue arrows depict the auditory dorsal stream, which originates on the posterior side of the superior temporal gyrus (pSTG), which then projects towards the posterior superior temporal sulcus (pSTS), followed by the Sylvian fissure between the parietal and temporal lobes (Spt) and the inferior parietal lobule (IPL), which together characterize the Spt-IPL region. These signals then get directed towards the IFG, where they get processed and passed along to the motor cortex. Source: Wikimedia Commons
The other stream, the auditory dorsal stream, is primarily involved with converting auditory signals into a motor articulatory representation, leading to speech production (Hickok & Poeppel, 2007). Upon leaving the auditory cortex, the auditory dorsal stream projects towards the posterior superior temporal sulcus (pSTS), and then proceeds towards the Sylvian fissure between the parietal and temporal lobes (Spt) and the IPL before heading towards the IFG. The pSTS supports the “sensory coding of speech,” or the actual processing and intake of sensory information, while the Spt is involved in the articulation of these sensory codes into the motor system (Hickok & Poeppel, 2007). Further studies have shown that the Spt-IPL region (the region characterized by the Spt and the IPL areas) acts as a phonological lexicon, or a long-term store that contains the names of various objects. Those who had sustained damage to the IPL were able to correctly identify an object, but
had trouble correctly pronouncing the name of said object (e.g. saying “cloof” instead of “clock”) (Schwartz et al., 2009). After retrieving the correct articulatory representation, the signal is sent along to the IFG, which then sends the appropriate signals to the motor cortex to prompt the correct speech production. But beyond the neurological processes behind speech production and comprehension lies the underlying knowledge of the structure and grammar systems of a language. This knowledge gets stored a bit differently than the aforementioned semantic and phonological lexicons which support the production of each individual word. One model that seeks to explain the neurocognitive basis for the distinction between lexicons and grammar as processed and stored in the brain is Ullman’s declarative/procedural model. The model suggests that grammar structures such as syntax and morphology fall under a procedural memory system, while mental lexicons which hold the meaning and sounds of different words fall within declarative memory systems, suggesting that grammatical processing and lexical processing are separate (Ullman, 2001). This model co-exists with the Hickok-Poeppel model presented earlier, which attributes the lexicons to temporal and temporoparietal spaces within the brain – regions that are associated with declarative memory systems.
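The division of labor between the two hypothesized lexicons can be caricatured as two lookups over the same recognized word (a toy sketch; the dictionary entries, function names, and fallback strings are invented for illustration):

```python
# Toy caricature of the two-stream division of labor: the same
# recognized word is routed to a semantic lexicon (ventral stream,
# MTG-TP: word -> concept) and a phonological lexicon (dorsal stream,
# Spt-IPL: word -> articulation). All entries are invented examples.

SEMANTIC_LEXICON = {          # ventral stream: comprehension
    "clock": "device that tells time",
    "dinner": "evening meal",
}

PHONOLOGICAL_LEXICON = {      # dorsal stream: production
    "clock": "/klok/",
    "dinner": "/dinner/",
}

def ventral_stream(word: str) -> str:
    """Map recognized speech to a conceptual representation."""
    return SEMANTIC_LEXICON.get(word, "<no concept retrieved>")

def dorsal_stream(word: str) -> str:
    """Map recognized speech to an articulatory representation.
    Damage here leaves comprehension intact but mangles the spoken
    name, like the 'cloof' for 'clock' errors described above."""
    return PHONOLOGICAL_LEXICON.get(word, "<no articulation retrieved>")

print(ventral_stream("clock"))   # device that tells time
print(dorsal_stream("clock"))    # /klok/
```

Separate stores make the lesioning evidence legible: knocking out one dictionary degrades naming or comprehension independently of the other.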
How the Brain Acquires Language The process by which we acquire language is fundamentally different from the language learning process. As mentioned before, language acquisition is a process that is largely subconscious, as opposed to the conscious process of language learning.
It’s important to distinguish between first and second language acquisition. First language acquisition occurs when newborn children acquire their very first language – their native language. Second language acquisition occurs when individuals learn additional non-native languages without formal classroom instruction (Hoque, 2017). The main difference between these two processes lies in the cognitive development of the brain that occurs between first and second language acquisition, which ultimately changes how second language acquisition is handled compared to first language acquisition. Within the first language acquisition process, newborns largely follow a timeline of crying, cooing, babbling, one-word utterances, two-word utterances, early multi-word phrases, and finally more complex phrases (see Figure 4 for an in-depth description and timeline of these steps) (Hoque, 2017). This overall process is highly successful, since young children are not deterred by the fear of error that many adults and students face later on when embarking on their language learning journey. When very young children make mistakes in acquiring their first language, parents generally still warmly nurture the child and don’t berate them for making a mistake. This perceived safety cushion is absent from the environment that older adults and students typically learn in. Amongst peers and colleagues, the biological comfort that comes with being surrounded and cared for by one’s parents isn’t there, which can increase hesitancy in students, even if there are actually no negative consequences for making mistakes in a classroom or work setting. In addition, children also have ample exposure to the language that they are acquiring. They encounter the language 24 hours a day, compared to the 45 minutes a day spent in class by many students across the world, giving them far more opportunity to acquire the language.
The native language is also stored differently from languages acquired later in life – particularly within Broca’s area. Research has shown that for individuals who learned additional languages later in life, speaking the native language activates a different portion of BA44 than speaking the languages learned later (Kim et al., 1997). However, researchers have found that for children who learned additional languages earlier
in childhood, there was no distinct spatial separation in neural activation within Broca’s area. This is largely because a person’s neuroplasticity – how “plastic,” or capable of changing, growing, and reoptimizing, an individual’s neural network is – is very high during childhood, making it easier to learn and acquire additional languages in one’s early years. This is why adults will often understand a question asked of them in their target language but struggle to respond in it – because they are learning the new language after this period of heightened neuroplasticity, Broca’s area attributes speaking the new language to a new portion distinct from that of their previously learned languages. This new portion might not be fully developed yet, which creates the struggle in trying to respond in that other language. The adults will have less trouble understanding the question, however, because unlike Broca’s area, Wernicke’s area does not substantially distinguish spatially between different languages at any point in time, allowing pre-existing neural pathways to process the new language. However, this isn’t to say that adults can’t be as successful as children in learning and acquiring new languages. A common misconception is that children learn languages very easily and quickly. In reality, the process of language acquisition that children go through to adopt their first languages isn’t as easy as it is commonly made out to be: first language acquisition takes several years to fully develop, in conjunction with primary school education later on to bolster this language development. That is, children’s circumstances make acquiring languages convenient rather than effortless. And even though children have an innate biological advantage in neuroplasticity for learning a second language, they might not have the same motivation or will to commit to learning a new language.
Adults who decide that they want to learn a new language are typically more motivated to do so, have refined their own study habits over a lifetime of education, and have had a lifetime of intellectual and personal experiences to further support the learning process (Ausubel, 1964). Young children, on the other hand, who are often enrolled in foreign language classes by their parents and who heavily favor quick rewards over delayed gratification, might not see any instant gratification from learning a language, and thus may lack the commitment to studiously pursue another language. Consequently, it has been seen that adults
"Within the first language acquisition process, newborns largely follow a timeline of crying, cooing, babbling, one-word utterances, two-word utterances, early-multi-word phrases, and finally more complex phases."
Figure 4. A brief timeline of the progression of stages in First Language Acquisition, moving from the first instances of somewhat comprehensible speech towards eventual control of some language structures. This process, although often resulting in fluency in the native language, does take a very long time, even with extended exposure every single day. Source: Wikimedia Commons
"Language learning has been shown to have several positive effects. Cognitive benefits include better decisionmaking skills, increased attention span, improved memory, improved multi-tasking abilities, improved general cognitive abilities, and even improvement in the native language."
consistently outperform children when it comes to learning new languages, largely as a result of the aforementioned factors regarding motivation and habits. So even though young children objectively have the ability to learn languages faster, external factors can certainly confound and offset this advantage. Alongside the spatial differences in Broca’s area, studies have also shown a difference in brain activity between processing native languages and learned languages. In a study that played German phrases to both native German speakers and native Russian speakers who had learned German later in life, the non-native German speakers showed an increased level of activation of BA44 (Ruschemeyer et al., 2005). This discovery led to the suggestion that non-native speakers had to employ extra resources to fully parse and comprehend even relatively simple German sentences, indicating that their lower German proficiency led them to analyze simple speech as if it were complex. Native speakers, on the other hand, showed an increase in activation of the mid-section of the STG, indicating that their speech perception processing had become highly optimized and efficient.
Effects of Learning Languages on the Brain Language learning has been shown to have several positive effects. Cognitive benefits include better decision-making skills, increased attention span, improved memory, improved multi-tasking abilities, improved general cognitive abilities, and even improvement in the native language. More specifically, bilingualism and multilingualism have been shown to yield
higher executive control, or the control and management of cognitive processes in order to attain a certain goal; bilingual individuals were seen to outperform monolingual individuals on tasks requiring executive control, such as the Stroop Test – a test that asks participants to name the font color of words that spell out different colors (e.g., saying “red” for the word “blue” printed in red font) – among other tasks (Quinteros Baumgart & Billick, 2018). Multiple studies have also shown an improvement in overall cognitive health as a result of multilingualism. Most notably, learning an additional language was seen to delay the onset of dementia by an average of around 4.5 years (Alladi et al., 2013). And compared to bilingual individuals, older multilingual adults have been seen to have an even longer delay in the onset of Alzheimer’s and dementia as well (Quinteros Baumgart & Billick, 2018). This pronounced delay in the onset of Alzheimer’s has been attributed to a greater level of cognitive reserve – the brain’s resilience to damage, developed through mental stimulation from learning new languages – which means that individuals who are at risk for Alzheimer’s will be able to stay independent for longer. Beyond the cognitive benefits, learning additional languages can provide personal benefits as well. Being able to communicate in a variety of different languages allows individuals to connect with more people in more personal ways, promotes changing perspectives, and opens people up to a whole new world of valuable experiences. Additionally, it becomes easier to learn additional languages as extra languages are learned (e.g. the third language is easier than the second, the fourth easier than the third,
and so on and so forth). This can be attributed to several cognitive changes for bilingual and multilingual individuals that come with each additional language that is learned. Because of their knowledge of previous linguistic systems, multilingual individuals can reach a higher level of metalinguistic awareness – the ability to reflect on the nature of language beyond the definition of each word. For example, individuals with higher levels of metalinguistic awareness can exploit certain flexibilities in language to express sarcasm and irony (e.g. understanding the ambiguous meanings behind “children make delicious snacks”). Additionally, after having exposure to multiple different languages and language systems, bilingual and multilingual individuals have a larger linguistic repertoire that can help them learn closely related languages (e.g. Italian and Spanish are both Romance languages, so a bilingual speaker who learned Spanish will find it easier to learn Italian because of their very similar language structures). And for learning languages that aren’t closely related to mastered languages, multilingual individuals can still rely on effective learning strategies and skills accumulated from learning their previous languages to make those unrelated languages easier to learn (Cenoz, 2013). The brain also changes as a result of learning additional languages, with those speaking additional languages appearing to have larger brains on average. In addition to this increase in brain size, due to the bilateral nature of language within the brain, multilingualism also increases the brain’s plasticity, allowing changes to occur more easily and promoting future brain growth. Brain processing speeds are heightened as well – even in non-linguistic tasks and environments (Diamond, 2010).
Optimizing Language Education With the many neurological benefits that come with learning foreign languages, coupled with the importance of multilingualism in today’s global society, foreign language education is more important than ever in today’s school systems. Over the past few centuries, several different methods of teaching have been tried in schools around the world.
The most ancient method of teaching languages is the grammar-translation method – a method mainly used to teach Latin, Sanskrit, and Classical Greek (as opposed to modern Greek). As the name implies, it heavily focuses on the grammatical structure of the foreign language. Learning activities primarily involved translating phrases and passages between the target language and the native language. Little to no focus was put on oral communication; the method centered on written language and theory. Thus, this method drew a lot of criticism, both because students could not develop even elementary communication skills to express basic thoughts and because the fundamental reasoning behind translating word for word is heavily flawed (Kuznetsova, 2015). Moving into the late 19th century, instructors began to transition away from translation methods and toward more direct methods, which discouraged the use of one’s native language in the classroom. This gave rise to the direct method, or the “oral method.” The basic premise of this method was to mimic full immersion in the target language: the instructor would neither speak nor permit the native language in the classroom, and there would be no explicit instruction of grammar, thus closely reflecting the first language acquisition process. This method differs greatly from the grammar-translation method and other translation methods used in the past, as it primarily focused on oral communication skills (speaking and listening) instead of written language skills (Kuznetsova, 2015). As a result, students under this method saw a larger improvement in oral language production abilities compared to the grammar-translation method.
Figure 5: The Stroop test. The words above the black line are printed in the font color they name, while the font colors of the words below the line do not match the colors they spell. This is a classic test of executive control, in which individuals must actively manage their own cognitive instincts and processes in order to say the color of the words and not the words themselves (yellow, red, blue, green, purple) – a test in which multilingual individuals typically outperform monolingual individuals. Source: Wikimedia Commons
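The congruent/incongruent distinction that drives the Stroop effect can be sketched as a toy trial generator (an illustration, not from the article; the function and field names here are hypothetical):

```python
# Toy generator of Stroop-test trials. The correct response is always
# the ink color; on incongruent trials it conflicts with the word read.
import random

COLORS = ["red", "blue", "green", "yellow", "purple"]

def make_trial(congruent: bool, rng: random.Random) -> dict:
    """Build one trial: a color word shown in some ink color."""
    word = rng.choice(COLORS)
    if congruent:
        ink = word  # word and ink agree, e.g. "red" printed in red
    else:
        # pick an ink color that differs from the word, creating conflict
        ink = rng.choice([c for c in COLORS if c != word])
    return {"word": word, "ink": ink, "correct_response": ink}

rng = random.Random(0)
trial = make_trial(congruent=False, rng=rng)
print(trial["word"], "printed in", trial["ink"],
      "-> say:", trial["correct_response"])
```

On incongruent trials, naming the ink requires suppressing the automatic reading response – the executive-control demand on which bilingual individuals tend to outperform monolinguals.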
The direct method paved the path forward for the audio-lingual approaches, which heavily emphasized repetition. Students would listen to, repeat, and memorize short pieces of dialogues in the target language, focusing on speaking abilities more than anything. This method was revolutionary in that it began to employ the usage of modern technologies and
Figure 6: A German language class in Chandigarh, India, where the instructor utilizes the communicative method. Source: Wikimedia Commons
Figure 7: Professor John Rassias of Dartmouth College, the developer of the Rassias Method. Source: Wikimedia Commons
equipment that increased learning motivation for students under this method. However, this method was heavily criticized as highly ineffective, since the results did not translate outside the classroom (Kuznetsova, 2015). The highly repetitive nature of this process was designed to mimic real-life encounters but became so rigid that students failed to develop communicative competence, or sufficient knowledge of the target language and the ability to apply the language to a variety of situations. The audio-lingual method then led to the “communicative method,” which is by far the most widely adopted method of teaching languages across the world today. This method is in some ways a comprehensive compilation of all the previous methods listed above. Grammar isn’t taught explicitly, but it isn’t completely ignored either. There is a heavy focus on communication within context (rather than the purely direct exposure evident in previous teaching ideologies), in which students are asked to act out a scene or resolve problems on their own, rather than follow the highly structured drills of the audio-lingual method (Kuznetsova, 2015). The curriculum is typically split over everyday topics such as weather, food, and school; new vocabulary pertaining to each unit is taught and subsequently activated through activities that prompt students to communicate self-sufficiently. And unlike any of the methods mentioned before, all four disciplines of language proficiency are emphasized through this method.
Different schools have also implemented different methods of teaching to help improve the efficacy of foreign language education. For example, Dartmouth College employs the “Rassias Method” alongside its in-person classes to help reinforce speaking competency. This method was developed in response to the widespread criticism of the audio-lingual method’s efficacy and mimicked the audio-lingual method in a more automatic fashion that helped set students up for communicative competence. Rather than strictly reciting after an instructor as in the audio-lingual approach, the Rassias method is unique in that it injects spontaneity into the phrases that students create by having each student produce phrases with functional portions changed. For example, instead of students repeating a base phrase “I live in the first house on the road” as the audio-lingual method would enforce, one student would say “I live in the second apartment on the road,” while the next student might say “I live in the third townhouse on the road,” with the instructor supplying the specific functional portion right before the student spoke. This variety in the spoken phrases promoted independent creation of speech and quick production of language, resulting in greater communicative competence over time (Luplow, 1982). To this day, the Rassias method is still employed across all foreign language departments at Dartmouth College, and remains a core component of its foreign language education.
With the rise of apps such as Duolingo and Babbel and other language platforms like Anki and Clozemaster, language learning is becoming more and more accessible by the day. These platforms all rely on the notion of spaced repetition, in which students are asked to recall specific concepts or vocabulary at gradually lengthening time intervals. With such a repetitive pattern, each time a concept or vocabulary item is successfully recalled, it is retained for a longer period of time, eventually becoming fully ingrained in memory (Kang, 2016). These platforms, albeit widely accessible, aren’t always the most effective for students, as learners have to be self-disciplined and motivated in order to achieve language mastery – the same qualities that explain why first language acquisition is so widely successful. However, in order to truly optimize and choose the best method of language education, it really depends on the student. Many students don’t have the time or resources to engage in full immersion and trigger language acquisition right off the bat. Language learning, albeit more tedious and arguably less effective than language acquisition in developing communicative competence, is a lot more structured and guided, which makes it a more suitable choice for most students. However, if an individual really wants to achieve high oral competency in a limited amount of time, then language acquisition in a foreign environment is the better path. But even the decision to elect either language learning or language acquisition leads to further personalization of which methods to survey, ultimately making it impossible to generalize one method as the most optimal for foreign language mastery.
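The gradually lengthening review intervals described above can be sketched in a few lines. This is a minimal illustration with made-up parameters (the one-day reset and the doubling factor are assumptions for the sketch); real platforms such as Anki use more elaborate schedulers.

```python
# Minimal spaced-repetition scheduler: each successful recall lengthens
# the gap before the next review, so material is revisited just as it
# starts to fade; a failed recall resets the item to a short interval.

def next_interval(previous_interval_days: float, recalled: bool,
                  growth_factor: float = 2.0) -> float:
    """Return the number of days until the next review of an item."""
    if recalled:
        return previous_interval_days * growth_factor
    return 1.0  # start over with a one-day gap

# Simulate reviews of one vocabulary item: three successes, one lapse,
# then a recovery.
interval = 1.0
history = []
for success in [True, True, True, False, True]:
    interval = next_interval(interval, success)
    history.append(interval)

print(history)  # [2.0, 4.0, 8.0, 1.0, 2.0]
```

The widening gaps (2, 4, 8 days) are what make the method efficient: review effort concentrates on material that is close to being forgotten rather than on material already well retained.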
Conclusion While current knowledge of language processing remains incomplete, new neurolinguistic studies continue to broaden the collective understanding of human language learning and acquisition processes – exemplified by the recent progression from the Wernicke-Geschwind model toward the Hickok-Poeppel dual-stream model. And alongside the expanding neurological understanding of language, new discoveries in the cognitive health benefits of multilingualism and in the pedagogy behind different foreign language education methods and resources mean that language learning and secondary language acquisition are becoming more necessary and efficient than ever.
References
Alladi, S., Bak, T. H., Duggirala, V., Surampudi, B., Shailaja, M., Shukla, A. K., . . . Kaul, S. (2013). Bilingualism delays age at onset of dementia, independent of education and immigration status. Neurology, 81(22), 1938-1944. https://doi.org/10.1212/01.wnl.0000436620.33155.a4
Ausubel, D. P. (1964). Adults Versus Children in Second-Language Learning: Psychological Considerations. The Modern Language Journal, 48(7), 420-424. https://doi.org/10.1111/j.1540-4781.1964.tb04523.x
Barbeau, E. B., Chai, X. J., Chen, J., Soles, J., Berken, J., Baum, S., . . . Klein, D. (2017). The role of the left inferior parietal lobule in second language learning: An intensive language training fMRI study. Neuropsychologia, 98, 169-176. https://doi.org/10.1016/j.neuropsychologia.2016.10.003
Cannon, H. M., Feinstein, A. H., & Friesen, D. P. (2010). Managing Complexity: Applying the Conscious-Competence Model to Experiential Learning. Developments in Business Simulation and Experiential Learning: Proceedings of the Annual ABSEL Conference, 37. https://absel-ojs-ttu.tdl.org/absel/index.php/absel/article/view/306
Cenoz, J. (2011). The influence of bilingualism on third language acquisition: Focus on multilingualism. Language Teaching, 46(1), 71-86. https://doi.org/10.1017/s0261444811000218
Diamond, J. (2010). The Benefits of Multilingualism. Science, 330(6002), 332-333. https://doi.org/10.1126/science.1195067
Dronkers, N. F., Wilkins, D. P., Van Valin, R. D., Redfern, B. B., & Jaeger, J. J. (2004). Lesion analysis of the brain areas involved in language comprehension. Cognition, 92(1), 145–177. https://doi.org/10.1016/j.cognition.2003.11.002
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402. https://doi.org/10.1038/nrn2113
Hoque, M. (2017). An Introduction to the Second Language Acquisition (pp. 1–23).
Hussain, I. (2017). Distinction Between Language Acquisition and Language Learning: A Comparative Study. Journal of Literature, Languages and Linguistics, 39.
Kang, S. H. (2016). Spaced Repetition Promotes Efficient and Effective Learning. Policy Insights from the Behavioral and Brain Sciences, 3(1), 12-19. https://doi.org/10.1177/2372732215624708
Kim, K. H., Relkin, N. R., Lee, K., & Hirsch, J. (1997). Distinct cortical areas associated with native and second languages. Nature, 388(6638), 171-174. https://doi.org/10.1038/40623
Kuznetsova, E. M. (2015). Evolution of Foreign Language Teaching Methods. Mediterranean Journal of Social Sciences. https://doi.org/10.5901/mjss.2015.v6n6s1p246
Luplow, C. A. (1982). The Rassias Method and the Teaching of Russian. The Slavic and East European Journal, 26(2), 216. https://doi.org/10.2307/308090
McLaughlin, B. (1992). Myths and Misconceptions about Second Language Learning: What Every Teacher Needs to Unlearn. Educational Practice Report 5. Distributed by ERIC Clearinghouse.
Quinteros Baumgart, C., & Billick, S. B. (2018). Positive Cognitive Effects of Bilingualism and Multilingualism on Cerebral Function: A Review. Psychiatric Quarterly, 89(2), 273–283. https://doi.org/10.1007/s11126-017-9532-9
Rüschemeyer, S., Fiebach, C. J., Kempe, V., & Friederici, A. D. (2005). Processing lexical semantic and syntactic information in first and second language: FMRI evidence from German and Russian. Human Brain Mapping, 25(2), 266-286. https://doi.org/10.1002/hbm.20098
Schwartz, M. F., Kimberg, D. Y., Walker, G. M., Faseyitan, O., Brecher, A., Dell, G. S., & Coslett, H. B. (2009). Anterior temporal involvement in semantic word retrieval: Voxel-based lesion-symptom mapping evidence from aphasia. Brain, 132(12), 3411–3427. https://doi.org/10.1093/brain/awp284
Tremblay, P., & Dick, A. S. (2016). Broca and Wernicke are dead, or moving past the classic model of language neurobiology. Brain and Language, 162, 60–71. https://doi.org/10.1016/j.bandl.2016.08.004
Ullman, M. T. (2001). A neurocognitive perspective on language: The declarative/procedural model. Nature Reviews Neuroscience, 2(10), 717–726. https://doi.org/10.1038/35094573
The Role of Phagocytosis in Intracerebral Hemorrhage: An Avenue for Novel Treatments and Better Patient Outcomes BY VAISHNAVI KATRAGADDA '24 Cover: Intracerebral hemorrhages are located within the brain parenchyma, leading to excessive swelling in surrounding areas during secondary injury. Despite the extensive damage occurring during hemorrhages, there are very few effective treatments to improve patient outcome. Source: Wikimedia Commons
Introduction Intracerebral hemorrhage (ICH) occurs when blood vessels within the brain parenchyma (functional tissue of the brain) rupture, resulting in a bleed that puts pressure on surrounding tissues. ICH accounts for 10-15% of all stroke subtypes, with an associated one-year mortality rate greater than 50-60%. It also has the highest mortality of all stroke subtypes, as roughly 40,000 Americans die annually from the illness (Zhao et al., 2008; Zhao et al., 2015). The incidence of ICH greatly increases with age; while the rate is 0.0059% in those aged 35-54 years old, it increases 30-fold, to 0.18%, for those aged 75-94 (An et al., 2017). Mortality is also higher for the elderly, with 30-50% of elderly ICH patients dying (Gao et al., 2018). Many related traumatic brain injuries (TBI) have been shown to result in cognitive impairments, dementia, damaged grey matter, and neuronal death, among other long-term effects. Nearly 60% of ICH patients experience these long-term neurological impairments in addition to other risk factors such as hypertension (Wang et al., 2013; Gao et al., 2018). However, there are not many effective treatments for ICH. As a result, research groups are beginning to look at stages of the injury process to determine targets for potential treatments. One of the main focuses in developing novel treatments is phagocytosis (the engulfing of cellular debris by macrophages and microglia), a process affected by a variety of factors in the cellular environment and by a variety of medications. The mechanism by which some of these medications act, notably glibenclamide, is unclear, preventing them from being approved for use. More research is needed to enhance the potential of these novel compounds as treatments.
ICH Injury Process During ICH injury, the most damage occurs within the first few hours. This primary
injury is known as ‘mass effect,’ as the hemorrhage compresses surrounding brain tissue. Secondary injury is characterized by intraparenchymal blood (blood within the functional tissue of the brain). In these injuries, extravasated red blood cells (RBCs) undergo hemolysis, breaking down and releasing products like hemoglobin, heme, and iron into the brain. This process results in a cytotoxic effect as these products generate free radicals that create an oxidized environment (Zhao et al., 2008; Zhao et al., 2015). These free radicals, known as reactive oxygen species (ROS) and reactive nitrogen species (RNS), are formed in amounts higher than the antioxidants within the brain can handle, leading to neuronal cell loss, grey matter damage, vascular injury, blood-brain barrier (BBB) disruption, and edema (swelling due to excess watery fluid) due to the oxidation of cellular proteins and structures within the brain by ROS and RNS (Zhou et al., 2018; Zhao et al., 2008; Zhao et al., 2015). Secondary injury can similarly impair axons, leading to demyelination (Wang et al., 2013). As these oxidative stresses are caused by the free radicals produced through the breakdown of excess neurotoxic blood, a larger ICH bleed results in greater damage to the brain (Zhao et al., 2015). The hemolysis of RBCs is mediated by activation of the membrane attack complex (MAC), which forms a protein complex on cells targeted for death. As these cells begin the process of degradation, the ROS and RNS released during the process increase and oxidize the brain environment. As MAC levels increase and begin accumulating at high levels between days 3 and 7, bleed resolution begins (Cao et al., 2016). As blood cells continue to lyse within the brain, microglia (macrophages native to the brain) and other immune system macrophages infiltrate the brain to remove debris and cells tagged by the MAC (Zhao et al., 2015; Zhao et al., 2007).
At that point, macrophages and microglia begin to release effector molecules (regulatory molecules) and chemotactic factors (molecules that attract microglia and macrophages to the injury site) to resolve the bleed, leading to restoration and neuronal sprouting (the generation of new neuronal connections) due to the beneficial effects of inflammation (Wang et al., 2013; Zhao et al., 2007). However, despite its benefits, the process leads to excess damage as well, as macrophage and microglia phagocytosis results in the phagocytosed RBCs breaking down, generating additional quantities of ROS
that add to the ROS and RNS already formed by hemolysis of free-floating RBCs within the brain parenchyma. As the phagocytosis process further increases oxidative stress upon the brain, it is important to regulate macrophage and microglia activity in promoting a healthy injury resolution process (Zhao et al., 2007; Zhao et al., 2015).
Present Treatments While the ICH injury process presents a great danger to many patients worldwide, the currently approved treatments are risky or ineffective. The only Food and Drug Administration (FDA) approved pharmaceutical product at present is recombinant tissue plasminogen activator (rtPA) (King et al., 2018; Simard et al., 2014). The drug rtPA is a thrombolytic, meaning that it works by breaking up blood clots through the conversion of plasminogen to plasmin, the primary enzyme involved in dissolving blood clots that develop in the brain to cause strokes (Jilani and Siddiqui et al., 2020). However, less than 20% of all stroke patients receive this treatment due to a variety of reasons and side effects. In many instances, symptomatic intracerebral hemorrhage (ICH that worsens or develops additional stroke symptoms as a side effect of the drug) may occur due to rtPA treatment, amplified by risk factors like age, blood pressure, cardiac risk factors, sex, weight, and race (Mandava et al., 2013; Miller et al., 2011). Other treatments attempt to manage swelling with drugs like mannitol or hypertonic saline; however, these treatment methods have not been proven successful. Another option is decompressive craniectomy, but this option is highly invasive and has high morbidity, as it involves the removal of a large bone flap and the opening of the dura in an attempt to control swelling and the raised intracranial pressure that results from that swelling (Kolias et al., 2018; King et al., 2018). Several clinical trials have even shown that decompressive craniectomy might result in several other dangers. For instance, the RESCUEicp trial showed that while decompressive craniectomy had a lower mortality than the control, the surgery still resulted in higher numbers of individuals being left in a vegetative state (Kolias et al., 2018).
As these methods are not clinically proven or effective, an important research field within the study of ICH is finding novel treatments. Many research groups are looking to safely upregulate macrophage and microglia activity as a way to improve patient outcome, as their activity is vital in cleaning up post-ICH via the
removal of dangerous cellular debris. However, some macrophage phenotypes have been shown to result in further oxidative stress (Zhao et al., 2007). Due to this dual positive-negative effect, many groups are attempting to find novel drugs that upregulate macrophages without increasing oxidative stress upon the brain by controlling the various phenotypes expressed by macrophages.
Figure 1: During a decompressive craniectomy, the dura is opened in a highly invasive surgery in order to limit swelling on the brain due to secondary injury during ICH. Source: Wikimedia Commons
The Role of Macrophages/Microglia in ICH Cleanup and their Phenotypes Macrophages and microglia infiltrate the site of injury, increasing steadily in number for the first seven days and subsiding over the course of one to two weeks post-ICH. While the cells generally infiltrate damaged tissue, they have also been known to proliferate locally. In the phagocytosis process, macrophages and microglia engulf cellular debris and break it down within vesicular structures (Zhao et al., 2015).
Figure 2: RBCs are phagocytosed by macrophages and are broken down internally. The released products result in the generation of species like ROS, which further damage the brain when released back into the brain parenchyma. Source: Wikimedia Commons
Macrophages/microglia are seen as plastic cells because of their ability to change phenotypes from the initial resting state (M0 macrophages) and respond dynamically to the injury environment (Wang et al., 2013). Two active phenotypes can be present at the injury site – the transient M2 phenotype and the sustained M1 phenotype – each serving a different purpose and characterized by different markers. M1 macrophages have been described as elongated with a spindle shape, while M2 macrophages are characterized by flattened, rounded shapes with elongated filopodia (Li et al., 2016). While both macrophage types have phagocytic activity, M2 macrophages have been described as healthier than M1 macrophages in various experiments, as they persist longer in their activity and do not produce as many ROS (Wang et al., 2013). Typically, M2 macrophages are expressed immediately after TBI; however, they are eventually replaced by the unhealthier
M1 macrophages. Major M1 markers include genes such as iNOS (inducible nitric oxide synthase; creates nitric oxide, an inflammatory mediator generated in phagocytosis), CD11b (involved in pathogen recognition), CD32 (helps regulate phagocytosis), CD86 (causes activation of the immune system), and CD16 (a receptor that recognizes debris to be removed), which increase from day 3 and peak between days 5 and 7 post-ICH. M2 markers include CD206 (a surface protein characterizing M2 macrophages), Arg1 (promotes wound healing), CD163 (a receptor involved in causing local inflammation), and transforming growth factor-β (involved in cell migration and proliferation at the injury site), which are induced 1 to 3 days post-TBI and peak at 3 to 5 days (Cao et al., 2016; Wang et al., 2013). Each of these associated genes and markers helps activate the macrophages and cause their proliferation, resulting in either M1- or M2-type activity, which then affects the rate at which injury resolution occurs. Other important markers of microglia and macrophages include scavenger receptors such as CD47, an integrin-associated protein expressed on RBCs and other cells targeted by phagocytosis, with high expression 1 day post-ICH that decreases by day 3. An increased level of CD47 produces a signal that protects RBCs, preventing macrophages from phagocytosing them. CD163 is another scavenger receptor that mediates RBC phagocytosis by transporting hemoglobin into microglia and macrophages with heme oxygenase-1 (HO-1). By day 3, CD163- and HO-1-positive macrophages and microglia infiltrate the hemorrhagic area. By this time, CD47 is reduced, diminishing the
Figure 3: Microglia, the resident brain macrophages, migrate to the point of inflammation after a bleed. Cells begin to show elongated processes upon activation and express phagocytic activity. (Image scale: 50 microns) Source: Wikimedia Commons
protective signal on RBCs, allowing CD163- and HO-1-positive cells to phagocytose the RBCs (Cao et al., 2016). M1/M2 Polarization Both macrophage phenotypes have been shown to result in different cellular effects. M1 macrophages exacerbate oligodendrocyte (cells that produce myelin in the brain and spinal cord) death caused by the oxygen-glucose deprivation that occurs in ICH, worsening nerve cell damage, releasing ROS and cytokines, and reducing neurotrophic factors that help with neuronal growth. M1 cells were also found to have less phagocytic activity than their M2 counterparts. Despite the opposing effects, 3 days post-ICH, the majority of macrophages found in an untreated environment were M1, resulting in the secondary injury characterized by excess ROS from hemolysis of blood and phagocytosis (Wang et al., 2013). Healthier M2 macrophages, on the other hand, have increased activity and reduce inflammatory mediators, further protecting the central nervous system (CNS) from damage. While macrophages mainly polarize into one of the phenotypes, the polarization is not concrete; macrophage phenotypes have been shown to exist on a spectrum, with intermediate phenotypes being induced in different environments (Wang et al., 2013).
Finding treatments to promote M2 macrophages is an important area of study for treating ICH. Scientists are attempting to remove debris while limiting the ROS and further stresses produced by M1 macrophages. Studies have shown that alterations in the resting potential of the neuronal membrane (Vmem) have an impact on the differentiation of macrophages. By blocking KATP channels (ATP-sensitive potassium channels) on macrophages and microglia and depolarizing the cell membrane, M2 phagocytic markers increase. Other stimuli have been shown to polarize macrophages/microglia as well. M1 macrophages are differentiated upon exposure to stimuli like lipopolysaccharide (LPS) and interferon-γ, while M2 macrophages are polarized upon exposure to stimuli like IL-4 or IL-13 (Li et al., 2016). These cytokines activate a macrophage by increasing the cell’s oxidative metabolism in order to cause differentiation. However, M1 and M2 macrophages are affected by different stimuli, since M1 macrophages are more pro-inflammatory while M2 macrophages are more anti-inflammatory. Thus, either pro- or anti-inflammatory cytokines can be used to skew macrophage differentiation toward either the M1 or M2 phenotype (Celik et al., 2020).
Figure 4: As ions flow in and out of the cell, the cellular membrane potential (Vmem) changes, leading to macrophage differentiation. As KATP channels are blocked, potassium ions can no longer leave the intracellular environment, leading to membrane depolarization and an increase in M2 macrophage markers. Source: Wikimedia Commons
Pathways Affecting Phagocytosis
Several pathways have been shown to upregulate or downregulate genes that control macrophage action. One such pathway involves the activation of the transcription factor peroxisome proliferator-activated receptor gamma (PPAR-γ). This transcription factor acts as a mediator of cellular defense, activating the brain's scavenger system. It also promotes antioxidative genes, such as catalase and superoxide dismutase, which help protect the brain, as well as the macrophages and microglia themselves, against secondary injury and oxidative stress (Zhao et al., 2008). In addition, PPAR-γ has been shown to control CD36, a class B scavenger receptor (a receptor that helps cells identify unwanted debris within the brain) associated with M2 macrophages and microglia and vital in removing oxidatively damaged cells and cell fragments (Zhao et al., 2008; Pennathur et al., 2015; Zhao et al., 2015). Without CD36 expression, phagocytosis decreased by 56%, and inhibiting PPAR-γ, which upregulates CD36, resulted in a 19.6% increase in neuronal damage (Zhao et al., 2007). Therefore, a treatment that increases CD36 expression by upregulating PPAR-γ could allow for greater ICH resolution and offer an increased neuroprotective effect. Another important pathway is the Keap1-Nrf2 pathway; activated by electrophiles and pro-oxidants, it allows macrophages and microglia to better combat ROS by activating antioxidative genes. The pathway also decreases the activation of NF-κB, a transcription factor that drives M1 macrophage activation, the unhealthier of the two phenotypes. In in vitro experiments, activating this pathway enhanced M2 macrophage phagocytosis and decreased ROS generation, specifically of the reactive molecule H2O2. Nrf2 was also found to be a transcriptional regulator of the CD36 gene. Therefore, activating Nrf2 allowed for better neurological outcomes, as it increased CD36, a marker that stimulates M2 macrophages, and was associated with an increase in antioxidant enzymes (Zhao et al., 2015). Other important pathways include the SOCS3 pathway, which plays a role in M1/M2 differentiation. Suppressing the SOCS3 gene lowered M1 marker levels while increasing M2 macrophage levels, a shift beneficial in treating ICH, suggesting that the gene typically drives M1 polarization (Ji et al., 2020). The JAK1/STAT6 pathway has also been investigated; it mediates M2 macrophage polarization after exposure to IL-4, one of the stimulants shown to activate M2 macrophages (He et al., 2020). By modulating these pathways and increasing M2 markers, ICH debris can be cleared efficiently while limiting oxidative damage to the brain. Novel treatments are currently being studied for their ability to upregulate some of these pathways.
SUR1 and KATP Channels
One of the many promising drugs being studied, glibenclamide, has been shown to improve ICH outcome. However, the mechanism by which the treatment works is unclear; it is possible that the drug affects macrophage and microglia polarization through its effect on the SUR1 channels expressed in the brain. The SUR1 receptor is encoded by the Abcc8 gene and forms ion channels with pore-forming proteins, including TRPM4 (a protein channel that allows the influx of cations into cells). The SUR1 receptor contains two nucleotide-binding domains and has a high affinity for sulfonylurea drugs such as glibenclamide. Acting as a regulatory subunit, SUR1 modulates the open and closed states of the ion channels it forms (Simard et al., 2012). In most CNS cells, SUR1 is upregulated in times of ICH injury. TRPM4 is upregulated as well, and the two proteins colocalize to form the SUR1-TRPM4 channel (Simard et al., 2012). SUR1 upregulation is significant in the perihematomal tissue (the tissue surrounding the hematoma) 24 to 72 hours after ICH, during the acute phase of the injury (Zhou et al., 2018). Thus, SUR1-TRPM4 expression is associated with hypoxia and other forms of injury (Khanna et al., 2014). TRPM4 regulates calcium and sodium influx, acting as a nonselective cation channel and modifying the cellular membrane potential (Simard et al., 2012). As the channel opens, it admits monovalent cations (ions carrying a single positive charge), allowing excess Na+ to flow in, followed by Cl- and H2O to balance the charges (Zhou et al., 2018; Tosun et al., 2013). This results in oncotic cell swelling and cytotoxic effects, which lead to necrotic death (Simard et al., 2012; Tosun et al., 2013). Upregulation of SUR1-TRPM4 channels also typically increases NO and other free radicals, which further agitate the brain (Jha et al., 2020). Accordingly, blocking the SUR1-TRPM4 channel has been shown to reduce ICH mortality by almost 50% (Khanna et al., 2014). Glibenclamide has been shown to reduce levels of microglial NOS2 (nitric oxide synthase 2) and NO (nitric oxide), both of which impose oxidative stresses on the brain, by inhibiting SUR1 channels, creating a protective effect in instances of TBI (Jha et al., 2020). In some neurons, SUR1 is also constitutively expressed as part of KATP channels, where it binds the pore-forming subunit to regulate the channel (Simard et al., 2012). During ICH, intracellular ATP decreases, resulting in the prolonged opening of KATP channels, depolarizing the cell membrane and leading to cell death (King et al., 2018). It has been suggested that glibenclamide binds both SUR1-TRPM4 channels and KATP channels, making it a potential treatment for targeting macrophage polarization; however, SUR1-TRPM4 blockade is the more critical target in treating swelling (Woo et al., 2020).
Glibenclamide
In targeting the SUR1-TRPM4 and KATP channels, the sulfonylurea drug glibenclamide has been shown to have a neuroprotective effect in the CNS post-ICH (Xu et al., 2019). However, the mechanism by which the drug works has yet to be elucidated. Several models and studies have shown its beneficial effects in blocking SUR1 and reducing neuronal cell death. Others have shown that glibenclamide can induce M2 polarization by blocking SUR1 and modifying cell potential, and retrospective studies and clinical trials have produced beneficial results as well. Yet a specific, proven mechanism remains unclear. As a weak acid, the drug is protonated at lower pH and is therefore more potent in acidic environments: at a pH of 6.8 or lower, glibenclamide has been shown to have an 8-fold increase in potency compared to neutral pH (King et al., 2018; Jha et al., 2020). The protonated state has greater solubility in lipids, allowing the drug to infiltrate cellular membranes and bind to its target proteins (King et al., 2018). During stroke, carbon dioxide accumulates in the brain, lowering pH and amplifying the protective effect of glibenclamide in ICH-type environments (Orlowski et al., 2011). Extensive research has also established glibenclamide's safety, as the drug has been used for several decades to treat type II diabetes (Khanna et al., 2014). Studies have also demonstrated that the drug remains localized within the brain, contained by the blood-brain barrier (BBB), making the treatment safe for the brain and body (King et al., 2018).
Figure 5: Glibenclamide has been used extensively in treating diabetes. Its structure and mechanism in diabetes have been studied extensively, meaning the drug has a high safety rating and is a promising candidate for treating ICH. Source: Wikimedia Commons
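The pH dependence described above follows the standard weak-acid behavior captured by the Henderson-Hasselbalch equation: as pH falls, a larger fraction of the drug is in its neutral, protonated, lipid-soluble form. The sketch below is purely illustrative; the pKa used is an assumed placeholder, not a measured constant for glibenclamide, and no attempt is made to reproduce the reported 8-fold potency figure.

```python
# Henderson-Hasselbalch sketch of why a weak acid becomes more
# membrane-permeant at lower pH. The default pKa is an assumed,
# illustrative value, not a measured constant for glibenclamide.

def protonated_fraction(ph: float, pka: float = 6.3) -> float:
    """Fraction of a weak acid in its neutral (protonated, lipid-soluble) form."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (7.4, 6.8, 6.0):
    print(f"pH {ph}: {protonated_fraction(ph):.0%} protonated")
# the neutral fraction grows as pH drops, consistent with the drug's
# reported gain in potency in the acidic post-stroke environment
```

The exact fractions depend entirely on the assumed pKa; the point is only the direction of the effect, which matches the article's description of increased potency below pH 6.8.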
Once in the brain, glibenclamide targets the SUR1 protein on SUR1-TRPM4 and KATP channels (Xu et al., 2019). The drug has a greater affinity for SUR1 than for SUR2, and it plays an important role in blocking this specific channel and affecting neuronal polarization (Khanna et al., 2014). As upregulation of KATP and SUR1-TRPM4 results in cellular blebbing (the outward bulging of the cell membrane that typically occurs during apoptosis) and swelling through increased ionic influx, blocking the channel reduces neuronal apoptosis (Woo et al., 2020). In this way, glibenclamide offers a protective effect on neurons in the ICH injury site.
Macrophages as a Potential Target
Glibenclamide has been found to affect macrophage polarization by changing the cell's Vmem through its action on the KATP and SUR1-TRPM4 channels. As glibenclamide blocks the cation-permeable KATP channels, the membrane depolarizes, resulting in the activation of M2 phagocytic markers (Li et al., 2016). With a prepolarization treatment of 20 μM glibenclamide, macrophages showed an increase in M2 markers such as CD206 (a surface protein characterizing M2 macrophages). Additionally, dangerous markers like TNF-α, a cytokine that leads to apoptosis, were decreased, demonstrating the drug's benefit in reducing harmful inflammatory responses while promoting the more active and healthier M2 macrophage phenotype (Idriss et al., 2000; Li et al., 2016).
The treatment worked on already-polarized macrophages as well. M0, M1, and M2 macrophages were polarized for 18 hours and then treated with 20 μM glibenclamide. Glibenclamide downregulated gene expression of M1 markers and upregulated CD206 in all three treatment groups, suggesting that glibenclamide can alter macrophage and microglia polarization regardless of their present state (Li et al., 2016). Other studies have shown that glibenclamide reduces the expression of dangerous inflammatory cytokines associated with M1 macrophages, reducing edema and inflammation by 80% within 3 days after ICH (Xu et al., 2019). Another study showed that glibenclamide reduces the ROS produced by M1 macrophages, inhibiting iNOS, an M1 phagocytic marker, and improving motor and sensory function (Zhou et al., 2018). Other novel drugs currently under study have been shown to ameliorate ICH by targeting certain genes to activate macrophages and microglia, suggesting that glibenclamide could operate through a similar mechanism. One study focused on the Keap1-Nrf2 pathway discussed above, showing that inhibiting the pathway decreased phagocytosis; this suggests that a drug that increases phagocytosis does so by activating a relevant marker or gene, hinting at the possibility that glibenclamide may do the same (Zhao et al., 2015). Another drug, rosiglitazone, has been shown to work through this kind of mechanism, activating the PPAR-γ transcription factor and increasing phagocytosis (Zhao et al., 2007). Although studies have not yet elucidated the genes glibenclamide may upregulate to increase M2 macrophage activity, the possibility should be studied further.
Retrospective Studies and Clinical Trials
Retrospective studies on glibenclamide treatment for ICH have shown that type II diabetic patients experiencing stroke while on glibenclamide or another sulfonylurea fared better, with fewer deaths and lower rates of hemorrhagic transformation. A phase IIa clinical trial of glibenclamide in edema and stroke showed that malignant edema occurred in only 20% of patients treated with the drug (Khanna et al., 2014).
Other retrospective studies similarly showed that 36% of diabetic stroke patients being treated with sulfonylureas such as glibenclamide had a 4-point decrease on the National Institutes of Health Stroke Scale (a scale in which points are assigned based on the severity of symptoms such as motor deficits, language deficits, or visual field defects), indicating a better neurological outcome (Lyden, 2017; King et al., 2018). These patients also achieved a Modified Rankin Score (a scale from 0 to 6 characterizing disability in stroke patients, with 0 representing no symptoms at all and 6 representing death) of ≤2 in 82% of cases versus 57% of controls, suggesting beneficial outcomes with glibenclamide treatment (Banks & Marotta, 2007; King et al., 2018). Studies have also examined how glibenclamide interacts with rtPA, the sole therapeutic approved for ICH. One study showed that glibenclamide and rtPA act independently of each other, resulting in a synergistic effect: while rtPA retains its thrombolytic effect, promoting clot lysis, glibenclamide prevents neuronal apoptosis and improves clearance, allowing the two drugs to work together and greatly improve outcome (King et al., 2018). Another study showed that while rtPA treatment alone resulted in equally poor mortality among the tested rats, adding glibenclamide significantly reduced mortality and improved neurological scores (Simard et al., 2014). As glibenclamide does not inhibit rtPA from acting on an ICH patient, the combined effect of both drugs could prove a valuable treatment.
Competitor Drugs and Treatments
While studies are being conducted to elucidate the mechanism by which glibenclamide works, other studies are investigating additional drugs that upregulate phagocytosis. One such drug, rosiglitazone, along with pioglitazone (an FDA-approved anti-diabetic), has been shown to increase expression of CD36, a gene that increases M2 macrophage activity, resulting in increased RBC phagocytosis. Rosiglitazone reduced free hemoglobin in the brain, leading to reduced functional deficits. These results were obtained with a treatment given 24 hours after ICH induction and then once a day for 14 days (Zhao et al., 2008).
Another study looked at rosiglitazone in pretreatment and posttreatment groups. Within 24 hours of an intraperitoneal treatment, ICH volume was reduced by 28.3%, and a 14-day treatment augmented blood absorption 2.3-fold. Another drug tested in the study, 15d-PGJ2, also reduced H2O2, a common ROS produced by M1 macrophages that is associated with increased oxidative damage to neural cells. With a pretreatment given 30 minutes beforehand, rosiglitazone reduced neurological functional deficits and neuronal damage (Zhao et al., 2007). However, this additional benefit required pretreatment with the drug, which is not always plausible in cases of ICH. Another set of drugs, sulforaphane and tert-butylhydroquinone (tBHQ), were used to treat induced ICH in rat and mouse models. Both drugs decreased oxidants and upregulated phagocytosis, suggesting a dual beneficial effect. Sulforaphane additionally increased the expression of other targets, such as superoxide dismutase-1, glutathione S-transferase, catalase, NAD(P)H dehydrogenase quinone 1, and HO-1, which assist in alleviating and metabolizing oxidative stresses. CD36 was effectively induced within only 6 hours of ICH, faster than the 16-hour time period required for phagocytosis. The induced hematomas were significantly decreased by day 10 as well (Zhao et al., 2015).
Deferoxamine is yet another drug with potential effects in treating ICH. An iron chelator, deferoxamine binds excess iron, reducing edema, white matter injury, and neuronal cell death by lowering free iron levels in the brain parenchyma. Positive effects were seen with experimental treatments in piglets given 2 hours after ICH and every 12 hours thereafter for ≤7 days. Lower levels of CD47 were found as well (Cao et al., 2016). As with the rosiglitazone study, the treatment times used for deferoxamine in this experiment could limit the efficacy of the drug; ICH patients would need to be treated within just two hours of injury. Finally, another type of therapy being investigated relies on stem cells of various types, such as hematopoietic and induced pluripotent stem cells. This treatment is novel, however, and has shown a variety of negative effects. A growing number of experiments in animal models and clinical trials have shown that stem cells can ameliorate tissue damage and promote functional recovery post-ICH, as well as replace destroyed nerve cells and tissues. On the other hand, some studies have reported side effects such as immune rejection, tumorigenicity, and instability as cell clusters form, along with seizures, infections, and hyperpyrexia (Gao et al., 2018). All of these treatments and drugs must be studied further, in both preclinical and clinical settings.
Conclusion
As ICH mortality is dangerously high, it is imperative that novel treatments be studied to improve patient outcomes. With multiple phases of injury, untreated ICH presents a grave danger. Secondary injury is especially important to study, as the cells driving it both help remove debris and contribute to further damage through the release of ROS. The macrophages and microglia that control secondary injury have been studied extensively, showing that M2 macrophages are effective in removing debris while limiting the release of cytokines that further stress the CNS (Wang et al., 2013). Macrophage polarization into the M1 or M2 phenotype can be mediated by the cellular environment, for example by changes in the resting cellular membrane potential. Drugs like glibenclamide change Vmem by targeting SUR1-TRPM4 and KATP channels, reducing the rapid depolarization caused by the upregulated channels in their prolonged open states (Li et al., 2016). However, the mechanism by which glibenclamide targets macrophages is unclear; there is some evidence that it owes to the blockage of SUR1 and the upregulation of M2 macrophages, but the process must be studied in more detail (Li et al., 2016; Khanna et al., 2014). While other drugs are being studied simultaneously, glibenclamide is a valuable treatment to continue exploring, supported by the extensive literature demonstrating its beneficial outcomes. More studies are needed, however, to elucidate the mechanism through which the treatment works; an FDA-approved ICH treatment could drastically decrease present mortality rates and save countless lives.
References
An, S. J., Kim, T. J., Yoon, B. W. (2017). Epidemiology, risk factors, and clinical features of intracerebral hemorrhage: an update. Journal of Stroke, 19(1), 3-10. https://doi.org/10.5853/jos.2016.00864
Banks, J. L., Marotta, C. A. (2007). Outcomes validity and reliability of the Modified Rankin Scale: implications for stroke clinical trials. Stroke, 38(3), 1091-1096. https://doi.org/10.1161/01.STR.0000258355.23810.c6
Cao, S., Zheng, M., Hua, Y., Chen, G., Keep, R. F., Xi, G. (2016). Hematoma changes during clot resolution after experimental intracerebral hemorrhage. Stroke, 47(6), 1626-1631. https://doi.org/10.1161/STROKEAHA.116.013146
Celik, M. O., Labuz, D., Keye, J., Glauben, R., Machelska, H. (2020). IL-4 induces M2 macrophages to produce sustained analgesia via opioids. JCI Insight, 5(4). https://doi.org/10.1172/jci.insight.133093
Gao, L., Xu, W., Li, T., Chen, J., Shao, A., Yan, F., Chen, G. (2018). Stem cell therapy: a promising therapeutic method for intracerebral hemorrhage. Cell Transplantation, 27(12), 1809-1824. https://doi.org/10.1177/0963689718773363
He, Y., Gao, Y., Zhang, Q., Zhou, G., Cao, F., Yao, S. (2020). IL-4 switches microglia/macrophage M1/M2 polarization and alleviates neurological damage by modulating the JAK1/STAT6 pathway following ICH. Neuroscience, 437, 161-171. https://doi.org/10.1016/j.neuroscience.2020.03.008
Idriss, H. T., Naismith, J. H. (2000). TNF alpha and the TNF receptor superfamily: structure-function relationship(s). Microscopy Research and Technique, 50(3), 184-195. https://doi.org/10.1002/1097-0029(20000801)50:3<184::AID-JEMT2>3.0.CO;2-H
Jha, R. M., Bell, J., Citerio, G., Hemphill, J. C., Kimberly, W. T., Narayan, R. K., Sahuquillo, J., Sheth, K. N., Simard, J. M. (2020). Role of sulfonylurea receptor 1 and glibenclamide in traumatic brain injury: a review of the evidence. International Journal of Molecular Sciences, 21(2). https://doi.org/10.3390/ijms21020409
Ji, X., Shi, Y., Zhang, Y., Chang, M., Zhao, G. (2020). Reducing suppressors of cytokine signaling-3 (SOCS3) expression promotes M2 macrophage polarization and functional recovery after intracerebral hemorrhage. Frontiers in Neurology, 11. https://doi.org/10.3389/fneur.2020.586905
Jilani, T. N., Siddiqui, A. H. (2020). Tissue plasminogen activator. StatPearls. https://www.ncbi.nlm.nih.gov/books/NBK507917/
Khanna, A., Walcott, B. P., Kahle, K. T., Simard, J. M. (2014). Effect of glibenclamide on the prevention of secondary brain injury following ischemic stroke in humans. Neurosurgical Focus, 36(1). https://doi.org/10.3171/2013.10.FOCUS13404
King, Z. A., Sheth, K. N., Kimberly, W. T., Simard, J. M. (2018). Profile of intravenous glyburide for the prevention of cerebral edema following large hemispheric infarction: evidence to date. Drug Design, Development and Therapy, 12, 2539-2552. https://doi.org/10.2147/DDDT.S150043
Kolias, A. G., Viaroli, E., Rubiano, A. M., Adams, H., Khan, T., Gupta, D., Adeleye, A., Iaccarino, C., Servadei, F., Devi, B. I., Hutchinson, P. J. (2018). The current status of decompressive craniectomy in traumatic brain injury. Current Trauma Reports, 4, 326-332. https://doi.org/10.1007/s40719-018-0147-x
Li, C., Levin, M., Kaplan, D. L. (2016). Bioelectric modulation of macrophage polarization. Scientific Reports, 6. https://doi.org/10.1038/srep21044
Lyden, P. (2017). Using the National Institutes of Health Stroke Scale. Stroke, 48(2), 513-519. https://doi.org/10.1161/STROKEAHA.116.015434
Mandava, P., Murthy, S. B., Munoz, M., McGuire, D., Simon, R. P., Alexandrov, A. V., Albright, K. C., Boehme, A. K., Martin-Schild, S., Martini, S., Kent, T. A. (2013). Explicit consideration of baseline factors to assess recombinant tissue-type plasminogen activator response with respect to race and sex. Stroke, 44(6), 1525-1531. https://doi.org/10.1161/STROKEAHA.113.001116
Miller, D. J., Simpson, J. R., Silver, B. (2011). Safety of thrombolysis in acute ischemic stroke: a review of complications, risk factors, and newer technologies. Neurohospitalist, 1, 138-147. https://doi.org/10.1177/1941875211408731
Orlowski, P., Chappell, M., Park, C. S., Grau, V., Payne, S. (2011). Modelling of pH dynamics in brain cells after stroke. Interface Focus, 1(3), 408-416. https://doi.org/10.1098/rsfs.2010.0025
Pennathur, S., Pasichynk, K., Bahrami, N. M., Zeng, L., Febbraio, M., Yamaguchi, I., Okamura, D. M. (2015). The macrophage phagocytic receptor CD36 promotes fibrogenic pathways on removal of apoptotic cells during chronic kidney injury. American Journal of Pathology, 185(8), 2232-2245. https://doi.org/10.1016/j.ajpath.2015.04.016
Simard, J. M., Woo, S. K., Schwartzbauer, G. T., Gerzanich, V. (2012). Sulfonylurea receptor 1 in central nervous system injury: a focused review. Journal of Cerebral Blood Flow and Metabolism, 32, 1699-1717. https://doi.org/10.1038/jcbfm.2012.91
Simard, J. M., Sheth, K. N., Kimberly, W. T., Stern, B. J., Zoppo, G. J., Jacobson, S., Gerzanich, V. (2014). Glibenclamide in cerebral ischemia and stroke. Neurocritical Care, 20(2), 319-333. https://doi.org/10.1007/s12028-013-9923-1
Tosun, C., Kurland, D. B., Mehta, R., Castellani, R. J., deJong, J. L., Kwon, M. S., Woo, S. K., Gerzanich, V., Simard, J. M. (2013). Inhibition of the SUR1-TRPM4 channel reduces neuroinflammation and cognitive impairment in subarachnoid hemorrhage. Stroke, 44(12), 3522-3528. https://doi.org/10.1161/STROKEAHA.113.002904
Wang, G., Zhang, J., Hu, X., Zhang, L., Mao, L., Jiang, X., Liou, A. K., Leak, R. K., Gao, Y., Chen, J. (2013). Microglia/macrophage polarization dynamics in white matter after traumatic brain injury. Journal of Cerebral Blood Flow and Metabolism, 33(12), 1864-1874. https://doi.org/10.1038/jcbfm.2013.146
Woo, S. K., Tsymbalyuk, N., Tsymbalyuk, O., Ivanova, S., Gerzanich, V., Simard, J. M. (2020). SUR1-TRPM4 channels, not KATP, mediate brain swelling following cerebral ischemia. Neuroscience Letters, 718. https://doi.org/10.1016/j.neulet.2019.134729
Xu, F., Shen, G., Su, Z., He, Z., Yuan, L. (2019). Glibenclamide ameliorates the disrupted blood-brain barrier in experimental intracerebral hemorrhage by inhibiting the activation of NLRP3 inflammasome. Brain and Behavior, 9. https://doi.org/10.1002/brb3.1254
Zhao, X., Sun, G., Zhang, J., Strong, R., Song, W., Gonzales, N., Grotta, J. C., Aronowski, J. (2007). Hematoma resolution as a target for intracerebral hemorrhage treatment: role for peroxisome proliferator-activated receptor γ in microglia/macrophages. Annals of Neurology, 61(4), 352-362. https://doi.org/10.1002/ana.21097
Zhao, X., Grotta, J., Gonzales, N., Aronowski, J. (2008). Hematoma resolution as a therapeutic target: the role of microglia/macrophages. Stroke, 40(3), 92-94. https://doi.org/10.1161/STROKEAHA.108.533158
Zhao, X., Sun, G., Ting, S., Song, S., Zhang, J., Edwards, N. J., Aronowski, J. (2015). Cleaning up after ICH: the role of Nrf2 in modulating microglia function and hematoma clearance. Journal of Neurochemistry, 133, 144-152. https://doi.org/10.1111/jnc.12974
Zhou, F., Liu, Y., Yang, B., Hu, Z. (2018). Neuroprotective potential of glibenclamide is mediated by antioxidant and anti-apoptotic pathways in intracerebral hemorrhage. Brain Research Bulletin, 142, 18-24. https://doi.org/10.1016/j.brainresbull.2018.06.006
A Closer Look at GM2 Gangliosidoses in Tay-Sachs Disease
BY VALENTINA FERNANDEZ '24
Cover: Schematic representation of a neuronal cell body. Tay-Sachs Disease is characterized by extreme swelling of the lysosomes due to massive accumulations of GM2 ganglioside. Source: Wikimedia Commons, created by Bruce Blaus.
Introduction
Tay-Sachs Disease is a lysosomal storage disease in the family of diseases known as the GM2 gangliosidoses (Ferreira & Gahl, 2017). It is estimated to occur in 1 of every 222,000 live births. Tay-Sachs pathology is characterized by a deficiency in the lysosomal enzyme β-hexosaminidase A and progressive degeneration of the central nervous system resulting from a massive neuronal accumulation of GM2 ganglioside (a lipid), the substrate of the Hex-A enzyme (Ferreira & Gahl, 2017). There are three types of GM2 gangliosidoses: Tay-Sachs Disease (TSD), Sandhoff disease, and GM2 Activator Protein Deficiency. The GM2 gangliosidoses are all caused by mutations in HEXA or HEXB that produce a deficient hexosaminidase enzyme in one of its two forms (isozymes): Hex-A or Hex-B. The enzyme Hex-A is a heterodimer composed of an α and a β subunit, while Hex-B is a homodimer composed of two β subunits. In Tay-Sachs, mutations occur solely in the HEXA gene, disrupting only Hex-A activity (B variant) (Mahuran, 1999). Sandhoff disease is caused by mutations in the HEXB gene, which disrupt both Hex-A and Hex-B activity (O variant) (Mahuran, 1999). Finally, GM2 Activator Protein Deficiency is due to mutations in the GM2A gene (AB variant) (Ferreira & Gahl, 2017). All three diseases are indistinguishable from each other based on symptoms and can only be differentiated through testing (Steiner et al., 2016).
TSD manifests itself in three forms: infantile, juvenile, and late onset, with the infantile form being the most common and the most fatal. Infantile TSD is usually diagnosed during the first four to eight months of a child's life, with visible symptoms appearing by six months and the disease resulting in death between the ages of three and five (Walker, 2007). In contrast, patients with juvenile TSD, also known as the subacute form, develop symptoms later than those with the infantile form and may live until the later stages of childhood or adolescence (Liguori et al., 2016). Finally, the adult form, known as Late-Onset Tay-Sachs Disease (LOTS), is diagnosed anytime between adolescence and a patient's mid-30s (Walker, 2007). In addition, unlike infantile TSD, LOTS and the juvenile form have milder symptoms (Walker, 2007). Because LOTS lacks hallmark symptoms like the cherry-red spot on the eye that characterizes infantile TSD, it is often misdiagnosed as multiple sclerosis or a form of muscular dystrophy, resulting in statistical underestimation of its actual prevalence (Walker, 2007).
A Brief History
What is now known as Tay-Sachs Disease was first described by British ophthalmologist Dr. Warren Tay in 1881, who documented a one-year-old patient who presented with a bright red spot on the retina of his eye, along with impaired motility in his arms and legs and an inability to hold up his head, all unusual observations for a child of that age (Walker, 2007). Six years later, in the other hemisphere of the world, Dr. Bernard Sachs, a neurologist from New York, encountered a patient with the same bright red spot on the retina. After two decades of observing this spot in various patients, Dr. Sachs documented his observations in a 1910 article in the Journal of Experimental Medicine, reporting a "balloon-like swelling of the dendrites" as well as the characteristic red spot (Walker, 2007). His observations solidified the understanding that these unusual symptoms seemed to be hereditary, occurring among children of Jewish families of eastern and central European descent. At the time, however, information on the pathology of Tay-Sachs was purely observational, and the pathophysiological drivers of disease were yet to be discovered.
Figure 1: The characteristic cherry-red spot in the center of the fovea indicative of TSD pathology. Source: Wikimedia Commons, created by Jonathan Trobe, M.D.
In August of 1969, medical school scientists from the University of California at San Diego, Dr. Shintaro Okada and Dr. John O'Brien, discovered that a deficiency in the enzyme Hex-A led to the phenotypes associated with TSD (O'Brien, 1981). Okada and O'Brien worked together on human hexosaminidases using synthetic substrates, with the goal of measuring the amounts of enzyme present in different subjects, some with TSD pathology and some without (O'Brien, 1981). Okada, using gel electrophoresis of both versions of the enzyme, hexosaminidase A and B, found that Hex-A was absent in Tay-Sachs disease tissue (O'Brien, 1981). Ultimately, their discovery facilitated the first prevention program through mass carrier screening, which has proven to be one of the most significant measures taken to diminish the presence of TSD in the general population (O'Brien, 1981).
Prevalence

After the implementation of mass carrier screening programs, the prevalence of TSD has diminished: the disease is now estimated to occur in 1 of every 222,000 live births in the general population, with 1 in every 250 people being a carrier (Meikle, 1999). TSD affects males and females equally, but disproportionately affects Jewish individuals of eastern and central European descent; approximately 1 in every 27 people of Jewish heritage in the United States is a carrier (Walker, 2007). There are three distinct subgroups of the Jewish population: Ashkenazi Jews (from Western, Eastern, and Central Europe), Sephardic Jews (from Mediterranean regions such as Spain, Libya, and Morocco), and Mizrachi Jews (from Middle Eastern countries). Of these three groups, Ashkenazi Jews have the highest risk of carrying the HEXA gene mutation and are therefore the most susceptible to TSD (Walker, 2007). However, Tay-Sachs has also been found in the French Canadians of Southeastern Quebec, the Cajuns of Southwest Louisiana, and other populations around the world (National Center for Biotechnology Information, 2011).
Specific statistical values also depend on the form of TSD, with the infantile form being the most common. In the Ashkenazi Jewish population, the infantile form of TSD has a carrier frequency of about 0.032 (Ferreira & Gahl, 2017; Mahuran, 1999); in the general population, it has a carrier frequency of about 0.0039 (Mahuran, 1999). In addition, in the general population HEXA mutations have a heterozygous frequency of 0.006, while HEXB mutations have a heterozygous frequency of 0.0036 (Ferreira & Gahl, 2017; Mahuran, 1999).
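As a back-of-the-envelope check on these figures, the expected incidence of an autosomal recessive disease can be estimated from the carrier frequency: under random mating, both parents must be carriers (probability c²), and each child of two carriers has a 1/4 chance of being affected. The following Python sketch is illustrative only (the function name and the random-mating assumption are ours, not the article's):

```python
# Illustrative sketch: expected incidence of an autosomal recessive
# disease from carrier frequency c, assuming random mating.
def expected_incidence(carrier_freq):
    """P(affected child) = P(both parents are carriers) * 1/4."""
    return (carrier_freq ** 2) / 4

# Carrier frequencies cited in the article
general = 1 / 250    # general population
ashkenazi = 1 / 27   # Ashkenazi Jewish population (U.S.)

print(f"General population: ~1 in {round(1 / expected_incidence(general)):,}")
print(f"Ashkenazi population: ~1 in {round(1 / expected_incidence(ashkenazi)):,}")
```

The general-population estimate (~1 in 250,000) lands in the same range as the observed figure of 1 in 222,000 live births cited above.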
Figure 2: Visual representation of the GM2 Ganglioside Activator Protein’s quaternary structure. Source: Wikimedia Commons, created by Jawahar Swaminathan and MSD staff at the European Bioinformatics Institute
Molecular Mechanisms

The Hex-A Enzyme: From the Endoplasmic Reticulum to the Lysosome

Lysosomes are membrane-bound, acidified, cytoplasmic organelles essential for the degradation of specific materials in the cell. In surplus, these materials may harm the cell, either by causing toxicity or by disrupting some normal function of the cell. Lysosomes contain a variety of lysosomal enzymes known as hydrolases that catalyze the breakdown of cellular materials that accumulate in the lysosome (Ferreira & Gahl, 2017). Hex-A is the lysosomal enzyme responsible for breaking down GM2 Ganglioside to GM3. Its two subunits, α and β, are synthesized in the endoplasmic reticulum (ER), where glycosylation (the attachment of oligosaccharide chains), the formation of intramolecular disulfide bonds, and dimerization of the subunits occur (Weitz & Proia, 1992). After dimerization, Hex-A moves to the Golgi complex, where a mannose-6-phosphate (M6P) signal is added to its oligosaccharide side chains (Mahuran, 1999). The M6P signal targets Hex-A to the lysosome, where the enzyme is processed into its mature form and becomes ready to degrade GM2 Ganglioside (Mahuran, 1999).

Hydrolysis of GM2 Ganglioside through the Hex-A Enzyme

The GM2 ganglioside is one of the main glycolipids of neuronal cell plasma membranes (Sandhoff and Harzer, 2013). It is found mostly in brain cells; the term ganglioside was coined by German scientist Ernst Klenk, who isolated many specific types of gangliosides from ganglion cells in the brain (Ferreira & Gahl, 2017). GM2 is a glycosphingolipid – that is, a glycolipid built on the amino alcohol sphingosine – and carries a sialic acid residue; the full molecule has the chemical formula C67H121N3O26. Gangliosides function in cell-cell interactions and signal transduction, and make up about 6% of the weight of lipids in the brain (Christie, 2009). Like many other lysosomal enzymes, Hex-A requires a cofactor, the GM2 Activator Protein (GM2-AP), to degrade its substrate, GM2 Ganglioside.
The GM2-AP is a small, monomeric, heat-stable activator protein encoded by the GM2A gene. Hex-A is specific for the GM2 Ganglioside-GM2-AP complex (Weitz & Proia, 1992). The GM2-AP eliminates steric hindrance
from the membrane by solubilizing GM2 Ganglioside, allowing it to interact with the water-soluble Hex-A enzyme and be broken down into GM3 (Mahuran, 1999; Pastores & Maegawa, 2015). Therefore, the full hydrolysis of GM2 Ganglioside requires the activity of three components: the α subunit of Hex-A, the β subunit of Hex-A, and the GM2-AP. These three components are encoded by three different genes: HEXA on chromosome 15 for the α subunit, HEXB on chromosome 5 for the β subunit, and GM2A on chromosome 5 for the GM2-AP. While the α and β subunits of β-Hexosaminidase A are evolutionarily related, the GM2A gene diverges, reflecting GM2-AP’s distinct structure as a glycoprotein (Pastores & Maegawa, 2015).

Hydrolysis of GM2 Ganglioside to GM3 Ganglioside in Non-Pathological Conditions

As mentioned above, the HEXA gene codes for the α subunit of Hex-A. The α subunit has catalytic ability, while the β subunit is thought to play a role solely in substrate binding (Mahuran, 1999). The α subunit contains an Arg-424 residue that associates with the N-acetyl-neuraminic acid residue of GM2 gangliosides, allowing for their degradation. Structurally, the Gly-280, Ser-281, Glu-282, and Pro-283 amino acids create loops in the α subunit that facilitate the binding of the GM2-AP. The Arg-424 residue and these loops together allow the α subunit to bind the GM2-AP and hydrolyze GM2 Gangliosides. The α subunit does this by cleaving the glycosidic linkage of the terminal N-acetylgalactosamine (GalNAc) residue, converting GM2 Gangliosides into GM3 gangliosides (Lemieux et al., 2006). The GM2-AP aids in hydrolyzing GM2 by binding to it and interrupting a hydrogen bond between the acetamido-NH of the GalNAc
residue and the carboxylic group of the NeuAc residue of GM2, which essentially frees the GalNAc residue, allowing Hex-A to associate with it and hydrolyze it (Mahuran, 1999). It is important to note that GM2 is only one of the many substrates on which Hex-A acts; Hex-A also degrades various other glycolipids, glycoproteins, and glycosaminoglycans (Mahuran, 1999).
Heritability of TSD

TSD is an autosomal recessive disorder, with heritability resulting from mutations in the HEXA, HEXB, or GM2A genes, all of which encode protein products essential for GM2 Ganglioside degradation (National Center for Biotechnology Information, 2011). Mutations in HEXA or HEXB often result in null alleles, which are usually large deletions or full splice mutations and are particularly detrimental to GM2 hydrolysis because they abolish Hex-A enzymatic activity (Pastores & Maegawa, 2015). Other mutations result in deficient, but not absent, Hex-A enzymatic activity, which leads to the milder, late-onset form of TSD known as LOTS (Pastores & Maegawa, 2015). LOTS patients produce only 10-15% of the Hex-A enzyme normally produced by healthy individuals (Walker, 2007). Over 130 mutations have been identified in the three genes that compose the GM2 Ganglioside hydrolysis system (Pastores & Maegawa, 2015). Among Ashkenazi Jews, the
population most impacted by TSD, the most common mutation is a four-base-pair insertion (TATC) in exon 11 of the HEXA gene. This insertion creates a premature stop codon, rendering the α subunit null and the Hex-A enzyme nonfunctional (Boles & Proia, 1995). This mutation accounts for approximately 80% of mutated alleles in TSD (Kolodny, 2008). The second most common mutation is a splice junction mutation at the 5’ end of intron 12 (+1 IVS-12), reported in around 13% of Tay-Sachs alleles (Kolodny, 2008). In the French Canadian population, two founder mutations in the HEXA gene have been identified in the infantile (and most fatal) form of TSD: a 7.6-kb deletion at the 5’ end of the gene and a G>A transition at the +1 position of intron 7 (Ferreira & Gahl, 2017).
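The autosomal recessive inheritance pattern described above can be made concrete by enumerating a carrier-by-carrier cross. The following Python sketch is our own illustration (not from the article), using "A" for a functional allele and "a" for a mutant HEXA allele:

```python
# Illustrative sketch: offspring genotype probabilities for an autosomal
# recessive condition when both parents are carriers (genotype "Aa").
from itertools import product
from collections import Counter

def offspring_distribution(parent1="Aa", parent2="Aa"):
    # Each parent passes one allele; enumerate all four allele combinations
    # (the Punnett square) and normalize counts to probabilities.
    outcomes = Counter(
        "".join(sorted(a + b)) for a, b in product(parent1, parent2)
    )
    total = sum(outcomes.values())
    return {genotype: n / total for genotype, n in outcomes.items()}

# Two carrier parents: 1/4 unaffected non-carriers (AA),
# 1/2 carriers (Aa), and 1/4 affected (aa).
print(offspring_distribution())
```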
Figure 3: Location of the HEXA gene on the long (q) arm of chromosome 15 at position 24.1, from base pair 72,343,436 to base pair 72,376,178. Source: Wikimedia Commons, created by U.S. National Library of Medicine
Deficiencies of the GM2A gene are much less prevalent, and only five mutations have been found to cause TSD (Pastores & Maegawa, 2015). These mutations produce very little of the GM2-AP, severely impeding the hydrolysis of GM2 Ganglioside even in the presence of a Hex-A enzyme with functional α and β subunits.
TSD and Neuronal Degeneration

TSD pathology leads to vacuolated neurons and, later, progressive neurodegeneration. In fact, one of the most characteristic features of TSD neurons observable under electron microscopy is the presence of membranous cytoplasmic bodies termed “onion skin lesions,” which are enlarged lysosomes filled with gangliosides (Weitz & Proia, 1992). The degree of neuronal damage is directly correlated with GM2 intralysosomal accumulation in neuronal cells, which in turn depends on the amount of functional Hex-A enzyme present (National Center for Biotechnology Information, 2011). Induced pluripotent stem cells (iPSCs) can yield disease-derived neurons in vitro and have therefore been used as cellular models to investigate TSD as well as other diseases (Matsushita et al., 2019). Human somatic cells, such as fibroblasts, are reprogrammed into iPSCs through the introduction of four reprogramming factors: OCT3/4, SOX2, KLF4, and c-MYC (Matsushita et al., 2019). The iPSCs are then differentiated into TSD-derived neural progenitor cells (NPCs) after a mutation is introduced into the HEXA gene: a G to T substitution at the 3’ end of intron 5 that causes abnormal splicing and Hex-A enzyme deficiency (Matsushita et al., 2019). TSD-derived NPCs display the same biochemical phenotypes as clinical TSD, specifically an
enlargement and swelling of the lysosome, as well as upregulation of a lysosomal membrane protein called LAMP1 due to GM2 ganglioside accumulation (Wenger et al., 2002). In addition, disease-derived NPCs show a decrease in exocytotic activity after GM2 ganglioside accumulation, suggesting an impairment in neurotransmission (Matsushita et al., 2019). The fluorescence of FM1-43 can be used as a readout for the exocytotic activity of these TSD NPCs. FM1-43 enters the cell through endocytosis, and its fluorescence intensity falls during exocytosis, which is triggered by depolarization at the synaptic terminal (Betz et al., 1992). In normal, non-TSD neurons, depolarization therefore decreases FM1-43’s fluorescence intensity. In TSD-derived neurons, fluorescence intensity barely decreases even after depolarization and a full action potential, indicating that the disease-derived neurons lose the ability for exocytosis at the synaptic terminal (Matsushita et al., 2019). Losing the ability for exocytosis at the synaptic terminal is detrimental to signal transduction to postsynaptic neurons and negatively impacts the overall synaptic network.
The precise mechanisms (apoptotic factors and pathways) by which intralysosomal accumulation of GM2 Ganglioside leads to deficient exocytosis and subsequent cell death remain elusive. However, inhibition of lysosomal degradation activity by protease inhibitors causes lysosomal enlargement and a decline in synaptic endocytotic/exocytotic events, similar to TSD pathology (Sambri et al., 2017). This suggests that the mechanisms underlying TSD neurodegeneration may be consistent with general lysosomal dysfunction (Sambri et al., 2017). Finally, although TSD is known to cause progressive neurodegeneration, the expression levels of autophagy markers such as LC3B and insoluble p62 remain unchanged in TSD-NPCs compared to normal cells, suggesting that neuronal death may not always be due to autophagy mechanisms (Matsushita et al., 2019). In contrast, when exposed to hydrogen peroxide treatment, the viability of TSD-NPCs decreased significantly in comparison to normal cells; TSD-NPCs are therefore likely less resistant to oxidative stress than normal NPCs. Ultimately, the neuronal cell death caused by TSD is catastrophic because, aside from specific regions in the subventricular zone, amygdala, and subgranular zone (within the hippocampus), neurogenesis does not occur in the adult brain; hence, lost brain cells cannot be
replenished.
Symptoms

As with most lysosomal storage disorders, TSD patients appear healthy at birth and do not begin to show symptoms until the first months of life (Ferreira & Gahl, 2017). For the classic, infantile form of Tay-Sachs, initial symptoms appear at around 3-6 months of age, with normal cognitive and physical development prior to that (Walker, 2007). The earliest symptoms are motor weakness, hypotonia (decreased muscle tone), poor head control, and a decline in attentiveness (Mahuran, 1999). Furthermore, peripheral vision may weaken, and a cherry-red spot on the retina allows physicians to confirm the diagnosis of TSD (Mahuran, 1999; Walker, 2007). The red spot is a hallmark of TSD and occurs when the macular cells of the eye deteriorate (as a result of GM2 ganglioside buildup) and expose the choroid, which consists primarily of the blood vessels that nourish the retina (National Organization for Rare Disorders, 2017). By one year of age, TSD patients lose most motor skills and are unable to turn over, crawl, or reach out to objects and people (Walker, 2007). By the second year of life, seizures become common and paralysis may follow (Mahuran, 1999). Loss of the swallowing and gag reflexes leads to an eventual vegetative state. Death by age 3-5 is usually attributed to life-threatening complications such as respiratory failure or an infection like pneumonia (Mahuran, 1999). For juvenile (subacute) Tay-Sachs Disease, the first signs of disease are general clumsiness and difficulty with coordination (National Organization for Rare Disorders, 2017). Symptoms develop at a later age than in the infantile form, anywhere between two and ten years (National Organization for Rare Disorders, 2017). After the initial symptoms are observed, patients experience ataxia: impaired balance due to nerve damage. In addition, patients experience progressive loss of intellectual ability and optic atrophy.
The cherry-red spot characteristic of TSD pathology may or may not develop in the eyes of juvenile (subacute) TSD patients. Alternatively, some children may have retinitis pigmentosa, one of a large group of vision disorders that lead to degeneration of the retina. As a result, children with this form of TSD become less responsive to their environments and, as in the infantile form, life-threatening complications are the main cause of death, at around fifteen years of
age (National Organization for Rare Disorders, 2017). The third form of Tay-Sachs Disease is Late-Onset Tay-Sachs (LOTS), which is characterized by deficiency, but not absence, of β-hexosaminidase A. LOTS patients produce only 10-15% of the Hex-A enzyme normally produced by healthy individuals, allowing them to partially prevent the massive accumulation of GM2 observed in more serious cases of TSD (Walker, 2007). Compared to juvenile and infantile TSD, LOTS is therefore less severe and progresses much more slowly. Its initial symptoms include light tremors, twitching (fasciculations), poor coordination, weakness or cramping of muscles, mood alterations, and slurred speech (Walker, 2007). As individuals age, they may develop amyotrophy, more serious fasciculations, ataxia, and dysphagia (difficulty swallowing) (Walker, 2007). The symptoms of LOTS closely resemble those of motor neuron diseases, but in some instances may include psychiatric manifestations such as depression, psychosis, and dementia (National Organization for Rare Disorders, 2017). After onset, symptom progression is highly unpredictable and varies by patient. For instance, one person may have serious symptoms at age twenty, requiring assistive devices such as a wheelchair, while another may live symptom-free until age 60 or 70 (National Organization for Rare Disorders, 2017). The symptoms experienced in all three forms of Tay-Sachs vary greatly from one person to another and depend heavily on the degree of Hex-A deficiency. In addition, the age of onset, severity, and rate of progression of LOTS and juvenile TSD in particular vary even among members of the same family, and may also differ depending on which mutation caused the condition (National Organization for Rare Disorders, 2017).
Diagnosis and Treatment

Aside from the hallmark cherry-red spot in the retina, the symptoms of TSD are common to many other diseases that affect the nervous system. Differential diagnosis of Tay-Sachs Disease therefore largely depends on two exams: HEXA genotyping and enzyme assays measuring Hex-A activity. High-throughput genotyping of HEXA screens for nine pathogenic mutations common in TSD, while next-generation sequencing (NGS), which fragments the DNA to be analyzed, is performed on exons 1-14 of HEXA (Mehta et al., 2016).
Enzyme assays, on the other hand, measure the levels of Hex-A enzyme in the body using either serum or platelet samples (Nakagawa et al., 2012; Walker, 2007). If the amount of Hex-A is lower than normal, the test subject is classified as a carrier; if Hex-A is absent, the patient has TSD (Walker, 2007). Serum enzyme assays are performed using a heat-inactivation methodology, while platelet assays use ion-exchange chromatography to separate components of the fraction by charge (Nakagawa et al., 2012). In both cases, 4-MUG serves as the artificial substrate for the assay (Nakagawa et al., 2012). TSD may also be diagnosed prenatally through both amniocentesis and an enzymatic assay (Walker, 2007). Amniocentesis, which is performed between 15 and 20 weeks of pregnancy, consists of inserting a thin, hollow needle through the abdomen of the mother to retrieve a sample of the amniotic fluid that bathes the baby (Walker, 2007). The sample of amniotic fluid is then tested to measure the levels of Hex-A (Walker, 2007). Another method used to detect TSD before birth is Chorionic Villus Sampling (CVS), performed during the 10th or 11th week of pregnancy (Walker, 2007). CVS takes a cell sample composed of chorionic villi from the placenta (outside of the amniotic sac) in order to run an enzyme assay (Walker, 2007). Instead of a needle through the abdomen, a catheter is inserted into the uterus (Walker, 2007). Both methods of prenatal detection pose a risk of miscarriage: 1 in 200 for amniocentesis and 1 in 100 for CVS (Walker, 2007).
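The decision logic of the enzyme assay described above (reduced Hex-A activity indicates a carrier; absent activity indicates TSD) can be sketched as a simple classifier. Note that the numeric cutoff below is hypothetical: the article gives no thresholds, and real laboratories use assay-specific reference ranges.

```python
def interpret_hex_a(activity_pct):
    """Classify a Hex-A enzyme assay result.

    activity_pct: measured Hex-A activity as a percentage of a normal
    reference value. The 50% carrier cutoff below is a HYPOTHETICAL
    placeholder, not a clinical threshold.
    """
    if activity_pct == 0:
        return "affected (absent Hex-A activity, consistent with TSD)"
    if activity_pct < 50:  # hypothetical cutoff for "lower than normal"
        return "possible carrier (reduced Hex-A activity)"
    return "non-carrier (normal Hex-A activity)"
```

This mirrors the three outcomes described in the text; in practice, borderline results are resolved by follow-up molecular testing.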
Because of the high prevalence of TSD among Ashkenazi Jews, mass genetic screening programs were initiated in the 1970s to reduce the frequency of the disease. Since then, the birthrate of infants with TSD among Ashkenazi Jews worldwide has fallen by 90% (Kaback, 2000). Currently, no cure exists for TSD: neural dysfunction in TSD patients continues to progress even in the presence of medication (Matsushita et al., 2019). Most treatments therefore focus on alleviating the specific symptoms that TSD patients face and, in the case of LOTS, delaying progression of the disease (Solovyeva et al., 2018). Palliative care for infantile and juvenile TSD includes pain medication, anti-seizure drugs, physical therapy, gastric tube feeding, and respiratory care (Herndon & Kim, 2017).
The National Tay-Sachs and Allied Diseases Association (NTSAD) has evaluated six therapeutic approaches to combat TSD and other lysosomal storage diseases. These include: (1) enzyme replacement therapy, (2) bone marrow transplantation, (3) neural stem cell therapy, (4) gene therapy to restore expression of the dysfunctional protein, (5) substrate deprivation therapy, and (6) metabolic bypass therapy (Walker, 2007). Recent studies and clinical trials have increased the feasibility of these methods as effective therapeutic options, but further research is necessary for them to be deemed fully effective.
Conclusion

The repercussions of this detrimental disease reach far beyond the patient’s physical symptoms. Due to the progressive nature of TSD and the strict daily regimen that TSD patients must follow to achieve any symptom relief, TSD imposes a significant burden on daily life for both patients and caregivers. TSD patients experience psychological impacts, and family and friends are often affected as well. Caregivers have reported needing tranquilizers and sedation, as well as experiencing insomnia and nightmares (Kanof et al., 1962). To make matters worse, patients and their families may face ambiguity surrounding their specific diagnosis: since TSD shares early symptoms with several common neurological diseases, it is often misdiagnosed or confused with other conditions. Fortunately, the advent of genetic testing has helped families gain guidance and greater agency over treatment decisions earlier. Research on TSD is ongoing, and several promising discoveries have recently been made. For example, Lahey et al. achieved recovery of motor function in Sandhoff disease mouse models by administering an adeno-associated viral vector (AAV vector) encoding the HEXA and HEXB genes simultaneously (Lahey et al., 2020). Previously, the most successful gene therapy had consisted of intracranial co-delivery of monocistronic AAV vectors. Monocistronic vectors express only one gene per promoter region, so this method required two monocistronic AAV vectors, one for each subunit of the enzyme Hex-A. Co-delivery of two vectors proved unfeasible, since they were unable to effectively penetrate the blood-brain barrier. Consequently, researchers developed AAV constructs that encode HEXA and HEXB simultaneously and tested their therapeutic
efficacy in 4- to 6-week-old Sandhoff disease mice. The outcome of this study by Lahey et al. confirmed the superiority of this AAV vector: the AAV-treated mice performed normally in motor function exams, had reduced GM2 Ganglioside accumulation, and had increased survival rates, living past two years of age (Lahey et al., 2020). In another study, Beegle et al. transduced hematopoietic stem cells to express wild-type Hex-A and Hex-B enzymes, and then transplanted these cells into a Sandhoff disease mouse model (Beegle et al., 2020). These mice then showed gains in motor and behavioral skills, as well as lower levels of GM2 ganglioside. Hematopoietic stem cell gene therapy is thus a promising approach to systemically deliver functional enzyme to affected cells and restore motor function in TSD patients. While these studies are still experimental, they give hope to Tay-Sachs patients around the world as researchers work toward a cure. These advancements in scientific knowledge continue to improve quality of life and pave the way for a brighter future for sufferers of this tragic disease.

References

Beegle, J., Hendrix, K., Maciel, H., Nolta, J. A., & Anderson, J. S. (2020). Improvement of motor and behavioral activity in Sandhoff mice transplanted with human CD34+ cells transduced with a HexA/HexB expressing lentiviral vector. The Journal of Gene Medicine, 22(9). https://doi.org/10.1002/jgm.3205

Boles, D. J., & Proia, R. L. (1995). The molecular basis of HEXA mRNA deficiency caused by the most common Tay-Sachs disease mutation. American Journal of Human Genetics, 56(3), 716–724.

Bruce Blaus. (n.d.). Neuron Cell Body. https://commons.wikimedia.org/wiki/File:Neuron_Cell_Body.png

Christie, W. (n.d.). Gangliosides: Structure, occurrence, biology and analysis. https://web.archive.org/web/20091217095434/http://lipidlibrary.aocs.org/Lipids/gang/index.htm

Ferreira, C. R., & Gahl, W. A. (2017). Lysosomal storage diseases. Translational Science of Rare Diseases, 2(1–2), 1–71. https://doi.org/10.3233/TRD-160005

Herndon, J., & Kim, S. (2017, July 26). How is Tay-Sachs treated? Healthline. https://www.healthline.com/health/tay-sachs-disease#treatment

Kanof, A., Kutner, B., & Gordon, N. B. (1962). The impact of infantile amaurotic familial idiocy (Tay-Sachs disease) on the family. Pediatrics, 29, 37–45.

Kolodny, E. H. (2008). Encyclopedia of neuroscience. Elsevier/Academic Press. http://www.sciencedirect.com/science/referenceworks/9780080450469
Lahey, H. G., Webber, C. J., Golebiowski, D., Izzo, C. M., Horn, E., Taghian, T., Rodriguez, P., Batista, A. R., Ellis, L. E., Hwang, M., Martin, D. R., Gray-Edwards, H., & Sena-Esteves, M. (2020). Pronounced therapeutic benefit of a single bidirectional AAV vector administered systemically in Sandhoff mice. Molecular Therapy, 28(10), 2150–2160. https://doi.org/10.1016/j.ymthe.2020.06.021

Lemieux, M. J., Mark, B. L., Cherney, M. M., Withers, S. G., Mahuran, D. J., & James, M. N. G. (2006). Crystallographic structure of human β-hexosaminidase A: Interpretation of Tay-Sachs mutations and loss of GM2 ganglioside hydrolysis. Journal of Molecular Biology, 359(4), 913–929. https://doi.org/10.1016/j.jmb.2006.04.004

Liguori, M., Tagarelli, G., Romeo, N., Bagalà, A., & Spadafora, P. (2016). Identification of a patient affected by “juvenile-chronic” Tay Sachs disease in South Italy. Neurological Sciences, 37(11), 1883–1885. https://doi.org/10.1007/s10072-016-2646-2

Mahuran, D. J. (1999). Biochemical consequences of mutations causing the GM2 gangliosidoses. Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease, 1455(2–3), 105–138. https://doi.org/10.1016/S0925-4439(99)00074-5

Matsushita, K., Numakawa, T., Odaka, H., Kajihara, R., Soga, M., Ozasa, S., Nakamura, K., Mizuta, H., & Era, T. (2019). Presynaptic dysfunction in neurons derived from Tay–Sachs iPSCs. Neuroscience, 414, 128–140. https://doi.org/10.1016/j.neuroscience.2019.06.026

Mehta, N., Lazarin, G. A., Spiegel, E., Berentsen, K., Brennan, K., Giordano, J., Haque, I. S., & Wapner, R. (2016). Tay-Sachs carrier screening by enzyme and molecular analyses in the New York City minority population. Genetic Testing and Molecular Biomarkers, 20(9), 504–509. https://doi.org/10.1089/gtmb.2015.0302
Meikle, P. J. (1999). Prevalence of lysosomal storage disorders. JAMA, 281(3), 249. https://doi.org/10.1001/jama.281.3.249

Nakagawa, S., Zhan, J., Sun, W., Ferreira, J. C., Keiles, S., Hambuch, T., Kammesheidt, A., Mark, B. L., Schneider, A., Gross, S., & Schreiber-Agus, N. (2012). Platelet hexosaminidase A enzyme assay effectively detects carriers missed by targeted DNA mutation analysis. In SSIEM (Ed.), JIMD Reports—Case and Research Reports, 2012/3 (Vol. 6, pp. 1–6). Springer Berlin Heidelberg. https://doi.org/10.1007/8904_2011_120

National Center for Biotechnology Information. (2011). Tay-Sachs disease. In Genes and Disease. https://www.ncbi.nlm.nih.gov/books/NBK22183/pdf/Bookshelf_NBK22183.pdf

National Organization for Rare Disorders. (2017). Tay Sachs disease. In Rare Disease Database. https://rarediseases.org/rare-diseases/tay-sachs-disease/

O’Brien, J. (1981). Okada, S., & O’Brien, J. S. Tay-Sachs disease: Generalized absence of a beta-D-N-acetylhexosaminidase component. Science, 165, 698–700, 1969.

Pastores, G. M., & Maegawa, G. H. B. (2015). Rosenberg’s Molecular and Genetic Basis of Neurological and Psychiatric Disease. Elsevier. https://doi.org/10.1016/C2012-0-02688-9

Sambri, I., D’Alessio, R., Ezhova, Y., Giuliano, T., Sorrentino, N. C., Cacace, V., De Risi, M., Cataldi, M., Annunziato, L., De Leonibus, E., & Fraldi, A. (2017). Lysosomal dysfunction disrupts presynaptic maintenance and restoration of presynaptic function prevents neurodegeneration in lysosomal storage diseases. EMBO Molecular Medicine, 9(1), 112–132. https://doi.org/10.15252/emmm.201606965

Solovyeva, V. V., Shaimardanova, A. A., Chulpanova, D. S., Kitaeva, K. V., Chakrabarti, L., & Rizvanov, A. A. (2018). New approaches to Tay-Sachs disease therapy. Frontiers in Physiology, 9, 1663. https://doi.org/10.3389/fphys.2018.01663

Steiner, K. M., Brenck, J., Goericke, S., & Timmann, D. (2016). Cerebellar atrophy and muscle weakness: Late-onset Tay-Sachs disease outside Jewish populations. BMJ Case Reports, bcr2016214634. https://doi.org/10.1136/bcr-2016-214634

Trobe, J. (2011). Cherry red spot as seen in Tay Sachs disease. The center of the fovea appears bright red because it is surrounded by a milky halo. http://www.kellogg.umich.edu/theeyeshaveit/congenital/tay-sachs.htm

U.S. National Library of Medicine. (2015). HEXA gene. Cytogenetic location: 15q24.1. http://ghr.nlm.nih.gov/gene/HEXA

Walker, J. (2007). Tay-Sachs disease (1st ed.). The Rosen Publishing Group.

Weitz, G., & Proia, R. L. (1992). Analysis of the glycosylation and phosphorylation of the alpha-subunit of the lysosomal enzyme, beta-hexosaminidase A, by site-directed mutagenesis. The Journal of Biological Chemistry, 267(14), 10039–10044.
Augmenting the Human Experience - the Story of Prosthetics

STAFF WRITERS: ABENEZER SHEBERU, ASHNA KUMAR, BAMLAK MESSAY, COLLINS KARIUKI, GEORGE DAWAHARE, KATIE WALTHER, MIRANDA YU, SAMANTHA PALERMO, TOMA BENARFA MOYNE
BOARD WRITER: SAM NEFF

Cover: An athlete at the Summer Olympic Games in Rio de Janeiro (2016) running the final stretch of a track event. In today’s world, prosthetic limbs allow users to engage in athletic competition at the highest levels. Source: Pixabay
Introduction

The history of prosthetics (the science of creating artificial body parts) stretches back to the dawn of humanity. The consistent use of prosthetic devices throughout human history owes to the fact that our species is a resilient one. For centuries, humans have sought to restore functions of the human body that have fallen prey to the dangers of the world – from tragic accidents and the devastating results of armed conflict to the aftermath of infection or the unfortunate product of individual genetics. This story has long been one of restoration and renewal – fixing what is broken. It is now becoming one of enhancement and augmentation – adding something new to human life and boosting either the aesthetics or the functionality of the human body. With emerging technology, it is becoming possible to link up the brain not just to our natural body parts, but to artificial ones as well. Someday, the brain may interface with and control machines
in the outside world, or even communicate (in the absence of audible speech) with other living beings. In this paper, we explore multiple aspects of prosthesis use (the word prosthesis is the technical term for an artificial device fitted to or implanted in the body to restore or augment function). We follow the story of human prosthetics (the science of prosthesis development) from its distant past to its current state, and end with a vision of its bright and exciting future. The article is broken down into five individual episodes, each telling this story of prosthetics from a unique angle. The first episode will outline the history of prostheses – from their use by the ancient Egyptians to modern attempts to make them more functional, universally accessible, and compatible with neural control. We trace out the great innovations in prosthetic technology
over time, and show how events like major wars and periods of technological growth served as catalysts for prosthesis development. The second episode will tackle the issue of prosthetic accessibility. Its central point is that the extent of disability worldwide is vast, but the infrastructure for providing prosthetic care is inadequate to meet the need. We explore the challenges of prosthetic distribution in the developing world, as well as the issues of high cost, poor insurance coverage, and societal stigma that make it difficult for those who need prosthetics to acquire them and live out productive lives. The third episode will explore the extent to which prosthetics have become an element of popular culture. We consider the general public perception of prosthetics, including the perspectives of prosthesis users on the use of prostheses in the workplace, home, and athletic arena. We further demonstrate the extent to which prosthetics have become a feature of modern media: from literature to visual art. The fourth episode addresses the biological challenges that accompany implantation of prostheses. How do they interact with the immune system? How can prostheses be tested before implantation to ensure that they will be functional and not provoke a violent immune response? What are the best materials for prosthetic design? These questions and more will be explored. The fifth and final episode will illuminate the future of prosthetics. In particular, we remark on the promise of visual prostheses (restoring sight to the visually impaired), and innovative companies like Neuralink that are working to design neural implants that allow for brain-based control of prosthetic movement. Finally, the article will end with the key takeaways from all the individual episodes put together. It will emphasize just how much prostheses have done for humanity, and note how prosthetic devices may ultimately redefine humanity itself.
Episode 1: The History of Prosthetics From peg legs to limbs that can be controlled by thought, prosthetic arms and legs have come a long way and continue to grow more effective. In order to truly appreciate how far we have come, we need to look back on the history of prosthetics. Prosthetics is an ancient
technology, so ancient, in fact, that replacement body parts are mentioned prominently in the classical literature (Bliquez, 1983). The mythical Greek hero, Pelops, for example, was dismembered by Tantalus and served as a feast to the gods. Fortunately for Pelops, though, only his left shoulder was eaten. The rest of his body was left unconsumed, and with a bit of mythological magic, was reconstructed. After his rebirth, Pelops wore a shoulder prosthesis made of ivory (MacDonald, 2017). The first real documented use of prosthetics dates to between 950 and 715 BC in Egypt: an artificial big toe (Bender, 2015). The discovery that the earliest form of prostheses (that we are aware of) is something as seemingly unimportant as a toe might seem unremarkable. Yet, the big toe was particularly valuable in Egyptian culture, considering its open display in traditional Egyptian sandals (Bender, 2015). Furthermore, Egyptians believed that to reach spiritual wholeness, they must maintain physical wholeness. So, this toe made of cartonnage—which is similar in texture and durability to papier-mâché (Finch, 2011)—is believed to have been created for physical reasons as well as religious ones (Bender, 2015).
"The first validated written account of the use of prosthetic limbs dates back to the Greek and Roman periods. The Greek historian Herodotus wrote in 484 BC that a Greek prisoner of war had freed himself from his shackles by amputating his leg, then fit himself with a prosthetic made of copper and wood to make a run for it."
The first validated written account of the use of prosthetic limbs dates back to the Greek and Roman periods. The Greek historian Herodotus wrote in 484 BC that a Greek prisoner of war had freed himself from his shackles by amputating his leg, then fit himself with a prosthetic made of copper and wood to make a run for it. He traveled about 30 miles from a Spartan war camp to Capua, Italy before getting caught. This account gained support in 1858, when a copper and wooden leg dating back to the 5th century BC was found at the referenced location (Thurston, 2007). Generally, during the periods of ancient Grecian and Roman civilization, prostheses were believed to be used either functionally during battle or aesthetically in its aftermath to shield deformity. At this point in time, devices typically consisted of heavy, crude materials such as wood, metal, and leather (Hernigou, 2013). A well-known milestone in the field of prosthetics occurred during the Second Punic War (around 215 BC). In this war, the general Marcus Sergius lost his right hand, which handicapped his ability to hold a shield. To remedy this, an iron prosthesis was fashioned exactly for him, and despite the fact that it limited movement and weighed the general down, it nonetheless allowed him to hold his shield and return to battle (Thurston, 2007).
Figure 1: Prosthetic toe made of cartonnage, found on the foot of a mummy from the Third Intermediate Period (circa 1070-664 BC). It is one of the earliest known examples of a prosthetic device. Source: Wikimedia Commons
Even in the time of Marcus Sergius, although wood and iron were used often, innovation was underway. Prostheses began to be developed with more durable materials such as copper, bronze, and iron (Mota, 2017). During the Middle Ages (400-1400 AD), many knights would use iron prosthetics to conceal lost limbs—which were often seen as a sign of weakness. There are also some accounts of pirates wearing peg legs and hand hooks (Hernigou, 2013). In general, however, this period didn’t see much innovation in the materials used for prosthetic devices or in their functionality. Instead, there was a shift towards devices that looked and felt more like real body parts, restoring a sense of normalcy. The most common body structure replaced was the foot. Early artificial feet dating to the period from 400-700 AD have been found in Bonaduz, Switzerland, and Griesheim, Germany. One such prosthetic foot was essentially a leather sack filled with either hay or moss (for cushioning), reinforced with a wooden base (Finch, 2011).
"As time and technology progressed, so did prosthetic technology. At the start of the Early Modern Period (roughly 1500-1800 AD), oftentimes clockmakers, locksmiths, and others who used lightweight materials for their jobs would also produce artificial limbs."
As time and technology progressed, so did prosthetic technology. At the start of the Early Modern Period (roughly 1500-1800 AD), oftentimes clockmakers, locksmiths, and others who used lightweight materials for their jobs would also produce artificial limbs. For example, in the early 1700s, a clockmaker named Kreigseissen created a below-elbow artificial arm (i.e., a prosthetic to replace the forearm and hand). It consisted of sheets of copper and had joints at the wrist and the first and second knuckles, with the thumb able to move laterally as well. Pulleys, activated by bending the elbow, accomplished a pinching motion (Foord, 2020). As an added bonus, this limb was a couple of kilograms lighter than the average male hand and forearm. In the late 1700s, a similar device was constructed to be even lighter—one-third the weight of the average male arm. This was accomplished by replacing the copper with steel, a lighter metal (Foord, 2020). The early modern period also saw many new developments in amputation surgery and prosthetic limb design. The beginning of this period was marked by early efforts to optimize the comfort of prostheses through precision amputation and residual limb shaping (Bender, 2015). One contributor to this new wave of discovery was Etienne J. Morel, a French army surgeon, who introduced
the tourniquet into surgery (Wilson Jr., 1981). Another major contributor was the French barber surgeon Ambroise Paré of the early sixteenth century. Paré discovered that through ligation—tying off blood vessels during surgery to minimize bleeding—he could drastically improve his patients’ chances of living (Thurston, 2007). These two discoveries dramatically increased the survival rates of amputees and, as a consequence, caused a surge in the need for prosthetic limbs. As a doctor, Paré couldn’t help but be disturbed by the suffering of the people he had saved. He would witness patients express that they’d rather die than live without a limb or with terrible wounds. In response to these expressions of despair, he began making prosthetic limbs. By 1551, with the help of a mechanic, Paré had created “Le Petit Lorrain”—a hand device that was said to be the first prosthetic to demonstrate a sound understanding of real physiological function (Foord, 2020). The joints within the mechanical hand were a series of springs and locks that allowed some of the fingers to move in opposition to each other (as in a pinching motion). It was fashioned for a French army captain, allowing him to grip and release the reins of his horse (Hernigou, 2013). A significant advancement, occurring in the early 1800s, was James Potts’ creation of what is considered the first “controllable” prosthetic limb. It had a wooden shank and socket, a steel knee joint, and a controllable foot that connected to the knee by catgut tendons.
Flexion of the knee caused dorsiflexion of the foot, and extension of the knee caused plantar flexion of the foot (Thurston, 2007). The carnage of the Civil War, around the mid-1800s, prompted another exponential rise in the number of amputees and, as a consequence, increased demand for prosthetics. One of the most notable contributors to the progression of prosthetic limbs during this period was James Hanger, a Confederate soldier who lost his leg in the war and went on to create the most advanced limb of the time (Bender, 2015). This particular prosthesis, which he named the “Hanger Limb,” consisted of barrel staves, nails, and most notably, rubber bumpers connecting the knee to the foot as opposed to catgut tendons. Additionally, it allowed for much more articulation (movement at the joint), as it featured hinged joints at both the knee and ankle. As technology evolved, so did the capacity of amputees to participate more fully in social and athletic activities. In 1948, neurosurgeon Sir Ludwig Guttmann organized a sports competition for World War II veterans with spinal cord injuries, which eventually became known as the Stoke Mandeville Games (Reznick, 2008). In 1952, competitors from the Netherlands took part in these games, bringing them into the international spotlight. Both events paved the way for the first Paralympic Games, hosted in Rome in 1960 (Reznick, 2008). Today, the Paralympics are among the greatest influences on the field of prosthetics and the use of prostheses by persons with an amputation. They have become one of the largest and most important showcases of the physical potential of people with disabilities and the power of prosthetic technology (Reznick, 2008).
Figure 2: This mechanical hand, called “Le Petit Lorrain,” designed by Paré in 1551, was the first prosthetic device to allow for motion in all five digits. Source: Wikimedia Commons
In the 1970s, Ysidro M. Martinez further strengthened the power of prosthetic technology by developing a lower-limb prosthesis whose major goal was to improve gait (manner of walking) and reduce friction, rather than replicate the motion of a natural limb. By relieving pressure and making walking more comfortable, Martinez made a significant impact on the history of prosthetics (Bender, 2015).
As the world enters the third decade of the 21st century, several enduring problems continue to plague the development of prostheses – ranging from poor fit and limited control to undesirable appearance. These problems are especially stark among low-income populations, where access to high-quality prosthetics is more limited, and most troubling in the developing world, where a lack of infrastructure makes the distribution of prosthetics difficult.
"The increased use of composites – materials made up of two separate phases – has enabled the design of stronger prosthetics. Carbon fiber and glass fiber in particular are used extensively."
To this end, the past few decades have seen significant advances that promise to make prosthetic technology more personalized and effective. The final destination on this path to individualized prostheses is the syncing up of prosthetic devices with the brain (so that they can be controlled directly by the brain), and the regeneration of missing or damaged organs in a lab for implantation. These innovations, however, are far down the road. That being said, there are a number of recent prosthetic developments that are already improving quality of life for users.
First to note is the arrival of sturdier prosthetic materials. The increased use of composites – materials made up of two separate phases – has enabled the design of stronger prosthetics. Carbon fiber and glass fiber in particular are used extensively. Carbon fiber, a composite material composed of carbon fibers embedded in a matrix of resin, plastic, or metal, provides reliable reinforcement to the prosthesis. The structure of glass fiber is similar; while the former is used for a wide array of prostheses ranging from heart valves to artificial feet, the latter has seen wide application in the dental field. These
are not new materials (carbon fiber was first used commercially in the late 19th century and glass fiber in the 1930s), but they have found many new applications in prostheses and other technologies over time (Fiorillo et al., 2019).
"3D printers work like sophisticated glue guns, taking melted fluid and depositing it on a surface or scaffold in the form of a pattern dictated by a computer. In the context of prosthetics, that pattern might be anything from a prosthetic hand to the acetabular cup used in hip replacements."
Combining these technologies with another key innovation of the past half-century, the 3D printer, has allowed for the production of prostheses that are tailored to individual patients. The first 3D printer was developed in the early 1980s, but the technology has since become so commonplace and inexpensive that it has the potential for use even in the developing world. Although plastic tends to be thought of as the material of choice for 3D printers, printers can actually be loaded with a wide array of materials, from carbon fiber to glass fiber to “bio-ink” made up of living cells. This last material is not so useful for prosthetics as they are currently defined, but holds immense value for the nascent “tissue-engineering” industry, which seeks to grow organs artificially in a part-lab, part-factory setting. The hope is that someday there will be no need to construct artificial prostheses at all - damaged limbs and other organs may simply be regrown.

3D printers work like sophisticated glue guns, taking melted fluid and depositing it on a surface or scaffold in the form of a pattern dictated by a computer. In the context of prosthetics, that pattern might be anything from a prosthetic hand to the acetabular cup used in hip replacements. A pattern can be generated from the anatomical images of individual patients, allowing each 3D-printed prosthetic to be custom-fit. Employing this approach, recent studies have demonstrated the use of 3D-printed prosthetics in a number of different domains: from craniofacial defects (e.g., babies born with facial abnormalities) to orthopedic trauma surgery (Nyberg et al., 2017; Lal and Patralekh, 2018).

Finally, it is worth mentioning advances in the understanding of biometric measurements and biomarker detection. When fitting a prosthetic, orthopedic surgeons often have to choose between multiple procedures and pick the one best suited to the individual patient.
The choice depends upon individual patient factors like age, gender, and immune system status. These factors are typically weighed by the physician, but the decision can be made much easier with the use of algorithms. One study examining the use of prosthetic selection algorithms for
patients receiving a total hip arthroplasty (hip replacement) drew from data on 51 patients who had previously received the surgery. Relying on bone computed tomography (CT) imaging, electromyography testing, and gait analysis, the algorithm achieved 93% accuracy in choosing the right type of prosthetic (i.e., the one that doctors deemed appropriate when they initially conducted the surgery). For future patients, this algorithm and others like it could be applied to ensure that physicians make the proper choice, and that patients are directed towards physicians who have experience performing the surgery that is right for them (Ricciardi et al., 2020).

Looking to the future, machine learning algorithms will be critical for the new brain-machine interface technologies in development. These algorithms would work by translating the user’s intentions - represented by the activity of individual neurons in the brain - into specific movements. When a particular motion is performed enough times, the system learns to recognize the neural signal patterns involved, and over time, the prosthetic will be able to perform the desired motion more efficiently and effectively. Such algorithms should also help prosthetic users to perform subtly different motions – such as touching the thumb to the pad of the ring finger versus the middle finger (Jee, 2020).

In conclusion, prosthetics have come a long way, and scientists continue to develop prosthetic technology that improves the quality of life of tens of millions of people. As work in the world of prosthetics continues, we will most certainly see more aesthetic and functional improvements alongside ground-breaking innovation. Modern prosthetics will continue to move beyond the demands of basic function and deliver a more complete sense of wholeness to amputees, who deserve to experience the same passions, mobility, and activities as able-bodied individuals (Bender, 2015).
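To make the prosthetic-selection idea discussed above concrete, here is a minimal sketch of a nearest-neighbor classifier in Python. Every feature name, number, and prosthesis label here is invented for illustration – the study by Ricciardi et al. used richer CT, electromyography, and gait data and a more sophisticated model.

```python
import math

# Hypothetical, normalized patient features loosely inspired by the inputs
# named in the text (CT-derived bone density, EMG score, gait symmetry).
# Each record: (bone_density, emg_score, gait_symmetry) -> prosthesis type.
TRAINING_DATA = [
    ((0.92, 0.80, 0.95), "cementless"),
    ((0.55, 0.60, 0.70), "cemented"),
    ((0.88, 0.75, 0.90), "cementless"),
    ((0.50, 0.55, 0.65), "cemented"),
    ((0.70, 0.85, 0.80), "hybrid"),
]

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recommend_prosthesis(patient, k=3):
    """Return the majority label among the k nearest training patients."""
    neighbors = sorted(TRAINING_DATA, key=lambda rec: euclidean(rec[0], patient))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

# A new patient with strong bone density and symmetric gait.
print(recommend_prosthesis((0.90, 0.78, 0.93)))  # -> cementless
```

In practice, the key design choice is the feature set and distance metric, not the classifier itself; the published work reports accuracy against the surgeon's original decision, which is exactly the comparison this sketch's labels stand in for.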
Episode 2: Prosthetic Accessibility Introduction The field of prosthetics has undoubtedly advanced by leaps and bounds from its genesis in the ancient world. Today, prosthetics are manufactured from suitable biocompatible materials like high-tensile carbon fibers and common polymers such as polyoxymethylene (Mota, 2017). Modern prostheses are both
Figure 3: Mark Inglis of Exceed (an NGO working with disadvantaged people with disabilities in the developing world) accompanies a Cambodia Trust community worker as she visits a young client in Kompong Chhnang. Source: Flickr
more durable and more aesthetically pleasing. Notwithstanding these advances, the field of prosthetics has experienced stunted growth in terms of the accessibility and cost of prostheses. There is an overall lack of streamlined channels for distribution and an incomplete receptiveness towards amputees by their surrounding communities. According to the World Health Organization (WHO), more than 30 million people in the world require prostheses. Moreover, the WHO approximates that 75% of developing countries have no prosthetics training programs, which leads to poor coverage of prosthetics and orthotics services (WHO, 2005). And it’s not just the developing world: Ziegler-Graham et al. project that by 2050, 3.6 million people in the USA will suffer from limb loss, more than double the number of people who suffered from limb loss in 2005 (Ziegler-Graham, 2008). In the United States alone, more than 32,500 children have undergone a major pediatric amputation (Manero et al., 2019). Prostheses can reach this target group (people suffering from limb loss), but only if development and distribution are revised.
***
The Cost of Prosthetics
One crucial factor that affects the use of prostheses amongst disabled populations is the heavy cost of acquiring and assembling them (Manero et al., 2019; Ibrahim et al.,
2019). Prostheses can be divided into several categories depending on how they are operated. This line of classification separates body-powered and myoelectric prostheses. The latter, which use electric signals from the user’s own muscles to operate, are more expensive due to the intricate nature of the technology they employ. Body-powered prostheses are relatively cheaper because they are manually powered; that is, they require the use of other functional body parts to work (Carey et al., 2019).
"In total, the cost of advanced prostheses, either myoelectric or body-powered, ranges from $5,000 to $50,000."
In total, the cost of advanced prostheses, either myoelectric or body-powered, ranges from $5,000 to $50,000 (Mohney, 2013); this does not include the cost of the visits (also a crucial determinant of price) the amputee has to make to the prosthetic clinic (Haggstrom et al., 2013). And the price may rise even more if the prosthesis involves any computerized or activated system. This generally high cost, combined with the under-researched role of disability in determining poverty rates, creates substantial barriers to the widespread acquisition of prostheses (Braithwaite & Mont, 2009). Funding policies for prosthetics in most countries include personal out-of-pocket payments, government insurance coverage, private insurance, and government programs. Developed countries have a wide array of funding sources that they can utilize to cater to their prostheses needs. In the U.S., for
example, prosthetic care can be offered by locally available organizations like the Veterans Health Administration and facilitated by private insurance policies or through government programs like Medicare and Medicaid. In Canada, The War Amputations of Canada provides financial assistance towards acquiring prostheses. In developed countries, some of the disabled population can afford to cater to their prosthetic needs, though this is usually not the case for those in Low- and Middle-Income Countries (LMICs) (Ibrahim et al., 2019).
"Proper recommendation and fitting of prostheses requires not just physicians, but prosthetists, orthotists, orthopaedic technologists, bench workers, and other support staff to guide new prosthesis recipients through rehabilitation."
Many insurance plans vary in the degree to which they cover prosthetic limbs (Cornell Orthotics & Prosthetics, 2018). A study conducted by Elaine Biddiss, a senior scientist at the PEARL lab in Canada, highlights the challenges faced by those who do have insurance coverage (Biddiss et al., 2011). Most insurance companies require that the amputee pay a “smaller” proportion of the medical expenses that result from prosthetic care. For most, raising money to cover even this smaller proportion proves difficult and often impossible (Biddiss et al., 2011). This is true of Medicare, under which patients must pay 20% of the Medicare-approved amount (the amount cited by the supplier or doctor) related to prosthesis use (Medicare, n.d.). Some health care providers are also slow in processing their agreed-upon part of the medical expense payment, which can lead to delayed medical assistance and more adverse medical conditions (Biddiss et al., 2011). Some insurance companies do not cover prosthetic limb replacements at all (Ibrahim et al., 2019). This creates an especially difficult situation for young amputees, who continually outgrow their prosthetic limbs or parts (Manero et al., 2019). In the same study conducted by Biddiss et al., 48% of the participants considered the cost of a prosthesis to be an influential factor in their decision not to adopt prosthesis use. The researchers found that 37% of the participants incurred personal costs ranging from $50 to $40,000 (Biddiss et al., 2011). To offset the high cost of prostheses, especially myoelectric ones, 3D-manufactured prostheses are gaining traction. 3D-printed prostheses are a cheaper alternative and thus will likely be available to a greater number of disabled people (Manero et al., 2019). One reason 3D-printed prostheses are popular is that they are customizable and hence user-specific.
This has led to a ubiquitous “Do It Yourself” mindset that has especially helped those in low-resource areas who cannot get the required prosthesis promptly (Manero
et al., 2019). Numerous companies and nongovernmental organizations like Limbitless Solutions are doing exactly that by providing disabled people in low-resource regions with prostheses. Limbitless Solutions provides low-weight, non-invasive, and most importantly, cost-effective prostheses that have been tested through clinical trials (Manero et al., 2019; Limbitless Solutions, n.d.). A 3D-printed prosthesis from the Enable Community Organization, a popular prosthetic-care provider, costs around $35.00, as compared to the $6,000 - $10,000 one would spend on the same non-3D-printed prosthesis (Enable, n.d.).
***
Further Challenges: Societal Shortcomings and Individual Incompatibility
One challenge affecting prosthetic availability is the lack of a proper workforce (WHO, 2017). Proper recommendation and fitting of prostheses requires not just physicians, but prosthetists, orthotists, orthopaedic technologists, bench workers, and other support staff to guide new prosthesis recipients through rehabilitation. In particular, there is a shortage of more specialized therapists, nurses, physicians, pedorthists, and podiatrists (WHO, 2017). With the growing number of amputation cases, adequate training not only in the science behind prosthetics and orthotics but also in how to advise amputees on living their “new” lives is needed to offset this shortage (WHO, 2017). Fortunately, the WHO has set several guidelines to ensure that disabled individuals who require prostheses can access them (WHO, 2017; WHO, 2005). A further problem is that those with varying forms of amputation are susceptible to ostracization because of their impairment, especially in developing countries, where people with disabilities are often seen as incomplete individuals (Pasquina et al., 2015; WHO, 2017; Ibrahim et al., 2019; Manero et al., 2019).
Consequently, those who receive prostheses often avoid activities that would reveal their disabilities, which may lead to depression and low self-esteem, among other psychological problems (Pasquina et al., 2015). To address these mental health challenges, more personnel should be trained in teaching the recipients of new prostheses how to lead their new lives (WHO, 2017; WHO, 2005). Another issue altogether is the problem of
biocompatibility. Some individuals are not able to receive prosthetics, or at least a certain kind of prosthetic, because their body rejects them. Modern prosthetics are composed of a wide range of materials, both natural and man-made. In lower-limb prosthetics, wood is still commonly used due to its strength, light weight, and low price. It is also common to see leather or cloth used as suspensions, straps, or waist belts to improve the structural stability of limb prosthetics. But the most common materials are organic polymers like plastics and carbon fibers, and metals like titanium, magnesium, aluminum, and copper. With the introduction of any foreign material to the body, however, there are physiological challenges. Take cobalt prostheses as an example. Cobalt is a commonly used material in prosthetics, but it has to be used carefully because an estimated 1-3% of men and women are allergic to it. After a cobalt prosthesis is implanted into the body, it naturally degrades over time, and the cobalt ions released into the body interact with the immune system. In particular, the ions trigger the activation of immune system cells called T-cells, which can act adversely against the prosthetic. Consider the following case study. Researchers examined the case of a 30-year-old patient with a cobalt limb prosthesis; one week after installation, she had maculopapular and vesicular lesions in the areas of her skin that came in contact with the prosthetic limb socket, which they attributed to cobalt allergy. Factors such as “friction, sustained pressure, and humidity of the amputation residual limb” also contributed to contact dermatitis, a red, bumpy rash that itches intensely (Arslan et al., 2015). In a broader study, approximately ninety percent of patients experienced satisfactory results with metal-on-metal prostheses – meaning no notable adverse reactions.
Of the thirty-five patients who did not, thirteen unsatisfactory reactions were attributed to cobalt, possibly due to a cobalt allergy. One patient who reacted negatively to cobalt presented with widespread skin lesions and allergic vasculitis, or inflamed blood vessels. Given the results of this study and others, researchers now advise that patients be tested for metal allergies prior to receiving metal prostheses, so that the allergens present in these prosthetics do not cause adverse reactions
(Munro-Ashman & Miller, 1976). All these things considered, although prostheses are innovative tools designed to help amputees, there are times when amputees are better served by other assistive devices, such as a wheelchair (Mduzana et al., 2018). It is typically up to physicians to use their judgment to determine a patient’s eligibility to receive prosthetics. However, the unequal aptitude of physicians can result in discrepancies in the provision of prosthetics to patients, so there have been efforts to create a more standardized screening tool. The most significant effort thus far is a screening tool called the Amputee Mobility Predictor (AMP) for lower-limb amputees, which uses information about the patient’s locomotor abilities prior to receiving the prosthetic to determine the best recommendation on which assistive device to use (Mduzana et al., 2018).
"Cobalt is a commonly used material in prosthetics. However, it has to be used carefully because it is estimated that 1-3% of men and women are allergic to it. After a cobalt prosthesis is implanted into the body, it naturally degrades over time, and the cobalt ions released into the body interact with the immune system."
The AMP considers four factors in determining the patient’s eligibility: “sitting, standing, balance, and locomotion” (Mduzana et al., 2018). In particular, an amputee’s ability to balance on their non-amputated leg has a significant association with being able to use a prosthetic proficiently (Raya et al., 2010; Schoppen et al., 2003). In addition, the AMP considers age, comorbidities, type of amputation, cognitive abilities, and the strength of non-amputated limbs in evaluating a patient’s candidacy for receiving prosthetics (Mduzana et al., 2018). For patients who receive prosthetics implanted into their skeletal structure, in a process called osseointegration, other eligibility considerations come into play, such as good blood circulation to prevent bone infections and the candidate’s ability to learn how to use the prosthetic to maximize its use. For example, the elderly are typically not good candidates for these types of prosthetic devices, as they tend to have weak blood circulation, which would lead to a greater risk of bone infection from the required surgery. Patients who receive osseointegrated prostheses are typically young and show great limb control, getting the most out of the prosthetic (Marks et al., 2001).
***
Prosthetic Accessibility in the Developing World
These aforementioned considerations all apply to prosthetic distribution and accessibility
271
in the developed world – countries like the United States, Canada, and Western Europe. The accessibility of prosthetics in the developing world – countries in Africa, Central and South America, Eastern Europe, the Middle East, and elsewhere – is hampered by much the same factors that affect prosthetic accessibility in the developed world. The difference is that in the developing world, the barriers are greater in magnitude, and the need for functional prosthetics is higher. The World Health Organization (WHO) estimated in 2017 that 3540 million people in developing countries are amputees (Ramadhani et al., 2019).
"One way to reduce the costs of prostheses in the developing world would be to construct them locally. There has been some movement towards producing ultralow-cost prosthetics with a blend of older (simple components like servo motors and Arduino microcontrollers) and newer technologies (like the 3D printer)."
Affordability is a particularly large problem in the developing world. The cutting edge devices being designed in the United States or Western Europe are simply not accessible because the costs of acquiring them and distributing them are prohibitively high. First, hospitals in developing countries may not have sufficient funds to source prosthetic materials due to the incorporation of relatively expensive technology in modern prostheses. This is especially true of myoelectric prostheses, which uses electric signals generated by body muscles to function (Ibrahim et al., 2019). Moreover, most disabled people living in developing countries do not have access to insurance that would cover amputation and prosthetic fitting costs. A study conducted in a Tanzanian hospital by JM Ibrahim of the University of California, San Francisco, noted that most local insurance providers do not cover the full cost of prosthetic-care. This gap in prosthetic-care often leads to ineffective postop treatment of amputees (Ibrahim et al., 2019). One way to reduce the costs of prostheses in the developing world would be to construct them locally. There has been some movement towards producing ultra-low-cost prosthetics with a blend of older (simple components like servo motors and Arduino microcontrollers) and newer technologies (like the 3D printer). One research team out of the University of Naples and Western Sydney University has constructed a novel 3D-printed prosthetic hand: a two-finger, claw-like structure operated by residual muscle function. When the in-tact muscles on the arm above the amputated hand are contracted, a sensor detects that contraction and causes the hand to clamp shut. The researchers rigged up a second sensor as well that provides haptic feedback to the user and signals to what extent the hand is closed. This is a very simple way to give the user “feeling” for their prosthetic (Sreenivasan et al., 2018).
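The control idea behind such a low-cost claw is simple enough to sketch in a few lines. The threshold value and the haptic mapping below are invented for illustration; a real build would read an EMG amplifier from an Arduino analog pin rather than a hard-coded list.

```python
# Illustrative sketch of threshold-based control for a low-cost
# myoelectric claw. The threshold and haptic mapping are invented;
# they are not the published device's parameters.

CLOSE_THRESHOLD = 0.6  # normalized muscle activation that begins a grip

def claw_state(emg_level):
    """Map one normalized EMG reading (0-1) to (is_closing, grip_extent).

    grip_extent doubles as the haptic-feedback intensity, signaling
    to the user how far the claw has closed (the "second sensor" idea).
    """
    extent = max(0.0, min(1.0, (emg_level - CLOSE_THRESHOLD) / (1.0 - CLOSE_THRESHOLD)))
    return extent > 0.0, extent

for reading in [0.1, 0.5, 0.7, 0.9]:
    closing, grip = claw_state(reading)
    print(f"emg={reading:.1f} closing={closing} grip={grip:.2f}")
```

On an actual microcontroller, the same two lines of arithmetic would sit inside the main loop, driving a servo position and a vibration motor.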
Although these low-cost devices are in development, there are still enormous challenges that make development of new prostheses within developing countries difficult. It is important to note that the previous study was conducted by scientists in Australia and Italy for use in the developing world (Sreenivasan et al., 2018). Such studies in the developing world itself, given the relative lack of research institutions and resources to fund those institutions, are rarer. Education in general, and training for prosthetic development and fitting in particular, is lackluster in the developing world compared to the West. One study examined the quality of prosthetic education programs in Ghana relative to an analogous program in the United States. After surveying faculty members, the researchers found that the Ghanaian faculty were much younger on average than their American counterparts and possessed less background education and teaching experience in the realm of prosthetics. In addition, the researchers conducted a network analysis and found that the Ghanaian research center possessed far fewer outside professional connections, meaning that its access to scientific literature and to the opinions of other medical professionals in the field is limited (McDonald et al., 2020). A survey of students in a prosthetic program in Togo reveals further problems: many of the students rated the state of prosthetics education in the country poorly along multiple lines, citing a lack of continued education, a lack of good research facilities, and a lack of devices to utilize in training programs (Aduayom-Ahego, Ehara & Kpandressi, 2017). It is not just the training capabilities and communication networks of prosthetic researchers that are limited in the developing world. Lines of communication with the general population are also less extensive.
Many are suffering physical disabilities or the effects of amputation unaided because they do not know that assistive devices exist. One study examining the distribution of hearing aids in the developing world cited lack of information as one of the three key barriers to prosthetic use in low- and middle-income countries, alongside high cost and a lack of trained personnel to develop and fit the hearing aids (McPherson, 2014).

Figure 4: An infant with a cochlear implant that allows him to hear. A cochlear implant replaces the function of the inner ear, stimulating the auditory nerve directly with an electrode array. At the same time, however, the prosthesis is rather bulky and might make this child feel uncomfortable in a social setting. Source: Wikimedia Commons

Infrastructural development is a broader challenge for the developing world – it is not just an issue of communication, but also of prosthetic distribution and usage. For one, the abundance of uneven terrain and lack of paved roads means that prosthetic devices, particularly those built for the lower extremities, must be made especially stable and flexible. One research group at the Indian Institute of Technology in Madras has grappled with this problem in their attempt to develop a better polycentric knee. Polycentric knee prosthetics have the freedom to move along multiple axes, as opposed to single-axis knees that have a simple hinge structure (Dupes, n.d.). Their structural nature thus makes them more appropriate for uneven terrain. The researchers compared multiple commercially available polycentric knee designs, then designed their own prosthetic knee that optimized the parameters of stability, toe clearance, and flexion. Preliminary user testing has found the new knee to be both stable and user-friendly (Anand & Sujatha, 2016). Along the same lines, lack of paved roads and other transportation infrastructure inhibits the distribution of prosthetics. This is the so-called "last-mile problem" that impacts medical supply in general – not just prosthetics, but vaccines, antibiotics, and other drugs (Ménascé, 2014). To improve this state of affairs, technological innovation alone is not sufficient; investment in communications and transportation infrastructure is crucial too.

The key takeaway here is that the problems confronting those in need of prosthetics are numerous, yet surmountable. Investments in research institutions and public infrastructure would go a long way towards improving the situation in the developing world, and the same is true for more rural regions in developed countries such as the US. Issues of poor insurance coverage and societal stigma surrounding prosthetic use have less clear-cut solutions. But with the right degree of attention to the problems, solutions may be found.
Episode 3: Prosthetics in Culture

Prosthetics from a User Perspective

Prosthetic devices are often praised for their functionality and innovativeness. With impacts extending beyond their material uses, a prosthesis can significantly reshape an individual's perception of their role in society. Prostheses hold power over identity and psychological health, especially for those who gain a new mode of perception through sensory prostheses (Swindell, 2007; Hansson, 2015). Often implanted in infants or young children who have congenital hearing impairment (Brey, 2005), sensory prostheses like cochlear implants provide life-changing alterations. However, not all individuals see a prosthesis as part of themselves, or even as something useful.
"Infrastructural development is a broader challenge for developing world – it is not just an issue of communication, but also of prosthetic distribution and usage. For one, the abundance of uneven terrain and lack of paved roads means that prosthetic devices, particularly those built for the lower extremities, must be made especially stable and flexible."
To begin with, the owner of a prosthetic may perceive it as a foreign object that does not belong, even though it may provide functional advantages. In such cases, the recipient may request to have it removed. In one prominent case, for example, the recipient of the world's first hand transplant never felt comfortable with it and had it amputated after two years (Dickenson and Widdershoven, 2001). The success of a prosthesis thus depends not just on its successful integration with the body, but also on the psychological factor of user acceptance.
Another notable example of prosthetic rejection due to socio-cultural factors is the case of the deaf community and cochlear implants. Many members of the community have strongly opposed the use of cochlear implants for prelingually deaf children (Brey, 2005). Deaf activists view these interventions as diminishing the unique history and culture of the community, as well as decreasing its population. They view deafness not as a disability but as a positive attribute and one component of social diversity, and think of the deaf community as a minority culture with its own language, traditions, and values. While deaf activists believe in the power of cultural values, critics highlight the benefits cochlear implants may provide to children (Levy, 2002). They question the preservation of the culture over individual interests, especially given the impact that deafness and other sensory disabilities may have on educational success and work opportunities.

Others paint prosthetics in a more positive light by considering their potential to enhance the human condition. As developers produce more successful prosthetic devices in the present, others have their eyes on the future, thinking not only about what prosthetics can be, but what they ought (or ought not) to be. People have long mused about what prosthetics can do for humankind in the pages of books and on the digital displays of movies and video games. Science fiction in particular has become a vessel for the development of prosthetics in culture. One of the most well-known examples is the 1989 manga Ghost in the Shell, adapted to the big screen in 2017, bringing its universe to the West. In it, Masamune Shirow creates a futuristic universe where humans can augment themselves using advanced prosthetics, to the point where fully functional artificial bodies can host a human consciousness. Science fiction pushes the concept of prosthetics to its limits by creating worlds where they are no longer used to repair bodies but rather to augment their physical and mental abilities. It creates a race toward enhancement that blurs the frontier between the human and the machine. Such dystopias are also recurrent in the video game industry. One of the most recent examples is the game Cyberpunk 2077, by the studio CD Projekt Red, which puts the player in a world where prosthetic enhancement devices are ubiquitous and determine entire social and economic classes.

Yet regardless of these grand visions, which some day may hold true, the true value of prosthetics in the moment must have its basis in the opinions of real prosthetics users. A 2020 survey looked into the preferences of 27 Australians with upper limb differences, around half of whom owned at least one prosthesis (Walker et al., 2020). Some members of the survey team were engineers who hoped to use the information they gleaned to generate user-influenced designs for a prosthetic hand. The survey participants ranked functionality as the most important criterion in prosthetic design, while affordability and appearance ranked second and third, respectively. The team found that most participants desired for their prostheses to either completely blend in or stand out (Walker et al., 2020). Their results supported an earlier survey of 114 individuals (Sansoni et al., 2015) that discovered attraction toward both lifelike and robotic designs. This is an interesting finding because it shows that attitudes towards prostheses within the population are highly diverse. Some see them as unwanted visible markers of disability, or as a bane to diversity. Others see them as an opportunity to enhance human functionality and aesthetics.
*** Athletic Prosthetics

There is no doubt that prosthetics can improve the quality of life for people living with a wide range of disabilities, especially disabilities that affect motor function or mobility. For example, the advent of the energy-storing prosthetic foot has made it easier and more efficient for a lower-limb amputee to walk and run. However, this advantage in movement becomes controversial when applied to the world of athletics, where such technology has been shown to significantly increase a sprinter's velocity. Elite Paralympic athletes have reported that standard, everyday prosthetic devices are not conducive to high-level athletic performance. In response, companies have developed technologies like the J-shaped Flex Sprint III prosthetic foot, which provides forward-directed propulsion for elite runners. This and other devices, like seated throwing chairs and racing wheelchairs, have found their way into the highest levels of athletic competition, raising the question of when technology is "essential for performance" and when it ought to be considered "performance enhancing."

Figure 5. Oscar Pistorius runs during the 2012 Olympics. Source: Wikimedia Commons

Parallels have been drawn between prosthetic advantage and other technological performance enhancements. For example, at the 2008 Beijing Olympic Games, 94% of swimming champions wore an engineered polyurethane suit that was speculated to create a greater advantage than performance-enhancing drugs (Burkett, 2010). Such "technological doping" is simpler than prosthetic controversies, however, because banning such suits and performance-enhancing technology effectively levels the playing field. In the case of prosthetics, banning an amputee or otherwise disabled athlete from using a particular prosthetic device does not so cleanly level the playing field, and in some cases may remove them from the field altogether. Some Paralympic sports require minimal prosthetic accommodations; for example, as with Olympic rowing, Paralympic rowers all use identical boat hulls, but the seats may be modified to provide necessary postural support. By contrast, sports like track and field, wheelchair basketball, rugby, and
tennis are embroiled in debate over specialized prostheses and wheelchairs (Burkett, 2010). For example, South African sprinter Oscar Pistorius became a near household name in 2008, partly because of his exceptional running performance, but also because that performance attracted skepticism about the fairness of his prosthetics. The IAAF found that Pistorius's specialized prosthetic limbs exhibited an energy loss of only 9% during the "stance phase," while a non-prosthetic runner's ankle joint experiences a 41% energy loss (Burkett, 2010). For athletes like Pistorius, the prosthetic is essential for running, yet such mechanical analysis seems to suggest that the prosthetic is simultaneously a performance enhancement. However, mechanical analysis illuminates only a fraction of the issue. One under-researched factor in determining the performance-enhancing power of prosthetic limbs is the interface between the amputee's limb and the prosthetic socket. This interface significantly impacts the mechanical efficiency of the prosthesis because it transmits force between the ground and the athlete's body (Burkett, 2010). The amputee's limb is extremely susceptible to change depending on physical use, environmental conditions, location, and other factors.
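The cited stance-phase figures make the mechanical argument easy to quantify. The sketch below simply applies the 9% and 41% loss fractions from Burkett (2010) to an arbitrary, illustrative amount of stored energy; the 100 J input is invented for the arithmetic, not a measured value.

```python
# Back-of-envelope comparison of the stance-phase energy figures
# cited above (Burkett, 2010): ~9% loss at the running prosthesis
# versus ~41% loss at a biological ankle joint. The 100 J input
# is an illustrative number, not a measurement.

def energy_returned(input_joules, loss_fraction):
    """Energy handed back to the runner after the stated loss."""
    return input_joules * (1.0 - loss_fraction)

stored = 100.0  # joules stored during stance (illustrative)
prosthetic = energy_returned(stored, 0.09)
biological = energy_returned(stored, 0.41)
print(f"prosthetic returns {prosthetic:.0f} J vs biological {biological:.0f} J")
```

The gap in returned energy is what makes the "essential versus enhancing" question so hard: the same property that lets the athlete run at all is the one that looks like an advantage on paper.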
"South African sprinter Oscar Pistorius became a near household name in 2008, for one because his exceptional running performance, but also because his performance attracted skepticism of the fairness of his prosthetics. The IAAF found that Pistorius’s specialized prosthetic limbs provided an energy loss of only 9% during the “stance phase”, while a non-prosthetic runner’s ankle joint experiences a 41% energy loss."
Specialized wheelchairs are another flashpoint in the debate about prosthetics in sport. Racing wheelchairs in track and road events have evolved to be light and aerodynamic, with small push-rims on the wheels and a single large front wheel (Howe, 2011). For sports like wheelchair rugby, the chairs have evolved to add a fifth wheel to prevent flipping during quick directional changes, as well as hooks and bumper guards. Unlike prosthetic limbs for track events, these athletes compete only against other wheelchair rugby players or wheelchair racers, so the question of whether the technology is a performance enhancement is less significant (Burkett, 2010).
"The biology of blindness can be understood by recognizing the underlying components of the human visual system. Light is focused through the lens of the eye after passing through the cornea, which provides additional focusing power. The iris (the part of the eye which is colored – blue, green, brown, or black) is made up of smooth muscle tissue and helps alter the amount of light that passes through the lens."
However, issues still arise over fair access to these technologies for athletes. As technologies continue to develop, a chasm will form: athletes from developed countries will be able to take advantage of highly specialized prosthetics, while athletes in developing countries will be left with rudimentary devices (Howe, 2011). However, if this issue is simply corrected by preventing the use of new technologies in athletic competition, technological development of assistive devices may stagnate, which would be a detriment not only to athletes but to amputees in the general public. A possible solution may be the development of two types of competition: one with controlled standards for devices that guarantee a level playing field, and one that is open and fosters competition among both the athletes and the developers behind prosthetic technology (Burkett, 2010). Ultimately, greater research is necessary to determine the extent, or even the existence, of performance advantage from prosthetic devices in athletics. Any calculation of mechanical advantage must be balanced against compensatory factors elsewhere in the athlete's body and their actual ability to transfer any advantage in mechanical efficiency into true performance advantage (Burkett, 2010). Until these factors can be accurately quantified and compared, limiting disabled athletes' opportunities or use of prosthetics seems unjustified, and such decisions should err toward considering prosthetics as essential for performance rather than performance enhancing.
Episode 4: The Future of Prosthetics

Visual Prosthetics

For all of human history, blindness – whether owing to genetic predisposition or to trauma – has been an untreatable condition. Once the body's ability to sense light is gone, the blind are in many ways closed off from the outside world; emotional connections dim, education suffers, and the physical environment becomes far more difficult to navigate (National Council
for Special Education, n.d.). Blindness is, indeed, associated with a higher risk of chronic illness, as well as social withdrawal and depression. And the problem is not a rare or insignificant one: in 2015, an estimated 1.02 million people in the United States were blind, and the number of blind individuals is expected to double by 2050 (Niketeghad & Pouratian, 2019). Many more people across the world live with this condition, and many fare worse than the blind in the United States due to lack of transportation infrastructure, educational services, or workplace accommodations. The biology of blindness can be understood by recognizing the underlying components of the human visual system. Light is focused through the lens of the eye after passing through the cornea, which provides additional focusing power. The iris (the colored part of the eye – blue, green, brown, or black) is made up of smooth muscle tissue and helps alter the amount of light that passes through the lens (MyHealth.Alberta, 2019). Ultimately, focused light shines onto the retina, which possesses an array of rod and cone cells. The rod cells facilitate vision in low light, while the cone cells are important for vision in brighter conditions ("Rods and Cones," n.d.). Each rod or cone cell is plugged into an intricate neural pathway – signals are routed first through retinal ganglion cells to the optic nerve, then to the lateral geniculate nucleus (LGN) of the thalamus, and finally to the occipital lobe in the cortex of the brain. The brain ultimately interprets these external visual signals and allows an individual to make sense of the outside world. If any part of this pathway becomes dysfunctional, blindness results (Niketeghad & Pouratian, 2019). For a long time, the only "solution" to blindness was to learn to live without vision.
This might mean walking with a cane to gain additional haptic feedback about the environment, or sharpening one's sense of hearing to compensate for lost vision. Blind individuals have a remarkable ability to adjust to these circumstances (UCLA, 2009). And novel sensory substitution approaches are consistently being developed – consider the "EyeMusic" technology, a sensory substitution device that conveys information about the environment through sound (Abboud et al., 2014). However, for all of the aforementioned reasons, blindness still comes at a massive cost to physical and emotional well-being, even with these workarounds. In recent years, the prospect of visual prostheses has emerged to target different parts of the visual pathway and actually restore some level
Figure 6: This diagram demonstrates the way in which the Argus II prosthetic device restores vision. The image of the surrounding environment is captured by video camera glasses (1), which transmit the visual information to the retina (4). Before this information can travel to the brain via the optic nerve (5), the visual image has to be converted to an electronic signal with the help of a special chip (2, 3). Source: Wikimedia Commons
of vision. The general idea is to stimulate the visual pathway wherever it is not broken – essentially, rescuing upstream dysfunction by directly activating downstream elements. For example, if the rod and cone cells of the retina are incapable of receiving light and signaling to the brain, the occipital lobe of the brain itself may be stimulated with microelectrodes (small chips that can conduct electric current to the brain). Several organizations in the past two decades have developed visual prosthetics that work along these lines, including the ICVP project (an outgrowth of the NIH neural prosthesis program), the CORTIVIS project (supported by the European Commission of the EU), and the Gennaris device (developed by Monash Vision with support from the Australian Research Council) (Xue & MacLaren, 2020; Niketeghad & Pouratian, 2019). The strategy employed by the company Second Sight, which has created the Argus II visual prosthetic system, is slightly different from the devices mentioned in the previous paragraph. It works by stimulating the inner retina, interfacing with the neurons that connect to the optic nerve rather than resting directly on the brain. Visual signals are conveyed as follows: an image of the surrounding world is captured by a video camera (in this case, built into a pair of glasses). The glasses convert the visual image into a concise electronic code that can be transmitted as an electrical signal. The code is passed through a microelectrode array that touches the retina and is designed to confer information about the visual scene to the brain. The brain is then able to interpret this signal as a visual image (Luo & Cruz, 2016). Naturally, the image transmitted is imperfect.
The Argus II prosthetic provides a rather hazy view of the outside world, and it is certainly not good enough for reading words on a page or admiring the face of a loved one in detail. Yet it does provide the ability to discern moving objects, and allows the user to distinguish between people and things. This alone is a remarkable achievement, as it allows blind individuals to navigate the world safely and with greater confidence (Sharpe, 2015). One could imagine how much easier it would be to traverse the urban landscape or to move successfully along a wooded trail with the help of this device. The Argus II device was the first of its kind to obtain regulatory approval in both the US and Europe, but the other aforementioned products in development are hot on its heels. And a number of yet more novel approaches for vision restoration have entered the scene. Some are more mechanical in nature – e.g. stimulating the visual cortex with magnetic fields. Others lie in the realm of biology (Farnum & Pelled, 2020). For example, various teams are at work developing “optogenetic” solutions to re-establish vision – injecting light-sensitive proteins (opsins) into surviving cells in the retina to restore some retinal function (Xue & MacLaren, 2020).
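The haziness described above follows directly from the resolution reduction any retinal implant must perform: a full camera frame has to be compressed down to a few dozen stimulation sites. The sketch below illustrates that step with simple block averaging on a toy frame; the grid dimensions and the averaging scheme are simplifications for illustration, not the actual Argus II processing pipeline.

```python
# Minimal sketch of the downsampling an Argus II-style system must
# perform: a camera frame is reduced to a coarse grid of electrode
# stimulation levels. The block-averaging scheme and grid size are
# illustrative simplifications, not the real device's algorithm.

def to_electrode_grid(frame, rows=6, cols=10):
    """Block-average a grayscale frame (list of lists, 0-255) to rows x cols."""
    h, w = len(frame), len(frame[0])
    bh, bw = h // rows, w // cols  # pixels per electrode block
    grid = []
    for r in range(rows):
        grid.append([
            sum(frame[r * bh + i][c * bw + j] for i in range(bh) for j in range(bw))
            // (bh * bw)
            for c in range(cols)
        ])
    return grid

# A 12x20 synthetic frame: bright on the left half, dark on the right.
frame = [[255] * 10 + [0] * 10 for _ in range(12)]
grid = to_electrode_grid(frame)
print(grid[0])  # first electrode row: bright cells, then dark cells
```

Even this toy example makes the limitation vivid: a 6x10 grid can convey where a bright moving shape is, but nothing like the detail needed to read text or recognize a face.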
"The biology of blindness can be understood by recognizing the underlying components of the human visual system. Light is focused through the lens of the eye after passing through the cornea, which provides additional focusing power. The iris (the part of the eye which is colored – blue, green, brown, or black) is made up of smooth muscle tissue and helps alter the amount of light that passes through the lens."
Another exciting approach in development is the use of gene therapy to restore visual function. Simultaneous efforts by scientists at the University of Pennsylvania, University College London, and Moorfields Eye Hospital have resulted in the first approved gene therapy drug (Luxturna™) for congenital blindness (Khanna, 2020). This therapy involves injecting a healthy copy of the gene RPE65, which is dysfunctional in many individuals who are congenitally blind, into retinal cells to
restore vision. The future of this field is bright, and in the coming years, new therapies may open new doors for the blind to better interact with the outside world, to see loved ones again, and to start casting aside the many burdens of visual disability. ***
"When developing prosthetics, scientists tend to seek out natural inspiration. In the human body, muscle tissue amplifies the motion-inducing electrical impulses sent by the brain. Researchers have already harnessed the signal amplifying power of muscle to make myoelectric prostheses."
Neuroprosthetics and the Brain-Machine Interface

Looking forward, another essential attribute of future prosthetics is the capacity for intuitive user control. Simply put, the goal is for a patient to control their prosthetic device simply by thinking about it moving. In practice, this means that the intuitively controlled artificial device, or neuroprosthetic, must be able to detect, decode, and act on the motion-controlling electrical signals sent by the brain (Perlmutter, 2017). This is easier said than done; however, the future of neuroprosthetics is looking bright. When developing prosthetics, scientists tend to seek out natural inspiration. In the human body, muscle tissue amplifies the motion-inducing electrical impulses sent by the brain. Researchers have already harnessed the signal-amplifying power of muscle to make myoelectric prostheses. Myoelectric devices function by detecting, decoding, and acting on impulses picked up from muscles in the upper arm or leg above a missing limb. Electrodes can detect the electrical signals intended for the amputated limb because the muscle amplifies the action potentials transmitted through the residual nerve still present there. After detecting these electrical impulses, the system decodes the signals and causes the prosthetic device to move in the manner intended by the brain. There are a couple of drawbacks to the surface electrode approach, including the need to replace and recalibrate the electrodes daily, skin irritation, and the high volume of noise from surrounding muscle tissue, which may lead to unintended movement (Ngan et al., 2019). Since myoelectric devices are controlled by impulses amplified by muscles, the amount of muscle mass left after amputation is a limiting factor in whether a myoelectric prosthetic can be used. There are, however, a few strategies to remedy this issue of limited residual muscle mass.
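The "detect and decode" step described above can be sketched as a tiny signal-processing pipeline: rectify the raw EMG trace, smooth it into an envelope with a moving average, then threshold the envelope into commands. The window length, threshold, and sample values below are illustrative only, not parameters from any real device.

```python
# Sketch of EMG detection and decoding for a myoelectric prosthesis:
# rectify, smooth into an envelope, threshold into commands.
# Window length, threshold, and samples are illustrative only.

def envelope(raw_emg, window=4):
    """Moving average of the rectified signal (a simple EMG envelope)."""
    rectified = [abs(sample) for sample in raw_emg]
    env = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        env.append(sum(chunk) / len(chunk))
    return env

def decode(env, threshold=0.5):
    """Map each envelope sample to a hand command."""
    return ["close" if level >= threshold else "open" for level in env]

raw = [0.05, -0.1, 0.8, -0.9, 0.85, -0.7, 0.1, -0.05]
commands = decode(envelope(raw))
print(commands)
```

The smoothing step matters in practice: it is what keeps the noise from surrounding muscle tissue, mentioned above as a drawback of surface electrodes, from flickering the hand open and shut on every stray spike.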
In cases where there is not enough muscle left in the residual limb, a patient may choose to undergo a targeted muscle reinnervation (TMR)
or regenerative peripheral nerve interface (RPNI) procedure. TMR surgically implants the severed residual nerve ending into a nearby muscle; for example, the muscles of the upper back instead of those of the residual limb. This process requires the target muscle to be partially denervated to allow for the residual nerve to be implanted (Vu et al., 2020). The motor nerve running through the muscle is cut, and the residual nerve transmitting the desired signal is attached to the distal end of the severed motor nerve. The signals that are transmitted through the residual nerve are now amplified by the re-innervated muscle. These impulses are picked up and decoded to control a prosthetic device (Mioton & Dumanian, 2018). The viability of this procedure is dependent upon the proximity of an available muscle to the terminated nerve ending. This is where RPNI becomes invaluable. RPNI is similar to TMR, but the muscle into which the peripheral nerve or peripheral nerve fascicle is implanted is grafted from a healthy muscle elsewhere in the body. Before implantation into the muscle, the targeted nerve(s) are transected with a wire electrode to transmit electrical impulses to an external device. After implantation, the peripheral nerve regenerates, revascularizes, and reinnervates the free muscle graft over the course of three months, all with the transected electrode still embedded. The final result is a stable muscle-nerve combination that amplifies the nerve’s electrical signals well enough to produce strong, high amplitude EMG signals that can in turn be used to control a prosthetic device (Vu et al., 2020). RPNI is incredibly useful because it is not dependent on the amount of residual muscle left after an amputation. Even so, it requires muscle tissue to be removed and reimplanted in an individual who has already lost a limb. Obviously, this is less than ideal, but the benefits of RPNI seem to outweigh its drawbacks. In fact, Vu et al. 
report successful control of a prosthetic hand (finger joints included) for up to 300 days without control algorithm recalibration. TMR and RPNI both hold promising potential for future intuitive control of myoelectric prosthetic devices. The brain’s electrical impulses can also be detected directly from the peripheral nervous system. There are a variety of ways to wire into this system, all with different levels of effectiveness and invasiveness. The least invasive peripheral nerve interface strategies
are known as extra-neural electrodes because, as the name implies, they do not penetrate the epineurium (protective sheath) of the nerve bundle (Yildiz et al., 2020). A prominent type of extra-neural electrode is the cuff electrode. Cuff electrodes wrap around the bundle of nerve fascicles. This approach, though only minimally invasive, is not ideal for detecting and decoding electrical signals to control a prosthetic device because of the poor signal transmission. Cuff electrodes are much better suited for treating chronic pain or bladder dysfunction than for the control of neuroprosthetic devices (Ngan et al., 2019).

A more invasive approach to interfacing with the peripheral nervous system involves penetrating the epineurium and implanting a lead right next to targeted groups of motor neurons. The leads used in this approach lie between individual fascicles, and are known as interfascicular electrodes (Yildiz et al., 2020).

The next most invasive approach involves penetrating both the epineurium and the fascicles themselves. These devices are known as intrafascicular electrodes. Intrafascicular electrodes produce a clear signal with a high signal-to-noise ratio, but they do cause damage to the nerve because they are highly invasive (Yildiz et al., 2020). There are currently two main types of intrafascicular electrodes: Longitudinal Intrafascicular Electrodes (LIFE) and Transverse Intrafascicular Multichannel Electrodes (TIME). Both LIFE and TIME consist of a single wire with several recording points intended to pick up electrical impulses from multiple fascicles (Ngan et al., 2019).

An even more invasive but more effective approach is to use a Utah Slanted Electrode Array (USEA). The USEA is a plate of metal covered with spikes of differing lengths that are embedded in the nerve. The USEA is incredibly effective: human study participants have been able to move individual fingers on a prosthetic arm using the technology. Yet these devices cannot be chronically implanted due to the damage they do to the nerve (Ngan et al., 2019).

Figure 7: Prototype of a brain-machine interface (BMI) device. Source: Wikimedia Commons
The most invasive approach to interfacing with the peripheral nervous system is the use of a regenerative electrode. A regenerative electrode is placed between two ends of a severed nerve. Then, the nerve regenerates through this electrode, essentially making the device part of the nerve itself. The severing of the nerve, placement of the device, and the necessary recovery time for the patient make regenerative electrodes very invasive. However, this strategy has shown promise in early animal experiments (Ngan et al., 2019).
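Whatever the interface, the recorded muscle or nerve signal must ultimately be decoded into a control command. A common first pass on EMG-like signals is to rectify the raw trace, smooth it into an activation envelope, and threshold that envelope. The sketch below is a hypothetical illustration of that idea for a single degree of freedom (it is not code from any of the cited systems; the signal here is synthetic):

```python
import numpy as np

def emg_envelope(signal, window=150):
    """Rectify a raw EMG trace and smooth it with a moving
    average, giving a rough estimate of muscle activation."""
    rectified = np.abs(signal)
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

def decode_intent(envelope, threshold=0.2):
    """Threshold the activation envelope into a binary
    open/close command for one prosthetic degree of freedom."""
    return envelope > threshold

# Synthetic recording: baseline noise, with a voluntary
# contraction (stronger activity) between samples 1000 and 2000.
rng = np.random.default_rng(0)
emg = 0.05 * rng.standard_normal(3000)
emg[1000:2000] += 0.5 * rng.standard_normal(1000)

command = decode_intent(emg_envelope(emg))
print(command[1500], command[200])  # contraction detected; baseline quiet
```

Real controllers are far more elaborate (multiple channels, pattern-recognition classifiers, proportional control), but the rectify-smooth-threshold pipeline captures the basic decoding step the surgical strategies above are designed to make reliable.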
In addition to the peripheral nervous system, researchers have also discovered ways to interface with the central nervous system itself. This approach is rather extreme, and is therefore reserved for the most extreme cases at this point in time (Ngan et al., 2019). A non-invasive approach to interfacing with the central nervous system is electroencephalography (EEG). EEG electrode arrays are attached to the surface of the scalp and detect electrical activity through the cranium. The problem with this approach, however, is that the signal is often muffled by the cerebrospinal fluid, skull, and skin (Ngan et al., 2019).

To obtain a better signal, the patient can choose to undergo implantation of a more invasive device. Electrocorticography (ECoG) electrodes are subdural electrodes, and therefore must be surgically implanted on the brain itself. Despite their invasive nature, ECoG systems have been shown to be clinically stable, relatively low risk, and effective (Yanagisawa et al., 2011). ECoG systems provide better resolution than EEG electrode arrays, but the clearest signals can be obtained by implanting penetrating electrodes into the brain’s motor cortex. One device already taking advantage of this strategy is the DEKA Arm System, or as it is now known,
the LUKE Arm (Mobius Bionics) (Resnik et al., 2019). There are problems, however, that arise from the invasiveness of implanting electrodes into the brain. Notably, the body triggers an immune reaction that results in inflammation and fibrotic capsule formation around the implant (Ngan et al., 2019).
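At the non-invasive end of this spectrum, scalp recordings are typically reduced to simple features before they can drive a device; a standard choice is power in a motor-related frequency band such as the 8–12 Hz mu rhythm. The toy sketch below (hypothetical, not from the cited studies) estimates mu-band power in a synthetic, noise-dominated signal of the kind EEG produces:

```python
import numpy as np

fs = 250  # sampling rate in Hz, typical of research EEG amplifiers

def band_power(x, lo, hi):
    """Mean spectral power of x between lo and hi Hz."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Synthetic scalp signal: a 10 Hz mu rhythm buried in heavy
# broadband noise, mimicking the attenuation described above.
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 2.0 * rng.standard_normal(t.size)

mu = band_power(eeg, 8, 12)    # motor-related mu band
ref = band_power(eeg, 20, 24)  # neighboring comparison band
print(mu > ref)                # the mu rhythm still stands out
```

Even under heavy simulated noise, the narrowband rhythm is detectable; the practical trade-off is that EEG features like this are coarse, which is precisely why ECoG and penetrating electrodes offer better control signals.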
Ongoing attempts to establish a prosthetic interface with the brain have caught the attention of tech giant Elon Musk, the founder of Neuralink (among other companies). Neuralink is a highly ambitious venture, as its goal involves upgrading the brain itself. The device would first be used for medical purposes: providing independence for paralysis victims, mitigating seizures, and aiding in recovery from brain damage, among other things. However, Musk has even higher hopes for the technology, looking toward a possible future where people drive without touching a steering wheel, type without touching a keyboard, and communicate without speaking (Crookes, 2020). At this point in time, Neuralink sounds a bit closer to science fiction than to reality.

Exactly what type of device could usher in the future Musk envisions? Current models of the Link – the Neuralink implant that transfers neural signals – consist of 3,072 electrodes on 96 wires thinner than a human hair that are threaded into the brain and connected to a central processing chip embedded in the skull. Each thread of this device monitors the activity of 1,000 of the brain’s 86 billion neurons. The signals from the 96,000 total neurons that the Link monitors can be amplified, recorded, and interpreted. The implantation procedure is too complex to be performed by human surgeons; therefore, the company has also invented a robot that performs the surgery in lieu of a human doctor, completing the operation more quickly and with greater precision (Crookes, 2020).

Researchers today are rapidly transforming what was once science fiction into reality. There are, however, many challenges to overcome moving forward in the field of neuroprosthetics. A number of these problems stem from physical issues that arise when attempting to integrate mind and machine. Obviously, there are risks associated with implanting foreign objects in the human body, and especially in the human brain.
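As an aside, the Link’s quoted figures are internally consistent, as a quick arithmetic check confirms:

```python
electrodes = 3072          # electrodes on current Link models
threads = 96               # hair-thin wires carrying them
neurons_per_thread = 1000  # neurons monitored per thread

print(electrodes // threads)         # 32 electrodes per thread
print(threads * neurons_per_thread)  # 96000 neurons monitored in total
```

Thirty-two electrodes per thread, and 96 threads at 1,000 neurons each, yields the 96,000-neuron figure cited above.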
While researchers do not yet have a significant amount of evidence on the complications of implanting microelectrode arrays into the brain, they
have used data from deep brain stimulation trials to predict the complications that may arise from this new technology. The most prevalent and severe risks include hemorrhage, infection, and skin erosion (Bullard et al., 2020). Furthermore, the materials currently used to transmit motion-controlling electrical signals from the brain are not fully biocompatible, leading to inflammation, edema, and a higher risk of infection. The longevity of these electrodes is therefore far less than ideal. Over time, these devices also convey weaker and weaker signals (Borton et al., 2013). This, too, is an issue that must be addressed, as no patient wants to regularly endure surgery to replace failing electrodes.

Another significant challenge is the sheer amount of sensory feedback that must be provided to the brain to allow for tactile recovery. The body naturally conveys a multitude of sensations to the brain, including temperature, pressure, and texture. It is incredibly difficult to replicate this amount of feedback in an artificial device (Bonizzato, 2020).

Other challenges arise from the legality of creating new devices. The clinical implantation of a single electrochemical neuroprosthesis requires ethics board review of the electrode array, implanted pulse generator, real-time control policies, implanted motion sensors, multicompartment drug delivery systems, and multicomponent chemical cocktails. This promises to be a rather cumbersome process. As such, finding enough funding to turn these projects into reality is an ever-present concern (Borton et al., 2013).

Finally, scientists must also consider the moral consequences of their research. The advent of neurotechnology is forcing individuals to redefine what it means to be “human” (Sample et al., 2019). The debate over brain augmentation becomes much more intense when considering products like Neuralink that seek not only to restore normal brain function but to exceed it.
Some scientists are raising concerns over how far this technology should be allowed to proceed, and what precautions should be taken. And indeed, the question is one worth asking: to what extent is it morally acceptable to alter the organ that makes us who we are? It is also worth asking questions about social justice and who exactly will receive these devices (Glannon, 2016). The question of morality and neuroprosthetics
also comes into play in patients who suffer from Body Integrity Identity Disorder (BIID). These individuals feel as though a certain healthy and fully functioning limb or body part does not belong. This condition causes psychological distress, to the point where patients with BIID will ask doctors to amputate their healthy limb. Typically, this request is refused based on the widely held medical principle of “do no harm.” Having an amputation performed often negatively impacts a patient’s quality of life after the procedure, as it is more challenging to function independently in day-to-day endeavors. With the advancement of neuroprosthetic devices, however, this may soon no longer be the case (Gibson, 2021). If researchers arrive at the point where a neuroprosthetic device could fully integrate with the body without notable adverse effects, and BIID patients could access these devices, would it still be immoral to amputate a healthy limb based on the “do no harm” argument?

There is much distance between today’s technology and the proposed fully functional, intuitively controlled neural prostheses. Obstacles come in many forms: technical, legal, and moral. Even so, the future of prosthetics is remarkably hopeful. These challenges are surmountable obstacles, not intractable roadblocks, and scientists are constantly finding ways to advance this field in substantial and ethical ways.
Conclusion

One might assume at first thought that the use of mechanical devices to replace human body parts – detached limbs or dysfunctional internal organs – is a remedy unique to the modern age. However, as this series of episodes on the field of prosthetics has shown, there is evidence of prosthesis use dating back to ancient civilization. In addition, the essential function of prosthetic limbs has changed little since ancient times, even as the ability to surgically implant prostheses and mitigate the immune response has grown, and the materials and methods of producing prosthetic devices have become more sophisticated.

While the prosthetic hands and feet of today may look better than those of the ancient world, they are still fundamentally limited in their functionality: prosthetic users still lack the ability to really “feel” objects in the surrounding space. Prosthetic hands may grasp handles and open doors effectively, but the user is either required to pull a lever with the opposite hand or to use residual muscles to perform the work.
The prosthetic hand is not really a part of the human body. However, there does seem to be a paradigm shift underway. With the advent of the brain-machine interface, the barriers between human and prosthesis are blurring. By syncing the brain up with a prosthetic device, the user may actually control its movement with their mind, and the feedback provided by the device may allow the individual to feel the objects they are interacting with. This sense of feeling is likely to be limited at first, but as devices become more sophisticated, there is no reason why prosthetic devices won’t be able to convey the intricacies of texture, or the strength of a grasp, to the user. Prosthetic users will someday have the ability to shake another person’s hand with a grip that is comfortable for the recipient, and feel the recipient’s grip in return.

Concerns about enhancing the power of the brain to interface with outside entities (starting with the control of prostheses, but extending to the control of computers, or even non-verbal communication with others) are valid, but perhaps a bit premature. To the extent that brain-machine interface devices have been demonstrated thus far, they are not capable of such sophisticated acts, and even their ability to control a prosthetic limb is limited. It is quite possible, however, that the technology will evolve to such an extent in the future. Philosophers and policy-makers alike ought to consider the implications.

Those in the halls of government ought also to consider the issues of prosthetic accessibility – insufficient insurance coverage for prosthetic devices, the lack of strong channels for prosthetic distribution, and unequal research and development of prosthetics across the world and within individual countries. If the benefits of advanced prostheses such as the brain-machine interface are to be provided to those who need them most, these issues need to be tackled.
The future of prosthetics will be brightest if the work of researchers, engineers, and policymakers is well coordinated.

References

Abboud, S., Hanassy, S., Levy-Tzedek, S., Maidenbaum, S., & Amedi, A. (2014). EyeMusic: Introducing a “visual” colorful experience for the blind using auditory sensory substitution. Restorative Neurology and Neuroscience, 32(2), 247–257. https://doi.org/10.3233/RNN-130338

Aduayom-Ahego, A., Ehara, Y., & Kpandressi, A. (2017).
Challenges in Prosthetics and Orthotics Education in Sub-Saharan Africa Francophone Country Togo. EC Orthopaedics, 6, 230–237.

An implant uses machine learning to give amputees control over prosthetic hands. (n.d.). MIT Technology Review. Retrieved February 7, 2021, from https://www.technologyreview.com/2020/03/04/905530/implant-machine-learning-amputees-control-prosthetic-hands-ai/

Anand, T. S., & Sujatha, S. (2017). A method for performance comparison of polycentric knees and its application to the design of a knee for developing countries. Prosthetics and Orthotics International, 41(4), 402–411. https://doi.org/10.1177/0309364616652017

Bender, E. (n.d.). The History of Prosthetics. UNYQ. Retrieved March 17, 2021, from http://unyq.com/the-history-of-prosthetics/

Biddiss, E., McKeever, P., Lindsay, S., & Chau, T. (2011). Implications of prosthesis funding structures on the use of prostheses: Experiences of individuals with upper limb absence. Prosthetics and Orthotics International, 35(2), 215–224. https://doi.org/10.1177/0309364611401776

Binder, M., Eitler, J., Deutschmann, J., Ladstätter, S., Glaser, F., & Fiedler, D. (2016). Prosthetics in antiquity—An early medieval wearer of a foot prosthesis (6th century AD) from Hemmaberg/Austria. International Journal of Paleopathology, 12, 29–40. https://doi.org/10.1016/j.ijpp.2015.11.003

Blatchford. (n.d.-a). Linx Product Brochure. Retrieved February 14, 2021, from https://www.blatchford.co.uk/catalogue/limbsystems/linx/flyer/203266564%20Linx%20Product%20Brochure%20Iss2%20AW%20Web%20Pages.pdf

Blatchford. (n.d.-b). NHS MPK Package Offers. Blatchford Mobility Made Possible. Retrieved February 14, 2021, from https://www.blatchford.co.uk/prosthetics/professionals/prosthetic-technology/microprocessor-knees-mpk/nhs-mpk-package-offers/

Blindness causes structural brain changes, implying brain can re-organize itself to adapt. (n.d.). ScienceDaily.
Retrieved February 14, 2021, from https://www.sciencedaily.com/releases/2009/11/091118143259.htm

Bliquez, L. J. (1983). Classical Prosthetics. Archaeology, 36(5), 25–29.

Bonizzato, M. (2020). Neuroprosthetics: An outlook on active challenges toward clinical adoption. Journal of Neurophysiology, 125(1), 105–109. https://doi.org/10.1152/jn.00496.2020

Borton, D., Micera, S., Millán, J. del R., & Courtine, G. (2013). Personalized Neuroprosthetics. Science Translational Medicine, 5(210), 210rv2. https://doi.org/10.1126/scitranslmed.3005968

Braithwaite, J., & Mont, D. (2009). Disability and poverty: A survey of World Bank Poverty Assessments and implications. Alter, 3(3), 219–232. https://doi.org/10.1016/j.alter.2008.10.002

Brey, P. (2005). Prosthetics. MacMillan Encyclopedia of Science, Technology and Ethics, 1527–1532.

Bullard, A. J., Hutchison, B. C., Lee, J., Chestek, C. A., & Patil, P. G. (2020). Estimating Risk for Future Intracranial, Fully Implanted, Modular Neuroprosthetic Systems: A Systematic Review of Hardware Complications in Clinical Deep Brain Stimulation and Experimental Human Intracortical Arrays. Neuromodulation: Technology at the Neural Interface, 23(4), 411–426. https://doi.org/10.1111/ner.13069

Burkett, B. (2010). Technology in Paralympic sport: Performance enhancement or essential for performance? British Journal of Sports Medicine, 44(3), 215–220. https://doi.org/10.1136/bjsm.2009.067249

Carey, S., Lura, D., & Highsmith, M. (2017). Differences in Myoelectric and Body-Powered Upper-Limb Prostheses: Systematic Literature Review. Journal of Prosthetics and Orthotics, 29, P4–P16. https://doi.org/10.1097/JPO.0000000000000159

Cornea, Lens, and Iris. (n.d.). MyHealth.Alberta. Retrieved February 14, 2021, from https://myhealth.alberta.ca:443/Health/Pages/conditions.aspx?hwid=tp10754&lang=en-ca

Cornell Orthotics and Prosthetics. (2018, November 2). Does Insurance Cover Prosthetic Limbs? Cornell Orthotics & Prosthetics. https://www.cornelloandp.com/blog/artificial-limbs-massachusetts/does-insurance-cover-prosthetic-limbs/

Crookes. (n.d.). Neuralink. Retrieved March 20, 2021, from https://search.proquest.com/docview/2453203094/abstract/EEDB81E10F524CAAPQ/1

Munro-Ashman, D., & Miller, A. J. (1976). Rejection of metal to metal prosthesis and skin sensitivity to cobalt. Contact Dermatitis, 2(2), 65–67. https://doi.org/10.1111/j.1600-0536.1976.tb02986.x

Definition of BIOCOMPATIBILITY. (n.d.). Retrieved February 12, 2021, from https://www.merriam-webster.com/dictionary/biocompatibility

Dickenson, D., & Widdershoven, G. (2001). Ethical issues in limb transplants. Bioethics, 15(2), 110–124. https://doi.org/10.1111/1467-8519.00219

Enable. (n.d.). Enable Community Organization FAQS. Retrieved March 10, 2021, from https://www.enablecommunityfoundation.org/faqs/#q10

Farnum, A., & Pelled, G. (2020). New Vision for Visual Prostheses. Frontiers in Neuroscience, 14, 36. https://doi.org/10.3389/fnins.2020.00036

Finch, J. (2011). The ancient origins of prosthetic medicine. The Lancet, 377(9765), 548–549. https://doi.org/10.1016/S0140-6736(11)60190-6

Fiorillo, L., D’Amico, C., Turkina, A. Y., Nicita, F., Amoroso, G., & Risitano, G. (2020).
Endo and Exoskeleton: New Technologies on Composite Materials. Prosthesis, 2(1), 1–9. https://doi.org/10.3390/prosthesis2010001

Foord, D. (2020). Changes in technologies and meanings of upper limb prosthetics: Part I – From ancient Egypt to early modern Europe. MEC20 Symposium. https://conferences.lib.unb.ca/index.php/mec/article/view/13

Gene therapy and CRISPR strategies for curing blindness. (2020, June 25). University of Massachusetts Medical School. https://www.umassmed.edu/news/news-archives/2020/06/gene-therapy-and-crispr-strategies-for-curing-blindness/

Gibson, R. B. (2021). Elective amputation and neuroprosthetic limbs. New Bioethics, 27(1), 30–45. https://doi.org/10.1080/20502877.2020.1869466
Glannon, W. (2016). Ethical issues in neuroprosthetics. Journal of Neural Engineering, 13(2), 021002. https://doi.org/10.1088/1741-2560/13/2/021002

Haggstrom, E. E., Hansson, E., & Hagberg, K. (2013). Comparison of prosthetic costs and service between osseointegrated and conventional suspended transfemoral prostheses. Prosthetics and Orthotics International, 37(2), 152–160. https://doi.org/10.1177/0309364612454160

Hansson, S. O. (2015). Ethical Implications of Sensory Prostheses. In J. Clausen & N. Levy (Eds.), Handbook of Neuroethics (pp. 785–797). Springer Netherlands. https://doi.org/10.1007/978-94-007-4707-4_46

Hernigou, P. (2013). Ambroise Paré IV: The early history of artificial limbs (from robotic to prostheses). International Orthopaedics, 37(6), 1195–1197. https://doi.org/10.1007/s00264-013-1884-7

Howe, P. D. (2011). Cyborg and Supercrip: The Paralympics Technology and the (Dis)empowerment of Disabled Athletes. Sociology, 45(5), 868–882. https://doi.org/10.1177/0038038511413421

Ibrahim, J. M., Serrano, S., Caldwell, A. M., Eliezer, E. N., Haonga, B. T., & Shearer, D. W. (2019). Barriers to prosthetic devices at a Tanzanian hospital. East African Orthopaedic Journal, 13(1), 40–47. https://doi.org/10.4314/eaoj.v13i1.

Immune Response to Implants: Background, Pathology, Presentation. (2019). https://emedicine.medscape.com/article/1230696-overview

Johns Hopkins Medicine. (n.d.). Cochlear Implant Surgery. Retrieved March 10, 2021, from https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/cochlear-implant-surgery

Jolliffe, D., & Wadhwa, D. (2018, October 24). Nearly 1 in 2 in the world lives under $5.50 a day. https://blogs.worldbank.org/opendata/nearly-1-2-world-lives-under-550-day

Lal, H., & Patralekh, M. K. (2018). 3D printing and its applications in orthopaedic trauma: A technological marvel. Journal of Clinical Orthopaedics & Trauma, 9(3), 260–268. https://doi.org/10.1016/j.jcot.2018.07.022

Levy, N. (2002). Reconsidering Cochlear Implants: The Lessons of Martha’s Vineyard. Bioethics, 16(2), 134–153. https://doi.org/10.1111/1467-8519.00275

Limbitless Solutions. (n.d.). Learn All About Our Bionic Arms | Our Work | Limbitless Solutions. Retrieved March 10, 2021, from https://limbitless-solutions.org/ourWork

Luo, Y. H.-L., & da Cruz, L. (2016). The Argus® II Retinal Prosthesis System. Progress in Retinal and Eye Research, 50, 89–107. https://doi.org/10.1016/j.preteyeres.2015.09.003

MacDonald, J. (2017, July 21). A Brief History of Prosthetic Limbs. JSTOR Daily. https://daily.jstor.org/a-brief-history-of-prosthetic-limbs/

Macrophage responses to implants: Prospects for personalized medicine. (n.d.). Retrieved February 12, 2021, from https://reference.medscape.com/medline/abstract/26168797

Major Exhibition Explores Visual Art and Prosthetics. (2016,
July 21). Artnet News. https://news.artnet.com/exhibitions/exhibition-sculpture-prosthetics-henry-moore-leeds-562523

Manero, A., Smith, P., Sparkman, J., Dombrowski, M., Courbin, D., Kester, A., Womack, I., & Chi, A. (2019). Implementation of 3D Printing Technology in the Field of Prosthetics: Past, Present, and Future. International Journal of Environmental Research and Public Health, 16(9), 1641. https://doi.org/10.3390/ijerph16091641

Marino, M., Pattni, S., Greenberg, M., Miller, A., Hocker, E., Ritter, S., & Mehta, K. (2015). Access to prosthetic devices in developing countries: Pathways and challenges. 2015 IEEE Global Humanitarian Technology Conference (GHTC), 45–51. https://doi.org/10.1109/GHTC.2015.7343953

McDonald, C. L., Larbi, H., McCoy, S. W., & Kartin, D. (2020). Information access and sharing among prosthetics and orthotics faculty in Ghana and the United States. Prosthetics and Orthotics International, 0309364620958828. https://doi.org/10.1177/0309364620958828

McPherson, B. (2014). Hearing assistive technologies in developing countries: Background, achievements and challenges. Disability and Rehabilitation: Assistive Technology, 9(5), 360–364. https://doi.org/10.3109/17483107.2014.907365

Medicare.gov. (n.d.). Prosthetic Coverage. Retrieved February 14, 2021, from https://www.medicare.gov/coverage/prosthetic-devices

Ménascé, D. (2014). Economic and social issues around Last Mile Delivery. Field Actions Science Reports: The Journal of Field Actions, Special Issue 12. http://journals.openedition.org/factsreports/3637

Micera, S., Carpaneto, J., & Raspopovic, S. (2010). Control of Hand Prostheses Using Peripheral Information. IEEE Reviews in Biomedical Engineering, 3, 48–68. https://doi.org/10.1109/RBME.2010.2085429

Mioton, L. M., & Dumanian, G. A. (2018). Targeted muscle reinnervation and prosthetic rehabilitation after limb loss. Journal of Surgical Oncology, 118(5), 807–814. https://doi.org/10.1002/jso.25256

Mohney, G. (2013, April 24). Health Care Costs for Boston Marathon Amputees Add Up Over Time. ABC News. https://abcnews.go.com/Health/health-care-costs-boston-marathon-amputees-add-time/story?id=19035114

Mota, A. (2017, October 3). Materials of Prosthetic Limbs. https://scholarworks.calstate.edu/downloads/h128ng975

Mouriño, V., Cattalini, J. P., & Boccaccini, A. R. (2012). Metallic ions as therapeutic agents in tissue engineering scaffolds: An overview of their biological applications and strategies for new developments. Journal of the Royal Society Interface, 9(68), 401–419. https://doi.org/10.1098/rsif.2011.0611

Ngan, C. G. Y., Kapsa, R. M. I., & Choong, P. F. M. (2019). Strategies for neural control of prosthetic limbs: From electrode interfacing to 3D printing. Materials, 12(12), 1927. https://doi.org/10.3390/ma12121927

Niketeghad, S., & Pouratian, N. (2019). Brain Machine Interfaces for Vision Restoration: The Current State of Cortical Visual Prosthetics. Neurotherapeutics: The Journal of the American Society for Experimental NeuroTherapeutics, 16(1),
134–143. https://doi.org/10.1007/s13311-018-0660-1

Nyberg, E. L., Farris, A. L., Hung, B. P., Dias, M., Garcia, J. R., Dorafshar, A. H., & Grayson, W. L. (2017). 3D-Printing Technologies for Craniofacial Rehabilitation, Reconstruction, and Regeneration. Annals of Biomedical Engineering, 45(1), 45–57. https://doi.org/10.1007/s10439-016-1668-5

Pasquina, C. P. F., Carvalho, A. J., & Sheehan, T. P. (2015). Ethics in Rehabilitation: Access to Prosthetics and Quality Care Following Amputation. AMA Journal of Ethics, 17(6), 535–546. https://doi.org/10.1001/journalofethics.2015.17.6.stas1-1506

Perlmutter, S. I. (2017). Reaching again: A glimpse of the future with neuroprosthetics. The Lancet, 389(10081), 1777–1778. https://doi.org/10.1016/S0140-6736(17)30562-7

Pizoferrato, A., Vespucci, A., Ciapetti, G., & Stea, S. (1985). Biocompatibility testing of prosthetic implant materials by cell cultures. Biomaterials, 6(5), 346–351. https://doi.org/10.1016/0142-9612(85)90090-0

Prosthetic Knee Systems. (n.d.). Amputee Coalition. Retrieved January 30, 2021, from https://www.amputee-coalition.org/resources/prosthetic-knee-systems/

Ramadhani, G. A., Susmartini, S., Herdiman, L., & Priadythama, I. (2020). Advanced composite-based material selection for prosthetic socket application in developing countries. Cogent Engineering, 7(1), 1745553. https://doi.org/10.1080/23311916.2020.1745553

Resnik, L., Acluche, F., Borgia, M., Latlief, G., & Phillips, S. (2019). EMG Pattern Recognition Control of the DEKA Arm: Impact on User Ratings of Satisfaction and Usability. IEEE Journal of Translational Engineering in Health and Medicine, 7. https://doi.org/10.1109/JTEHM.2018.2883943

Reznick, J. S. (2008). Beyond War and Military Medicine: Social Factors in the Development of Prosthetics. Archives of Physical Medicine and Rehabilitation, 89(1), 188–193. https://doi.org/10.1016/j.apmr.2007.08.148

Ricciardi, C., Jónsson, H., Jacob, D., Improta, G., Recenti, M., Gíslason, M. K., Cesarelli, G., Esposito, L., Minutolo, V., Bifulco, P., & Gargiulo, P. (2020). Improving Prosthetic Selection and Predicting BMD from Biometric Measurements in Patients Receiving Total Hip Arthroplasty. Diagnostics, 10(10), 815. https://doi.org/10.3390/diagnostics10100815

Rods & Cones. (n.d.). Retrieved February 14, 2021, from https://www.cis.rit.edu/people/faculty/montag/vandplite/pages/chap_9/ch9p1.html

Sample, M., Aunos, M., Blain-Moraes, S., Bublitz, C., Chandler, J. A., Falk, T. H., Friedrich, O., Groetzinger, D., Jox, R. J., Koegel, J., McFarland, D., Neufield, V., Rodriguez-Arias, D., Sattler, S., Vidal, F., Wolbring, G., Wolkenstein, A., & Racine, E. (2019). Brain–computer interfaces and personhood: Interdisciplinary deliberations on neural technology. Journal of Neural Engineering, 16(6), 063001. https://doi.org/10.1088/1741-2552/ab39cd

Sansoni, S., Wodehouse, A., McFadyen, A., & Buis, A. (2016). Utilising the Repertory Grid Technique in Visual Prosthetic Design: Promoting a User-Centred Approach. Journal of
Integrated Design and Process Science, 20(2), 31–46. https://doi.org/10.3233/jid-2016-0015

Schenck, D., Mazariegos, G. V., Thistlethwaite, J. R., & Ross, L. F. (2018). Ethical Analysis and Policy Recommendations Regarding Domino Liver Transplantation. Transplantation, 102(5), 803–808. https://doi.org/10.1097/TP.0000000000002095

See The World Through Bionic Eyes With This Incredible Simulation. (n.d.). Popular Science. Retrieved February 14, 2021, from https://www.popsci.com/visual-simulation-lets-you-see-world-through-bionic-eyes/

Arslan, S., Aksan, S., Ucar, R., & Caliskaner, A. Z. (2015). Contact dermatitis to cobalt chloride with an unusual mechanism. Prosthetics and Orthotics International, 39(5), 419–421. https://doi.org/10.1177/0309364614534293

Sreenivasan, N., Ulloa Gutierrez, D. F., Bifulco, P., Cesarelli, M., Gunawardana, U., & Gargiulo, G. D. (2018). Towards Ultra Low-Cost Myoactivated Prostheses. BioMed Research International, 2018, 9634184. https://doi.org/10.1155/2018/9634184

Standards for prosthetics and orthotics. (2017). World Health Organization.

The History of Prosthetics. (n.d.). UNYQ. Retrieved February 15, 2021, from http://unyq.com/the-history-of-prosthetics/

Thurston, A. J. (2007). Paré and Prosthetics: The Early History of Artificial Limbs. ANZ Journal of Surgery, 77(12), 1114–1119. https://doi.org/10.1111/j.1445-2197.2007.04330.x

Tolkachov, M., Sokolova, V., Loza, K., Korolovych, V., Prylutskyy, Y., Epple, M., Ritter, U., & Scharff, P. (2016). Study of biocompatibility effect of nanocarbon particles on various cell types in vitro. Materialwissenschaft Und Werkstofftechnik, 47(2–3), 216–221. https://doi.org/10.1002/mawe.201600486

Visual Impairment. (n.d.). National Council for Special Education. Retrieved March 17, 2021, from https://www.sess.ie/categories/sensory-impairments/visual-impairment

Vu, P. P., Vaskov, A. K., Irwin, Z. T., Henning, P. T., Lueders, D. R., Laidlaw, A. T., Davis, A. J., Nu, C. S., Gates, D. H., Gillespie, R. B., Kemp, S. W. P., Kung, T. A., Chestek, C. A., & Cederna, P. S. (2020). A regenerative peripheral nerve interface allows real-time control of an artificial hand in upper limb amputees. Science Translational Medicine, 12(533). https://doi.org/10.1126/scitranslmed.aay2857

Walker, M. J., Goddard, E., Stephens-Fripp, B., & Alici, G. (2020). Towards Including End-Users in the Design of Prosthetic Hands: Ethical Analysis of a Survey of Australians with Upper-Limb Difference. Science and Engineering Ethics, 26(2), 981–1007. https://doi.org/10.1007/s11948-019-00168-2

WHO. (2005). Guidelines for Training Personnel in Developing Countries for Prosthetics and Orthotic Services. https://apps.who.int/iris/bitstream/handle/10665/43127/9241592672.pdf?sequence=1

WHO. (2017). Standards for prosthetics and orthotics. https://apps.who.int/iris/bitstream/handle/10665/259209/9789241512480-part1-eng.pdf?sequence=1

Wilson Jr., A. B. (n.d.). 1: History of Amputation Surgery and
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
WINTER 2021
Beyond the Genetic Code: The Era of Epigenetics STAFF WRITERS: VAISHNAVI KATRAGADDA, CHELSEA-STARR JONES, MARY-MARGARET CUMMINGS, ANDREW SASSER, VAANI GUPTA, JUSTIN CHONG, RORI STUART, RACHEL MATTHEW, JULIA PATTERSON BOARD WRITERS: ANNA BRINKS & MEGAN ZHOU Cover: A computerized image of a DNA double helix. Epigenetics is the study of heritable changes in DNA expression that do not involve alterations to the genetic code. Source: pixy.org
Introduction
The term epigenetics was coined by Conrad Waddington in the early 1940s (Dupont et al., 2009). Originally, it described the molecular pathways that modulate the expression of a genotype (the genetic composition of an organism) to produce a particular phenotype (the visible manifestation of that genotype) (Dupont et al., 2009). However, the term has since evolved to refer to the layer of genetic information that exists beyond that encoded in the DNA sequence (Greally, 2018). This can include heritable changes in chromatin, DNA modifications, and other transcription regulators that act to regulate gene expression (Greally, 2018). Epigenetic modifications play a crucial role in nearly every human process, from germ cell development and embryogenesis to the differences between genetically identical twins (Nessa, 2012).
The history of epigenetics research reveals a diverse array of epigenetic mechanisms. One of the earliest observations of an epigenetic event was made by Hermann Joseph Muller in 1930, who described position-effect variegation (heritable changes in gene expression caused by a gene's chromosomal position rather than by its sequence) in fruit flies (Felsenfeld, 2014). Decades later, the discovery that antibody diversity is generated by DNA rearrangement in a somatic cell lineage showed that even the genome itself can be altered during development (Felsenfeld, 2014). Subsequent work established DNA methylation, the substitution of a methyl group for a hydrogen atom on a DNA base, as key to X-chromosome inactivation. Modification of histones (the basic proteins that DNA winds around) on chromatin has also been a focus of epigenetic research. One of the earliest connections to epigenetic mechanisms was made by Brownell, Allis, and coworkers, who found that a histone acetyltransferase controlled gene expression in the model organism Tetrahymena (Felsenfeld, 2014). In addition to histone acetylation, histone phosphorylation and histone ubiquitination were notable research focuses in the 1970s. Histone acetylation,
phosphorylation, and ubiquitination refer to the addition of acetyl groups, phosphate groups, and ubiquitin to histone proteins, respectively. Small non-coding RNAs have also been a focus in epigenetics for their role in transcription suppression (Choudhuri, 2011). More recently, the study of epigenetics has expanded to the transmission of information that is not explicitly encoded in DNA, such as stable alterations in gene expression that may not even be heritable. Ultimately, the history of epigenetics is still very much in progress, and understanding its broader role in the molecular basis of development will have large-scale implications for health and disease (Choudhuri, 2011).
Case Studies of Epigenetics
Honeybees
Several case studies have demonstrated the role of epigenetics in nature and in the laboratory. Honeybees, for example, all originate from fertilized eggs that grow into worker or queen bees. Honeybee larvae begin to differentiate when their food supply changes from an abundant universal diet to either limited food for the worker bees or a surplus of "royal jelly" reserved for the queen bee. Within only a day and a half, the larvae are no longer totipotent; their path as worker or queen has been determined (Oldroyd, 2018). Current research suggests that royal jelly acts as an environmental determinant that establishes the path of a larva's development (Oldroyd, 2018). Royal jelly is a viscous liquid containing many biologically important molecules—Major Royal Jelly Proteins (MRJPs), carbohydrates, amino acids, fatty acids, and various vitamins and minerals—which guide a honeybee larva's growth through a presently unknown mechanism. All honeybees have many enzymes capable of methylating DNA, yet the general pattern of methylation found in worker bees differs from that found in queens. Specific methylating enzymes—DNMT-3, for instance—trigger the development of a honeybee queen when provided to larvae, suggesting that a larva's consumption of royal jelly activates modifications to its DNA, triggering a phenotype entirely absent in other larvae with identical genotypes (Oldroyd, 2018).
Agouti Mice
While honeybees demonstrate a change in caste—the difference between queen and worker bees—as a result of a diet of royal jelly,
agouti mice demonstrate a change in coat color as a result of their mothers' diets. The Agouti gene produces one of two pigments, black eumelanin or yellow phaeomelanin. Of the two, black-coated mice typically have a healthier phenotype, while the yellow phaeomelanin coat phenotype is often correlated with obesity, diabetes, and tumors in adulthood. Once again, methylation patterns differ between the two phenotypes: black mice exhibit methylation of the Agouti gene whereas yellow mice do not, with the exact degree of methylation creating a gradient of in-between phenotypes with yellow-and-black mottled coats. Researchers have found that the nutrition of a mouse's mother impacts the extent of its methylation. For example, a higher level of genistein, a xenoestrogen found in soy, in the maternal diet led to an increased incidence of the black-coat phenotype. Meanwhile, mothers exposed to higher quantities of the endocrine-disrupting chemical bisphenol A had litters with higher incidences of the yellow-coat phenotype (Dolinoy, 2008).
Sex and Gender
While many use the terms sex and gender interchangeably, these words have two distinct meanings. Sex refers to the biological anatomy of an individual, while gender refers to the social identity, behavior, and expression of an individual. A person's sex therefore may not necessarily be the same as their gender. Epigenetics plays a significant role in sex differences and gender: DNA and protein modifications contribute to sex differentiation and to differences in behavior. For instance, the expression of male sexual behavior can be triggered by methylation of the estrogen receptor alpha (ERα) gene in the neonatal brain (Hodes et al., 2017). Furthermore, studies have shown that in rats, lesions in a region of the hypothalamus called the medial preoptic area can inhibit maternal behavior in females, while in males the same lesions block sexual behavior (Malsbury, 1971).
Additionally, female rats whose brains are treated with inhibitors that block DNA methylation exhibit male sexual behavior when injected with testosterone in adulthood (Hodes et al., 2017).
Furthermore, life experiences and behaviors can lead to epigenetic changes in methylation patterns. For example, one study analyzed the effect of maternal licking on the behavior of male and female juvenile rats. Normally, male rats are licked more by their mothers than females are, and this affects their DNA methylation patterns and their expression
Figure 1: Developing queen larvae surrounded by royal jelly. Source: Wikimedia Commons
of the ERα gene (Cortes et al., 2019). By using a paintbrush to mimic maternal licking, some young female rats were "licked" more often, receiving an amount of attention similar to that received by male rats. This caused the female DNA methylation patterns to mimic male patterns and increased expression of the ERα gene; in other words, there was greater masculinization of methylation patterns and cognitive gene expression within the females (Cortes et al., 2019). In humans, studies have shown that early life stress can result in epigenetic changes within the individual; a child in foster care often exhibits greater DNA methylation, which then impacts their mood and behaviors. Another instance of epigenetics impacted by life experience is the epigenetic profile of child abuse victims, who often have reduced hydroxymethylation and increased expression of the kappa opioid receptor in the frontal cortex (Cortes et al., 2019). This epigenetic change increases the risk of suicide in victims of abuse. Epigenetics therefore has a large impact on behavior and can contribute to sex-based differences in disease rates. Alcoholism, for example, is significantly more common in men, while depression is more common in women (Cortes et al., 2019). Although the causes underlying such disparities are complex and may include factors such as socially learned behavior, epigenetics is a crucial component to consider.
The Dutch Famine
One of the most famous cases of transgenerational epigenetic inheritance is the Dutch Famine of 1944-1945. Malnutrition of parents during the famine led to significantly
higher weight and BMI of their first-generation offspring (Veenendaal et al., 2013). It is now hypothesized that this phenomenon results from epigenetic changes to the genome that were passed to offspring. Heijmans and colleagues (2008) showed that exposure to adverse conditions, like famine, during a specific window of human development is directly associated with epigenetic changes. They found that the gene encoding IGF2, or insulin-like growth factor 2, is roughly 5% less methylated in individuals exposed to famine during the periconceptional period (the time from before conception to early pregnancy) of their development (Heijmans et al., 2008). Although IGF2 is not directly related to traits like weight and BMI, the study supports the possibility that epigenetic processes such as methylation could underlie the increased weights and BMIs observed in first-generation offspring of the Dutch Famine. Another study showed that the gene for retinoid X receptor-α exhibited higher levels of methylation in neonatal umbilical cord blood during the famine; children exposed to these conditions subsequently grew up to have higher fat percentages by age nine (Lavebratt et al., 2012). Thus, this receptor could be one reason behind the increased BMI and weight found in the offspring of Dutch Famine survivors. Another less well-understood case of transgenerational epigenetic inheritance is athleticism: offspring have been found in some ways to reap the health benefits of parental exercise. In rats, it has been shown that certain parental exercise regimens, specifically in
Figure 2: Dutch children eating soup during the famine of 1944-45. The limited food access during this time would impact future generations because malnutrition of parents caused epigenetic changes that were passed to offspring. Source: Wikimedia Commons
female rats prior to or during pregnancy, seem to result in epigenetic changes in their offspring (Meireles et al., 2021). These changes differ depending on the timing of the exercise, with effects ranging from improved memory to better cognitive function. In humans, parental exercise is less clearly associated with health benefits in offspring, although an inverse correlation between strenuous parental exercise and infant birth weight has been shown (Axsom & Libonati, 2019).
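Findings like the IGF2 methylation difference described above ultimately rest on a simple comparison: the average fraction of methylated CpG sites in exposed versus unexposed individuals. The sketch below illustrates that arithmetic with invented numbers; the values are not data from Heijmans et al. (2008).

```python
# Illustrative comparison of mean CpG methylation between groups.
# All values are invented for demonstration; they are NOT data from
# Heijmans et al. (2008), which reported roughly 5% lower IGF2
# methylation in famine-exposed individuals.

def mean(values):
    return sum(values) / len(values)

# Hypothetical fraction of IGF2 CpG sites methylated per individual.
exposed = [0.47, 0.48, 0.46, 0.49, 0.47]      # periconceptionally exposed to famine
unexposed = [0.52, 0.53, 0.51, 0.52, 0.53]    # unexposed comparison group

difference = mean(unexposed) - mean(exposed)
print(f"exposed mean:   {mean(exposed):.3f}")      # 0.474
print(f"unexposed mean: {mean(unexposed):.3f}")    # 0.522
print(f"difference:     {difference:.3f}")         # 0.048, about 5 percentage points
```

Real epigenome-wide comparisons add statistical testing and corrections for confounders, but the core quantity, a between-group difference in methylation fraction, is the same.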
Molecular Biology of Epigenetics
Cellular Differentiation
Every cell in the human body is essentially genetically identical, carrying the same "instruction book" that drives the creation of proteins and allows the cell to carry out its function. Despite carrying the same genes, however, different cell types have radically different structures and functions: imaging a neuron and a cardiac muscle cell reveals entirely different shapes, and further investigation shows drastically different types and amounts of proteins within the cells. These differences are due to complex epigenetic mechanisms at work during development. At the earliest stage, the zygote (formed from the union of a female gamete, the egg, with a male gamete, the sperm) is totipotent, meaning that it possesses complete differentiation potential – it can turn into any type of cell (Denker, 2014). However, as the zygote continues to develop, cells begin to differentiate and take on fixed roles (despite maintaining the same genetic
code). Epigenetic mechanisms play a critical role in this process. Pluripotency, the ability of a cell to differentiate into several different cell types, is a feature found in stem cells. Although the molecular basis for pluripotency is not fully understood, a few factors are known to affect it. Human induced pluripotent stem cells (hiPSCs) are derived from human somatic cells, yet they display the surface markers characteristic of human embryonic stem cells (hESCs, which come from 5-6-day-old human embryos), including stage-specific embryonic antigens (SSEAs) (Ilic et al., 2012; Takahashi et al., 2007). Additionally, hiPSCs express several genes found in hESCs, such as OCT3/4, SOX2, and telomerase reverse transcriptase, at protein levels similar to those found in hESCs. Another critical gene involved in pluripotency is NANOG, whose expression is a target of OCT4 and SOX2 (Seymour, Twigger & Kalkulas, 2015). The ability to generate hiPSCs demonstrates that pluripotency can be induced in already differentiated cells. This can be done using a retrovirus vector, which inserts the genes OCT3/4, SOX2, KLF4, and c-MYC into mature, differentiated cells. The cells are then cultured with basic fibroblast growth factor, allowing colonies of pluripotent cells to grow (Takahashi et al., 2007).
Genetic Switches
Genetic switches include gene regulatory proteins and the specific DNA sequences that
Figure 3: Diagram depicting the changes in cell potential that occur over the course of development. The morula is a solid ball of cells that results from the division of a zygote. The cells in the morula are totipotent, meaning they can become any type of tissue (including the placenta). Embryonic stem cells within the blastocyst are pluripotent and can become any type of tissue, excluding the placenta. Source: Wikimedia Commons
these proteins recognize (Alberts et al., 2002). They allow genes to turn on or off and therefore play an important role in encoding epigenetic information. While only approximately 1.5% of the human genome encodes instructions for making protein, the other 98.5% is far from useless (Yong, 2012). These “non-coding” DNA regions contain sites that enable a complex and dynamic regulation of gene expression, driving dramatic changes in cellular and organismal phenotypes by mediating gene expression in response to environmental factors (Hoffmann et al., 2015). There are several different components that make up a gene. The promoter is a specific DNA sequence that directs RNA polymerase to bind to the DNA and begin transcribing mRNA (Alberts et al., 2002). However, gene regulatory proteins can also bind to enhancer sequences and activate transcription. There are thousands of different gene regulatory proteins that can vary from one gene control region to the next. These proteins may aid in assembling the RNA polymerase holoenzyme or alter chromatin structure (Alberts et al., 2002). Alternatively, silencers can act to repress gene transcription (Ogbourne & Antalis, 1998). The complex
interaction of these different gene regions and regulatory proteins contributes to the precise and unique functions of different cells.
Stem Cell Therapy
Stem cell therapy is a methodology that uses cell differentiation to its advantage. As previously discussed, a stem cell has not yet specialized. The two main characteristics of a stem cell are that it can perpetually self-renew and, in the case of embryonic stem cells, can develop into almost every type of cell (Biehl & Russell, 2014). These characteristics are theoretically useful in cases of cell death, removal, or damage. For example, in cases of cell damage by chemotherapy, stem cell therapy could be used to replace non-cancerous yet damaged cells. Another application of stem cell therapy is the treatment of different types of heart disease. Myocardial infarction (MI), or a heart attack, is a deadly heart condition often caused by a blockage that deprives heart cells of blood, leading to their eventual death. Unlike many other tissues, heart tissue does not regenerate after it has died. Conditions like MI are therefore prime opportunities for stem cell therapy, as this process could
Figure 4: Potential applications of stem cell therapies. Beyond MI and other heart diseases, stem cell treatments could be used for a wide array of ailments. Source: Wikimedia Commons
theoretically replace damaged and dead heart cells, allowing MI survivors to make a fuller recovery. Unfortunately, research to date has struggled to produce consistently positive results in which patients overcome the damage done by cardiovascular disease (Jeevanantham et al., 2012; Afzal et al., 2015; Gyongyosi et al., 2016). To gain more consistency in the data, a deeper understanding is needed of how stem cell-mediated cardiac regeneration is affected by the specific circumstances of the heart condition, such as disease state and any comorbidities. In other words, to be helpful, stem cell therapy would have to be personalized for each MI survivor (Muller et al., 2018; Becher et al., 2011). In theory, stem cell personalization for victims of MI or other heart diseases would be possible through the identification of biomarkers, such as epigenetic tags, to find the patients who could benefit most (Fernandez-Aviles et al., 2017; Madonna et al., 2016).
Chromatin Structure and Epigenetic Mechanisms: DNA Methylation and Histone Modifications
DNA methylation is a reversible, covalent epigenetic mechanism which generally represses gene expression. It is also associated with inherited epigenetic characteristics, given that DNA methylation patterns can remain
constant through DNA replication and mitosis. Methyl groups are typically attached to cytosine at the carbon-5 position, where they protrude into DNA's major groove. Approximately 3% of cytosines in human DNA are methylated in this way. Locally, this chemical change alters the structure of the DNA, preventing certain proteins from binding to it while enabling others to bind (Gibney & Nolan, 2010). A family of enzymes called DNA methyltransferases (DNMTs) is responsible for adding methyl groups to DNA, thereby silencing gene expression (methyl groups are removed through separate demethylation pathways). There are four main types of DNMTs found in mammals. DNMT1 is responsible for maintaining methylation in a region of DNA during replication. DNMT3a and DNMT3b participate in de novo methylation (methylating DNA strands that have not yet been methylated), and they also help DNMT1 preserve methylation patterns during DNA replication. Last but not least, DNMT2 plays a crucial role in RNA methylation, even though it has a relatively weak DNA-methylating ability (Gibney & Nolan, 2010). Apart from adding methyl groups to DNA, DNMTs can also directly interact
with transcription factors; there are currently 79 known transcription factors that interact with DNMTs. For example, the P21 gene's expression is known to be transcriptionally silenced in this way: the protein p53 recruits DNMT1 to the site of p21, which results in the methylation of p21 and the silencing of its expression (Gibney & Nolan, 2010).
Figure 5: The basic units of chromatin structure. Source: Wikimedia Commons
After DNA methylation, transcriptional silencing is further enhanced when methyl-binding proteins (MBPs) are recruited. Some MBPs, like MBD1-3, attract co-repressors that downregulate transcription. Others are enzymes, like histone deacetylases, that alter the structure of the chromatin itself, making the DNA region more difficult for transcription factors to reach. There are many families of MBPs, but in general, they seem to associate most often with the promoters of genes (Gibney & Nolan, 2010).
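The repressive cascade just described (methylation of a promoter, followed by MBP recruitment and chromatin compaction) can be caricatured in a few lines of code. This is an illustrative toy model only; the function, thresholds, and numbers below are invented and do not come from Gibney & Nolan (2010).

```python
# Toy model of methylation-based silencing: expression falls as the
# fraction of methylated CpG sites rises, and drops further once
# methyl-binding proteins (MBPs) recruit co-repressors and histone
# deacetylases. All quantities are invented for illustration.

def expression_level(methylated_cpgs: int, total_cpgs: int, mbp_bound: bool) -> float:
    """Relative transcriptional output on a 0-to-1 scale."""
    methylation_fraction = methylated_cpgs / total_cpgs
    level = 1.0 - methylation_fraction   # methylation blocks activator binding
    if mbp_bound:
        level *= 0.2                     # MBP-recruited co-repressors compact chromatin
    return max(level, 0.0)

print(expression_level(0, 20, mbp_bound=False))   # 1.0: unmethylated, fully active
print(expression_level(18, 20, mbp_bound=True))   # ~0.02: heavily methylated and MBP-bound
```

The point of the sketch is the layering: methylation alone dampens expression, and MBP binding multiplies the repression rather than replacing it.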
Histone modification is another important epigenetic mechanism. In eukaryotes, DNA is packaged into a complex structure called chromatin, whose most basic units are bundles of DNA called nucleosomes. In the nucleosome, DNA is wrapped around proteins called core histones. A specific type of histone, known as the linker histone (H1), binds to the DNA between nucleosomes. This makes the chromatin look like a beaded string. This structure not only helps pack the DNA into an organized bundle in the nucleus, but also plays a critical role in gene expression through epigenetic regulation (Gibney & Nolan, 2010). Histones, although they are globular proteins (somewhat spherical in nature), have protruding, flexible "tails." These are the components of the histones that undergo post-translational modification (PTM). There are many types of PTMs, the most well studied being the covalent modifications of methylation, acetylation, and phosphorylation. PTMs are regulated by several enzymes, including histone methyltransferases, histone acetyltransferases (HATs), and histone deacetylases (HDACs), which add methyl groups, add acetyl groups, and remove acetyl groups, respectively. Another type of chromatin regulation involves enzymes that use ATP hydrolysis to remodel the nucleosomal arrangement, sometimes replacing core histones with variant histones or exposing new areas of DNA to histone contact (Gibney & Nolan, 2010). Histone modification, whatever form it takes, ultimately alters gene expression. Histone acetylation reduces the strength of the binding
between the histones' positively charged side chains and the negatively charged DNA; acetylation neutralizes the positive charges on the histone tails, weakening their grip on the DNA. Thus, the DNA becomes more exposed to transcription factors and is more easily transcribed. In contrast, deacetylation achieves the opposite, making the DNA-binding sites less accessible or completely inaccessible to transcription factors. Furthermore, it is believed that when transcriptional activators are bound to DNA, they recruit more HATs to facilitate the opening up of adjacent DNA. Conversely, transcriptional repressors recruit further HDACs to promote deacetylation, thereby reducing transcription (Gibney & Nolan, 2010).
RNA-Based Epigenetic Mechanisms
Although the role of RNA in epigenetic regulation is not understood as well as the DNA-based mechanisms described previously, RNA does play a significant role. The majority of RNA molecules transcribed from DNA are actually non-coding (ncRNAs), meaning that they are not translated into proteins. Some ncRNAs are small, with lengths of less than 200 nucleotides (nt), while others can be much longer, at 200 nt and above. These ncRNA molecules can impact epigenetic regulation in many different ways (Gibney & Nolan, 2010).
For example, micro-RNAs (miRNAs), roughly 22 nt in length, are known to prevent translation of target mRNA molecules by base-pairing with them, either inhibiting translation or inducing mRNA degradation altogether. Short interfering RNAs (siRNAs) guide DNA methylation by directing it to the regions of the genome to which the siRNA molecule is bound. Other types of small ncRNAs involved in epigenetic regulation include PIWI-interacting RNAs (piRNAs) and repeat-associated RNAs (rasiRNAs). These are essential for germline viability (maintaining sperm and egg cells) in mammals and fish because they regulate the activity of transposable elements in those cells (Gibney & Nolan, 2010). The function of long ncRNAs (lncRNAs) is not well understood, but they are believed to be important for epigenetic regulation. For example, the mere act of transcribing a lncRNA can alter chromatin structure or RNA polymerase II recruitment, which in turn can affect downstream transcription. Furthermore, lncRNAs interact with chromatin-modifying complexes: a lncRNA called HOTAIR guides polycomb repressive complex 2 (PRC2), which possesses methyltransferase activity, to the HOXD cluster, repressing several genes. lncRNAs have been found to be associated with various human diseases (including hepatocellular carcinoma, Alzheimer's disease, and diabetes), important for the regulation of development, and active in governing cell differentiation and tissue specificity (DiStefano, 2018; Gibney & Nolan, 2010).
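The base-pairing logic behind miRNA silencing can be made concrete. In practice, computational target prediction often begins by matching the reverse complement of the miRNA "seed" (nucleotides 2-8) against candidate mRNA sequences. The sketch below implements only that first step; the mRNA sequence is invented, and real prediction tools weigh many additional features.

```python
# Toy miRNA target scan: find positions in an mRNA that are perfectly
# complementary to the miRNA "seed" (nucleotides 2-8). The mRNA below
# is invented; the miRNA is the well-known let-7 sequence.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna: str, mrna: str) -> list:
    seed = mirna[1:8]  # nucleotides 2-8 (0-indexed slice)
    # Pairing is antiparallel, so the target site is the reverse
    # complement of the seed.
    site = "".join(COMPLEMENT[base] for base in reversed(seed))
    return [i for i in range(len(mrna) - len(site) + 1)
            if mrna[i:i + len(site)] == site]

let7 = "UGAGGUAGUAGGUUGUAUAGUU"      # let-7 miRNA
utr = "AAACUACCUCAAAAACUACCUCAA"     # invented 3' UTR with two seed matches
print(seed_match_sites(let7, utr))   # [3, 15]
```

A real predictor would also score site context, conservation, and pairing thermodynamics rather than demanding a perfect seed match.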
Cancer
Just as mutations in DNA sequences can cause cancer, epigenetic changes can have carcinogenic effects; notably, the effects of an epigenetic change are more easily reversible than those of a genetic mutation (Yoo & Jones, 2006). In some cases, cancer cells are found with very few mutations yet numerous epigenetic abnormalities (Feinberg, Koldobskiy, & Göndör, 2016). Changes in methylation in the epigenome – either hypermethylation or hypomethylation – are a main feature of human cancers (Guo et al., 2019). In tumor cells, it has been found that hypermethylation commonly occurs on tumor-suppressor and miRNA genes (Esteller, 2008). A frequent effect of this hypermethylation is the prevention, or silencing, of gene expression, since most methylation sites are on promoter regions (Herman & Baylin, 2003). The hypermethylation of these tumor-suppressor genes can affect the cell cycle, apoptosis, carcinogen metabolism, and several other processes via deactivation, allowing for tumor growth (Esteller, 2008; Herman & Baylin, 2003). The hypermethylation of the gene hMLH1, for example, silences its DNA repair capabilities, increasing the likelihood of the cell becoming cancerous (Esteller, 2008). In addition to regions of hypermethylation, tumor cells often possess large areas of hypomethylation. Areas of DNA that are typically methylated in normal cells can become hypomethylated in cancerous cells, causing DNA instability and mutability. The transition from a benign growth of cells to a tumor is often marked by an increase in hypomethylation as well (Esteller, 2008). Hypomethylation inhibits cellular differentiation while increasing cellular de-differentiation and epigenetic variation. The increase in epigenetic variation allows for phenotypic heterogeneity, which then increases the amount of hypomethylation in a positive feedback loop and promotes tumor formation. The variety of phenotypes caused by hypomethylation allows the selective pressures placed upon cancer cells by their environment to select for increased cell replication and increased epigenetic adaptability, ultimately stabilizing the transition into a cancerous state and driving tumor evolution (Feinberg, Koldobskiy, & Göndör, 2016).
Psychiatric Illnesses and Neurodegenerative Disorders
Epigenetic changes have also been highly implicated in the development of various neurodegenerative diseases and psychiatric illnesses. These changes represent important targets for intervention, as these diseases are determined by both genetic and environmental factors (Hwang et al., 2017). Maternal behavior, environmental toxins, nutrition, and psychosocial stress, among other factors, are greatly implicated in brain-related pathology (Landgrave-Gomez et al., 2015). One example of epigenetic influence on neurodegenerative disease is the dysregulation of the repressor element 1-silencing transcription factor (REST). Dysregulation of REST and other transcription factors results in cognitive impairment due to neuronal cell death. However, as neurons do not divide once they are specialized, these epigenetic changes
are not heritable, nor are they transferred to progeny. These epigenetic changes are caused by a variety of factors, such as changes in methylation patterns or histone acetylation. In the brain, the dysregulation of certain genes encoding methyltransferases (DNMT1, DNMT3A, DNMT3B) disrupts methylation and, as a consequence, diminishes the capacity for memory formation. Histone acetylation dysregulation similarly plays a role in memory, as well as in synaptic plasticity. Injuries to the brain, caused by ischemia, seizures, stroke, or other insults, disrupt REST regulation: the factor is induced in response to brain injury, accumulates in the nuclei of vulnerable cells, and silences downstream targets, including genes for NMDA (glutamate) receptors, leading to neuronal death (Hwang et al., 2017).
While ubiquitous REST expression and function is associated with aging, the loss of REST results in mild to severe cognitive impairment associated with Alzheimer's Disease (AD). Generally, REST is abundant in the nuclei of neurons in the prefrontal cortex and parts of the hippocampus; in AD patients, this abundance is decreased. As the transcription factor has been shown to regulate genes controlling oxidative stress and the buildup of beta-amyloid, its loss of function places further stress upon the brain and contributes to AD pathology (Hwang et al., 2017). The resulting buildup of beta-amyloid is a hallmark of Alzheimer's disease: the protein aggregates interfere with normal synaptic function, leading to common Alzheimer's symptoms such as memory loss. REST is implicated in Huntington's Disease as well. Normally, the huntingtin protein binds to REST in the cytoplasm of vulnerable cells, sequestering REST away from its target genes. In Huntington's Disease, however, the huntingtin protein is disrupted, preventing it from binding to REST; as a consequence, REST reaches its downstream targets and silences genes important for neuronal survival (Hwang et al., 2017). Altered methylation patterns in other genes have been associated with AD as well, in brain regions such as the temporal lobe, frontal cortices, pons, and cerebellum. Inhibiting DNA methylation has been shown to cause deleterious effects on neuronal plasticity (the ability of the brain's neural network to reorganize over time). The loss of the brain's neuroplastic abilities results in difficulties in strengthening and maintaining cognitive skills and various
forms of memory (such as procedural memory and technique). Histone modifications, such as reduced acetylation of histone H4 at lysine 14 (H4K14), have been shown to reduce dendritic spines, resulting in a decrease in synaptic connections with other neurons that is associated with AD. In Parkinson's Disease, another neurodegenerative disease, methylation of the gene encoding alpha-synuclein decreases, and this change is associated with its disease pathology (Landgrave-Gomez et al., 2015). Psychiatric disorders are similarly caused by environmental factors and epigenetic changes. In studies of animal models, chronic physical and psychological stressors have been shown to cause a decrease in reward-related behaviors and social interactions. These behavioral alterations can be treated with antidepressants, which potentially reverse the underlying epigenetic changes. Histone acetylation may be implicated in depression, as studies have shown that HDAC inhibitors administered along with antidepressants improve patient outcomes. The chronic stresses associated with depression cause genome-wide reprogramming and a decrease in H3K14 acetylation in the nucleus accumbens (a region of the brain important for mediating reward behavior). An increase in H3K14 acetylation is associated with adaptation to stress, while a decrease disrupts that process. Other genes in the nucleus accumbens are implicated in depression as well. Downregulation of a particular histone methyltransferase (G9a) decreases H3K9me2, which blunts the brain's natural antidepressant mechanisms, including its normal serotonin production (Bagot et al., 2014). Some studies have shown that early-life stress in animal models (induced by separating infant animals from their mothers) resulted in potentially permanent epigenetic alterations that led to depression. At a molecular level, these stresses resulted in a reduction in DNA methylation in the enhancer region of the Avp gene.
With Avp expression correspondingly increased, methylation patterns in the genes Nr3c1 (nuclear receptor subfamily 3, group C, member 1, which encodes the glucocorticoid receptor) and Bdnf (brain-derived neurotrophic factor) were altered in the prefrontal cortex and hippocampus, leading to depression-like behaviors (Nestler et al., 2016). Other psychiatric disorders have epigenetic implications as well. In schizophrenia, RELN (a gene encoding the protein reelin) exhibits
Figure 6: Racism and health inequities can be deepened by epigenetic mechanisms, reinforcing the critical need to intervene in these issues. Source: Wikimedia Commons
increased methylation in the prefrontal cortex as well as in other brain regions. The hypermethylation decreases RELN expression and diminishes levels of reelin, which is important in controlling neuronal migration. SOX10, a gene encoding a transcription factor implicated in brain development, was affected in schizophrenic patients as well. Additionally, human leukocyte antigen (HLA) genes displayed altered methylation patterns, along with alterations in GAD1 (important for synthesis of the inhibitory neurotransmitter GABA). The epigenetic profile of schizophrenia overlaps with patterns observed in bipolar disorder. The HLA9 gene is known to exhibit aberrant methylation patterns in postmortem bipolar brains and in peripheral blood. The GAD1 gene was similarly altered in bipolar disorder patients as in schizophrenia (Nestler et al., 2016).
Social Justice Implications

Health Inequities and the Biopsychosocial Model

Given the varying environmental conditions experienced by different social and ethnic groups, the environmental factors that affect one's epigenetic profile have the potential to generate health inequities. These inequities demonstrate the importance of the biopsychosocial model, which accounts for both genetic and environmental factors in describing disease pathology. For example, tobacco-related cardiovascular disease morbidity and mortality is higher in Black and Latinx populations, and Black females experience higher rates of
tobacco-related cancer. These differences are not attributable to differences in DNA itself, but to variations in diet, nutrition, alcohol intake, occupational and environmental exposures, and psychosocial stress that lead to epigenetic modifications of DNA. Different diets greatly affect DNA methylation patterns, as they provide antioxidants, suppress oncogenes, and stimulate growth factors to different degrees. Variations in diet exist between racially defined social groups, with certain groups consuming less vitamin C, vitamin E, and folic acid, or consuming excess fat, impacting their methylation patterns (Fernander et al., 2007).
"Psychosocial stressors lead to further stresses in the body as catecholamines and glucocorticoids are released at abnormally high and sustained levels. These can contribute to psychiatric illnesses."
Other stressors, such as psychosocial stress, have been implicated in epigenetically mediated disease development as well. Psychosocial stressors lead to further stresses in the body as catecholamines and glucocorticoids are released at abnormally high and sustained levels; these can contribute to psychiatric illnesses. Occupational hazards impact methylation as well, with exposure to chemicals in pesticides and asbestos leading to DNA methylation alterations (Fernander et al., 2007). As previously mentioned, early developmental and childhood stresses also have long-term epigenetic impacts. The immune system is affected long-term by childhood stressors, leading to neuroendocrine dysfunction. Even during development, fetal stressors can result in adult-onset diseases and are correlated with lower birth weights. Poor childhood health is associated with a three-fold greater chance of poor health as an adult (Rubin et al., 2016).
The differences in environmental conditions among racial groups can be attributed to conditions related to discrimination and lack of economic opportunity. According to cumulative disadvantage theory, racism, social class, and health all go hand-in-hand in affecting epigenetics. Experiencing race-based discrimination and socioeconomic disadvantage has been shown to repeatedly challenge the hypothalamic-pituitary-adrenal (HPA) axis, leading to physical and psychological diseases. Racial and socioeconomic discrimination often have compounding effects: disadvantaged racial groups usually reside in poorer neighborhoods, furthering the cyclical lack of resources. Nutrition is especially impacted in these poor neighborhoods, which are often referred to as food deserts due to their lack of access to grocery stores (Rubin et al., 2016).
"Environmental conditions also tend to be worse in impoverished neighborhoods due to their geography. As a result, many groups experience higher exposure to allergens and pollution; one study showed that Black participants went to the emergency department twice as many times as White participants for asthma-related illnesses."
Environmental conditions also tend to be worse in impoverished neighborhoods due to their geography. As a result, many groups experience higher exposure to allergens and pollution; one study showed that Black participants went to the emergency department twice as many times as White participants for asthma-related illnesses. Environmental factors include the social determinants of health, which encompass aspects of life like health literacy, environment, community, stress, and education (Matsui et al., 2019). As these factors are important in moderating health through their impacts on epigenetics, the biopsychosocial model is necessary for arriving at a holistic understanding of illness that takes into account both genetics and epigenetics.

Transgenerational Consequences of Racial Discrimination

Our epigenetic tags are strongly influenced by the environment in which we developed. Psychosocial stressors, or the neurophysiological changes caused by the anticipation or perception of challenges to well-being located in our social environment, can arise from early-life adversity, such as childhood abandonment, or from chronic socioeconomic deprivation and intense trauma, like famine, slavery, and warfare (Cunliffe, 2016). Chronic stress can be highly damaging for humans, and its consequences have also been studied in nonhuman primates. Social ranking in nonhuman primates, such as chimpanzees and baboons, is generally structured around male dominance established through aggression. Subordinate primates are under the constant threat of provoking the dominant male. As such, baseline
stress levels in these established hierarchical social structures are usually much higher in subordinates than in the dominant male (Sapolsky, 1993). This chronic stress can have adverse effects on cardiovascular, reproductive, immune, and nervous system functioning, with elevated levels of glucocorticoids and catecholamines specifically leading to hypertension, infertility, and behavioral disorders within subordinates (Sapolsky, 2005; Shively & Clarkson, 1994; Sapolsky & Share, 2004; Cunliffe, 2016). Chronic stress not only affects an animal's physical health; it can also heavily influence mental and social behavior. Studies have shown that adults who experienced stress can pass on their altered epigenetic profiles to their offspring. In rats, newborn pups who were not nurtured by their mothers have epigenetically different hippocampal neurons (the hippocampus being the part of the brain that regulates stress reactivity and behavior), making them more reactive and anxiety-prone as adults (Meaney et al., 2007; Weaver et al., 2004; Thayer & Kuzawa, 2011). The results of these experiments suggest a link between parental behavior and the neurological and social differences seen in offspring. These parental behaviors could be caused or influenced by a plethora of external factors, such as having to work constantly to provide basic necessities like food, clothes, and shelter, leading to a lack of emotional availability. Even a parent's mental health can influence a child's epigenetic tags. One study found that maternal depression during pregnancy predicted stress reactivity and methylation of the glucocorticoid receptor (GR) locus in buccal cells of three-month-old infants (Oberlander et al., 2008; Thayer & Kuzawa, 2011). Similar methylation differences of the GR locus in the hippocampus have been linked to teenage suicide victims who experienced childhood abuse (McGowan et al., 2009; Thayer & Kuzawa, 2011).
This suggests that, even if a child did not experience abuse themselves, their mother's mental health during pregnancy can produce epigenetic tags similar to those seen in children who did. Evidently, stressors such as violence, discrimination, and poverty experienced in one's life can cause epigenetic changes to one's DNA. These changes can alter a person's physical, mental, or emotional development and influence, either through direct epigenetic heritability, parental behavior and mental health, or some combination of the two, their
offspring. When thought about in the context of racial discrimination, it is not unreasonable to think that such a formula has influenced the way people of this generation exist physically, mentally, or emotionally. For instance, a Black American who is a descendant of former slaves is likely to carry epigenetic alterations that arose to combat the extreme, chronic stress caused by the widespread dehumanization and brutality experienced by those only a couple of generations before them. Additionally, socially marginalized groups in America are more likely to live in unsafe environments, surrounded by stressors like disease, violence, environmental pollution, and extreme poverty. As stated before, stressors like these are considered early-life adversities. When the probable epigenetic alterations caused by childhood and/or extreme adult trauma are paired with those of one's ancestors, it is not hard to believe that racial discrimination can lead, and has led, to widespread epigenetic changes that can influence a person's predisposed social, mental, and physical wellbeing.
Therapies: Drug Development and Future Directions

Histone Deacetylase Inhibitors

As discussed previously, one of the key mechanisms of epigenetic regulation is the acetylation of histone proteins. When acetyl groups are added to the amino acid lysine, which is positively charged and located on the histone tail, the histones become neutral, and the nucleosomes are unable to effectively bind to negatively charged DNA (Alhamwe et al., 2018). This loosens the chromatin structure, promoting DNA transcription and increasing gene expression. Histone deacetylases (HDACs) are enzymes that catalyze the removal of these acetyl groups from histones and promote repression of gene expression. HDACs also deacetylate non-histone proteins like transcription factors and tumor suppressor proteins. For example, the tumor suppressor protein p53 would be constitutively active in cells, but the Mdm2 protein (unless repressed itself) constantly binds to p53 and targets it for degradation (Ito et al., 2002). When cells have sustained substantive DNA damage, p53 is post-translationally acetylated, which stabilizes it and allows it to translocate into the nucleus and promote transcription of its target genes. This activity of p53 in the nucleus prevents the cell from replicating so it cannot
propagate its defective DNA. Without this mechanism in place, cancerous tumors are likely to arise. Indeed, p53 is knocked out in many forms of cancer. Furthermore, it has been found that in conditions where HDACs targeting p53 are overactive, p53 is deacetylated, and this allows DNA-damaged cells to continue to replicate, eventually leading to cancer. Histone deacetylase inhibitors (HDACi) prevent HDACs from functioning and deacetylating p53. Thus, they have the effect of stabilizing p53 in DNA-damaged cells and preventing those cells from propagating their defective genetic material. These inhibitors may prove quite useful for treating cancer when combined with other anti-cancer drugs and radiotherapy. An HDACi causes upregulation of the cell-cycle gene p21, which is downstream of p53 in its pathway (Kim & Bae, 2011). This leads to cell cycle arrest in tumor cell lines. One major advantage of HDACi is that normal cells are resistant to the inhibitors' effects while tumor cells are vulnerable. This leaves normal cells untouched during cancer treatment while the growth of cancerous tumors is crippled.
"It has been found that in conditions where HDACs targeting p53 are overactive, p53 is deacetylated and this allows DNA-damaged cells to continue to replicate, eventually leading to cancer."
Certain HDACi, including the drugs vorinostat, romidepsin, belinostat, and panobinostat, have already been approved for the treatment of cancers like T-cell lymphoma and multiple myeloma (Eckshlager et al., 2017). Other potential HDACi are in phase I and II clinical trials for the treatment of acute myeloid leukemia and cervical cancer (Xu et al., 2007). While the development of these drugs has shown promise, there is still much to learn and discover about these inhibitors and their possibilities in cancer treatment.

DNA Methyltransferase Inhibitors

As mentioned earlier, DNA methylation is one of the most stable epigenetic modifications, and it is important for normal development, genomic imprinting, and X-chromosome inactivation (Jin et al., 2011). It should therefore come as no surprise that dysfunction in DNA methylation – hyperactivity or hypoactivity – can lead to cancer and other biological problems. DNA methylation is mediated by DNA methyltransferase (DNMT), which both covalently links methyl groups to cytosines in DNA and is responsible for copying DNA methylation patterns when DNA strands replicate. DNA methylation plays a significant role in carcinogenesis by silencing genes that
have tumor-suppressor functions, which can cause cells to lose control of the cell cycle and become cancerous. This usually happens via the hypermethylation of promoters that regulate the transcription of tumor-suppressor genes. Increased methylation of these regions promotes a closed conformation of the chromatin and prevents transcription factors and RNA polymerase from transcribing the genes that code for these proteins (Herman & Baylin, 2003). The root problem is usually overactivity or overexpression of DNMTs, which is why DNA methyltransferase inhibitors are valuable as a treatment option.
Figure 7: The structure of histone acetyltransferase. Source: Wikimedia Commons
"DNA methylation plays a significant role in carcinogenesis by silencing genes that have tumor-suppressor functions, which can cause cells to lose control of the cell cycle and become cancerous."
The most successful existing DNA methyltransferase inhibitors (DNMT inhibitors) are azacitidine and decitabine, but these anti-cancer drugs are quite toxic and are not chemically stable (Gnyszka et al., 2013). Regardless, they are the only epidrugs (epigenetic regulators) approved for treating acute myeloid leukemia and myelodysplastic syndrome. These hypomethylating compounds function as cytidine analogs, and they are integrated into the genome of S-phase cells (the period of the cell cycle prior to mitosis during which the chromosomes are replicated) as the DNA replicates. The incorporated compounds disrupt the interaction between DNMTs and the DNA and promote the degradation of DNMTs by proteasomes. A promising demethylating agent right now is zebularine, which is more chemically stable and less cytotoxic than azacitidine and decitabine (Ben-Kasus et al., 2005). This compound inserts itself into DNA and causes DNMT to covalently bind to it, trapping the enzyme before DNA damage signaling triggers the recruitment of proteasomes to degrade DNMTs. Zebularine is advantageous because it is metabolized more slowly and therefore has a longer half-life. It is also highly stable at acidic and neutral pH, but it still needs to be taken at periodic intervals, either orally or intravenously, to ensure consistent DNMT inhibition. Overall, since epigenetic dysregulation can cause cancer just as genetic mutations do, the treatment of these defects in DNA methylation has massive implications for cancer treatment.

Bromodomain Inhibitors

Another therapy of interest is the so-called bromodomain inhibitors. Bromodomains are protein segments that recognize acetylated lysine amino acid residues – that is, lysine residues with a methyl group attached to a carbonyl group (carbon double bonded to oxygen)
(Zeng & Zhou, 2002). These bromodomains are normally responsible for regulating processes like chromatin remodeling and transcription. However, if the levels of acetylation are abnormally high or the proteins are altered, transcription can become dysregulated. This has been linked to the development of a variety of conditions, such as cancer, inflammation, and increased viral infection (Filippakopoulos & Knapp, 2014). First discovered in 2009, bromodomain inhibitors present an important possible treatment for cancer and various other inflammatory conditions. The first bromodomain inhibitor – thieno-triazolo-1,4-diazepine – was tested against the NUT (nuclear protein in testis) midline carcinoma cell line (Perez-Salvia & Esteller, 2017). This molecule, otherwise known as JQ1, was found to competitively bind to acetyl-lysine motifs and could disrupt a bromodomain-derived protein found in the cancer. Later studies demonstrated that this drug was also quite effective at treating other types of cancer, such as hematological cancers, pancreatic cancer, and breast cancer (Perez-Salvia & Esteller, 2017). As a whole, bromodomain inhibitors can be divided into two important subclasses. The mimetic class binds to a bromodomain but does not form a hydrogen bond with the amino acid asparagine, which typically binds to acetylated proteins (Filippakopoulos & Knapp, 2014). While this class of inhibitors directly inhibits the ability of bromodomains to "read" acetylated lysines, it does not directly prevent acetylated proteins from binding to a bromodomain, which can somewhat limit its effectiveness. For example, the inhibitor known as ischemin was found to have only a modest in vitro
affinity for bromodomains; however, it was also found to reduce damage caused by treatment with the chemotherapy agent doxorubicin (Filippakopoulos & Knapp, 2014). The other class of bromodomain inhibitors forms hydrogen bonds with the asparagine residue, which allows for competitive inhibition of proteins with acetylated lysine residues. Some examples in this class include the derivatives of triazolothienodiazepine and benzodiazepine templates, both of which have high binding affinities and are quite selective (Filippakopoulos & Knapp, 2014). For example, triazolobenzodiazepine derivatives were found to selectively target the BRD4 bromodomain. Interestingly, concentrations of the drugs on a nanomolar scale were found to reduce tumor growth by 50%, demonstrating a potentially less harmful way to treat patients with cancer (Filippakopoulos & Knapp, 2014).
Histone Acetyltransferase Inhibitors

Similar to the bromodomain inhibitors, histone acetyltransferase inhibitors (HAT inhibitors) have also shown great potential in treating cancer, neurological conditions, and inflammatory conditions. Histone acetyltransferases are enzymes that acetylate histone proteins by transferring an acetyl group from the molecule acetyl CoA (Lee & Workman, 2007). First described in 1964 by Allfrey and colleagues, HATs have been found to be evolutionarily conserved – maintained in the genetic code – in organisms ranging from yeast to humans (Lee & Workman, 2007).

HAT proteins can take on a number of different roles in human cells. "Type A" HATs, which are typically located near the nucleus of a cell, have been found to catalyze processes related to the transcription of RNA (Roth et al., 2001). "Type B" HATs, which are typically found in the cytoplasm of a cell, have been found to play a role in DNA repair and histone deposition, as well as chromatin remodeling and gene expression (Blanco-García et al., 2009). However, HAT proteins have also been implicated in a number of different disease processes. For example, malfunctions in the HAT known as Tip60 have been associated with the development of lymphoma (Sun et al., 2015). Similarly, reduction in the effectiveness of HAT proteins by the mutant ataxin-1 protein has been found to be a potential cause of type 1 spinocerebellar ataxia, a neurodegenerative disease (Cvetanovic et al., 2012).

To treat many of the conditions caused by malfunctions in HAT activity, a number of different HAT inhibitors have been developed. One of the most prominent classes of HAT inhibitors are bisubstrate inhibitors, which mimic the two substrates HAT enzymes use – acetyl CoA and lysine (Wapenaar & Dekker, 2016). Many of these inhibitors have demonstrated high levels of selectivity. For example, the inhibitors LysCoA and H3-CoA-20 have both been found to be incredibly potent at selectively binding to the HATs p300 and PCAF, with only a 0.5 micromolar solution required to inhibit 50% of HAT activity (Lau et al., 2000). While they are highly effective, their peptide-based nature makes these HAT inhibitors unstable, and their low cell permeability makes them less effective than they could be (Wapenaar & Dekker, 2016).

Another important class of HAT inhibitors are small molecules derived from natural products like barbituric acid and pentamidine. Because they come from natural products, these inhibitors have a higher cell permeability; however, they are much less selective than the bisubstrate inhibitors (Wapenaar & Dekker, 2016). Nonetheless, they can be useful: the natural product derivative garcinol was found to inhibit the proliferation of breast cancer cells and colon carcinogenesis in mice (Wapenaar & Dekker, 2016). Similarly, curcumin, a derivative from the turmeric plant, has been found to be a good inhibitor of cancer activity; however, it is unclear if this is due to its HAT-inhibiting properties or its antioxidant properties (Priyadarsini, 2014).
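The potency figure quoted for the bisubstrate HAT inhibitors – a 0.5 micromolar concentration inhibiting 50% of HAT activity – is what pharmacologists call an IC50. Under a simple single-site binding model (an assumption made for illustration; real dose-response curves can be steeper or shallower), the fraction of enzyme activity inhibited at a given inhibitor concentration can be sketched as:

```python
# Illustrative only: fractional inhibition under a simple single-site model,
# inhibition = [I] / ([I] + IC50). The 0.5 µM IC50 echoes the figure quoted
# in the text; the function name is ours, not from any cited study.

def fractional_inhibition(conc_um, ic50_um=0.5):
    """Fraction of enzyme activity inhibited at inhibitor concentration conc_um (µM)."""
    return conc_um / (conc_um + ic50_um)

for conc in (0.1, 0.5, 2.0, 10.0):
    print(f"[I] = {conc:>4.1f} µM -> {fractional_inhibition(conc):.0%} inhibited")
```

At an inhibitor concentration equal to the IC50 (0.5 µM here), the model gives exactly 50% inhibition; well above the IC50, inhibition approaches 100%.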
Protein Methyltransferase Inhibitors

Protein methyltransferase inhibitors are molecules that block the activity of protein methyltransferases, the enzymes responsible for the methylation of a variety of histone and non-histone proteins. There are two main types of these enzymes: protein lysine methyltransferases and protein arginine methyltransferases. Protein lysine methyltransferases catalyze the reaction wherein a methyl group is added to the side chains of lysine residues in proteins (Kaniskan et al., 2014). Similarly, protein arginine methyltransferases catalyze the reaction wherein a methyl group is added to arginine side chains. In both cases, the attached methyl group comes from the cofactor S-adenosylmethionine (SAM). Furthermore, both of these groups of enzymes
"To treat many of the conditions caused by malfunctions in HAT activity, a number of different HAT inhibitors have been developed. One of the most prominent classes of HAT inhibitors are bisubstrate inhibitors, which mimic the two substrates HAT enzymes use – acetyl CoA and lysine."
can activate or repress transcription and gene expression, and they have been implicated in different diseases, such as cancer (Kaniskan et al., 2014). Hence, protein methyltransferase inhibitors are being developed in hopes of treating such illnesses. Protein methyltransferase inhibitors prevent enzyme activity by competing with either the substrate or the SAM cofactor. However, one major challenge in developing inhibitors that compete with the SAM cofactor is that SAM is a hydrophilic molecule. The corresponding inhibitor must therefore be hydrophilic enough to bind to the SAM binding pocket on the enzyme, yet hydrophobic enough to cross the hydrophobic interior of the phospholipid bilayer of the cell membrane. For inhibitors that compete with the substrate of the enzyme, the substrate binding site is not extremely hydrophilic, making it easier for inhibitors to act on that site (Ferreira de Freitas et al., 2019). While there is a variety of protein methyltransferase inhibitors, only seven are currently in clinical trials. The first inhibitor to enter clinical trials was EPZ-5676, a SAM-competitive inhibitor that targets DOT1L, which is implicated in relapsed leukemia. The second inhibitor to enter clinical trials was EPZ-6438; this is also a SAM-competitive inhibitor, and the first inhibitor of EZH2. Studies have shown that EPZ-6438 is effective in non-Hodgkin's lymphoma patients, as well as in solid tumor patients. EPZ015938/GSK332659, the most recent inhibitor to enter clinical trials, targets the protein PRMT5; it is an uncompetitive SAM inhibitor and could contribute to the treatment of cancer (Copeland, 2018). Other protein methyltransferase inhibitors include the SUV39H1 inhibitor, which was the first histone lysine methyltransferase inhibitor to be discovered. This inhibitor could potentially suppress tumor growth.
Furthermore, BIX-1294 and UNC0224 are inhibitors that target the G9a and GLP proteins, which promote cell growth and differentiation and have been implicated in a variety of illnesses, including HIV/AIDS, cocaine addiction, lung cancer, prostate cancer, and leukemia (Kaniskan et al., 2017).
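The distinction drawn above between substrate-competitive and SAM-competitive inhibition can be expressed with the classic competitive-inhibition form of the Michaelis-Menten equation, in which the inhibitor raises the enzyme's apparent Km without changing Vmax. The sketch below is illustrative only, with arbitrary constants rather than measured values for any real methyltransferase:

```python
# Illustrative only: competitive inhibition in the Michaelis-Menten framework.
# A competitive inhibitor scales the apparent Km by (1 + [I]/Ki) but leaves
# Vmax unchanged. All constants here are made up for illustration.

def rate(s, vmax=1.0, km=10.0, inhibitor=0.0, ki=1.0):
    """Reaction velocity at substrate concentration s with a competitive inhibitor."""
    apparent_km = km * (1 + inhibitor / ki)
    return vmax * s / (apparent_km + s)

s = 10.0  # substrate (e.g., SAM) concentration, arbitrary units
print(f"no inhibitor:   v = {rate(s):.3f}")
print(f"with inhibitor: v = {rate(s, inhibitor=5.0):.3f}")   # slower at same [S]
print(f"saturating [S]: v = {rate(1e6, inhibitor=5.0):.3f}")  # Vmax recovered
```

Note that at saturating substrate concentration the uninhibited maximum rate is recovered, which is the hallmark of competitive inhibition; uncompetitive inhibitors, like the PRMT5 inhibitor mentioned above, behave differently, lowering both the apparent Km and Vmax.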
Limitations and Conclusions

Discoveries of an epigenetic nature come with the same conditions as those of genetic research: they indicate tendencies and probabilities rather than immutable causal relationships. In recognizing this limitation, scientists can enrich their understanding of both human genomics and the workings of biological systems more generally. However, scientists risk enabling discrimination if they fail to recognize the current limitations of epigenetic research. Just as fears of genetic discrimination have prevented the spread of genetic information to third parties, it may become necessary to restrict the accessibility of epigenetic information to protect individuals. That being said, with sufficient non-discrimination policies in place, the possible value of epigenetic research must not be overlooked (Dupras et al., 2018). From honeybees to the Dutch famine, it is clear that epigenetics is a phenomenon with far-reaching impacts that stretch from the scientific to the social. Additionally, the molecular mechanisms underlying epigenetics are complex and multifaceted, including everything from chromatin modifications to RNA-based mechanisms. When these mechanisms are dysregulated, the result can be devastating diseases such as cancer or psychiatric illnesses. The field of epigenetics is also intertwined with the fraught social landscape of the United States, and it has consequential social justice implications that arise from both historical and ongoing health inequities and racial discrimination. While many of these social issues require holistic and comprehensive solutions, scientists have made strides in the development of drugs that target specific epigenetic changes. Ultimately, epigenetics continues to engender fascinating research and provide promising new treatment interventions for a broad array of illnesses.

References

A Pacific Culture among Wild Baboons: Its Emergence and Transmission. (n.d.).
Retrieved March 28, 2021, from https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.0020106

Afzal, M. R., Samanta, A., Shah, Z. I., Jeevanantham, V., Abdel-Latif, A., Zuba-Surma, E. K., & Dawn, B. (2015). Adult Bone Marrow Cell Therapy for Ischemic Heart Disease: Evidence and Insights From Randomized Controlled Trials. Circulation Research, 117(6), 558–575. https://doi.org/10.1161/CIRCRESAHA.114.304792

Alaskhar Alhamwe, B., Khalaila, R., Wolf, J., von Bülow, V., Harb, H., Alhamdan, F., Hii, C. S., Prescott, S. L., Ferrante, A., Renz,
300
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
H., Garn, H., & Potaczek, D. P. (2018). Histone modifications and their role in epigenetics of atopy and allergic diseases. Allergy, Asthma, and Clinical Immunology : Official Journal of the Canadian Society of Allergy and Clinical Immunology, 14. https://doi.org/10.1186/s13223-018-0259-4 Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., & Walter, P. (2002). How Genetic Switches Work. Molecular Biology of the Cell. 4th Edition. https://www.ncbi.nlm.nih.gov/books/ NBK26872/ Axsom, J. E., & Libonati, J. R. (2019). Impact of parental exercise on epigenetic modifications inherited by offspring: A systematic review. Physiological Reports, 7(22), e14287. https://doi.org/10.14814/phy2.14287 Bagot, R. C., Labonte, B., Pena, C. J., & Nestler, E. J. (2014). Epigenetic signaling in psychiatric disorders: Stress and depression. Dialogues in Clinical Neuroscience, 16(3), 281–295. https://doi.org/10.31887/DCNS.2014.16.3/rbagot Becher, U. M., Tiyerili, V., Skowasch, D., Nickenig, G., & Werner, N. (2011). Personalized cardiac regeneration by stem cellsHype or hope? The EPMA Journal, 2(1), 119–130. https://doi. org/10.1007/s13167-011-0068-z Oldroyd, B. P., Reid R. J., Ashe A., & Remnant E. J. (2018). Honey Bees, Royal Jelly, Epigenetics. In Encyclopedia of Reproduction (2nd ed., Vol. 6, pp. 3710–3715). Elsevier Science & Technology. https://ebookcentral-proquest-com. dartmouth.idm.oclc.org/lib/dartmouth-ebooks/reader. action?docID=5456777&ppg=3710 Ben-Kasus, T., Ben-Zvi, Z., Marquez, V. E., Kelley, J. A., & Agbaria, R. (2005). Metabolic activation of zebularine, a novel DNA methylation inhibitor, in human bladder carcinoma cells. Biochemical Pharmacology, 70(1), 121–133. https://doi. org/10.1016/j.bcp.2005.04.010 Biehl, J. K., & Russell, B. (2009). Introduction to Stem Cell Therapy. The Journal of Cardiovascular Nursing, 24(2), 98–105. 
https://doi.org/10.1097/JCN.0b013e318197a6a5 Biological memories of past environments: Epigenetic pathways to health disparities: Epigenetics: Vol 6, No 7. (n.d.). Retrieved March 28, 2021, from https://www.tandfonline. com/doi/abs/10.4161/epi.6.7.16222 Blanco-García, N., Asensio-Juan, E., de la Cruz, X., & MartínezBalbás, M. A. (2009). Autoacetylation regulates P/CAF nuclear localization. The Journal of Biological Chemistry, 284(3), 1343–1352. https://doi.org/10.1074/jbc.M806075200 Carey, N. (2012). The epigenetics revolution: How modern biology is rewriting our understanding of genetics, disease, and inheritance. Columbia University Press. Choudhuri, S. (2011). From Waddington’s epigenetic landscape to small noncoding RNA: Some important milestones in the history of epigenetics research. Toxicology Mechanisms and Methods, 21(4), 252–274. https://doi.org/10. 3109/15376516.2011.559695 Copeland, R. A. (2018). Protein methyltransferase inhibitors as precision cancer therapeutics: A decade of discovery. Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1748), 20170080. https://doi.org/10.1098/ rstb.2017.0080 Cortes, L. R., Cisternas, C. D., & Forger, N. G. (2019). Does Gender Leave an Epigenetic Imprint on the Brain?
WINTER 2021
Frontiers in Neuroscience, 13, 173. https://doi.org/10.3389/ fnins.2019.00173 Cunliffe, V. T. (2016a). The epigenetic impacts of social stress: How does social adversity become biologically embedded? Epigenomics, 8(12), 1653–1669. http://dx.doi.org.dartmouth. idm.oclc.org/10.2217/epi-2016-0075 Cunliffe, V. T. (2016b). The epigenetic impacts of social stress: How does social adversity become biologically embedded? Epigenomics, 8(12), 1653–1669. http://dx.doi.org.dartmouth. idm.oclc.org/10.2217/epi-2016-0075 Cvetanovic, M., Kular, R. K., & Opal, P. (2012). LANP mediates neuritic pathology in Spinocerebellar ataxia type 1. Neurobiology of Disease, 48(3), 526–532. https://doi. org/10.1016/j.nbd.2012.07.024 Denker, H.-W. (2014). Stem Cell Terminology and ‘Synthetic’ Embryos: A New Debate on Totipotency, Omnipotency, and Pluripotency and How It Relates to Recent Experimental Data. Cells Tissues Organs, 199(4), 221–227. https://doi. org/10.1159/000370063 Dolinoy, D. C. (2008). The agouti mouse model: An epigenetic biosensor for nutritional and environmental alterations on the fetal epigenome. Nutrition Reviews, 66 Suppl 1(Suppl 1), S7–S11. PubMed. https://doi.org/10.1111/j.17534887.2008.00056.x Dupont, C., Armant, D. R., & Brenner, C. A. (2009). Epigenetics: Definition, Mechanisms and Clinical Perspective. Seminars in Reproductive Medicine, 27(5), 351–357. https://doi. org/10.1055/s-0029-1237423 Dupras, C., Song, L., Saulnier, K. M., & Joly, Y. (2018). Epigenetic discrimination: Emerging applications of epigenetics pointing to the limitations of policies against genetic discrimination. Frontiers in Genetics, 9, 202–202. https://doi. org/10.3389/fgene.2018.00202 Eckschlager, T., Plch, J., Stiborova, M., & Hrabeta, J. (2017). Histone Deacetylase Inhibitors as Anticancer Drugs. International Journal of Molecular Sciences, 18(7). https://doi. org/10.3390/ijms18071414 Esteller, M. (2008). Epigenetics in Cancer. The New England Journal of Medicine, 358(11), 1148-1159. 
Doi: 10.1056/ NEJMra072067. Feinberg, A.P, Koldobskiy, M.A, & Göndör, A. (2016). Epigenetic modulators, modifiers and mediators in cancer aetiology and progression. Nature Reviews Genetics, 17(5), 284–299. https://doi.org/10.1038/nrg.2016.13 Felsenfeld, G. (2014). A brief history of epigenetics. Cold Spring Harbor Perspectives in Biology, 6(1), a018200– a018200. https://doi.org/10.1101/cshperspect.a018200 Fernander, A. F., Shavers, V. L., & Hammons, G. J. (2007). A biopsychosocial approach to examining tobacco-related health disparities among racially classified social groups. Addiction, 102, 43–57. https://doi.org/10.1111/j.13600443.2007.01954.x Fernández-Avilés, F., Sanz-Ruiz, R., Climent, A. M., Badimon, L., Bolli, R., Charron, D., Fuster, V., Janssens, S., Kastrup, J., Kim, H.-S., Lüscher, T. F., Martin, J. F., Menasché, P., Simari, R. D., Stone, G. W., Terzic, A., Willerson, J. T., Wu, J. C., the TACTICS (Transnational Alliance for Regenerative Therapies in Cardiovascular Syndromes) Writing Group, … Regulatory
301
and funding strategies subcommittee: (2017). Global position paper on cardiovascular regenerative medicine. European Heart Journal, 38(33), 2532–2546. https://doi.org/10.1093/ eurheartj/ehx248 Ferreira de Freitas, R., Ivanochko, D., & Schapira, M. (2019). Methyltransferase Inhibitors: Competing with, or Exploiting the Bound Cofactor. Molecules, 24(24), 4492. https://doi. org/10.3390/molecules24244492 Filippakopoulos, P., & Knapp, S. (2014). Targeting bromodomains: Epigenetic readers of lysine acetylation. Nature Reviews Drug Discovery, 13(5), 337–356. https://doi. org/10.1038/nrd4286 Gibney, E. R., & Nolan, C. M. (2010). Epigenetics and Gene Expression. Nature, 105, 4–13. https://doi.org/10.1038/ hdy.2010.54 Gnyszka, A., Jastrzębski, Z., & Flis, S. (2013). DNA Methyltransferase Inhibitors and Their Emerging Role in Epigenetic Therapy of Cancer. Anticancer Research, 33(8), 2989–2996. Greally, J. M. (2018). A user’s guide to the ambiguous word “epigenetics.” Nature Reviews Molecular Cell Biology, 19(4), 207–208. https://doi.org/10.1038/nrm.2017.135 Guo, M., Peng, Y., Gao, A., Den, C., & Herman, J. (2019). Epigenetic heterogeneity in cancer. Biomarker Research, 7(23). https://doi.org/10.1186/s40364-019-0174-y Gyöngyösi, M., Wojakowski, W., Lemarchand, P., Lunde, K., Tendera, M., Bartunek, J., Marban, E., Assmus, B., Henry, T. D., Traverse, J. H., Moyé, L. A., Sürder, D., Corti, R., Huikuri, H., Miettinen, J., Wöhrle, J., Obradovic, S., Roncalli, J., Malliaras, K., … Maurer, G. (2015). Meta-Analysis of Cell-based CaRdiac stUdiEs (ACCRUE) in Patients with Acute Myocardial Infarction Based on Individual Patient Data. Circulation Research, 116(8), 1346–1360. https://doi.org/10.1161/CIRCRESAHA.116.304346 Heijmans, B. T., Tobi, E. W., Stein, A. D., Putter, H., Blauw, G. J., Susser, E. S., Slagboom, P. E., & Lumey, L. H. (2008). Persistent epigenetic differences associated with prenatal exposure to famine in humans. 
Proceedings of the National Academy of Sciences, 105(44), 17046–17049. https://doi.org/10.1073/ pnas.0806560105 Herman, J. G., & Baylin, S. B. (2003). Gene Silencing in Cancer in Association with Promoter Hypermethylation. New England Journal of Medicine, 349(21), 2042–2054. https://doi. org/10.1056/NEJMra023075 Hodes, G. E., Walker, D. M., Labonté, B., Nestler, E. J., & Russo, S. J. (2017). Understanding the epigenetic basis of sex differences in depression: Depression, Epigenetics, Sex Differences. Journal of Neuroscience Research, 95(1–2), 692–702. https://doi. org/10.1002/jnr.23876 Hoffmann, A., Zimmermann, C. A., & Spengler, D. (2015). Molecular epigenetic switches in neurodevelopment in health and disease. Frontiers in Behavioral Neuroscience, 9. https:// doi.org/10.3389/fnbeh.2015.00120 Hwang, J.-Y., Aromolaran, K. A., & Zukin, R. S. (2017). The emerging field of epigenetics in neurodegeneration and neuroprotection. Nature Reviews Neuroscience, 18(6), 347–361. https://doi.org/10.1038/nrn.2017.46 Ilic, D., Stevenson, D., Patel, H., & Braude, P. (2012). Basic principles of human embryonic stem cells. In Anthony, A., (Ed.)
302
Progenitor and Stem Cell Technologies and Therapies. (pp. 29-48). Philadelphia, PA: Woodhead Publishing. https://doi. org/10.1533/9780857096074.1.29 Ito, A., Kawaguchi, Y., Lai, C.-H., Kovacs, J. J., Higashimoto, Y., Appella, E., & Yao, T.-P. (2002). MDM2-HDAC1-mediated deacetylation of p53 is required for its degradation. The EMBO Journal, 21(22), 6236–6245. https://doi.org/10.1093/ emboj/cdf616 Jeevanantham, V., Butler, M., Saad, A., Abdel-Latif, A., Zuba-Surma, E. K., & Dawn, B. (2012). Adult bone marrow cell therapy improves survival and induces long-term improvement in cardiac parameters: A systematic review and meta-analysis. Circulation, 126(5), 551–568. https://doi. org/10.1161/CIRCULATIONAHA.111.086074 Jin, B., Li, Y., & Robertson, K. D. (2011). DNA Methylation. Genes & Cancer, 2(6), 607–617. https://doi. org/10.1177/1947601910393957 Johanna K DiStefano. (2018). The Emerging Role of Long Noncoding RNAs in Human Disease (Vol. 1706, pp. 91–110). https://doi.org/10.1007/978-1-4939-7471-9_6 Kaniskan, H. Ü., Martini, M. L., & Jin, J. (2018). Inhibitors of Protein Methyltransferases and Demethylases. Chemical Reviews, 118(3), 989–1068. https://doi.org/10.1021/acs. chemrev.6b00801 Kim, H.-J., & Bae, S.-C. (2011). Histone deacetylase inhibitors: Molecular mechanisms of action and clinical trials as anticancer drugs. American Journal of Translational Research, 3(2), 166–179. Landgrave-GÃ3mez, J., Mercado-GÃ3mez, O., & GuevaraGuzmán, R. (2015). Epigenetic mechanisms in neurological and neurodegenerative diseases. Frontiers in Cellular Neuroscience, 9. https://doi.org/10.3389/fncel.2015.00058 Lau, O., Kandu, T., & Soccio, R. (2000). HATs off: Selective Synthetic Inhibitors of the Histone Acetyltransferases p300 and PCAF. Molecular Cell, 5(3), 589–595. https://doi. org/10.1016/S1097-2765(00)80452-9 Lavebratt, C., Almgren, M., & Ekström, T. J. (2012). Epigenetic regulation in obesity. International Journal of Obesity, 36(6), 757–765. 
https://doi.org/10.1038/ijo.2011.178 Lee, K. K., & Workman, J. L. (2007). Histone acetyltransferase complexes: One size doesn’t fit all. Nature Reviews Molecular Cell Biology, 8(4), 284–295. https://doi.org/10.1038/nrm2145 Madonna, R., Van Laake, L. W., Davidson, S. M., Engel, F. B., Hausenloy, D. J., Lecour, S., Leor, J., Perrino, C., Schulz, R., Ytrehus, K., Landmesser, U., Mummery, C. L., Janssens, S., Willerson, J., Eschenhagen, T., Ferdinandy, P., & Sluijter, J. P. G. (2016). Position Paper of the European Society of Cardiology Working Group Cellular Biology of the Heart: Cell-based therapies for myocardial repair and regeneration in ischemic heart disease and heart failure. European Heart Journal, 37(23), 1789–1798. https://doi.org/10.1093/eurheartj/ ehw113 Matsui, E. C., Adamson, A. S., & Peng, R. D. (2019). Time’s up to adopt a biopsychosocial model to address racial and ethnic disparities in asthma outcomes. Journal of Allergy and Clinical Immunology, 143(6), 2024–2025. https://doi.org/10.1016/j. jaci.2019.03.015 McGowan, P. O., Sasaki, A., D’Alessio, A. C., Dymov, S., Labonté,
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
B., Szyf, M., Turecki, G., & Meaney, M. J. (2009). Epigenetic regulation of the glucocorticoid receptor in human brain associates with childhood abuse. Nature Neuroscience, 12(3), 342–348. https://doi.org/10.1038/nn.2270
and Improvement Strategies—FullText—Cellular Physiology and Biochemistry 2018, Vol. 48, No. 6—Karger Publishers. (n.d.). Retrieved March 28, 2021, from https://www.karger. com/Article/FullText/492704#ref392
Meaney, M. J., Szyf, M., & Seckl, J. R. (2007). Epigenetic mechanisms of perinatal programming of hypothalamicpituitary-adrenal function and health. Trends in Molecular Medicine, 13(7), 269–277. https://doi.org/10.1016/j. molmed.2007.05.003
Sun, X.-J., Man, N., Tan, Y., Nimer, S. D., & Wang, L. (2015). The Role of Histone Acetyltransferases in Normal and Malignant Hematopoiesis. Frontiers in Oncology, 5. https://doi. org/10.3389/fonc.2015.00108
Meireles, A. L. F., Segabinazi, E., Spindler, C., Gasperini, N. F., Souza dos Santos, A., Pochmann, D., Elsner, V. R., & Marcuzzo, S. (2021). Maternal resistance exercise promotes changes in neuroplastic and epigenetic marks of offspring’s hippocampus during adult life. Physiology & Behavior, 230, 113306. https://doi.org/10.1016/j.physbeh.2020.113306 Nestler, E. J., Peña, C. J., Kundakovic, M., Mitchell, A., & Akbarian, S. (2016). Epigenetic Basis of Mental Illness. The Neuroscientist, 22(5), 447–463. https://doi. org/10.1177/1073858415608147 Oberlander, T. F., Weinberg, J., Papsdorf, M., Grunau, R., Misri, S., & Devlin, A. M. (2008). Prenatal exposure to maternal depression, neonatal methylation of human glucocorticoid receptor gene (NR3C1) and infant cortisol stress responses. Epigenetics, 3(2), 97–106. https://doi.org/10.4161/ epi.3.2.6034 Ogbourne, S., & Antalis, T. M. (1998). Transcriptional control and the role of silencers in transcriptional regulation in eukaryotes. Biochemical Journal, 331(Pt 1), 1–14. Pérez-Salvia, M., & Esteller, M. (2016). Bromodomain inhibitors and cancer therapy: From structures to applications. Epigenetics, 12(5), 323–339. https://doi.org/10.1080/1559229 4.2016.1265710 Priyadarsini, K. I. (2014). The Chemistry of Curcumin: From Extraction to Therapeutic Agent. Molecules, 19(12), 20091– 20112. https://doi.org/10.3390/molecules191220091 Roth, S. Y., Denu, J. M., & Allis, C. D. (2001). Histone acetyltransferases. Annual Review of Biochemistry, 70, 81–120. https://doi.org/10.1146/annurev.biochem.70.1.81 Rubin, L. P. (2016). Maternal and pediatric health and disease: Integrating biopsychosocial models and epigenetics. Pediatric Research, 79(1–2), 127–135. https://doi.org/10.1038/ pr.2015.203 Sapolsky, R. M. (1993). The physiology of dominance in stable versus unstable social hierarchies. In Primate social conflict (pp. 171–204). State University of New York Press. Sapolsky, R. M. (2005). 
The influence of social hierarchy on primate health. Science (New York, N.Y.), 308(5722), 648–652. https://doi.org/10.1126/science.1106477 Seymour, T., Twigger, A.J., & Kalkulas F. (2015). Pluripotency Genes and Their Functions in the Normal and Aberrant Breast and Brain. International Journal of Molecular Sciences, 16(11), 27288–27301. doi: 10.3390/ijms161126024
Takahashi, K., Tanabe, K., Ohnuki, M., Narita, M., Ichisaka, T., Tomoda, K., & Yamanaka, S. (2007). Induction of Pluripotent Stem Cells from Adult Human Fibroblasts by Defined Factors. Cell, 132(5), 861-872. https://doi.org/10.1016/j. cell.2007.11.019 Thayer, Z. M., & Kuzawa, C. W. (2011a). Biological memories of past environments: Epigenetic pathways to health disparities. Epigenetics, 6(7), 798–803. https://doi.org/10.4161/ epi.6.7.16222 Thayer, Z. M., & Kuzawa, C. W. (2011b). Biological memories of past environments: Epigenetic pathways to health disparities. Epigenetics, 6(7), 798–803. https://doi.org/10.4161/ epi.6.7.16222 Veenendaal, M. V. E., Painter, R. C., Rooij, S. de, Bossuyt, P. M. M., Post, J. van der, Gluckman, P. D., Hanson, M. A., & Roseboom, T. J. (2013). Transgenerational effects of prenatal exposure to the 1944–45 Dutch famine. BJOG: An International Journal of Obstetrics & Gynaecology, 120(5), 548–554. https://doi. org/10.1111/1471-0528.12136 Wapenaar, H., & Dekker, F. J. (2016). Histone acetyltransferases: Challenges in targeting bi-substrate enzymes. Clinical Epigenetics, 8(1), 59. https://doi.org/10.1186/s13148-0160225-2 Weaver, I. C. G., Cervoni, N., Champagne, F. A., D’Alessio, A. C., Sharma, S., Seckl, J. R., Dymov, S., Szyf, M., & Meaney, M. J. (2004). Epigenetic programming by maternal behavior. Nature Neuroscience, 7(8), 847–854. https://doi.org/10.1038/ nn1276 Xu, W. S., Parmigiani, R. B., & Marks, P. A. (2007). Histone deacetylase inhibitors: Molecular mechanisms of action. Oncogene, 26(37), 5541–5552. https://doi.org/10.1038/ sj.onc.1210620 Yong, E. (2012). ENCODE: The rough guide to the human genome. Discover Magazine. https://www.discovermagazine. com/the-sciences/encode-the-rough-guide-to-the-humangenome Yoo, C.B & Jones, P. A. (2006). Epigenetic therapy of cancer: past, present and future. Nature Reviews Drug Discovery 5(1), 37–50. https://doi.org/10.1038/nrd1930. Zeng, L., & Zhou, M. M. (2002). 
Bromodomain: An acetyl-lysine binding domain. FEBS Letters, 513(1), 124–128. https://doi. org/10.1016/s0014-5793(01)03309-9
Shively, C. A., & Clarkson, T. B. (1994). Social status and coronary artery atherosclerosis in female monkeys. Arteriosclerosis and Thrombosis: A Journal of Vascular Biology, 14(5), 721–726. https://doi.org/10.1161/01. atv.14.5.721 Stem Cell Therapy in Heart Diseases – Cell Types, Mechanisms
WINTER 2021
303
Healthcare: Roots of Inequality

STAFF WRITERS: VALENTINA FERNANDEZ, VAISHNAVI KATRAGADDA, ADITI GUPTA, CHELSEA-STARR JONES, AYUSHYA AJMANI, DANIEL CHEN, STEPHANIE FINLEY, TYLER CHEN, KRISTAL WONG, REGAN HARNOIS, WILSON MURANE, JASON DONG, GRACE NGUYEN, JUSTIN FAJAR, JULIETTE COURTINE, ABIGAIL FISCHER, MAEEN ARSLAN, NOA PHILLIPS, STEPHANIE LEBBY

BOARD WRITER: ANAHITA KODALI

Cover: Plantation owners often refused to provide adequate healthcare to the slaves who returned to their plantations as "free workers." Source: Wikimedia Commons
Introduction

The COVID-19 pandemic has left the American healthcare system in shambles. Over the course of 2020, hospitals around the country became overwhelmed by massive influxes of COVID-19 patients. Some were forced to transfer patients between hospitals, hoping desperately to find beds as ICUs filled to capacity. In other cases, states were forced to open emergency field hospitals on military bases, football fields, and in convention centers (Rio & Bogel-Burroughs, 2020). On a macro scale, healthcare systems and hospitals around the US have faced extreme financial losses: the American Hospital Association estimated that in 2020, America's healthcare systems lost over 200 billion dollars in revenue (Kaye et al., 2020).

However, the pandemic has also revealed something much more sinister. COVID-19 has had disproportionate impacts on certain marginalized communities within the United States. Communities of color have faced significantly higher rates of infection and mortality than white communities around the country. Jarringly, people of color are also hospitalized at significantly lower rates than white people (Lopez et al., 2021). Other socioeconomic factors have had clear impacts on people's risk of developing COVID-19 or obtaining treatment for it, including access to housing, educational disparities, and income disparities ("Health Equity Considerations and Racial and Ethnic Minority Groups," 2021). These determinants are driven by deeper biases against certain demographics, including people of color, women and transgender individuals, those in lower socioeconomic classes, and those in certain geographical regions. In this paper, the authors explore several areas of discrimination within medical systems in an attempt to grasp the roots of inequity in American healthcare. They note that this paper is in no way meant to be a comprehensive or even complete list. Rather, it should serve as a reminder that there are inequities woven into nearly every aspect of healthcare in the United States and that there is much work to be done to solve them.

Figure 1: Doctors often went to slave auctions in order to attest to the health of the slaves. Source: Flickr
A Brief History of American Medicine

Medicine During American Slavery

The roots of institutionalized racism in the American healthcare system can be traced back thousands of years, to when the ancient Greeks and Romans espoused a hierarchical Great Chain of Being in an effort to understand God's ordering of the universe. Throughout the Renaissance and Enlightenment, Western scientists and philosophers encoded this taxonomic structure into academia to create arbitrary racial hierarchies based on white supremacist assumptions. Medical researchers Michael Byrd and Linda Clayton write that by "the 19th century, racially oriented European pseudoscientific data, along with that produced by the American school of anthropology, was used in the U.S. to justify and defend Black slavery" (Byrd & Clayton, 2001). In the antebellum era, academia perpetuated dangerous and dehumanizing myths that Black people were scientifically inferior; white physicians, though sworn to ethical professional codes, themselves participated in the slave trade and contributed to the terrible health conditions to which enslaved individuals were subjected (Byrd & Clayton, 2001). Doctors accompanied slave owners to auctions to testify to the health of enslaved people, and they treated slaves on plantations so that masters could protect their investment (Halperin, 2013). Additionally, many physicians tried to show that Black people were more predisposed to certain diseases, as well as more immune to others, all in the pursuit of furthering the institution of slavery. In Medicine and Slavery, Todd L. Savitt describes how physicians at the time claimed, without further investigation, that Black people had higher immunity to "malaria and yellow fever" and an "increased susceptibility to respiratory diseases, scrofula, dysentery and other maladies" (Savitt, 2002). The prevailing consensus was that racial differences conferred distinct immunities to a wide range of diseases.
The slavery health system reflected, and indeed profited from, the prevalent view that Black bodies were "animal-like" and purely useful as economic resources (Byrd & Clayton, 2001; Vox, 2017). In fact, many advancements in US medical research came from nonconsensual experiments on enslaved people. Physicians like James Marion Sims, considered the Father of Modern Gynecology, performed experimental reproductive surgeries on enslaved women without using anesthesia in order to perfect procedures like cesarean sections and ovariectomies (Prather et al., 2018). American physicians even introduced "Negro Diseases" to explain poor health in slaves, including "Negro Consumption" and "Drapetomania," which supposedly caused slaves to run away (Byrd & Clayton, 2001). Such racist ideas that Black people have their own diseases or feel less pain bled into physiology textbooks and medical school curricula, empowering generations of physicians to continue the "pattern of inferior, inconstant, or unavailable health care" for Black Americans (Byrd & Clayton, 2001).

Post-Slavery American Healthcare

There is no doubt that America's legacy of slavery still endures in the disparities of today's healthcare system. Even after slavery was outlawed in the US, with the exception of slavery as punishment for a crime, racial prejudices carried over into all facets of the everyday lives of African Americans. The belief that Black people were immune to certain diseases had been used as a proslavery argument and presented to an audience that desired validation of its preconceived notions. Savitt explains that although incidence of and immunity to certain diseases was higher in Black Americans, these observations were capitalized on "to illustrate the inferiority of Blacks to white, to rationalize the use of this 'less fit' racial group as slaves, to justify subjecting Negro slaves to harsh working conditions in extreme dampness and heat, and to prove to their critics that they recognized the special medical weaknesses of Blacks" (Savitt, 2002).
These racist ideologies prevailed even after the end of the Civil War and abolition, rendering African Americans' efforts to obtain proper medical care futile. When former slaves became wage-earning agricultural workers, they often struggled to obtain medical coverage as part of their contracts, leaving most powerless over their medical decisions. Of course, the contracts varied depending on the plantation owner in charge. For instance, one contract between A.J. Donelson, a white plantation owner, and his former slaves detailed Donelson's commitment to "attend to the sick," but also to "charge twenty-five cents for each dose of medicine, and one dollar for every time he may be called to prescribe" (Long, 2016). Other contracts mirrored the same arrangements as under slavery, forbidding workers from leaving plantations and leaving medical responsibilities completely in the hands of the white plantation owners. At the other extreme, some plantation owners stopped paying attention to sharecroppers' physical health altogether, interpreting emancipation as the end of their obligation to protect the health of Black people. In other words, because slavery had been abolished and their "workers" now earned wages, these plantation owners believed that they were no longer responsible for the workers' physical health. Some describe how white slave owners had been more willing to spend money on their enslaved people's clothing and medical bills than to pay decent wages (Long, 2016).

The contracts from the years following the Civil War epitomize the healthcare disparities inflicted upon the African American population of the United States, even after emancipation. While freedom seemed like the light at the end of the tunnel, it led to a state of ill health among African American laborers, who were not able to care for their own communities as a result of unequal access and prevailing racial attitudes. The Freedmen's Bureau, one of the organizations established by Congress to support newly freed African Americans, continually attempted to enforce medical care in labor contracts and to hold authorities accountable for protecting the health of Black people, unfortunately with minimal success. The Bureau then shifted its focus to providing medical attendance and medicine directly. With this new mission in mind, it proved fairly successful: it set up new hospitals and dispensaries and deployed military physicians to work with civilian physicians in the South. These efforts helped reduce the malignant impact of smallpox and cholera, and over 46 hospitals were opened. Still, the projects implemented by the Bureau did not fully suffice, as they faced several problems, including but not limited to finding qualified staff, obtaining medical supplies, and responding to hostility from racist opponents (Long, 2016). As a result of multiple failed attempts to improve the condition of healthcare for Black Americans, the African American community took matters into its own hands and decided to create its own medical institutions to improve access to proper healthcare.
In 1895, the Frederick Douglass Hospital and Training School was established by Black Philadelphians with the purpose of building a hospital that would "accept African Americans as patients, as residents in surgery, and as nurses in training" (Long, 2016). Generally speaking, freedmen had a strong desire to become literate. This desire was intertwined with the spiritual fellowship of African American churches, many of which emerged as spaces for lively discussion, solidarity, and community. In other cities, such as Tuskegee, Charleston, and Washington, D.C., many African American communities likewise began to create their own hospitals, medical schools, and training programs for nurses (Long, 2016).
Figure 2: Black Americans created their own hospitals to avoid the racism that they faced in government-owned, predominantly white hospitals. Source: Picryl
Over the years, slow but constant progress was made toward providing African Americans with the equal opportunities they deserved and had a right to. Yet the racial prejudices and challenges they faced justify their distrust of healthcare to this day. This legitimate discontent goes back centuries: white physicians relied on the availability of Black patients for research, dissection, and demonstrations (Savitt, 2002; Thomas & Casper, 2019). While the bodies of African Americans were used prolifically for research purposes, there was little documentation of the health and mortality of African Americans themselves. In other words, while Black bodies were unethically used to further scientific innovation, no proper record of their own well-being was kept. It was not until the publication of The Heckler Report in 1985 that the disparities in access and treatment within African American healthcare were fully revealed (Sullivan, 2015). The report was developed thanks to Secretary of Health and Human Services Margaret Heckler, who brought the issue to national attention and "shaped minority health policy" into what we know today (Sullivan, 2015). The origins of The Heckler Report trace back to the early 1980s, and the report led to the creation of the Department of Health and Human Services' Office of Minority Health, as well as other institutions focused on minority health (Sullivan, 2015).

Today, healthcare disparities exist in several areas beyond race and ethnicity. Gender, sexuality, economic status, technology, geography, and environment can all warp access to proper healthcare. Members of racial and ethnic minority groups, females, those with low socioeconomic status, and those living in rural and underserved areas typically have poorer health but also receive poorer care, and often have less access to healthcare than their counterparts. They are less likely to be insured, creating more barriers to healthcare; they are less likely to be medical professionals; and they are more likely to have communication issues with healthcare professionals. Additionally, members of these groups often reside in conditions that lead to or necessitate behaviors that are harmful to health, leading to increased medical problems down the road. In this paper, the authors aim to give an overview of several areas of healthcare inequity; this is by no means an exhaustive list. They hope that physicians and medical researchers keep these inequities in mind while practicing medicine and ensure that they are providing equitable care for all to the best of their ability.
"Members of racial and ethnic minority groups, females, those with low socioeconomic status, and those living in rural and underserved areas typically have poorer health but also receive poorer care, and often have less access to healthcare than their counterparts."
Racial and Ethnic Disparities

Implicit Racial Bias in Physicians

When physicians take the Hippocratic Oath, they pledge to use evidence-based medicine and to meet performance-based measures in order to create a uniform health care system. However, varying degrees of bias and cultural stereotypes affect patient-physician interactions. Explicit biases include stereotypes such as the belief that Black medical students are less capable than their White counterparts. Implicit biases, on the other hand, are unconscious and often at odds with one's personal beliefs, leading to cognitive
"Implicit bias originates from the intrinsic human nature to base perceptions of people on received information in order to speed up the decisionmaking process. Several experiments have shown that participants that endorsed prejudiced beliefs and those that held more egalitarian beliefs both were subject to implicit biases."
dissonance. Implicit biases are based on cultural stereotypes and on applying group stereotypes to individuals, which is not necessarily accurate (Chapman et al., 2013). Implicit bias originates from the intrinsic human tendency to base perceptions of people on received information in order to speed up decision-making. Several experiments have shown that both participants who endorsed prejudiced beliefs and those who held more egalitarian beliefs were subject to implicit biases. In one experiment, two groups were both able to create lists of stereotypes about a cultural group. However, when asked to explain their thoughts on race, low-prejudice participants were more inclined to write statements asserting that race should not be a basis for judging a person, while their high-prejudice counterparts were more inclined to express stereotypical beliefs. This shows that stereotypes can manifest as explicit or implicit biases, distinguished by how conscious the person is of the bias. Some studies have shown that bias develops in children by the age of three. While explicit biases can fade as people grow older, implicit biases are often more entrenched and difficult to change (Chapman et al., 2013). Physicians show implicit bias as well, and it is amplified by the time pressure of having to make decisions and the uncertainty of diagnosing a patient. Additionally, physician training emphasizes factors at the group level, teaching population risk factors or unfavorable circumstances within certain demographics, which can perpetuate stereotypes. Several experiments have measured this bias using the implicit association test (IAT). Participants are asked to sort faces and then to sort words into a "good" or "bad" category. They are then shown the faces once again and must press keys associated with those "good" or "bad" words.
Participants must answer as quickly as possible, allowing data to be collected on hidden, implicit biases. In many instances, there were shorter response times for the association of White faces with good and Black faces with bad (Chapman et al., 2013). In one study measuring implicit bias in physicians, internal medicine physicians and emergency medicine residents of all races were shown 58 faces (both Black and White) and asked questions about pain levels and likelihood of disease. The study found that the doctors had a pro-White bias (Green et al., 2007). Among physicians of all races, Black patients were associated with being
uncooperative, while their White counterparts were seen as compliant. The degree of implicit bias was found to vary by physician gender and race as well, with women showing less implicit bias than men and non-White residents showing slightly less pro-White sentiment (Chapman et al., 2013). These implicit biases have been shown to further perpetuate disparities among racial groups. The pro-White bias expressed by many physicians resulted in Black patients feeling that patient-physician communication was poorer, and it led to worse care. Black patients seen in emergency rooms also typically receive fewer analgesics than White patients. Hispanic patients were seven times less likely to receive opioids, a statistic that was replicated with Black patients. The study showed that physicians were still able to assess pain accurately in all patients regardless of race; however, fewer analgesics were prescribed for Hispanic patients (Chapman et al., 2013). In a systematic review of 15 studies, 14 reported low to moderate levels of implicit bias in physicians. People of color (POC) often reported less satisfaction and a less patient-centric experience overall. Oftentimes, POC had to wait longer for their appointments and felt that physicians used condescending or dominating verbal cues during their visit. Four studies found that Black patients were perceived as less responsible and less cooperative. Hispanic patients were seen as noncompliant and prone to risky behavior. These biases were shown to result in lower-quality treatments for Black patients. For example, thrombolysis, a treatment used to dissolve blood clots and improve blood flow, was recommended for White patients much more often than for non-White patients. Some of the examined studies showed that Black patients were less likely to fill prescriptions due to the condescending nature of their visits. Another study found no correlation between race and adherence.
One area that needs to be further explored is intersectional identities, as patients who belong to multiple marginalized groups may experience worse patient-physician relationships and receive inferior care as a result (Hall et al., 2015). Implicit biases pose great dangers in treatments for POC. While becoming aware of bias helps reduce implicit beliefs, it may not be sufficient to ameliorate patient-physician relationships with POC. One method of reducing implicit bias is individuating, a process that relies on
focusing on the individual. A study testing this method had participants sort White and Black faces based on similarities and differences. Participants trained to study Black faces showed more accuracy and less implicit bias. Another option is perspective taking. One study showed that nurses asked to prescribe pain medication based on images of either White or Black patients in pain were less likely to prescribe analgesics for Black patients. When asked to prescribe medication while imagining the patient's pain, the analgesics prescribed were the same for both White and Black patients (Chapman et al., 2013). Teaching these methods to healthcare providers is vital in making implicit bias conscious and reducing it in the medical field. Only then will providers be able to provide satisfactory, equitable care to all of their patients.

Disparities for Black Patients

The American healthcare system does not treat its Black patients with the same level of care and attention as its white ones. Various studies have shown this to be true in the health perception of Black patients, their access to health care, and the treatment of their illnesses. It has been shown that many health care providers, and those in training, believe that Black people do not feel pain to the same degree as white people; this belief, along with other false beliefs, underlies the way our healthcare system takes care of and looks out for its Black patients (Hoffman et al., 2016). The estimated prevalence of racial discrimination against Black patients with major chronic conditions was 20% in 2014 (Nguyen et al., 2014). Many Black Americans experience discrimination on a daily basis and would be able to recognize it within a health care environment. A group of researchers ran a study on the relationship between perceived discrimination by a doctor or other health care professional and the level of health those Black patients were said to have.
These researchers found a statistically significant negative correlation between the doctor's perceived discriminatory opinion of the Black patient and how healthy the patient was considered to be. Additionally, there was a significant positive correlation between the perceived discrimination and the total number of chronic illnesses, such as diabetes and high cholesterol, that Black patients had (Penner et al., 2009). These data suggest that the more a Black patient is discriminated against,
the worse the doctor believes the patient's health to be.
"A 2015 study found that Black patients are discriminated against in regard to pain when diagnosed with appendicitis. The study found that Black children (21-years-old or younger) who were suffering from appendicitis (a potentially lifethreatening illness caused when one’s appendix ruptures) were significantly less likely to receive medication for moderate paint as well as opioids for severe pain as white The discrimination against Black patients patients in the same in the health care system extends to their age group." Additionally, Black Americans do not have access to good health care on a systematic level; this is closely tied to residential segregation. One study took a look at the residential segregation for Detroit and how that related to early breast cancer detection (Dajun, 2010). Early breast cancer detection is very important as it reduces possible treatment complications and increases the likelihood of survival (AbraidoLanza et al., 2004; Dajun, 2010). According to the US Census, metropolitan Detroit in 2000 was comprised of 25% Black people and 68.9% white people. At that time, over 75% of Black people lived in the central city and over 90% of white people lived in the suburbs, a noticeable case of residential segregation (Darden and Kamel, 2002; Logam et al., 2004). This study found that in the areas with the highest concentration of Black residents, there were the fewest mammogram programs and, thus, higher levels of late detection for breast cancer and higher mortality rates (Dajun, 2010). This data strongly implies that there is a correlation between racial segregation and access to health care that leads to higher mortality rates for Black patients with breast cancer.
treatment. The prior section discussed implicit racial biases in physicians; many of these biases result in discrimination against Black patients. A 2015 study found that Black patients are discriminated against with regard to pain when diagnosed with appendicitis. The study found that Black children (21 years old or younger) suffering from appendicitis (a potentially life-threatening inflammation of the appendix) were significantly less likely to receive medication for moderate pain, as well as opioids for severe pain, than white patients in the same age group (Goyal et al., 2015).

Disparities for Asian Patients

Contrary to the myth of the physically healthy and financially stable "model minority" Asian American, Asian disparities in U.S. health care are very significant. Firstly, it is important to fully debunk the myth that Asian Americans are better adjusted and healthier than the average Caucasian. The issue with existing research is that surveys have largely treated Asian Americans as one "monolithic group" rather than individual
and unique ethnicities with separate healthcare needs (Kim & Keefe, 2010). More specifically, researchers have identified between 27 and 32 unique Asian American groups that all fall into the category of "Asian/Pacific Islanders," an obvious issue for data gathering, as researchers may sample one particular group and extrapolate to create a model that is mistakenly believed to represent all Asian Americans. Furthermore, these surveys are vulnerable to nonresponse bias, since many Asian Americans, especially those who are not fluent in English, may fail to adequately answer survey questions (Kim & Keefe, 2010). With all this in mind, the disparities and inequalities that Asian Americans face generally span a few categories, such as language and health literacy, health insurance, and foreign or immigration status.
"One of the most common barriers in Asian American access to healthcare is language and health literacy. Many Asian Americans who are not proficient in English, specifically the elderly, have a hard time seeking help, making appointments, communicating with health professionals, and understanding certain illnesses."
One of the most common barriers in Asian American access to healthcare is language and health literacy. Many Asian Americans who are not proficient in English, especially the elderly, have a hard time seeking help, making appointments, communicating with health professionals, and understanding certain illnesses (Kim & Keefe, 2010). Even today, there is still a lack of consistent translation services throughout health facilities in the U.S. Without these services, many Asian Americans are unable to access healthcare that addresses their needs. For example, in a study that explored the link between English proficiency and access to mental health services, only 13.8% of Asian Americans who indicated that they had some form of psychiatric disorder had used or accessed any mental health services (Kim et al., 2011). Issues of English proficiency in healthcare access also disrupt cultural family roles. Children of recent Asian immigrants tend to understand and speak English much better than their parents and thus serve as the family translator when dealing with healthcare (Kim & Keefe, 2010). This role may feel extremely uncomfortable to many children, as they are too young to be facilitating healthcare decisions that could prove to be life-threatening. Although prominent, the language and literacy barrier that Asian Americans face has largely been left unaddressed, leaving a large portion of this group with limited access to healthcare services. Another inequality that Asian Americans face in healthcare is access to health insurance, which is connected to immigration status. Many Asian Americans work in small
businesses or hold multiple low-wage jobs, but oftentimes these jobs do not offer health insurance to employees. While 26% of all Americans are covered by employer-sponsored health insurance, only 6% of Asian Americans have the same benefits (Kim & Keefe, 2010). Diving further into the statistical comparison with Caucasians, Asian Americans are less likely to receive job-based insurance, by a margin of 64% of Asian Americans vs. 73% of Caucasians (Richard et al., 2000). Some of this may be explained by poorer coverage among noncitizens and immigrants, 30% of whom remain uninsured, twice the uninsured rate for Caucasians (Richard et al., 2000). Overall, Asian Americans seem to be stuck in an insurance loophole where self-employed Asian Americans earn too much money to qualify for government-sponsored health insurance but too little to purchase private insurance, resulting in Asian Americans of lower socioeconomic status having a better chance of being insured than those with higher incomes (Kim & Keefe, 2010). All of these health insurance disparities within Asian American groups further compound the overall roots of inequality that these ethnicities face. Finally, an important point must be made about foreign and immigration status in relation to healthcare inequality. The "selective immigration hypothesis" states that Asian Americans who immigrated from less economically privileged countries have much better health than those who remained, which could skew surveys researching the status of minority health (Kim & Keefe, 2010). However, these stronger health effects of immigrants diminish the longer they live in the U.S. Another consideration of immigration status is the "salmon bias hypothesis," which argues that unhealthy and unemployed immigrants simply return to their homeland while the healthy, employed immigrants remain (Kim & Keefe, 2010).
Such an event would most definitely skew the data to support the myth that Asian Americans are better off health-wise than other ethnicities.

Disparities for Latinx Patients

Making up about 18% of the US population, the Latinx community is the largest and fastest-growing minority group in the country, and it faces unique challenges that manifest in health disparities. According to a report published in the CDC's Morbidity and Mortality Weekly Report, when compared to their white counterparts, Latinos had a much higher mortality rate for
Figure 3: Native Americans often have their own forms of medicine, which are sometimes at odds with Western medicine. The differences between Native American and Western medicine may present barriers to care for Native Americans. Source: Wikimedia Commons
diabetes (51% higher) and chronic liver disease/cirrhosis (48% higher). One of the biggest challenges this community faces, especially when it comes to analyzing disparities, is generalization. In the United States there is a severe lack of specificity in health data reporting on the Latino community; when data is stratified by country of origin or nativity, the numbers change considerably. The same report also notes that Mexicans and Puerto Ricans had up to 80% higher mortality rates for diabetes than the general population. This greater difference shows just how important specificity is. As for why these health disparities exist, several explanations have been proposed. One is the language barrier: a Spanish-speaking patient would have an extremely hard time explaining their symptoms to a doctor who only speaks English. There is also a need for a more culturally sensitive environment in the medical field; with only 5.8% of physicians identifying as Hispanic, it is no wonder that a majority-white physician population would not know how to create an environment that is suitable for Hispanic patients. Another issue is insurance: the same report shows that 41.5% of Hispanics did not have health insurance, compared to only 15.1% of white patients (Dominguez et al., 2015). This is important because it reveals another barrier to healthcare: cost. Without insurance, patients are less likely to seek routine medical care due to fear of cost.
Without routine healthcare, diseases can go unnoticed until it is too late.

Disparities for Native American Patients

According to the 2010 Census, 1.7% of the United States population, or 5.2 million Americans, identify as American Indian or Alaska Native (AI/AN) (Demographics NCAI, 2020). Relative to the rest of the US population, AI/ANs are a very diverse, young, and growing population; as of 2008, a third of people identifying as AI/AN were below 18 years old (Sarche & Spicer, 2008). AI/AN communities have been historically marginalized and have experienced healthcare inequity since the beginning of colonization over 500 years ago (Jones, 2006). Dr. David Jones, a public health researcher at Harvard University, writes that AI/ANs have suffered poorer health since the beginning of colonization, "whether the prevailing diseases were smallpox, tuberculosis, alcoholism, or other chronic afflictions of modern society" (Jones, 2006). Higher disease incidence on reservations, in turn, was used to justify racism against indigenous populations; for example, when federal authorities in the late 1800s observed that Sioux mortality from tuberculosis was higher than mortality rates in most major cities due to unhygienic conditions on the reservations, they saw "Sioux tuberculosis as proof of Indians' inevitable demise" (Jones, 2006). Today, the health inequities that persist among AI/AN communities largely reflect this long colonial history of marginalization.
According to Mary Smith, a Cherokee Nation member and former chief executive of the Indian Health Service, AI/ANs have a life expectancy 4.4 years shorter than the rest of the US population; in addition, AI/ANs are more likely to die from preventable diseases such as "chronic liver disease, cirrhosis [often as the result of alcohol abuse], diabetes, and chronic respiratory diseases" (Native Americans, n.d.). Compared to non-Hispanic white people, AI/AN adults are also 50% more likely to be obese; obesity is linked to an increased risk for diabetes, heart disease, and stroke (Obesity and American Indians/Alaska Natives - The Office of Minority Health, n.d.). The determinants of these higher incidences of non-communicable diseases among AI/ANs are diverse, but are rooted in "'inadequate education, disproportionate poverty, discrimination in the delivery of health services, and cultural differences'" rather than purely genetic origins of disease (Jones, 2006).
"Compared to nonHispanic white people, AI/ANs adults are also 50% more likely to be obese; obesity is linked to an increased risk for diabetes, heart disease, and stroke."
Researchers Michelle Sarche and Paul Spicer at the University of Colorado Denver add that AI/ANs are also more likely to "experience a range of violent and traumatic events," especially as children (Sarche & Spicer, 2009). Among children, experience of trauma and violence is linked to risky behaviors and poorer health outcomes, including alcohol and drug abuse, suicide attempts, truancy from school, and petty thievery (Sarche & Spicer, 2009). Additionally, according to the National Congress of American Indians, violence against AI/AN women has been especially devastating: 56.1% of AI/AN women have experienced sexual violence and 55.5% have experienced domestic violence in their lifetimes. Violence against women is also linked to poorer mental and physical health outcomes (World Health Organization, 2012). Within AI/AN populations, it is evident that trauma and violence have left these populations significantly vulnerable; they are therefore disproportionately likely to have poor health. In conclusion, a long colonial history of socioeconomic disenfranchisement and systemic racism has created significant health inequities in AI/AN populations. These health inequities are reflected in lower life expectancies, increased rates of non-communicable diseases, and increased exposure to trauma and violence. To improve health outcomes among indigenous populations, sweeping structural changes in healthcare access and economic opportunities for AI/ANs must occur (Jones, 2006).
Disparities for Native Hawaiians and Pacific Islanders

Perhaps the smallest minority, the American Native Hawaiian/Pacific Islander (NHOPI) population is defined as those with ties to the indigenous people of Hawaii, Guam, Samoa, or other Pacific Islands and represents roughly 0.4% of the US population. Less than one third of this population lives outside of the Hawaiian Islands, and 31.2% of NHOPI in the US are under 18, as opposed to 18.8% of non-Hispanic white people (Office of Minority Health, 2020). The NHOPI population endures higher rates of obesity, overweight status, and hypertension, as well as higher rates of asthma and cancer mortality. However, the disproportionate health outcomes of NHOPI are oftentimes masked by grouping them with Asian Americans in categories such as Asian-Pacific Islander (API) and Asian American and Pacific Islander (AAPI) (Morisako et al., 2017). In addition, NHOPI populations continue to reap the consequences of historical trauma from the colonial period. Attempts at assimilation by western oppressors endangered traditional island language, dress, and culture, leading to the current disparities in health, education, and welfare seen today (Morisako et al., 2017). According to the CDC, male and female NHOPI suffer higher rates of HIV/AIDS and sexually transmitted diseases (STDs) such as chlamydia, gonorrhea, and syphilis than non-Hispanic whites in the US ("Native Hawaiians and Other Pacific Islanders," 2020; "Hepatitis B: Are you at Risk," 2020). Perhaps most strikingly, the incidence of tuberculosis (TB) in NHOPI is forty times higher than in non-Hispanic white populations (CDC, 2020). Leading causes of death in NHOPIs include cancer, heart disease, unintentional injuries/accidents, stroke, and diabetes (Office of Minority Health, 2020). As aforementioned, the American obesity epidemic also plagues NHOPIs, at a rate thirty percent higher than among Caucasians.
Rates of childhood obesity (21% in children 2 years old and 39% in children 8 years old) are concerning in comparison to national averages (15.6% and 26%, respectively), especially because this condition can materialize into more serious comorbidities in adulthood such as hypertension, high cholesterol, and diabetes (Braden & Nigg, 2016; Watson et al., 2009). Cultural differences in Hawaiian and Pacific Islander attitudes about dietary beliefs may complicate traditional routes of medical intervention, as the imposition of Western dietary standards on these populations has historically resulted in poorer health.
Other social determinants of health, such as lower levels of educational attainment and lower economic status, have largely contributed to these health outcomes and inequities (Office of Minority Health, 2020). Additionally, factors pertaining to healthcare access, such as lack of insurance coverage, continue to act as barriers to health: NHOPI have higher rates of being uninsured (Terada et al., 2016; Office of Minority Health, 2020; Morisako et al., 2017). In addition, differences in language, culture, and belief can distort patient-physician relations, leading to lower quality of care. This fact is especially important in the context of virulent illnesses very common within NHOPI populations, such as Hepatitis B (which often leads to liver cancer) and cancer. These conditions often go unnoticed until significant damage has occurred and require intensive medical attention and monitoring (Office of Minority Health, 2020; CDC, 2020b). In essence, the health disparities of the NHOPI population in the US require equal, if not more, attention from American physicians, especially since NHOPI are the second-fastest-growing racial/ethnic group (Hixon et al., 2012).
Gender and Sexuality-Based Disparities

Implicit Sexist Bias in Physicians

In many ways, gender matters in how a physician treats their patient. According to Janine Clayton, MD, sex affects many aspects of biology, including metabolism, physiology, the ways diseases manifest themselves, and how treatment should be administered. However, gender and implicit gender bias can negatively affect how a patient is treated by health care professionals. About 20% of women have felt that their healthcare provider minimized or dismissed their symptoms, and 17% of women have felt they were being treated unfairly based on their gender (Paulsen, 2020). Sometimes, physicians will claim that a woman's physical appearance or attractiveness indicates whether or not she is actually sick, stating things like "you look too good to be sick!" (Nguyen et al., 2018; Samulowitz et al., 2018). Women have reported not being trusted by their physicians based on a multitude of false claims and having their distress dismissed, forcing them to work harder to be taken seriously and believed in medical encounters. Doctors' dismissals of women's claims are often rooted in negative stereotypes or beliefs about women. Studies have shown that women who experience pain are most often perceived as emotional, not wanting to get better, hysterical, and fabricating the
pain (Barsky et al., 2001; Samulowitz et al., 2018). Other studies showed that women are believed to have psychological rather than somatic causes for their pain (Barsky et al., 2001; Samulowitz et al., 2018). As such, women are often given fewer or less effective medications for pain relief and are instead prescribed antidepressants and given mental health referrals. If a physician does not know, or does not care to find out, the root cause of a woman's ailment, they may write off her pain as "medically unexplained" (Samulowitz et al., 2018), implying that the woman is fabricating it. All of these false sexist beliefs are highly detrimental to women in the healthcare system, as they prevent women from getting the adequate and timely medical treatment they need.

Overview of Disparities for Women

Inequality in the healthcare system is prevalent in many forms, including disparities between cisgender females and cisgender males. These inequalities stretch from insurance, to patient care, to members of the medical community. Data from the Kaiser Family Foundation demonstrate that women make up around one third of professionally active physicians in the United States workforce (Bean, 2020). There is a 3.8:1 male-to-female physician ratio in the US, with Idaho being the most unequal, having around 4.1 male doctors for every female doctor (Bean, 2020). Given the inequalities present among those providing care, it is unsurprising to see inequalities in the care patients receive. Weissman et al. performed a study comparing health care access and utilization among adults with serious psychological distress in the years surrounding the Affordable Care Act's implementation (2018). Women showed decreased health care utilization when compared to men in these circumstances. Furthermore, women were more likely than men to experience delays in care (Weissman et al., 2018).
Hispanic women, specifically, used fewer hospital and outpatient services than any other racial/ethnic gender group. Moreover, women were at greater risk of reporting insufficient money for medications, experiencing delays in care, having insufficient financial resources for healthcare services (especially those related to mental health), needing to change the usual place of care, and even having to visit a doctor ten or more times (Weissman et al., 2018).
"There is a 3.8:1 male to female physician ratio in the US, with Idaho being the most unequal having around 4.1 male doctors for every one female doctor."
Gender differences in health and use of health services are a long-standing concern for the US medical system. A study by Cameron et al. from Northwestern looked at health and the use of healthcare by adults over the age of 65 and
Figure 4: One issue with transgender healthcare is the lack of affordable gender confirmation surgery. Activists are forced to protest in order to get access to the healthcare that they need. Source: Ted Eytan
"Around 12.2 percent of all women in the US are not covered by any type of private or public health insurance. Around 47 percent of women faced a bill for medical costs that was more than they expected compared to 35 percent of men. 26 percent of women compared to 19 percent of men went without health care or delayed treatment because of financial concerns."
found many differences in care. The age group was chosen because the elderly typically require heavy use of medical services. The researchers found that women's health needs were substantially greater than the men's, but the healthcare system gave them less access to preventative care. This is concerning, as preventative care is a leading health concern for women, yet women receive it less frequently than men (Hunt, 2019; Cameron et al., 2010). Older women also had fewer hospital admissions and outpatient surgeries than men despite a higher prevalence of health issues in women aged 65+, especially issues of mobility and disability. In essence, though older women typically require more care, they are underserved by healthcare systems (Cameron et al., 2010). Many of these healthcare inequalities seem to stem from the financial inequality between the genders. Around 12.2 percent of all women in the US are not covered by any type of private or public health insurance (Hunt, 2019). Around 47 percent of women faced a bill for medical costs that was more than they expected, compared to 35 percent of men. 26 percent of women, compared to 19 percent of men, went without health care or delayed treatment because
of financial concerns. It is clear that some of the healthcare inequality between genders stems from the financial inequality between genders in the US (Hunt, 2019). These inequities can be partially explained through the obstacles that women face in making and saving money: there is a wage gap between men and women, and women leave the workforce more often than men to act as caregivers for their families (“TCRS - 19th Annual Transamerica Retirement Survey,” n.d.).

Disparities for Transgender Patients

The transgender population in America is one of the most underserved groups in many respects. On top of facing disproportionate rates of unemployment, mental illness, abuse, and familial rejection, they also face a plethora of barriers and inequalities in the realm of healthcare. A large portion of the healthcare-related inequalities faced by transgender individuals stems from stigma and discrimination within medicine. A survey on the prevalence of different forms of healthcare-related discrimination faced by transgender and gender non-conforming (TGGNC) patients produced some discouraging statistics. The survey found that “28% [of TGGNC] were verbally or physically
harassed in a doctor’s office” and about 19% were denied access to care because of their gender identity (Grant et al., 2011). These factors decrease a transgender person’s likelihood of seeking out appropriate care when the availability of trans-inclusive care is already in short supply (Safer et al., 2016). Across the board, providers are ill-equipped to give appropriate care to transgender patients. The types of care and interventions needed by transgender patients include hormone therapies and gender-affirming surgeries. Even with the increased visibility of this community over the years and the establishment of appropriate methods of care, very few general medicine providers have any sort of expertise in this area, accounting for one of the main factors inhibiting transgender patients’ access to care (Safer et al., 2016). Even setting aside the types of care specific to transgender patients, surveys have shown that most physicians in emergency rooms have a gap in knowledge when it comes to treating patients from the trans community, even though 88% of these physicians have cared for TGGNC patients at some point in their careers (Willging et al., 2019). This issue is compounded by the fact that members of this community already have trouble accessing care. If transgender individuals do not have a general physician for acute medical problems, they are usually left with one alternative: the ER. The ER and the healthcare system in general can be seen as a microcosm of our culture and society. Even when physicians are well-intentioned, biases easily slip their way into the overall healthcare received by transgender individuals. There is simply no easy fix to these systemic issues, but there are small steps that can be taken to ease some of the health-related anxieties trans patients experience.
Educating medical students on gender and sexuality as well as providing continuous education to providers in these areas would effectively place less of the burden on the patient when it comes to seeking help, for they would no longer have to teach their physician about their situation and gender identity. Such topics should be taught in an integrated manner, as they are not isolated to one particular area of healthcare, but instead permeate throughout all facets of health and medicine (Willging et al., 2019).
Economic Disparities

Overview of Income Disparities

Income level has a significant impact on both health and healthcare experience. In America, the burden of illness falls disproportionately on low-income individuals. Poor families are five times more likely to report fair or poor health in surveys than families whose incomes are at least 400% of the federal poverty level (as of 2021, the poverty guideline for a family of four was $26,500) (Woolf et al., 2015; “2021 POVERTY GUIDELINES,” 2021). Additionally, poor individuals are more likely to have heart disease, strokes, or diabetes. The effects of income on health persist even above the federal poverty line: the bottom 80% of earners in the US have a life expectancy on average 4.8 years lower than that of the top 20% (Muennig et al., 2005). Healthcare experience is affected by income, too. The Medical Expenditure Panel Survey (MEPS) collected 144,073 observations on healthcare experiences from individuals with various backgrounds. From this data, Okunrintemi et al. analyzed 68,447 individual survey responses, representative of approximately 176.8 million adults in the United States, to study the relationship between income and healthcare experience. They specifically looked at access to care, responsiveness of providers, patient-provider communication, shared decision-making, and overall patient satisfaction. The researchers found that low-income earners consistently had poorer healthcare experiences than high-income earners in each of the 5 aspects. These results held even when adjusted for covariates including age, sex, race, insurance, education, and region (Okunrintemi et al., 2019).
"Poor families are five times more likely to report fair or poor health in surveys than families whose incomes are at least 400% of the federal poverty level."
While low income has been commonly associated with a negative healthcare experience, researchers have yet to pinpoint the dominant factors contributing to this inequity. Prior studies suggest that these differences are caused by limitations in insurance. Though some attribute the issues to a lack of health-seeking behavior in low-income populations, there is clear evidence that even low-income patients with an established healthcare provider still report poorer healthcare experiences and notable differences in several aspects of healthcare service compared to those with high incomes (Okunrintemi et al., 2019).
Another theory points to the outdated infrastructure of today’s healthcare system. Although healthcare coverage has been widely expanded for low-income individuals through the Affordable Care Act, researchers believe that some healthcare systems have not evolved sufficiently to maintain a high level of quality for their patients. Lower-income individuals are generally relegated to hospitals with fewer health resources; additionally, these hospitals often lack the funding and resources for renovations or expansions to better accommodate their patients.
"Between 2011 and 2013, 38% of residents in households that made less than $22,500 a year reported being in poor or fair health. By contrast, only 12% of residents in households that made more than $47,700 annually reported being in poor to fair health."
Another factor to consider relates to patients’ communication with their healthcare providers. Generally, Americans tend to have high levels of distrust toward healthcare (Armstrong et al., 2006). This is further propagated by lower-income status. Patients with a lower socioeconomic background often mistrust health systems and healthcare providers, “suspecting financial motivations underlying suggested therapeutic decisions, and these patients may therefore be primed to experience dissatisfaction with the care they receive” (Okunrintemi et al., 2019). Additionally, distrust in healthcare leads to worse self-reporting of disease (Armstrong et al., 2006). These social encounters are pivotal to a patient’s experience in the healthcare system and are often degraded by limited medical training in the social aspects of healthcare service, physician burnout, and sociocultural differences between financially disadvantaged patients and their affluent physicians (Okunrintemi et al., 2019).

Income Impacts Clinical Care

The association between income and health is evident and has been well documented in the literature (Lenhart, 2019). Income influences health and life expectancy in various ways including, but not limited to, clinical, behavioral, social, and environmental mechanisms (Chokshi, 2018). For this reason, policies that promote economic equity offer a way to bridge the income inequality gap, which would allow for improved healthcare in under-resourced populations (Chokshi, 2018). The United States has one of the most unusual healthcare systems in the world. Due to the prevailing influence of capitalism, healthcare in the US is provided by many distinct organizations, some private and others public (Rosenthal, 2013). The lack of a unified system has led to healthcare facilities being largely owned and operated by private sector businesses. In the US, only 21%
of healthcare facilities are government owned. The other 79% are privately run, with nearly 21% of those being for-profit. This nuanced relationship between the public and private sectors within US healthcare has often led to higher costs within the industry, hindering quality and accessible healthcare (“Why is healthcare so expensive in the United States?” 2016). As the only developed country that relies on private health insurance, the United States faces a special challenge in providing equal access to its diverse and growing population (Vladeck, 2003). Compared to counterparts in countries with universal healthcare models, Americans pay significantly more for their basic healthcare services. In 2018, nearly 92 percent of the population was estimated to have coverage, leaving 27.5 million people, or 8.5 percent of the population, uninsured (US Census Bureau, n.d.). Between 2011 and 2013, 38% of residents in households that made less than $22,500 a year reported being in poor or fair health. By contrast, only 12% of residents in households that made more than $47,700 annually reported being in poor to fair health (Hero et al., 2017). American families that earn less than $35,000 a year are four times as likely to report feelings of nervousness and five times as likely to report constant feelings of sadness compared to households with an annual income surpassing $100,000 (Chokshi, 2018). A recent study found that higher income is associated with a 6.9 to 8.9 percentage point increase in the likelihood that heads of households report being in excellent health (Lenhart, 2019). The positive correlation between income and health is not limited to the bottom of the income bracket but rather presents as a gradient that manifests throughout the income distribution (Case et al., 2001).
This link has also been found to be cyclical: the relationship between income and health status observed in adults may have antecedents in childhood and manifests as a self-reinforcing cycle, sometimes referred to as the “health-poverty trap” (Case et al., 2001; Chokshi, 2018). Children from poorer households are more likely to enter adulthood in poorer general health, which may hinder their future financial prospects, which in turn puts the health of their own children at higher risk (Case et al., 2001). Lack of accessibility is at the core of the issue. Low-income Americans face greater obstacles to accessing medical care and are less likely to have adequate health insurance, receive new drugs and technologies, and have reliable primary and specialty care physicians.
Figure 5: Low-income individuals are more at risk of starting unhealthy behaviors, including smoking. Source: Wikimedia Commons
Accessibility is an important determinant of health status and is not limited to sectors directly related to health and physician care. For instance, low-income families experience more difficulty in accessing fresh food and often rely on less nutritious alternatives because of cost. In the same way, their environment may limit opportunities for adequate physical activity, leading to higher rates of morbidity and obesity (Chokshi, 2018). To combat the growing impact of economic inequality within healthcare systems, the US government began ushering in social programs in the 1960s. These included Medicare and Medicaid, which were designed to alleviate healthcare expenditures for certain groups. While these programs have notably shielded Americans from billions of dollars worth of expenses and improved the quality of life for many, they still fail to adequately address income inequality within healthcare (DeParle, 2000). Even though Medicare is designed to alleviate expenses for the elderly, its reliance on the private industry has led to insufficient coverage of many basic treatments (Way & Mayer, 2008). The misallocation of Medicaid resources at the federal and local levels has created a similar phenomenon, in which eligibility for the program has constantly wavered (Tallon, 1990). In response to these failures, the Affordable Care Act was passed to strengthen the effort to combat inequality within medicine. Within its first few years, the program saved the lives of at least 19,200 adults aged 55 to 64 and significantly increased the number of Americans with health insurance (Miller et al., 2019). However, despite the renewed
effort toward combating inequality within healthcare coverage, “more than twenty-seven million Americans remain still uninsured—the majority of whom are low-income people.” The shortcomings of each of these programs have directly impacted the wellbeing and livelihood of American citizens. To move forward and continue to address the effect of income within healthcare, more attention must be devoted to understanding the root causes of inequality rather than pouring more money into band-aid solutions.
"Substance abuse, for one, is perpetuated through the over-concentration of liquor stores in low-income neighborhoods."
Income Impacts Behavioral Care

Low-income Americans are more likely than their higher-income counterparts to engage in health-damaging behaviors, including substance abuse, poor nutrition, and physical inactivity (Pampel et al., 2010). One underlying reason is the stress that lower-income individuals are under, which creates a need for coping mechanisms (Reiss et al., 2019). While many higher-income Americans are able to indulge in lower-risk coping mechanisms in similar situations, low-income Americans do not have the same luxury. Instead, they are forced to turn to cheap and accessible alternatives. Substance abuse, for one, is perpetuated through the over-concentration of liquor stores in low-income neighborhoods (Jones-Webb & Karriker-Jaffe, 2013). The immediate relief that drugs, tobacco, and alcohol provide is highly addictive, which may be one reason illegal drug use and criminal behavior are more prevalent in low-income neighborhoods; those in families that earn less than $35,000 a year are three times more likely to smoke than those in families that annually earn $100,000 or more (Pampel et al.,
2010; Pleis & Lethbridge-Cejku, 2012). Further perpetuating issues with substance abuse is the inaccessibility of treatment - many low-income individuals are unable to receive adequate treatment; in 2019, approximately 82.6% of uninsured people were from families with incomes below 400% of the poverty level (Tolbert et al., 2020).
"When using data from biased studies, the AI adopts that bias as well. Using information from electronic health records, insurance claims, or device readings could result in a biased AI system due to this data being generated from human decisions, which are often informed by implicit biases."
Another significant issue is the widespread distribution of fast-food restaurants and other unhealthy food sources in low-income areas. This, combined with the lack of healthy and affordable food options, results in poor nutrition. Over 35% of the US population was obese in states where the average household income was below $45,000 in 2015 (Bentley et al., 2018; Hilmers et al., 2012). Obesity prevalence among low-income individuals, particularly children, can also be understood through the lack of opportunities for physical activity due to unsafe streets and limited safe playgrounds (Chang & Kim, 2017). From this, it becomes apparent that low-income individuals are surrounded by factors that make them susceptible to unhealthy behaviors. Environmental injustice—”the disproportionate environmental burdens and benefits associated with social inequalities,” which has also come to encompass access to healthy foods—evidently has a heavy influence on the creation of such behaviors (Chakraborty et al., 2016). Furthermore, children especially are impressionable; upon observing parental behaviors from an early age, they become prone to imitating them. Poverty’s cyclical nature often leaves low-income individuals unable to make enough money to receive proper treatment for their ailments, making it extremely difficult to stop behaviors that negatively affect health.
Technological Disparities

AI-Associated Disparities

As technology develops, the healthcare system is being transformed by the introduction of artificial intelligence (AI) and machine learning (ML) tools that can predict diagnoses and offer treatment recommendations to physicians. However, AI/ML technology might pose dangers in its diagnoses, as it is trained on data and information that reflect human biases. Several types of bias exist within the information supplied to AI. Statistical bias occurs when a model’s estimate systematically deviates from the true value it is meant to capture. This bias is highly
common due to a variety of factors, such as subpar sampling methods or measurement error. The bias can often be seen within clinical settings. The Framingham Study, for example, looked at AI designed to predict cardiovascular disease; the predicted risk for Black patients was 20% lower than for their white counterparts, which is not necessarily an accurate reflection of the data. Social bias exists as well, caused by inequities in care delivery. While social bias can be a factor in statistical bias, it is also shaped by both implicit and explicit biases in physicians. Because AI systems are trained on physician diagnoses, a model may adopt the diagnosis and treatment biases of a physician. Consider older women suffering from myocardial infarctions, many of whom present with atypical symptoms: if a physician falls prey to implicit bias against women, the AI learns from those decisions and adopts the same bias, limiting its ability to make accurate diagnoses (Parikh et al., 2019). AI/ML is also limited by data collection, another source of statistical bias. When trained on data from biased studies, the AI adopts that bias as well. Using information from electronic health records, insurance claims, or device readings could result in a biased AI system because this data is generated from human decisions, which are often informed by the implicit biases discussed above. AI, for example, is less likely to predict risks for patients with missing data in their health records, resulting in Black women being less likely than white women to be tested for breast cancer mutations despite having similar risk for those mutations (Parikh et al., 2019). This bias caused by poor data collection is also referred to as information and selection bias, caused by poor experimental design or inaccurate analyses.
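The effect of selection bias on training data can be illustrated with a minimal sketch (the group sizes and risk values below are hypothetical, not drawn from any study cited here): when one patient group is underrepresented in a sample, any estimate learned from that sample is pulled toward the majority group.

```python
import random

random.seed(0)

# Hypothetical population: group A (80% of patients, true risk 0.10)
# and group B (20% of patients, true risk 0.30).
TRUE_POPULATION_RISK = 0.8 * 0.10 + 0.2 * 0.30  # 0.14

def estimate_risk(n, frac_b):
    """Estimate average risk from a sample in which group B makes up frac_b."""
    total = 0.0
    for _ in range(n):
        total += 0.30 if random.random() < frac_b else 0.10
    return total / n

# A sample that matches the population recovers a risk close to 0.14...
representative = estimate_risk(100_000, frac_b=0.20)

# ...but a sample that underrepresents group B (as health records can, when a
# group interacts less with the health system) is biased toward group A's risk.
skewed = estimate_risk(100_000, frac_b=0.02)

print(f"true risk: {TRUE_POPULATION_RISK:.3f}")
print(f"representative sample estimate: {representative:.3f}")
print(f"skewed sample estimate: {skewed:.3f}")
```

A model fit to the skewed sample inherits this underestimate for group B, which is the same mechanism as the missing-data example above.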
To limit this bias, both individual- and whole-population-level data should be collected and integrated within the AI system (Gurupur et al., 2020). Oftentimes, bias within AI goes unrecognized, and physicians tend to trust AI recommendations over collected clinical data. Even when AI produces questionable results, physicians are likely to accept its recommendations, especially under the pressure of time constraints and multitasking (Parikh et al., 2019). These poor decision-making strategies are a result of confirmation and automation biases. Once the AI produces a diagnosis, physicians are less likely to continue searching for evidence that either supports or negates
Figure 6: Telehealth consults allow doctors to give their patients medical diagnoses virtually. They require significant amounts of technology, presenting barriers for low-income and rural areas. Source: Wikimedia Commons
the decision given by the AI, simply accepting the decision at face value (Challen et al., 2019). These biases result in a cycle of implicit biases being perpetuated, as physicians rely on biased AI systems and AI systems continue to learn from biased physician decisions. To reduce these biases, it is imperative that AI learn from physicians’ real-time decision-making processes. As a physician tires under a high cognitive load, they are more likely to make biased decisions. To prevent that, AI can be used to alert physicians to potentially biased decisions and offer its own recommendations. The data-generating process must be made more robust as well, with data collected across all patient groups. Randomized trials are also important for generating truly unbiased data (Parikh et al., 2019). With more data, and access to statistics from every patient group, AI is less likely to adopt biases.

Disparities in Telehealth Care

As medical technologies continue to advance and medicine becomes more accessible virtually, the medical field has moved significantly toward adopting a telehealth model. Telehealth refers to the use of electronic resources and telecommunications devices to provide medical care and access to health information remotely; for patients, it allows doctors to provide assessments and treatments over the phone or video platforms (Kichloo et al., 2020). Although many Americans have reliable access to the internet and other technologies that facilitate telehealth, there are still many who do not. Typically, those with less internet
access tend to be racial minorities, older adults, rural residents, and those with lower levels of education and income (Pew Research, 2019). Historically, insufficient access to the internet did not hamper the quality or consistency of healthcare, but the global COVID-19 pandemic and the trend toward a more remote society have brought these technological divides to the surface. Compared to one year prior, October usage of telehealth increased by 3,060% (Roth, 2020). The increase in telehealth visits corresponds with a decrease in in-person visits, leaving those who are at risk for COVID-19 but without sufficient technological capabilities in limbo. Telehealth was overall used less often by nonwhite people (Pierce & Stevermer, 2020). This is due in part to technological limitations but also to the difficulty of navigating the technology. Of the nonwhite people who participated in telehealth, Black people were the most likely to use the audio-only option for telehealth appointments. Rates of telehealth visits are even lower for older Black and Hispanic Americans, who have the highest need for these visits (Senz, 2020). The lower rates of telehealth usage by Black and Hispanic Americans can lead to inadequate chronic disease management and thus an even larger disparity in healthcare outcomes (Senz, 2020). Another significant barrier to telehealth is access to adequate Internet bandwidth - rural and lower-income patients have less bandwidth than their urban and wealthier peers, limiting their access to telehealth. Especially during times that necessitate virtual doctors
"Although many Americans have reliable access to the internet and other technologies that facilitate telehealth, there are still many who do not. Typically, those with less internet access tend to be racial minorities, older adults, rural residents, and those with lower levels of education and income."
appointments - like the current pandemic - this will widen the gap in healthcare accessibility (Gajarawala & Pelkowski, 2020). Increasing access to the internet, as well as making sure patients are technologically literate, may help to mitigate some of these concerns.
"Precision medicine (PM) refers to the relatively new approach to patient care in which healthcare is individually tailored on the basis of a person's genes, lifestyle, clinical history, and environment."
Health Literacy Divides

To understand how disparities in health literacy drive inequity, we must first define health literacy. Harris et al. write that digital health literacy requires three primary skills: “basic reading and writing skills, working knowledge of using computers, and an understanding of how, why, and when online health information is created, shared, and received” (Harris et al., 2019). Disparities in health literacy often mirror other disparities. In particular, low-income households have lower rates of internet access than higher-income groups and are more likely to rely on phones for internet access (Digital Inequality and Low-Income Households, 2016). Digital inequality was previously thought of within a binary framework that separated those who had internet access from those who did not. However, this model does not fully capture some of the differences that have emerged, including varying levels of autonomy and skill in using and accessing technology (Digital Inequality and Low-Income Households, 2016). In fact, as technology develops, health disparities due to differences in health literacy are worsening. Glied et al. tested the hypothesis that “improvements in health technologies tend to increase disparities in health across education groups because education enhances the ability to exploit technological advances” (Glied & Lleras-Muney, 2008). They observed that for diseases where more progress and health-related innovation have been made, more-educated individuals are more likely to survive than less-educated patients. For treatments where less progress has been made, education has less of an impact. They suggest several mechanisms to explain this effect, including that patients with more education might be better informed about health-related innovation and thus able to take advantage of new technologies at earlier stages.
This may also be driven by differences in providers—patients with higher levels of educational attainment may be more likely to be treated by providers with better access to new technologies (Glied & Lleras-Muney, 2008).

Disparities in Precision Medicine

Precision medicine (PM) refers to the relatively
new approach to patient care in which healthcare is individually tailored on the basis of a person's genes, lifestyle, clinical history, and environment. It aims to advance medical and scientific discovery while offering more tailored, precise, and accurate health interventions that maximize the health benefits for patients (Collins & Varmus, 2017). Unfortunately, precision medicine also has the potential to compound healthcare inequalities, particularly among ethno-racial groups. Because precision medicine requires active participation and trust from a variety of ethno-racial groups, groups who are unable to share their personal data may miss out on its benefits, and the improvements in diagnosis and treatment that PM offers may be distributed unevenly (Cohn et al., 2016). One critical issue is that PM relies heavily on genomic data, and the majority of data in genomic databases comes from European samples. In 2016, researchers found that out of 25 million samples, 81% of participants were still of European descent. Besides being problematic from a purely scientific standpoint, the authors note, this also broadcasts the harmful message that the genomic data of Europeans and those of European descent matter more than the genomic data of the rest of the world (Popejoy & Fullerton, 2016). Furthermore, studies of such data are likely to miss disease risk variants that are rare among European populations but common among other groups (Bustamante et al., 2011). Because the databases used have samples that are overwhelmingly from Europeans or people of European descent, non-Europeans may be misdiagnosed (Amsen, 2019). Similar considerations affect medication responses: for example, some genetic variants related to drug metabolism are more common among individuals of African ancestry.
That is, there may be increased sensitivity or diminished response to medications and therapies such as β-agonists, warfarin, and chemotherapy (Yasuda et al., 2008). As a result, it is imperative that a sufficient number of people of non-European ancestry be included in relevant study samples. As it stands, a handful of initiatives have had varying levels of success in reversing these demographic trends. One such initiative, H3Africa, is a collaboration of African clinicians, scientists, and bioinformaticians
Figure 7: Redlining presents a significant problem for communities of color in urban areas. This is a map of St. Louis, MO, where each dot represents 25 residents: red for white people, blue for Black people, green for Asian people, orange for Hispanic people, and yellow for others. Clear racial divides persist in the city even today, and they result in significant healthcare disparities. Source: Flickr
who conduct large-scale sequencing and genetic association studies. Their goal is to create “multigenerational and multidisciplinary capacity building and infrastructural development for genomic medicine in Africa” (Akinemi et al., 2016).
Geographic and Environmental Disparities

Differences between Rural and Urban Access to Care

It has been well documented that those living in rural areas face larger barriers to healthcare access than their urban counterparts. From a purely geographic and spatial perspective, rural areas are sparser than urban areas – resulting in longer travel distances to and from healthcare providers – something that rural residents generally see as an obstacle to obtaining healthcare. In addition to generally longer travel times for hospital visits and check-ups, hospitals and clinics are finding it harder and harder to attract physicians and other medical professionals to rural areas. And unlike major hospital systems within urban areas, rural hospitals are smaller and often standalone – meaning that they suffer from
less favorable economies of scale than larger hospital systems (Weisgrau, 1995). This, combined with the extra resources spent attracting medical staff, further strains the financial viability of rural hospitals. With fewer and fewer healthcare providers in rural areas, a self-reinforcing cycle emerges: residents find it more difficult to reach out for healthcare, so fewer patients visit hospitals and clinics, which causes more hospitals to shut down, making it even more difficult for rural residents to receive healthcare.
"Amplified by the COVID-19 pandemic, online healthcare delivery services have grown tremendously recently, and serve as a great way for rural residents to access healthcare."
Yet, despite worsening physical access to healthcare, rural residents are finding ways to overcome accessibility issues. Amplified by the COVID-19 pandemic, online healthcare delivery services have grown tremendously recently, and serve as a great way for rural residents to access healthcare. These online services have already seen some success in reducing disparities in health communication and information – disparities far more apparent in rural settings than in urban ones (Douthit et al., 2015). But even with online services theoretically eliminating the barrier of distance, many rural populations still lack high-speed internet, which serves
not just as another healthcare barrier but as a barrier to development more broadly. Besides these online services, rural residents have found other ways to access healthcare. Collins et al. found that elderly residents of West Texas who saw distance as an obstacle to pursuing healthcare often replaced pharmacy visits with mail-order pharmaceuticals, overcoming the distance barrier. Additionally, some healthcare providers offer transportation services to those who face a transportation barrier (Collins et al., 2007).
"Rural populations are largely poorer than urban areas. Since rural residents typically make less money and the industries more prevalent in rural areas are less likely to provide employersponsored health insurance coverage, healthcare expenses constitute a major barrier for many rural residents."
The financial barrier that many rural residents face in pursuing healthcare is evident as well. Rural populations are, on average, poorer than urban ones. Since rural residents typically earn less, and the industries most prevalent in rural areas are less likely to provide employer-sponsored health insurance coverage, healthcare expenses constitute a major barrier for many rural residents (Newkirk et al., 2014). Although the Affordable Care Act expanded Medicaid eligibility to many of these rural residents, state legislatures have in many cases declined to adopt the expansion. Furthermore, insurance policies are typically less comprehensive in rural areas. Within Arkansas, Kilmer et al. asked residents about their eye care insurance coverage and how often they visited an eye care professional, and found that residents were less likely to pursue eye care because their less comprehensive coverage would leave them with out-of-pocket costs (Kilmer et al., 2010). Numerous other studies have documented a similar pattern of rural residents electing not to receive health care for fear of paying out of pocket under less extensive coverage. This further feeds the aforementioned negative feedback loop: patients are less likely to seek healthcare because of financial barriers, which increases the financial pressure on rural healthcare providers and makes healthcare in rural areas still harder to obtain.

Urban Planning and Healthcare Inequities

Urban planning has a mixed history of both alleviating and giving rise to healthcare inequity. A multitude of linkages exist between urban planning and healthcare inequity that the average citizen does not consider but that make all the difference for those they affect.
The origins of this connection can be traced all the way back to the Industrial Revolution, when,
as urbanization accelerated and grew more complex, concern for healthcare equity diminished. Perhaps the most important connection between urban planning and healthcare inequity emerges when examining social segregation. A common trend worldwide is that vulnerable and disenfranchised populations are generally separated physically from advantaged groups, which in turn affects their access to basic public needs and their burden from environmental factors (Northridge & Freeman, 2011). Urban planning for vibrant, functional public spaces has evolved through codes and regulations, but only a few key provisions address the roots of health inequity. These include public sewerage, building codes, access to services such as healthy food and recreational resources, and transportation options (Northridge & Freeman, 2011). These public goods connect to individuals' basic health needs by promoting sanitation, safety, and overall population wellbeing, yet they are not implemented equally or to the same standards across a locale. Additionally, the relationship between urban planning and the environment introduces another opportunity for healthcare inequity. Factories that emit pollutants, hazardous waste facilities, and other environmental dangers are accounted for in urban planning and can be directly linked to the roots of certain healthcare inequalities. Modern urban economic models assume that such hazards are priced into housing, which in turn drives those with fewer economic means disproportionately into the very areas associated with health complications (Northridge & Freeman, 2011). One critical issue in urban planning is redlining, which refers to discriminatory practices that systematically put certain services, including healthcare, out of reach of particular demographic groups (Kenton, 2021).
Historically, cities around the US have been redlined along racial and ethnic lines, with minority communities relegated to underserved parts of the city. Today, residents of formerly redlined areas typically have higher rates of cancer, asthma, and poor mental health; they also have less access to healthcare, perpetuating these health issues (Nardone et al., 2020). One of the most important interactions between urban planning and healthcare, then, is its impact on social divisions, which create
inequality through unequal access to basic public needs and through exposure to environmental hazards.

Environmental Factors in Healthcare Disparities

Healthcare disparities can also be caused by environmental injustices. Perhaps the clearest example is access to clean water. Though the case of Flint, Michigan is perhaps the most famous example of a community without clean drinking water, the problem persists around the United States for low-income and minority families. Studies have shown that about one-third of Americans are inadequately hydrated, and the burden of this falls mainly on Black, Hispanic, and lower-income populations (Patel & Schmidt, 2017). Drinking unclean water can have severe impacts on health. Even short-term exposure to water with chemical contaminants can cause skin discoloration, nervous system damage, organ damage, and cancer. Additionally, water can contain disease-causing pathogens; drinking water with harmful fungi, bacteria, or viruses can cause vomiting, diarrhea, headache, fever, and in some cases even kidney failure (US EPA, 2017). Often, the only solution to limited access to clean water is to buy bottled water; for low-income individuals or those living in rural communities without stores nearby, this poses an impossible dilemma between avoiding their impure water and facing the effects of dehydration, or drinking it and facing other health issues. Another issue is access to clean air: across the country, the air in communities with large low-income populations typically carries relatively high levels of contaminants (Hajat et al., 2015). Breathing impure air even briefly can irritate the eyes, throat, and nose and aggravate respiratory illnesses like asthma. In the long run, it can cause permanent damage to the lungs and heart. Unlike the issue of water contamination, there is no alternative to breathing contaminated air.
One emerging area of research is the epigenetic impact of the environment. Epigenetics researchers often focus on three main events. The first is histone modification: by editing the N-terminal tails of histone proteins, the proteins that compact DNA, the body can alter chromatin structure and render certain genes transcriptionally inactive. The second is DNA methylation, in which CpG dinucleotides have methyl groups added,
which ultimately also results in gene silencing. The third is microRNA expression. MicroRNAs are small, single-stranded pieces of RNA that function in RNA silencing; they are one way the body regulates protein production after transcription. Researchers have realized in recent years that exposure to different environmental factors, including cigarette smoke, dust mites, diesel exhaust, heavy metals, and other pollutants, can cause epigenetic changes. As these epigenetic changes accumulate over time, significant genome-related health issues can arise. Additionally, epigenetic modifications can be passed from parent to child, meaning that these health issues can persist for generations (Ho et al., 2012).
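The DNA methylation event described above targets CpG dinucleotides, a pattern simple enough to locate computationally. The short sketch below finds them in a sequence; the example sequence is made up for demonstration and does not come from any study cited here:

```python
# Illustrative only: locate CpG dinucleotides -- the canonical sites where
# methyl groups are added during DNA methylation -- in a DNA sequence.

def cpg_sites(seq):
    """Return 0-based positions of every CpG dinucleotide
    (a C immediately followed by a G on the same strand)."""
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

promoter = "ATTCGGCGCTACGAT"   # hypothetical example sequence
print(cpg_sites(promoter))     # -> [3, 6, 11]
```

Real methylation analyses work on genome-scale data with dedicated tools, but the core pattern being searched for is exactly this dinucleotide.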
Conclusion

There have been significant inequities in America's medical systems since the advent of medicine in the United States. Unfortunately, issues from hundreds of years ago persist today. As it stands, the medical systems discriminate against people of color, women, genderqueer and transgender individuals, poor individuals, individuals living in redlined communities, and individuals living in rural communities. These issues are especially difficult to tackle because, though they are common throughout the country, each community's unique circumstances need to be addressed individually. Hopefully, as the COVID-19 pandemic brings these disparities into sharper focus, decisive policy decisions will promote community work across the country so that healthcare can become more equitable for all.

References

2021 Poverty Guidelines. (2021, January 26). ASPE. https://aspe.hhs.gov/2021-poverty-guidelines

Abraído-Lanza, A. F., Chao, M. T., & Gammon, M. D. (2004). Breast and Cervical Cancer Screening Among Latinas and Non-Latina Whites. American Journal of Public Health, 94(8), 1393–1398. https://doi.org/10.2105/AJPH.94.8.1393

Akinyemi, R. O., Owolabi, M. O., Oyeniyi, T., Ovbiagele, B., Arnett, D. K., Tiwari, H. K., Walker, R., Ogunniyi, A., Kalaria, R. N., & SIREN group of H3Africa Consortium. (2016). Neurogenomics in Africa: Perspectives, progress, possibilities and priorities. Journal of the Neurological Sciences, 366, 213–223. https://doi.org/10.1016/j.jns.2016.05.006

Armstrong, K., Rose, A., Peters, N., Long, J. A., McMurphy, S., & Shea, J. A. (2006). Distrust of the health care system and self-reported health in the United States. Journal of General Internal Medicine, 21(4), 292–297. https://doi.org/10.1111/j.1525-1497.2006.00396.x
Asian American & Pacific Islander Heritage | Health Equity | CDC. (2020, March 3). https://www.cdc.gov/healthequity/ features/asian-pacific/index.html Barsky, A. J., Peekna, H. M., & Borus, J. F. (2001). Somatic symptom reporting in women and men. Journal of General Internal Medicine, 16(4), 266–275. https://doi.org/10.1046/ j.1525-1497.2001.016004266.x Bean, M. (2020, February 13). Gender ratio of physicians across 50 states. Becker’s Hospital Review. https://www. beckershospitalreview.com/rankings-and-ratings/genderratio-of-physicians-across-50-states.html Bentley, R. A., Ormerod, P., & Ruck, D. J. (2018). Recent origin and evolution of obesity-income correlation across the United States. Palgrave Communications, 4(1), 1–14. https://doi. org/10.1057/s41599-018-0201-x Braden, K. W., & Nigg, C. R. (2016). Modifiable Determinants of Obesity in Native Hawaiian and Pacific Islander Youth. Hawai’i Journal of Medicine & Public Health, 75(6), 162–171. Brown, E. R., Ojeda, V. D., Wyn, R., & Levan, R. (2000). Racial and Ethnic Disparities in Access to Health Insurance and Health Care. https://escholarship.org/uc/item/4sf0p1st Bureau, U. C. (n.d.). Health Insurance Coverage in the United States: 2018. The United States Census Bureau. Retrieved March 1, 2021, from https://www.census.gov/library/ publications/2019/demo/p60-267.html Bureau, U. C. (2021, February 12). The Native Hawaiian and Other Pacific Islander Population: 2010. The United States Census Bureau. https://www.census.gov/library/ publications/2012/dec/c2010br-12.html Burhansstipanov, L. (2000). Urban Native American health issues. Cancer, 88(S5), 1207–1213. https://doi.org/10.1002/ (SICI)1097-0142(20000301)88:5+<1207::AID-CNCR5>3.0.CO;2-T Byrd, W. M., & Clayton, L. A. (2001). Race, medicine, and health care in the United States: A historical survey. Journal of the National Medical Association, 93(3 Suppl), 11S-34S. Cameron, K. A., Song, J., Manheim, L. M., & Dunlop, D. D. (2010). 
Gender Disparities in Health and Healthcare Use Among Older Adults. Journal of Women’s Health, 19(9), 1643–1650. https:// doi.org/10.1089/jwh.2009.1701 Case, A., Lubotsky, D., & Paxson, C. (2001). ECONOMIC STATUS AND HEALTH IN CHILDHOOD: THE ORIGINS OF THE GRADIENT. National Bureau of Economic Research. Hajat, A., Hsia, C., & O’Neill, M. S. (2015). Socioeconomic Disparities and Air Pollution Exposure: A Global Review. Current Environmental Health Reports, 2(4), 440–450. https://doi. org/10.1007/s40572-015-0069-5 Health Equity Considerations and Racial and Ethnic Minority Groups. (2020, February 11). Centers for Disease Control and Prevention. https://www.cdc.gov/coronavirus/2019-ncov/ community/health-equity/race-ethnicity.html Hepatitis B: Are you at Risk? Information for Native Hawaiians and Pacific Islanders. (2020, July). Centers for Disease Control. Ho, S.-M., Johnson, A., Tarapore, P., Janakiram, V., Zhang, X., & Leung, Y.-K. (2012). Environmental Epigenetics and Its Implication on Disease Risk and Health Outcomes. ILAR Journal, 53(3–4), 289–305. https://doi.org/10.1093/ilar.53.34.289
Native Hawaiians and Other Pacific Islanders | Health Disparities | NCHHSTP | CDC. (2020, September 14). Centers for Disease Control. Retrieved from https://www.cdc.gov/ nchhstp/healthdisparities/hawaiians.html Chakraborty, J., Collins, T. W., & Grineski, S. E. (2016). Environmental Justice Research: Contemporary Issues and Emerging Topics. International Journal of Environmental Research and Public Health, 13(11), Article 11. https://doi. org/10.3390/ijerph13111072 Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231–237. https:// doi.org/10.1136/bmjqs-2018-008370 Chang, S. H., & Kim, K. (2017). A review of factors limiting physical activity among young children from low-income families. Journal of Exercise Rehabilitation, 13(4), 375–377. https://doi.org/10.12965/jer.1735060.350 Chapman, E. N., Kaatz, A., & Carnes, M. (2013). Physicians and Implicit Bias: How Doctors May Unwittingly Perpetuate Health Care Disparities. Journal of General Internal Medicine, 28(11), 1504–1510. https://doi.org/10.1007/s11606-013-24411 Chokshi, D. A. (2018). Income, Poverty, and Health Inequality. JAMA, 319(13), 1312. https://doi.org/10.1001/jama.2018.2521 Cohn, E. G., Henderson, G. E., & Appelbaum, P. S. (2017). Distributive justice, diversity, and inclusion in precision medicine: What will success look like? Genetics in Medicine, 19(2), 157–159. https://doi.org/10.1038/gim.2016.92 Collins, B., Borders, T. F., Tebrink, K., & Ke, T. X. (2007). Utilization of prescription medications and ancillary pharmacy services among rural elders in west Texas: Distance barriers and implications for telepharmacy. Journal of Health and Human Services Administration, 30(1), 75–97. Scopus. Cooper, L. (2019, December 19). Confronting the Legacy of Slavery for Health Equity in Baltimore and Across the United States. 
https://urbanhealth.jhu.edu/blog/home/400yearslate rthelegacyofslavery Dai, D. (2010). Black residential segregation, disparities in spatial access to health care facilities, and late-stage breast cancer diagnosis in metropolitan Detroit. Health & Place, 16(5), 1038–1052. https://doi.org/10.1016/j. healthplace.2010.06.012 Darden, J., Rahbar, M., Jezierski, L., Li, M., & Velie, E. (2010). The Measurement of Neighborhood Socioeconomic Characteristics and Black and White Residential Segregation in Metropolitan Detroit: Implications for the Study of Social Disparities in Health. Annals of the Association of American Geographers, 100(1), 137–158. https://doi. org/10.1080/00045600903379042 Demographics | NCAI. (2021, February 14). https://www.ncai. org/about-tribes/demographics DeParle, N.-A. M. (2000). Celebrating 35 Years of Medicare and Medicaid. Health Care Financing Review, 22(1), 1–7. Digital Inequality and Low-Income Households | HUD USER. (2021, February 13). https://www.huduser.gov/portal/ periodicals/em/fall16/highlight2.html Dominguez, K., Penman-Aguilar, A., Chang, M.-H., Moonesinghe, R., Castellanos, T., Rodriguez-Lainz, A., Schieber,
R., & Centers for Disease Control and Prevention (CDC). (2015). Vital signs: Leading causes of death, prevalence of diseases and risk factors, and use of health services among Hispanics in the United States - 2009-2013. MMWR. Morbidity and Mortality Weekly Report, 64(17), 469–478. Douthit, N., Kiv, S., Dwolatzky, T., & Biswas, S. (2015). Exposing some important barriers to health care access in the rural USA. Public Health, 129(6), 611–620. https://doi.org/10.1016/j. puhe.2015.04.001 Gajarawala, S. N., & Pelkowski, J. N. (2021). Telehealth Benefits and Barriers. The Journal for Nurse Practitioners, 17(2), 218–221. https://doi.org/10.1016/j.nurpra.2020.09.013 Genetic Medicine Is Poised to Create New Inequality. Here’s How to Fix It. (2019, May 9). Undark Magazine. https://undark. org/2019/05/09/genetic-medicine-is-poised-to-create-newinequality-heres-how-to-fix-it/ GLIED, S., & LLERAS-MUNEY, A. (2008). Technological Innovation and Inequality in Health. Demography, 45(3), 741–761. Goyal, M. K., Kuppermann, N., Cleary, S. D., Teach, S. J., & Chamberlain, J. M. (2015). Racial Disparities in Pain Management of Children With Appendicitis in Emergency Departments. JAMA Pediatrics, 169(11), 996. https://doi. org/10.1001/jamapediatrics.2015.1915 Green, A. R., Carney, D. R., Pallin, D. J., Ngo, L. H., Raymond, K. L., Iezzoni, L. I., & Banaji, M. R. (2007). Implicit Bias among Physicians and its Prediction of Thrombolysis Decisions for Black and White Patients. Journal of General Internal Medicine, 22(9), 1231–1238. https://doi.org/10.1007/s11606007-0258-5 Gurupur, V., & Wan, T. T. H. (2020). Inherent Bias in Artificial Intelligence-Based Decision Support Systems for Healthcare. Medicina, 56(3), 141. https://doi.org/10.3390/ medicina56030141 Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y. M., Thomas, T. W., Payne, B. K., Eng, E., Day, S. H., & Coyne-Beasley, T. (2015). 
Implicit Racial/Ethnic Bias Among Health Care Professionals and Its Influence on Health Care Outcomes: A Systematic Review. American Journal of Public Health, 105(12), e60–e76. https://doi.org/10.2105/AJPH.2015.302903

Halperin, E. C. (2013). Lessons from a slave doctor of 1847.

Harris, K., Jacobs, G., & Reeder, J. (2019a). Health Systems and Adult Basic Education: A Critical Partnership in Supporting Digital Health Literacy. HLRP: Health Literacy Research and Practice, 3(3 Suppl), S33–S36. https://doi.org/10.3928/24748307-20190325-02

Harris, K., Jacobs, G., & Reeder, J. (2019b). Health Systems and Adult Basic Education: A Critical Partnership in Supporting Digital Health Literacy. HLRP: Health Literacy Research and Practice, 3(3 Suppl), S33–S36. https://doi.org/10.3928/24748307-20190325-02

Health, Income, & Poverty: Where We Are & What Could Help. (2018). Project HOPE. https://doi.org/10.1377/hpb20180817.901935

Health, Income, & Poverty: Where We Are & What Could Help | Health Affairs Brief. (n.d.). Retrieved February 28, 2021, from https://www.healthaffairs.org/do/10.1377/hpb20180817.901935/full/

Heckler, M. (1985). Report of the Secretary's Task Force on
Black & Minority Health (p. 241). US Department of Health and Human Services. https://www.minorityhealth.hhs.gov/ assets/pdf/checked/1/ANDERSON.pdf Hero, J. O., Zaslavsky, A. M., & Blendon, R. J. (2017). The United States Leads Other Nations In Differences By Income In Perceptions Of Health And Health Care. Health Affairs, 36(6), 1032–1040. https://doi.org/10.1377/hlthaff.2017.0006 Hilmers, A., Hilmers, D. C., & Dave, J. (2012). Neighborhood Disparities in Access to Healthy Foods and Their Effects on Environmental Justice. American Journal of Public Health, 102(9), 1644–1654. https://doi.org/10.2105/ AJPH.2012.300865 Hoffman, K. M., Trawalter, S., Axt, J. R., & Oliver, M. N. (2016). Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between blacks and whites. Proceedings of the National Academy of Sciences, 113(16), 4296–4301. https:// doi.org/10.1073/pnas.1516047113 Hunt, J. (2019, March 15). Health Insurance Coverage for Women by the Numbers. The Balance. https://www. thebalance.com/health-insurance-coverage-for-women-bythe-numbers-4427906 Hurt, A. (2018). What Country Spends The Most (And Least) On Health Care Per Person? NPR. https://www.nhpr.org/post/ what-country-spends-most-and-least-health-care-person Jones, D. S. (2006). The Persistence of American Indian Health Disparities. American Journal of Public Health, 96(12), 2122–2134. https://doi.org/10.2105/AJPH.2004.054262 Jones-Webb, R., & Karriker-Jaffe, K. J. (2013). Neighborhood Disadvantage, High Alcohol Content Beverage Consumption, Drinking Norms, and Drinking Consequences: A Mediation Analysis. Journal of Urban Health : Bulletin of the New York Academy of Medicine, 90(4), 667–684. https://doi. org/10.1007/s11524-013-9786-y Kaye, A. D., Okeagu, C. N., Pham, A. D., Silva, R. A., Hurley, J. J., Arron, B. L., Sarfraz, N., Lee, H. N., Ghali, G. E., Gamble, J. W., Liu, H., Urman, R. D., & Cornett, E. M. (2020). 
Economic impact of COVID-19 pandemic on healthcare facilities and systems: International perspectives. Best Practice & Research. Clinical Anaesthesiology. https://doi.org/10.1016/j.bpa.2020.11.009 Kenton, W. (n.d.). Redlining. Investopedia. Retrieved March 4, 2021, from https://www.investopedia.com/terms/r/redlining. asp Kichloo, A., Albosta, M., Dettloff, K., Wani, F., El-Amir, Z., Singh, J., Aljadah, M., Chakinala, R. C., Kanugula, A. K., Solanki, S., & Chugh, S. (2020). Telemedicine, the current COVID-19 pandemic and the future: A narrative review and perspectives moving forward in the USA. Family Medicine and Community Health, 8(3), e000530. https://doi.org/10.1136/fmch-2020000530 Kilmer, G., Bynum, L., & Balamurugan, A. (2010). Access to and Use of Eye Care Services in Rural Arkansas. The Journal of Rural Health, 26(1), 30–35. https://doi.org/10.1111/j.17480361.2009.00262.x Kim, G., Aguado Loi, C. X., Chiriboga, D. A., Jang, Y., Parmelee, P., & Allen, R. S. (2011). Limited English proficiency as a barrier to mental health service use: A study of Latino and Asian immigrants with psychiatric disorders. Journal of Psychiatric Research, 45(1), 104–110. https://doi.org/10.1016/j. jpsychires.2010.04.031
Kim, W., PhD, R. H. K., & ACSW. (2010). Barriers to Healthcare Among Asian Americans. Social Work in Public Health, 25(3–4), 286–295. https://doi.org/10.1080/19371910903240704 Lenhart, O. (2019). The effects of income on health: New evidence from the Earned Income Tax Credit. Review of Economics of the Household, 17(2), 377–410. https://doi. org/10.1007/s11150-018-9429-x Long, G. (2016). Doctoring Freedom: The Politics of African American Medical Care in Slavery and Emancipation (The John Hope Franklin Series in African American History and Culture). University of North Carolina Press. Lopez, L., Hart, L. H., & Katz, M. H. (2021). Racial and Ethnic Health Disparities Related to COVID-19. JAMA, 325(8), 719. https://doi.org/10.1001/jama.2020.26443 Milano, B. (2019, October 29). How slavery still shadows health care. The Harvard Gazette. https://news.harvard.edu/gazette/ story/2019/10/ramifications-of-slavery-persist-in-health-careinequality/ Miller, S., Johnson, N., & Wherry, L. R. (2019). Medicaid and Mortality: New Evidence from Linked Survey and Administrative Data (No. w26081). National Bureau of Economic Research. https://doi.org/10.3386/w26081 Morisako, A. K., Tauali‘i, M., Ambrose, A. J. H., & Withy, K. (2017). Beyond the Ability to Pay: The Health Status of Native Hawaiians and Other Pacific Islanders in Relationship to Health Insurance. Hawai’i Journal of Medicine & Public Health, 76(3 Suppl 1), 36–41. Muennig, P., Franks, P., Jia, H., Lubetkin, E., & Gold, M. R. (2005). The income-associated burden of disease in the United States. Social Science & Medicine, 61(9), 2018–2026. https://doi. org/10.1016/j.socscimed.2005.04.005 Nardone, A., Chiang, J., & Corburn, J. (2020). Historic Redlining and Urban Health Today in U.S. Cities. Environmental Justice, 13(4), 109–119. https://doi.org/10.1089/env.2020.0011 Native Americans: A Crisis in Health Equity. (2021, February 14). 
https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/the-state-of-healthcare-in-the-united-states/native-american-crisis-in-health-equity/

Newkirk, V. (2014, May 29). The Affordable Care Act and Insurance Coverage in Rural Areas. KFF. https://www.kff.org/uninsured/issue-brief/the-affordable-care-act-and-insurance-coverage-in-rural-areas/

Nguyen, T. T., Vable, A. M., Glymour, M. M., & Nuru-Jeter, A. (2018). Trends for Reported Discrimination in Health Care in a National Sample of Older Adults with Chronic Conditions. Journal of General Internal Medicine, 33(3), 291–297. https://doi.org/10.1007/s11606-017-4209-5

Obesity and American Indians/Alaska Natives. The Office of Minority Health. (2021, February 14). https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=4&lvlid=40

Office of Minority Health. (2020, January 31). Native Hawaiian/Other Pacific Islander. The Office of Minority Health. https://minorityhealth.hhs.gov/omh/browse.aspx?lvl=3&lvlid=65

Okunrintemi, V., Khera, R., Spatz, E. S., Salami, J. A., Valero-Elizondo, J., Warraich, H. J., Virani, S. S., Blankstein, R., Blaha, M. J., Pawlik, T. M., Dharmarajan, K., Krumholz, H. M., & Nasir, K. (2019). Association of Income Disparities with Patient-Reported Healthcare Experience. Journal of General Internal Medicine, 34(6), 884–892. https://doi.org/10.1007/s11606-019-04848-4
Pampel, F. C., Krueger, P. M., & Denney, J. T. (2010). Socioeconomic Disparities in Health Behaviors. Annual Review of Sociology, 36, 349–370. https://doi.org/10.1146/ annurev.soc.012809.102529 Parikh, R. B., Teeple, S., & Navathe, A. S. (2019). Addressing Bias in Artificial Intelligence in Health Care. JAMA, 322(24), 2377. https://doi.org/10.1001/jama.2019.18058 Patel, A. I., & Schmidt, L. A. (2017). Water Access in the United States: Health Disparities Abound and Solutions Are Urgently Needed. American Journal of Public Health, 107(9), 1354–1356. https://doi.org/10.2105/AJPH.2017.303972 Penner, L. A., Dovidio, J. F., Edmondson, D., Dailey, R. K., Markova, T., Albrecht, T. L., & Gaertner, S. L. (2009). The Experience of Discrimination and Black-White Health Disparities in Medical Care. Journal of Black Psychology, 35(2), 180–203. https://doi.org/10.1177/0095798409333585 Pleis, J. R., & Lethbridge-Cejku, M. (2012). Summary Health Statistics for U.S. Adults: National Health Interview Survey, 2006: (403882008-001) [Data set]. Centers for Disease Control and Prevention. https://doi.org/10.1037/e403882008-001 Popejoy, A. B., & Fullerton, S. M. (2016). Genomics is failing on diversity. Nature News, 538(7624), 161. https://doi. org/10.1038/538161a Prather, C., Fuller, T. R., Jeffries, W. L., Marshall, K. J., Howell, A. V., Belyue-Umole, A., & King, W. (2018). Racism, African American Women, and Their Sexual and Reproductive Health: A Review of Historical and Contemporary Evidence and Implications for Health Equity. Health Equity, 2(1), 249–259. https://doi. org/10.1089/heq.2017.0045 Recognizing, Addressing Unintended Gender Bias in Patient Care. (n.d.). Duke Health Referring Physicians. Retrieved March 4, 2021, from https://physicians.dukehealth.org/ articles/recognizing-addressing-unintended-gender-biaspatient-care Reiss, F., Meyrose, A.-K., Otto, C., Lampert, T., Klasen, F., & Ravens-Sieberer, U. (2019). 
Socioeconomic status, stressful life situations and mental health problems in children and adolescents: Results of the German BELLA cohort-study. PLOS ONE, 14(3), e0213700. https://doi.org/10.1371/journal. pone.0213700 Repairing racial inequality in genetic research. (2019, May 10). Spectrum | Autism Research News. https://www. spectrumnews.org/opinion/repairing-racial-inequalitygenetic-research/ Rio, G. M. N., & Bogel-Burroughs, N. (2020, October 23). ‘At Capacity’: Covid-19 Patients Push U.S. Hospitals to Brink. The New York Times. https://www.nytimes.com/2020/10/23/us/ covid-hospitalizations.html Roeder, A. (n.d.). Understanding slavery’s legacy in health and medicine. Harvard T.H. Chan School of Public Health. https:// www.hsph.harvard.edu/news/features/understandingslavery-legacy-in-health-medicine/ Rosenthal, E. (2013, December 21). Health Care’s Road to Ruin. The New York Times. https://www.nytimes.com/2013/12/22/ sunday-review/health-cares-road-to-ruin.html Samulowitz, A., Gremyr, I., Eriksson, E., & Hensing, G. (2018). “Brave Men” and “Emotional Women”: A Theory-Guided Literature Review on Gender Bias in Health Care and
Gendered Norms towards Patients with Chronic Pain. Pain Research and Management, 2018, 1–14. https://doi.org/10.1155/2018/6358624

Sarche, M., & Spicer, P. (2008). Poverty and Health Disparities for American Indian and Alaska Native Children: Current Knowledge and Future Prospects. Annals of the New York Academy of Sciences, 1136, 126–136. https://doi.org/10.1196/annals.1425.017

Savitt, T. (2002). Medicine and Slavery: The Diseases and Healthcare of Blacks in Antebellum Virginia. University of Illinois Press.

Sullivan, L. (2015, April 27). The Heckler Report: Reflecting on its beginnings and 30 years of progress. The Sullivan Alliance: Diversity, Equity, and Access. http://www.thesullivanalliance.org/cue/blog/heckler-report.html

Tallon, J. R. (1990). Medicaid: Challenges and opportunities. Health Care Financing Review, 1990(Suppl), 5–9.

TCRS - 19th Annual Transamerica Retirement Survey. (n.d.). Retrieved March 4, 2021, from https://www.transamericacenter.org/retirement-research/19th-annual-retirement-survey

Terada, K., Carney, M., Kim, R., Ahn, H. J., & Miyamura, J. (2016). Health Disparities in Native Hawaiians and Other Pacific Islanders Following Hysterectomy for Endometrial Cancer. Hawai’i Journal of Medicine & Public Health, 75(5), 137–139.

Thomas, S. B., & Casper, E. (2019). The Burdens of Race and History on Black People's Health 400 Years After Jamestown. American Journal of Public Health, 109(10), 1346–1347. https://doi.org/10.2105/AJPH.2019.305290

Tolbert, J. (2020, November 6). Key Facts about the Uninsured Population. KFF. https://www.kff.org/uninsured/issue-brief/key-facts-about-the-uninsured-population/

US EPA, O. (2017, November 2). Drinking Water [Reports and Assessments]. US EPA. https://www.epa.gov/report-environment/drinking-water

Amadeo, K. (n.d.). How Health Care Inequality Increases Costs for Everyone. The Balance. Retrieved February 28, 2021, from https://www.thebalance.com/health-care-inequality-facts-types-effect-solution-4174842

Vladeck, B. (2003). Universal Health Insurance in the United States: Reflections on the Past, the Present, and the Future. American Journal of Public Health, 93(1), 16–19.

Vox. (2017, December 7). The US medical system is still haunted by slavery. https://www.youtube.com/watch?v=IfYRzxeMdGs&feature=emb_title&ab_channel=Vox

Way, W. L., & Mayer, F. S. (2008). Failures of Medicare Part D Delivery and Recommendations for Improvement. Pharmacy and Therapeutics, 33(3), 145–181.

Weisgrau, S. (1995). Issues in Rural Health: Access, Hospitals, and Reform. Health Care Financing Review, 17(1), 1–14.

Weissman, J., Russell, D., Jay, M., & Malaspina, D. (2018). Racial, Ethnic, and Gender Disparities in Health Care Access and Use Among U.S. Adults With Serious Psychological Distress. Psychiatric Services, 69(5), 517–522. https://doi.org/10.1176/appi.ps.201700221

What is precision medicine?: MedlinePlus Genetics. (n.d.). Retrieved February 7, 2021, from https://medlineplus.gov/genetics/understanding/precisionmedicine/definition/

Why is health care so expensive in the United States? (2016). Journal of Applied Clinical Medical Physics, 17(3), 1–4. https://doi.org/10.1120/jacmp.v17i3.6426

Williams, D. R., & Collins, C. (2001). Racial residential segregation: A fundamental cause of racial disparities in health. Public Health Reports, 116(5), 404–416. https://doi.org/10.1016/S0033-3549(04)50068-7

Woolf, S. H. (n.d.). How Are Income and Wealth Linked to Health and Longevity? 22.

Yasuda, S. U., Zhang, L., & Huang, S.-M. (2008). The role of ethnicity in variability in response to drugs: Focus on clinical pharmacology studies. Clinical Pharmacology and Therapeutics, 84(3), 417–423. https://doi.org/10.1038/clpt.2008.141
WINTER 2021
The History of DNA Sequencing Techniques
STAFF WRITERS: LAUREN FERRIDGE, LIAM LOCKE, ARSHDEEP DHANOA, ANDREW SASSER, VAANI GUPTA, VARUN LINGADAL
BOARD WRITERS: ANAHITA KODALI & DEV KAPADIA

Cover: DNA is at the core of human genetics. As DNA sequencing technologies have progressed over the years, researchers have developed a better understanding of DNA and of the human genome. Source: Pixabay
Introduction

Initially not recognized as the carrier of genetic material due to its relatively simple structure, deoxyribonucleic acid (DNA) has become one of the most studied macromolecules in scientific history. DNA is composed of four nitrogenous bases: adenine, guanine, cytosine, and thymine, with the former two being double-ringed purines and the latter two single-ringed pyrimidines. These bases pair up (adenine with thymine and guanine with cytosine) and join at the center of the DNA double helix, or twisted ladder. They are the rungs, while the sides of the ladder are composed of repeated units of deoxyribose sugar and phosphate (Alberts et al., 2002). Adding to the structure’s complexity is the orientation of the two helical chains. Deoxyribose is a five-carbon sugar with the phosphate group attached to its 5’ terminus and a hydroxyl group bonded to its 3’ terminus.
Given the structure of the ladder, one end of each strand must possess a phosphate group while the other possesses a deoxyribose sugar. The strands are antiparallel, meaning that they are oriented in opposite directions (Alberts et al., 2002). Though the sugar-phosphate backbone is covalently bonded, the nucleotide pairs that compose the rungs of the ladder are joined through weaker hydrogen bonds, allowing the DNA molecule to be quickly unzipped (its two strands pulled apart) and its encoded nucleotide sequences to be efficiently and quickly reproduced (“The Discovery of the Double Helix, 1951-1953,” n.d.). Once researchers understood the contours of the DNA structure, they soon looked into ways to sequence the string of nucleotides held within the molecule. Combining techniques in analytical chemistry with ribonuclease treatments allowed researchers to construct the first nucleotide sequences (Heather & Chain, 2015). The biggest breakthrough in DNA sequencing occurred in 1977 with the development of Frederick Sanger’s chain-termination technique. Though genomic sequencing has advanced beyond the Sanger technique, it formed the basis of the discipline for many years.

Figure 1: The above image depicts the structure of DNA, with one strand running from the 5’ end to the 3’ end while the other runs in the opposite direction. These strands are twisted into a double helix structure and consist of four different types of nucleotides: thymine, adenine, guanine, and cytosine. Source: Wikimedia Commons
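The complementary, antiparallel pairing described above can be made concrete with a short illustrative sketch (in Python; not part of the original studies): the partner of each base follows the Watson-Crick pairing rules, and the partner strand is written in reverse because the two strands run in opposite directions.

```python
# Watson-Crick pairing: A<->T, G<->C
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(strand: str) -> str:
    """Return the antiparallel partner strand, read 5'->3'.

    Each base is swapped for its pair, and the result is reversed
    because the two strands of the helix run in opposite directions.
    """
    return "".join(PAIR[base] for base in reversed(strand))

top = "ATGCCG"                    # 5'->3'
bottom = reverse_complement(top)  # also written 5'->3'
print(top, bottom)                # ATGCCG CGGCAT
```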
A Brief History of DNA Sequencing

Sanger’s Sequencing of Insulin

Frederick Sanger’s sequencing experiments with insulin in 1955 were highly influential for the later development of DNA sequencing techniques. He developed two critical methods that would dictate the future of DNA and genome sequencing experiments: the fluorodinitrobenzene (FDNB) method of N-terminal labeling and two-dimensional fractionation (Heather & Chain, 2016a). With FDNB N-terminal labeling, Sanger reacted insulin with FDNB, which reacts with the epsilon amino group of lysine, an amino acid. Because the amino group of the protein is linked to the DNB, it is stable to hydrolysis. After hydrolysis with acid to split the polypeptide bonds, the amino acids are separated, and the different amino acids and Lys residues become visible, helping the researcher label the N-terminus (“Amino Acid Analysis,” n.d.). Sanger then conducted further analysis to prove that there were two DNP-labeled fractions, which were called ‘A’ and ‘B.’ Additional observation proved that fractions A and B had unique sequences, suggesting two peptide chains in insulin (Stretton, 2002). His previous knowledge that peptide bonds at serine or threonine residues are the first cleaved in acid hydrolysis helped him deduce five fragments of the B-chain by eliminating the possibility of having those residues at the C-terminus.
Generally, when studying proteins, researchers are interested in one particular protein. To remove the unwanted proteins, researchers use fractionation to separate components; this can be done through precipitation, chromatographic, or electrophoretic procedures. Sanger utilized fractionation techniques after determining the N-terminus sequences for fractions A and B to isolate and identify peptides in the rest of each fraction. Partial acid hydrolysis was used on both fractions A and B; fraction B was tested using charcoal fractionation, while fraction A was examined with silica gel ionophoresis (a technique to study the movement of ions along a gel). Discovery of specific peptides also relied on the use of the specific proteases trypsin, chymotrypsin, and pepsin to produce larger fragments for analysis. The last part of Sanger’s experiments used paper ionophoresis (a technique in which researchers study the movement of ions along paper) at differing pH levels in combination with paper chromatography to identify the disulfide bonds within insulin. He determined there were two interchain bonds and one intrachain bond in the A-chain. Test results proved that the A chain had about 20 amino acids and the B chain had about 30 amino acids. Furthermore, Sanger was successful in using paper chromatography and end-group analysis to identify 23 dipeptides, 15 tripeptides, 9 tetrapeptides, 2 pentapeptides, and 1 hexapeptide (Stretton, 2002).
For molecular biologists, Sanger’s work had significant implications. He proved that proteins had specific molecular patterns and makeups; they were not just made of random materials. This inspired many x-ray crystallographers investigating the structure of DNA, including Francis Crick, who in 1953 had published a model for DNA alongside James Watson (a model based on the x-ray diffraction images created by Rosalind Franklin). Crick was interested in understanding how DNA, the genetic material of the body, could dictate the production of proteins within a cell. After attending a series of Sanger’s lectures in 1954, he began to develop the argument that the nucleotide sequence of DNA could determine the sequence of amino acids in proteins. This, in turn, would determine how the protein folded into its final three-dimensional shape. Crick wanted to show that genetic defects could affect the amino acid sequence of proteins. To do this, he put together a team of researchers to study hemoglobin in control patients and patients with sickle cell anemia; the team utilized Sanger’s method of sequencing, and Sanger himself aided their work. They found that the patients with sickle cell anemia had an altered form of hemoglobin, which proved his initial hypothesis. This work was also significant because it was the first time that researchers were able to understand how cells translate genetic information stored in DNA into proteins. Over the next few years, Sanger and Crick would work together on understanding nucleic acids and protein structure. The techniques that Sanger utilized to sequence insulin and his help with the hemoglobin-sequencing project would prove to be vital in understanding the structure of DNA (“The path to sequencing nucleic acids,” n.d.).

Figure 2: Frederick Sanger is commonly referred to as the pioneer of DNA sequencing. He developed several methods for sequencing DNA through his experiments with insulin that started in 1955. Now known as “Sanger sequencing,” his original method laid the bedrock for decades of DNA sequencing innovation and earned him a Nobel Prize in Chemistry in 1958. Source: Wikimedia Commons
Ray Wu’s Method for DNA Sequence Determination

Ray Wu was the first researcher to determine a DNA sequence. To understand Wu’s approach, it is first important to understand the process of DNA replication. During replication, the double-stranded DNA is split into single strands, each of which is replicated at what is known as a replication fork (the Y-shaped region where active synthesis occurs). One of the new strands – the one whose template is read in the 3’ to 5’ direction – is known as the “leading strand.” The enzyme that creates the replicated DNA strand, DNA polymerase, can make its nucleotide sequence in a continuous, uninterrupted fashion. However, the other strand, whose template runs 5’ to 3’, cannot be made uninterrupted; its replication occurs in chunks known as Okazaki fragments. This strand is known as the “lagging strand.” In the lagging strand, the Okazaki fragments are connected by DNA ligase to form one unbroken strand of DNA. In most cases, this is a simple process. However, at the end of the strand, there is a small piece of DNA that cannot be replicated into an Okazaki fragment due to the positioning of the replication fork; therefore, this piece of
the DNA cannot be added to the unbroken DNA strand and remains uncopied. This leaves a single-stranded overhang on the daughter DNA strand. Wu used this principle to sequence DNA. Using the overhanging ends on DNA strands in Enterobacteria lambda phages (viruses that infect enterobacteria), Wu took DNA polymerase and filled the ends with radioactive nucleotides. He then measured their subsequent incorporation into the sequence and used this information to determine the precise DNA sequence. Wu’s work became a commonly accepted method in DNA sequencing studies and was developed to infer the order of nucleotides anywhere in a genome sequence, not just at the ends (Heather & Chain, 2016a). In the late 1960s and early 1970s, Wu conducted two major studies that would forever shape the future of DNA sequencing. The first was described in his paper “Structure and Base Sequence in the Cohesive Ends of Bacteriophage Lambda DNA.” In this particular set of experiments, Wu tested whether cohered ends of DNA were held together by hydrogen bonds formed between a small number of base pairs. He started with lambda DNA, a linear double-stranded DNA phage with complementary 5’ ends that are 12 base pairs long. He incorporated different base pair residues into the ends of the DNA strands using DNA polymerase so that he could determine the length and sequence of the nucleotides; this allowed him to propose a no-gap DNA nucleotide structure for the end of DNA. Using the equivalencies of Gs to Cs and As to Ts, he was able to prove that the strands have
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Figure 3: Sanger sequencing is a relatively simple way to sequence DNA. First, a location-specific primer is annealed to a DNA sequence. Several reagents are added to the mixture containing the primers and DNA, including DNA polymerase, dNTPs, and ddNTPs with fluorochromes attached. Then, primer elongation begins. During this elongation, a ddNTP will be randomly inserted, which results in chain-termination. As this will occur multiple times, all possible lengths of chains will be produced. The products are separated using gel electrophoresis and the bands are analyzed by imaging systems that utilize the fluorochromes attached to the ddNTPs. In this way, researchers are able to determine the DNA sequence. Source: Wikimedia Commons
complementary base sequences and that the cohesive ends were the same lengths (Wu & Kaiser, 1968). The second study was described in Wu’s paper “Nucleotide Sequence Analysis of DNA.” In these experiments, Wu was trying to completely sequence the nucleotides at the two ends of lambda DNA. He incorporated radioactive nucleotides into the ends of the DNA using DNA polymerase. After using the radioactivity to gather sequence information, Wu was able to fully sequence the DNA. He determined that the sequence was precisely 12 base pairs long and contained ten G-C pairs and two A-T pairs. Additionally, he determined that two left (or two right) ends cannot cohere to each other but that a left and a right end would cohere as expected, given that the sequence bases are complementary (Padmanabhan et al., 1972). Wu’s work was revolutionary in the field of DNA sequencing. He was able to prove with these studies that DNA polymerase catalysis, nucleotide labeling, and synthetic primers could be used to determine the DNA sequence of not just lambda phages but of any DNA sequence. Sanger would go on to use this primer-extension strategy to develop faster DNA sequencing techniques.

Sanger Sequencing & Maxam-Gilbert Sequencing

Although the structure of DNA was first published in 1953, nearly twenty-five years elapsed before the first reliable DNA sequencing technologies were developed (Heather & Chain, 2016). The so-called “first generation” of DNA sequencing technology was typified by two experimental procedures developed in the late 1970s: Sanger sequencing and Maxam-Gilbert sequencing. In 1977, Frederick Sanger used his old findings in insulin and work conducted by Wu to complete the first full genome sequence. He used a technique called chain termination, also called the dideoxy technique or Sanger sequencing. This method relies on a chemical analogue of deoxyribonucleotides, di-deoxyribonucleotides (ddNTPs), in which the 3’ hydroxyl group is absent; as a result, these are unable to bond with the 5’ phosphate of the next dNTP, and extension of the chain is terminated. Sanger added together a mixture of regular nucleotides (dNTPs) and modified nucleotides (ddNTPs) and found that new DNA strands of varying lengths were formed, each of which was capped with a radiolabeled ddNTP (Sanger et al., 1977). He then determined the DNA sequence by noting the identity of the ddNTP caps from the shortest strand to the longest strand (using a technique called polyacrylamide gel electrophoresis). This sequence of ddNTP caps spells out the true DNA sequence in order.
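The chain-termination readout can be illustrated with a toy simulation (a sketch of the logic, not the laboratory protocol): generate the complementary strand truncated at every possible length, sort the fragments by size as gel electrophoresis does, and read off the terminating ddNTP of each fragment.

```python
def sanger_read(template: str) -> str:
    """Toy model of chain-termination (Sanger) sequencing.

    Each synthesis reaction randomly terminates when a ddNTP is
    incorporated, so the mixture ends up containing the complementary
    strand truncated at every possible length. Sorting the fragments
    by length (the "gel" step) and reading the terminal ddNTP of each
    fragment spells out the synthesized sequence in order.
    """
    pair = {"A": "T", "T": "A", "G": "C", "C": "G"}
    synthesized = "".join(pair[b] for b in template)   # full complement
    fragments = [synthesized[:i] for i in range(1, len(synthesized) + 1)]
    fragments.sort(key=len)                            # gel electrophoresis
    return "".join(frag[-1] for frag in fragments)     # read the ddNTP caps

print(sanger_read("ATGC"))  # TACG - the complement of the template
```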
Around the same time as Sanger and colleagues were developing their method for DNA sequencing, Allan Maxam and Walter Gilbert discovered that treatment of DNA with specific reagents could selectively cleave particular nucleotides from the sugar phosphate backbone, and that this backbone could then be broken at the site of cleavage via alkali- or amine-mediated β-elimination. The brilliance
of this technique was the development of chemical reactions that could selectively cleave DNA at each base. Treatment with hydrazine cleaves the bonds of cytosine and thymine, but thymine cleavage can be suppressed with sodium chloride. Similarly, dimethyl sulfate methylates guanine and adenine, and adenine can be selectively cleaved with dilute acid. Furthermore, these treatments cleave only 1 in every 50-100 nucleotides, so the result is a library of DNA fragments that dictate the identity of each nucleotide along the length of a DNA fragment (Maxam & Gilbert, 1977). Performing these reactions on a sample of radiolabeled DNA followed by separation with polyacrylamide gel electrophoresis gives the readout of a DNA sequence.

Figure 4: The above image depicts 454 Life Sciences sequencing machines that use pyrosequencing technology. In the pyrosequencing method, the DNA strand is separated into its individual strands and a complementary DNA strand is formed from one of the parts. These machines can achieve impressive accuracies in just one run (near 99%) along with high throughput, making pyrosequencing one of the leading DNA sequencing methods currently in use. Source: Flickr
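The Maxam-Gilbert readout logic can also be sketched as a toy model (an illustration, not the chemistry itself): each base-specific reaction marks the positions of one base in an end-labeled fragment, and merging the four "lanes" of fragment lengths recovers the sequence.

```python
def maxam_gilbert_read(dna: str) -> str:
    """Toy model of the Maxam-Gilbert readout.

    Each base-specific cleavage reaction produces end-labeled fragments
    whose lengths mark the positions of one kind of base. Merging the
    four position lists - as reading the four gel lanes side by side
    does - recovers the full sequence.
    """
    # base -> positions of labeled fragments from that cleavage reaction
    lanes = {base: [i for i, b in enumerate(dna) if b == base]
             for base in "ACGT"}
    # walk up the gel: the shortest fragment reveals the first base, etc.
    out = [None] * len(dna)
    for base, positions in lanes.items():
        for pos in positions:
            out[pos] = base
    return "".join(out)

print(maxam_gilbert_read("GATTACA"))  # GATTACA
```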
Sanger and Gilbert shared the 1980 Nobel Prize in Chemistry for their contributions to DNA sequencing technology (Kchouk et al., 2017). Maxam-Gilbert sequencing became the preferred technique for a short period after these methods were developed, as it could be performed on a sample of DNA without the need for primers or replication. However, as techniques in DNA synthesis became more robust, Sanger sequencing quickly became the dominant technique and Maxam-Gilbert sequencing became largely obsolete (Heather & Chain, 2016). Although Sanger and Maxam-Gilbert sequencing have now been replaced by more sophisticated techniques, these procedures were crucial first steps towards the fast and efficient reading of the genetic code.
Modern Day DNA Sequencing

The collection of DNA sequencing techniques can be broken down into three generations. The Sanger sequencing technique represented the first generation of DNA sequencing. Second generation techniques, or next-generation sequencing techniques, were developed through the 1990s and early 2000s; unlike first generation sequencing, second generation sequencing systems allowed researchers to sequence the entire human genome in just one experiment. Third generation sequencing involves “long-read” sequencing methods – these technologies are still under development and can produce significantly longer reads and therefore more detailed information about the genome than second generation techniques can. In the following sections, several second generation and third generation sequencing techniques are discussed.

Pyrosequencing
Pyrosequencing is a second generation sequencing technique. In this method, first put forth in 1993 by Bertil Pettersson, Mathias Uhlen, and Pal Nyren, a DNA polymerase enzyme is used to synthesize the complementary strand to an existing DNA strand (Greenwood, 2018). In the solid-state version of the pyrosequencing method, the pyrophosphate released by the action of DNA polymerase is first reacted with adenosine 5’ phosphosulfate to form adenosine triphosphate (ATP); this reaction is catalyzed by the enzyme ATP sulfurylase (Greenwood, 2018). Next, the ATP reacts with a luciferin molecule and oxygen, which regenerates the pyrophosphate while producing oxyluciferin and emitting light (Pettersson et al., 1993). To test their methods, the group attempted to sequence short DNA sequences by binding them to magnetic beads before treating them with sodium hydroxide to leave single-stranded DNA molecules. The DNA sequences were then treated with the three enzymes (DNA polymerase, ATP sulfurylase, and luciferase) as well as one of the four nucleotides to determine how many complements of that nucleotide are present on the original strand (Pettersson et al., 1993). Ultimately, the group suggested that this method might be well suited for “minisequencing” of small DNA strands, largely because the luminescence method is only sensitive to pyrophosphate produced at concentrations of less than 200 picomolar. However, they added that because this method does not require gel electrophoresis, it could be useful in scaling up the sequencing of small, relevant strands of DNA (Pettersson et al., 1993). Another, solution-based approach to pyrosequencing was developed in 1998 thanks to the work of Ronaghi et al. In this method, the enzyme apyrase, which destroys leftover nucleotides not incorporated by DNA polymerase, was added to the solution (Ronaghi et al., 1998). One important benefit of this method over the previous solid-state
method is that it allows the mixture of enzymes to be maintained for the whole procedure. In solid-state pyrosequencing, the leftover free nucleotides would have to be separated from the enzymes after each treatment (Ronaghi et al., 1998). Ultimately, DNA pyrosequencing was successfully scaled up to meet industrial and academic needs thanks to the work of Rothberg et al. Rothberg’s group produced a microarray slide with picolitre-sized reaction vessels made from optic fibers (Rothberg et al., 2005). DNA binds to the magnetic beads and is treated with the required enzymes and nucleotides in a specified order. To demonstrate the power of this method, the group used random DNA fragments created from the genome of Mycoplasma genitalium; they were able to cover some 96% of the genome with 99.96% accuracy after just one run (Rothberg et al., 2005). Later developed into a machine by 454 Life Sciences, this approach is able to read and identify some 25 million base pairs in four hours, with up to 250 base pairs being read at any given time (Hert et al., 2008).

Sequencing by Ligation (SOLiD sequencing)

Another critical second generation sequencing technique is sequencing by ligation, or SOLiD sequencing. Unlike the single-molecule and pyrosequencing approaches (which rely upon the synthesis of an artificial strand of DNA), SOLiD sequencing uses previously known DNA segments to determine the sequence of unknown fragments. In this method, an anchor primer strand is hybridized, or bonded, to the part of a DNA strand whose sequence is known. From this, a number of short oligonucleotides labelled with fluorescent dyes are added to the solution to determine part of the unknown DNA sequence adjacent to the known strand (Ho et al., 2011). These oligonucleotides, known as query primers, are made up of random nucleotides – except at one position, the nucleotide being sequenced.
Using DNA ligase, the end of the query strand is connected to the end of the anchor strand, which allows for the identity of the next nucleotide in the adjacent unknown strand to be determined based on competition between the binding of the four nucleotides (Gupta & Gupta, 2014). This process is then repeated by stripping the original query and anchor primers, before introducing a mixture of new query primers with a different position to sequence (Ho et. al, 2011).
SOLiD sequencing was first performed on a large scale by George Church in 2005 to re-sequence the entire genome of E. coli (Voelkerding et al., 2009). More recent technology developed by the company Applied Biosystems has allowed for the successful scaling up of sequencing by ligation. In the Applied Biosystems approach, known DNA fragments are first amplified by PCR before being bound to magnetic beads (Gupta & Gupta, 2014). The query primers, which are composed of 8 base pairs and have combinations of two specified nucleotides, are then added to the beads and compete for ligation. To determine which primer was ligated, the fluorescence from the sample is matched to the fluorescence of one of the various query primers (Gupta & Gupta, 2014). The instrument developed by Applied Biosystems is capable of reading lengths of up to 35 nucleotides in 6 days, reading up to 4 Gbp (giga base pairs) in a single run, with generally higher accuracy than other sequencing methods because each nucleotide is sequenced twice (Gupta & Gupta, 2014). Indeed, the company has reported an accuracy of 99.9% for shorter sequence lengths (Voelkerding et al., 2009). Despite all the advantages, there is one significant limitation to the sequencing by ligation method in addition to the shorter read length and longer time taken to read a sequence: the reading of palindromic sequences (those that can be read the same in both directions, like the word “racecar”). According to Huang et al., the sequencing by ligation method was unable to accurately predict a 24-nucleotide palindromic sequence derived from E. coli. Thus, it was inferred that the palindromic sequence impacted the readability of base pairs that were up to 2 nucleotides away from the palindromic region. The group attributed the failures of this method to the formation of a “hairpin structure” between the ends of a palindromic region.
These structures form when the palindromic sequences interact with each other and form bonds after the strands have already been cleaved, due to their complementary nature. The hairpin makes it impossible for the query primers used in SOLiD sequencing to hybridize to the palindrome, given that these methods rely on reading only single strands. In contrast, sequencing-by-synthesis methods like pyrosequencing do not suffer from the requirement of only being able to read singular strands and are able to read palindromic fragments accurately (Huang et al., 2012).
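The reverse-complement symmetry that produces hairpins is easy to check computationally. The sketch below (an illustration, not part of the cited study) tests whether a sequence equals its own reverse complement, i.e. reads the same 5’-to-3’ on both strands:

```python
PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def is_palindromic(seq: str) -> bool:
    """True if the sequence equals its own reverse complement,
    i.e. reads the same 5'->3' on both strands (like 'racecar').
    Such regions can fold back on themselves into hairpins."""
    return seq == "".join(PAIR[b] for b in reversed(seq))

print(is_palindromic("GAATTC"))   # True  - the EcoRI recognition site
print(is_palindromic("GATTACA"))  # False
```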
Figure 5: The above image depicts an example of a palindromic DNA sequence, a double-stranded DNA molecule where reading the sequence from one direction is identical to reading the sequence on the complementary strand in the opposite direction. These types of sequences are difficult for SOLiD sequencing techniques to read due to the hairpin structure that is formed. Source: Wikimedia Commons
Single-molecule real-time sequencing (SMRT)

Single molecule real-time (SMRT) sequencing, developed by the company Pacific Biosciences, is a newer, third-generation sequencing technology capable of sequencing long read lengths greater than 20 kilobase pairs (20,000 base pairs). This technology eliminates some of the inherent limitations of short-read technologies (Ardui et al., 2018). Two of the biggest limitations that SMRT sequencing helps eliminate are GC bias and difficulties in mapping to repetitive elements. GC bias refers to the dependence of sequencing techniques on GC bases: in sequencing techniques, the count of fragments mapped to a region typically varies with the proportion of GC bases in that region, resulting in variability that can confound the final sequencing results (Benjamini & Speed, 2012). Another issue is the sequencing of repetitive DNA sequences, which are abundant in the DNA of several species. This repetition presents computational problems for second generation sequencing techniques, as it makes it difficult to properly align and assemble DNA. The repeats cannot simply be ignored because they may be of biological importance (Treangen & Salzberg, 2011).

The first two major steps are preparing the sample and preparing the library. The technology starts with high molecular weight DNA and preparation of a library of DNA fragments that are optimized for the sequencing technology. The actual sequencing is performed by the Sequel II System. This device has millions of tiny wells called zero-mode waveguides (ZMWs) in which single molecules of DNA are immobilized. As DNA polymerase incorporates nucleotides into growing DNA strands, light is emitted. This light is measured to record nucleotide incorporation in real time. These light signals are converted to long sequences known as continuous long reads (CLR) (Ardui et al., 2018).

Figure 6: The above image depicts a stem-loop, also commonly referred to as a “hairpin DNA structure” because of its shape. These hairpins form when DNA is cleaved into its individual strands and palindromic sequences in the single strands of DNA interact and reform their hydrogen bonds. This can cause problems for sequencing methods that rely on cleaving double strands of DNA and reading the individual strands, like SOLiD sequencing. Source: Wikimedia Commons

SMRT sequencing uses sequencing-by-synthesis technology, which involves the use of DNA polymerase to drive the reaction of fluorescently tagged nucleotides that are added to a DNA template, while imaging these nucleotides in real time. Since SMRT sequencing acts on single molecules at a time, there is no degradation of signal. The longer read lengths allow for more efficient decoding of novel genomes. Bacterial genomes are more routinely sequenced using SMRT, and this technology has the ability to translate to larger genomes as well. In addition, SMRT is highly accurate with repetitive passes: one study found that assembly with SMRT achieved 99.999% accuracy, higher than other sequencing technologies. The major downside to SMRT sequencing is that it yields a higher number of errors in individual reads. In order to reach the 99.999% accuracy that is comparably better than other sequencing technologies, researchers may have to pass through the DNA up to fifteen times (Roberts et al., 2013).

Nanopore Sequencing

Nanopore sequencing is another third-generation sequencing technology that allows for fast and cost-effective sequencing of large DNA segments. The technique was first theorized in the late 1980s by David Deamer, who devised a system in which an ionic current was passed into a molecule of single stranded DNA (ssDNA) through a protein pore (nanopore) embedded in a membrane. Because DNA molecules are negatively charged, they will react to the ions entering the chamber and alter the magnitude of the ionic current. The changes in ionic current will vary based on the geometry, size, and chemical composition of the DNA, and thus, by measuring the change in the ionic current, a DNA sequence can be determined (Deamer et al., 2016). The design and development of nanopore sequencing technology required two innovative leaps. First, the nanopore needed to have the appropriate shape to facilitate single-nucleotide sensitivity to changes in ionic current. Second, the ssDNA molecule needed
to move slowly enough through the nanopore to give ample time for signal detection (Kasianowicz et al., 1996). Early work on nanopore sequencing used the channel protein ɑ-hemolysin, a heptamer with a pore size of about 1.2 nm. However, the sensing region of this protein was so elongated that the current was sensitive to contributions from twelve nucleotides, rather than the single nucleotide that is desirable. The protein MspA was also studied and later found to limit sensitivity to a single nucleotide; this is the current choice for nanopore sequencing (Butler et al., 2008; Lu et al., 2016). However, when passed through the channel by a voltage difference across the membrane (DNA is negatively charged and will therefore travel through the pores toward the positive side of the membrane), the ssDNA molecule would travel too fast for reliable detection of the DNA sequence. A team led by Manrao and colleagues out of the University of Washington and the University of Alabama at Birmingham found that MspA paired with the bacteriophage phi29 DNA polymerase slowed the movement of the ssDNA molecule through the pore and gave good resolution of each nucleotide in the DNA strand (Manrao et al., 2012).
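The signal-to-sequence mapping at the heart of nanopore sequencing can be sketched with a simplified model. The calibration table below is hypothetical and deliberately noise-free (each 3-mer gets a unique current level); real pores produce noisy, overlapping levels that must be decoded statistically.

```python
import itertools

# Hypothetical calibration: assign each 3-mer a distinct current level.
LEVELS = {"".join(kmer): level
          for level, kmer in enumerate(itertools.product("ACGT", repeat=3))}

def current_trace(ssDNA: str, k: int = 3):
    """One current level per k-mer window: the k bases sitting in the
    pore's sensing region jointly set the measured current as the
    strand ratchets through one base at a time."""
    return [LEVELS[ssDNA[i:i + k]] for i in range(len(ssDNA) - k + 1)]

def basecall(trace, k: int = 3):
    """Invert the lookup to recover the strand from the level sequence.
    Successive windows overlap by k-1 bases, so each new level
    contributes one new base."""
    inverse = {v: kmer for kmer, v in LEVELS.items()}
    seq = inverse[trace[0]]
    for level in trace[1:]:
        seq += inverse[level][-1]
    return seq

print(basecall(current_trace("GATTACA")))  # GATTACA
```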
DNA Sequencing Applications and Concerns
Research applications

DNA sequencing has fundamentally altered the fields of biological and medical research. Beyond advances in speed and cuts in cost, next-generation sequencing techniques have allowed researchers to conduct fine analyses of genomic structure and variation and enabled them to better understand how one DNA molecule interacts with others and with the other molecular components of biological systems. There are several applications of next-generation DNA sequencing in research that go far beyond simply obtaining the sequence of organisms. For example, sequencing technologies now allow for mapping of 3-dimensional DNA interactions on a genomic scale through the use of Hi-C. Hi-C is a technology that utilizes next-generation sequencing techniques to find the sequences of DNA fragments; it also uses paired-end sequencing, in which both ends of the DNA fragment are sequenced and alignable sequence data is generated. When used on cross-linked DNA fragments, or DNA fragments that are covalently bound together, Hi-C allows researchers to understand the interactions between different fragments and understand which fragments are involved in DNA ligation events. Though the resolution of the images produced is still sometimes low, the ability to visualize these types of high-level interactions has allowed researchers to better understand the organization of eukaryotic genes on a global level. Hi-C experiments have provided insight into the location of regulatory elements on DNA sequences and helped demonstrate how these elements modulate transcription of their target genes (Soon et al., 2012).
Another significant application of next-generation sequencing techniques is the ability to map not just the DNA sequence but certain epigenetic markers as well. The first class of epigenetic markers is DNA methylation sites. DNA methylation is one of several epigenetic mechanisms that cells utilize to put genes into
Figure 7: The figure above shows the differences between two popular sequencing techniques: SMRT sequencing and nanopore sequencing. SMRT sequencing takes in individual molecules of DNA and incorporates labeled nucleotides into a growing strand; when a nucleotide is incorporated, light is emitted that signals which nucleotide has been added. Nanopore sequencing, on the other hand, passes an ionic current through a nanopore and reads the changes in that current as an enclosed DNA molecule interacts with the pore, using those changes to determine the sequence. While SMRT sequencing is accurate with successive runs relative to other technologies, it is inaccurate with singular runs. Nanopore sequencing is inexpensive and user-friendly, but less accurate than other sequencing methods, even with successive runs. Source: Wikimedia Commons
"Sequencing technologies now allow for mapping of 3-dimensional DNA interactions on a genomic scale through use of Hi-C. Hi-C is a technology that utilizes next-generation sequencing techniques to find the sequences of DNA fragments."
Figure 8: This figure shows the difference between normal cytosine and methylated cytosine. In the human body, about 70% of CG dinucleotides are methylated. DNA methylation typically results in the silencing of genes by disrupting the ability of DNA-binding proteins to find the correct sequences to bind to, preventing transcription and translation of proteins. Source: Wikimedia Commons
"Another significant application of next-generation sequencing techniques is the ability to map not just the DNA sequence but certain epigenetic markers as well."
a transcriptionally-inactive state. Methylation occurs at cytosine bases found in CG dinucleotide pairs. Cells convert cytosine to 5-methylcytosine, ultimately silencing the gene. The role that DNA methylation plays is not completely understood, though researchers do know that it is critical in the modulation of gene expression (Phillips, 2008). With next-generation sequencing techniques at hand, researchers will be able to better understand the inheritance of DNA methylation markers and their significance for biological processes. For example, hypermethylation has previously been implicated in several cancers. By mapping DNA methylation markers, researchers will be able to better study the precise location of epigenetic markers that silence tumor-suppressor genes, leading to more possibilities for treatment (Soon et al., 2013).

A second epigenetic marker researchers can map with next-generation sequencing techniques is histone modification. In cells, DNA is wrapped around histone octamers, which are composed of histone proteins; these octamers allow for supercompression of DNA. The histone proteins have tails that project from the octamer; both the tails and the core regions of the histones can be modified, and modifications in both regions can have significant effects on chromatin compaction, gene transcription, DNA repair, and DNA replication (Lawrence et al., 2016). By mapping histone modifications, researchers have been able to determine that there are cell-specific patterns of histone modification that result in different levels of transcription for different genes. Additionally, they have found that aberrant modifications can result in gene dysregulation; this dysregulation has been linked to diseases such as prostate cancer. Therefore, by mapping histone modifications, researchers may be able to find specific histone modification patterns that are highly linked to disease and develop new treatments (Soon et al., 2013).
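The mapping idea can be sketched in miniature. The snippet below is a toy, not a real methylation caller: it locates CpG sites in a sequence and summarizes per-site methylation calls of the kind bisulfite or nanopore methylation sequencing might produce. All names, the example sequence, and the example calls are invented for illustration.

```python
# Toy sketch: find CpG sites (the positions where cytosine methylation
# is typically mapped) and summarize per-site methylation calls.
def cpg_sites(seq):
    """Return 0-based positions of the C in each CG dinucleotide."""
    seq = seq.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

def methylation_fraction(sites, methylated):
    """Fraction of CpG sites called methylated; `methylated` is a set
    of positions produced by some (hypothetical) upstream caller."""
    return len(methylated & set(sites)) / len(sites) if sites else 0.0

seq = "ACGTTCGGCGATCG"
sites = cpg_sites(seq)
print(sites)                                   # [1, 5, 8, 12]
print(methylation_fraction(sites, {1, 8}))     # 0.5
```

In a real pipeline the "methylated" set would come from aligned sequencing reads rather than being supplied by hand; the summary step is the same.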
These few applications only scratch the surface of the potential that DNA sequencing has in biological and clinical research. As DNA sequencing gets faster and cheaper, more and more information about the human genome will become available to doctors and physician researchers. This information will afford a better understanding of the role that epigenetics plays in human disease and help spur the creation of new treatments for these diseases.

Ethical Concerns of DNA Sequencing

As DNA sequencing technologies improve, they are bound to become an integral part of our healthcare systems. By mapping genomes and epigenomic markers, researchers will have a better understanding of the heritability of different diseases and will be able to come up with new treatments. However, as they collect more and more information, researchers will also have to grapple with ethical concerns, both on an individual and a population level. For individuals, one of the biggest grey areas is data release and identifiability. To achieve the most benefit from DNA sequencing, researchers will need to compile large amounts of genomic and phenotypic data from numerous individuals. This will allow researchers to understand the genomic variation and dynamics that underlie complex human genetic diseases. In essence, individuals cannot interpret their genomic information without looking at it in the context of a larger set of data composed of genetic and phenotypic information from hundreds to thousands of people (Johnson et al., 2020). However, this poses a significant problem when it comes to data identifiability. While many databases take steps to ensure privacy, certain medical sequencing databases collect demographic information about their patients; this can include clinical information, family pedigrees, and employment histories. This
information can be used against the patients by third parties; thus, it is imperative to set up strict protections for patients who consent to contributing their genomic sequencing data to databases. This brings up another issue for individuals: consent. Given the complexity of genomic sequencing and the rapid rate at which genomic sequencing techniques are advancing, it is difficult to ascertain whether patients without a good understanding of the genomic sequencing landscape will have enough information about the technology to consent to their data use. The inability of the general public to weigh the risks associated with future use of their genomic data poses several ethical issues (Foster, 2006). There are several moral considerations that researchers must weigh when thinking about DNA sequencing on a population level, too. One of the most important questions is whether DNA sequencing and genomic services are equitable for all demographic groups. Research has shown that there is currently differential access to genomic services for underserved populations, including racial and ethnic minorities and the impoverished. For example, Black women have significantly less access to BRCA1 genetic testing than white women do (Hall & Olopade, 2006; Johnson et al., 2020). Additionally, people of African and Asian ancestry typically have more ambiguity in their genetic profiles than those of European descent do (Johnson et al., 2020). As DNA sequencing technologies continue to advance, the gaps between privileged and underprivileged populations will only continue to widen if left unchecked. Additionally, given that certain demographics are more likely to be affected by genetic diseases than others, it is possible that use of genetic mapping information will result in the perpetuation of racist and/or sexist stereotypes that physicians hold (Foster, 2006).
The Future of DNA Sequencing

While significant advances in DNA sequencing technology have been made over the past 70 years, there is still more research to be done. Scientists are attempting to create affordable and rapid DNA sequencing technologies that would allow them to record the DNA sequence of every patient, as well as the DNA sequences of all tissues during different developmental stages (Green et al., 2017). By 2017, the entire DNA sequences of about 400,000 individuals had been recorded
while portions of the genomes of another million individuals had been sequenced (Watson et al., 2017). Furthermore, scientists are improving DNA sequencing technology to better expand our understanding of different illnesses. For instance, within the oncology field, researchers are developing liquid biopsies, cancer screening tests based on DNA sequencing. Liquid biopsy can allow for the detection of tumors from DNA-sequence signatures alone in a patient's blood (Green et al., 2017). In addition, scientists are working towards creating small, portable DNA sequencing technology that allows public health officials and epidemiologists to sequence the genomes of humans, other animals, and microorganisms to identify whether these organisms carry certain illnesses that can threaten communities (Green et al., 2017). At present, there is particular interest in determining whether organisms are acting as viral vectors, carriers of viral particles that can cause disease in humans. Portable sequencing devices in the works include the Oxford Nanopore MinION sequencing device. It is a small box weighing about 90 g (0.2 lbs) which connects to a laptop with a USB cord and interfaces with the included MinKNOW software (Lu et al., 2016). The device records electrical signals as an unzipped DNA helix passes through a nanopore and can then transform these electrical signals into DNA sequences. Researchers simply need to purify their DNA sample and add a reagent which caps the DNA, allowing for its insertion into the nanopore (Jain et al., 2016). MinION sequencing devices can sequence up to 30 Gbp (30 billion base pairs) in a single flow cell and start at around 1,000 USD in price. The accuracy of MinION's DNA sequencing is between 65 and 90 percent, making it a useful tool for pathogen surveillance and clinical diagnosis, but less useful when high-fidelity DNA sequencing is needed (Lu et al., 2016).
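The trade-off between single-read accuracy and repeated reads can be sketched with a minimal example: a per-position majority vote over several noisy reads of the same template recovers the correct sequence even when individual reads contain errors. The reads below are invented, and the sketch assumes they are already aligned and of equal length, which real consensus pipelines must arrange first.

```python
from collections import Counter

# Toy sketch: per-position majority-vote consensus over noisy reads.
def consensus(reads):
    """For each column of the (pre-aligned, equal-length) reads,
    keep the most common base."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*reads))

reads = [
    "ACGTACGT",
    "ACGAACGT",   # error at position 3
    "ACGTACCT",   # error at position 6
]
print(consensus(reads))  # ACGTACGT
```

This is why SMRT sequencing becomes accurate over successive passes of the same molecule, and why MinION's 65 to 90 percent single-read accuracy still supports applications that tolerate, or average out, per-read error.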
"Scientists are attempting to create affordable and rapid DNA sequencing technologies that would allow them to encode the DNA sequences for every patient, as well as record the DNA sequences for all tissues during different developmental stages."
While the MinION currently does not have the accuracy to be useful for sequencing entire genomes, it has opened the DNA sequencing market to smaller, faster, and more portable DNA sequencing technology that allows identification of pathogens like bacteria and viruses (Watson et al., 2017). Furthermore, this type of technology can also be applied to increasing the food supply by producing crops with desirable traits (Mardis, 2011).
Conclusions

Since the pioneering work of Frederick Sanger, interest in DNA sequencing has exploded. Consequently, advancements such as the completion of the Human Genome Project have equipped scientists with new and more complete understandings of the structure and sequence of DNA within cells. Today, companies dedicate their entire business to researching and developing new and innovative methods that can sequence more base pairs more efficiently. This wealth of innovation includes the development of portable DNA sequencing devices, such as the MinION sequencer, which is especially useful for researchers studying natural phenomena outside of the laboratory. With these advancements in DNA technology, however, come ethical concerns. Increasing our capability to sequence and more accurately modify base pairs may lead to issues with data privacy and utilization and needs to be closely regulated. As such, it is expected that these topics will be fiercely debated both in Washington and in the board rooms of private companies. Each corporation will have to decide whether its own code of ethics will be even more stringent than the rules defined by legislators. Despite these concerns, there is still much to explore in the DNA sequencing space, and this exploration will surely result in exciting and potentially revolutionary developments in the coming years.

References

Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., & Walter, P. (2002). The Structure and Function of DNA. In Molecular Biology of the Cell (4th ed.). https://www.ncbi.nlm.nih.gov/books/NBK26821/
Amino Acid Analysis and Chemical Sequencing. (n.d.). Retrieved February 28, 2021, from https://employees.csbsju.edu/hjakubowski/classes/ch331/protstructure/PS_2B1_AAAnaly_ChemSeq.html
Ardui, S., Ameur, A., Vermeesch, J. R., & Hestand, M. S. (2018). Single molecule real-time (SMRT) sequencing comes of age: Applications and utilities for medical diagnostics. Nucleic Acids Research, 46(5), 2159–2168. https://doi.org/10.1093/nar/gky066
Benjamini, Y., & Speed, T. P. (2012). Summarizing and correcting the GC content bias in high-throughput sequencing. Nucleic Acids Research, 40(10), e72. https://doi.org/10.1093/nar/gks001
Carlson, R. (2009). The changing economics of DNA synthesis. Nature Biotechnology, 27(12), 1091–1094. https://doi.org/10.1038/nbt1209-1091
Clarke, A. J. (2014). Managing the ethical challenges of next-generation sequencing in genomic medicine. British Medical Bulletin, 111(1), 17–30. https://doi.org/10.1093/bmb/ldu017
Deamer, D., Akeson, M., & Branton, D. (2016). Three decades of nanopore sequencing. Nature Biotechnology, 34(5), 518–524. https://doi.org/10.1038/nbt.3423
Foster, M. W. (2006). Ethical issues in medical-sequencing research: Implications of genotype-phenotype studies for individuals and populations. Human Molecular Genetics, 15(90001), R45–R49. https://doi.org/10.1093/hmg/ddl049
Green, E. D., Rubin, E. M., & Olson, M. V. (2017). The future of DNA sequencing. Nature News, 550(7675), 179. https://doi.org/10.1038/550179a
Greenwood, M. (2018, October 31). What is Pyrosequencing? News-Medical.Net. https://www.news-medical.net/life-sciences/What-is-Pyrosequencing.aspx
Gupta, A. K., & Gupta, U. D. (2014). Chapter 19—Next Generation Sequencing and Its Applications. In A. S. Verma & A. Singh (Eds.), Animal Biotechnology (pp. 345–367). Academic Press. https://doi.org/10.1016/B978-0-12-416002-6.00019-5
Hall, M. J., & Olopade, O. I. (2006). Disparities in Genetic Testing: Thinking Outside the BRCA Box. Journal of Clinical Oncology, 24(14), 2197–2203. https://doi.org/10.1200/JCO.2006.05.5889
Heather, J. M., & Chain, B. (2016). The sequence of sequencers: The history of sequencing DNA. Genomics, 107(1), 1–8. https://doi.org/10.1016/j.ygeno.2015.11.003
Hert, D. G., Fredlake, C. P., & Barron, A. E. (2008). Advantages and limitations of next-generation sequencing technologies: A comparison of electrophoresis and non-electrophoresis methods. Electrophoresis, 29(23), 4618–4626. https://doi.org/10.1002/elps.200800456
Ho, A., Murphy, M., Wilson, S., Atlas, S. R., & Edwards, J. S. (2011). Sequencing by ligation variation with endonuclease V digestion and deoxyinosine-containing query oligonucleotides. BMC Genomics, 12, 598. https://doi.org/10.1186/1471-2164-12-598
Huang, Y.-F., Chen, S.-C., Chiang, Y.-S., Chen, T.-H., & Chiu, K.-P. (2012). Palindromic sequence impedes sequencing-by-ligation mechanism. BMC Systems Biology, 6(2), S10. https://doi.org/10.1186/1752-0509-6-S2-S10
Jain, M., Olsen, H. E., Paten, B., & Akeson, M. (2016). The Oxford Nanopore MinION: Delivery of nanopore sequencing to the genomics community. Genome Biology, 17(1), 239. https://doi.org/10.1186/s13059-016-1103-0
Johnson, S. B., Slade, I., Giubilini, A., & Graham, M. (2020). Rethinking the ethical principles of genomic medicine services. European Journal of Human Genetics, 28(2), 147–154. https://doi.org/10.1038/s41431-019-0507-1
Kasianowicz, J. J., Brandin, E., Branton, D., & Deamer, D. W. (1996). Characterization of individual polynucleotide molecules using a membrane channel. Proceedings of the National Academy of Sciences, 93(24), 13770–13773. https://doi.org/10.1073/pnas.93.24.13770
Kchouk, M., Gibrat, J. F., & Elloumi, M. (2017). Generations of Sequencing Technologies: From First to Next Generation. Biology and Medicine, 09(03). https://doi.org/10.4172/0974-8369.1000395
Kircher, M., & Kelso, J. (2010). High-throughput DNA sequencing – concepts and limitations. BioEssays, 32(6), 524–536. https://doi.org/10.1002/bies.200900181
Lawrence, M., Daujat, S., & Schneider, R. (2016). Lateral Thinking: How Histone Modifications Regulate Gene Expression. Trends in Genetics, 32(1), 42–56. https://doi.org/10.1016/j.tig.2015.10.007
Lightbody, G., Haberland, V., Browne, F., Taggart, L., Zheng, H., Parkes, E., & Blayney, J. K. (2019). Review of applications of high-throughput sequencing in personalized medicine: Barriers and facilitators of future progress in research and clinical application. Briefings in Bioinformatics, 20(5), 1795–1811. https://doi.org/10.1093/bib/bby051
Lu, H., Giordano, F., & Ning, Z. (2016). Oxford Nanopore MinION Sequencing and Genome Assembly. Genomics, Proteomics & Bioinformatics, 14(5), 265–279. https://doi.org/10.1016/j.gpb.2016.05.004
Manrao, E. A., Derrington, I. M., Laszlo, A. H., Langford, K. W., Hopper, M. K., Gillgren, N., Pavlenok, M., Niederweis, M., & Gundlach, J. H. (2012). Reading DNA at single-nucleotide resolution with a mutant MspA nanopore and phi29 DNA polymerase. Nature Biotechnology, 30(4), 349–353. https://doi.org/10.1038/nbt.2171
Mardis, E. R. (2011). A decade's perspective on DNA sequencing technology. Nature, 470(7333), 198–203. https://doi.org/10.1038/nature09796
Mardis, E. R. (2017). DNA sequencing technologies: 2006–2016. Nature Protocols, 12(2), 213–218. https://doi.org/10.1038/nprot.2016.182
Maxam, A. M., & Gilbert, W. (1977). A new method for sequencing DNA. Proceedings of the National Academy of Sciences, 74(2), 560–564. https://doi.org/10.1073/pnas.74.2.560
Padmanabhan, R., Padmanabhan, R., & Wu, R. (1972). Nucleotide sequence analysis of DNA. Biochemical and Biophysical Research Communications, 48(5), 1295–1302. https://doi.org/10.1016/0006-291X(72)90852-2
Pettersson, B., Nyren, P., & Uhlen, M. (1993). Solid Phase DNA Minisequencing by an Enzymatic Luminometric Inorganic Pyrophosphate Detection Assay. Analytical Biochemistry, 208(1), 171–175. https://doi.org/10.1006/abio.1993.1024
Phillips, T. (2008). The Role of Methylation in Gene Expression. Nature Education, 1(1), 116.
Pinxten, W., & Howard, H. C. (2014). Ethical issues raised by whole genome sequencing. Best Practice & Research Clinical Gastroenterology, 28(2), 269–279. https://doi.org/10.1016/j.bpg.2014.02.004
Reuter, J. A., Spacek, D., & Snyder, M. P. (2015). High-Throughput Sequencing Technologies. Molecular Cell, 58(4), 586–597. https://doi.org/10.1016/j.molcel.2015.05.004
Rizzo, J. M., & Buck, M. J. (2012). Key Principles and Clinical Applications of "Next-Generation" DNA Sequencing. Cancer Prevention Research, 5(7), 887–900. https://doi.org/10.1158/1940-6207.CAPR-11-0432
Roberts, R. J., Carneiro, M. O., & Schatz, M. C. (2013). The advantages of SMRT sequencing. Genome Biology, 14(7), 405. https://doi.org/10.1186/gb-2013-14-7-405
Ronaghi, M., Uhlén, M., & Nyrén, P. (1998). A Sequencing Method Based on Real-Time Pyrophosphate. Science, 281(5375), 363–365. https://doi.org/10.1126/science.281.5375.363
Rossi, G., Manfrin, A., & Lutolf, M. P. (2018). Progress and potential in organoid research. Nature Reviews Genetics, 19(11), 671–687. https://doi.org/10.1038/s41576-018-0051-9
Rothberg, J. M., Margulies, M., Egholm, M., Altman, W. E., Attiya, S., Bader, J. S., Bemben, L. A., Berka, J., Braverman, M. S., Chen, Y.-J., Chen, Z., Dewell, S. B., Du, L., Fierro, J. M., Gomes, X. V., Godwin, B. C., He, W., Helgesen, S., Ho, C. H., … Begley, R. F. (2005). Genome sequencing in microfabricated high-density picolitre reactors. Nature, 437(7057), 376–380. https://doi.org/10.1038/nature03959
Sanger, F., Nicklen, S., & Coulson, A. R. (1977). DNA sequencing with chain-terminating inhibitors. Proceedings of the National Academy of Sciences, 74(12), 5463–5467. https://doi.org/10.1073/pnas.74.12.5463
Sequencing 101: From DNA to Discovery - The Steps of SMRT Sequencing. (2020, July 7). PacBio. https://www.pacb.com/blog/steps-of-smrt-sequencing/
Shendure, J., & Aiden, E. L. (2012). The expanding scope of DNA sequencing. Nature Biotechnology, 30(11), 1084–1094. https://doi.org/10.1038/nbt.2421
Soon, W. W., Hariharan, M., & Snyder, M. P. (2013). High-throughput sequencing for biology and medicine. Molecular Systems Biology, 9(1), 640. https://doi.org/10.1038/msb.2012.61
Stretton, A. O. W. (2002). The First Sequence: Fred Sanger and Insulin. Genetics, 162(2), 527–532.
The Basic Science of Genome Editing—Human Genome Editing—NCBI Bookshelf. (n.d.). Retrieved March 22, 2021, from https://www.ncbi.nlm.nih.gov/books/NBK447276/
The Discovery of the Double Helix, 1951-1953. (n.d.). Francis Crick - Profiles in Science. Retrieved January 29, 2021, from https://profiles.nlm.nih.gov/spotlight/sc/feature/doublehelix
Treangen, T. J., & Salzberg, S. L. (2012). Repetitive DNA and next-generation sequencing: Computational challenges and solutions. Nature Reviews Genetics, 13(1), 36–46. https://doi.org/10.1038/nrg3117
Voelkerding, K. V., Dames, S. A., & Durtschi, J. D. (2009). Next-Generation Sequencing: From Basic Research to Diagnostics. Clinical Chemistry, 55(4), 641–658. https://doi.org/10.1373/clinchem.2008.112789
What is the Human Genome Project? (2018, October 28). National Human Genome Research Institute. https://www.genome.gov/human-genome-project/What
WhatisBiotechnology • The sciences, places and people that have created biotechnology. (n.d.). WhatisBiotechnology.Org. Retrieved March 25, 2021, from https://www.whatisbiotechnology.org/
Wu, R., & Kaiser, A. D. (1968). Structure and base sequence in the cohesive ends of bacteriophage lambda DNA. Journal of Molecular Biology, 35(3), 523–537. https://doi.org/10.1016/S0022-2836(68)80012-9
Organoids-on-a-Chip: Celebrating Decades of Work

STAFF WRITERS: DANIEL KOTREBAI, AUDREY HERRALD, MELANIE PRAKASH, RACHEL MATTHEW, SOYEON (SOPHIE) CHO, STEPHANIE DAMISH, CAMERON SABET, ANDY CAVANAUGH, CAROLINE CONWAY

BOARD WRITERS: NISHI JAIN & DEV KAPADIA

Cover: The stomach, like many if not all other human organs, is an extremely complex system to model. For researchers wanting to perform experiments in the gastrointestinal environment, real stomachs are difficult to obtain, and organoid systems are extremely difficult to build. These challenges have led to the introduction of organoids-on-a-chip, which use microfluidics to mimic the organ system at a much more feasible level. Source: Pixabay
Overview

Organoids, derived from pluripotent stem cells (PSCs) that can self-renew and produce any type of cell in the body, are important for modeling tissue physiology and disease. They allow biomedical researchers to more accurately study the biochemical pathways of differentiation and disease pathogenesis in human organs. Since organoids are derived from stem cells, researchers can develop individualized therapeutic interventions whose effects can be faithfully reproduced in their human counterparts. Because animal models are not human, labs that use animal-based translational research models instead of organoids risk discovering pathophysiology unique to animals other than humans. Current organoid models are primarily three-dimensional, but due to the random configuration of 3D structure that results from complex biological interactions, these models
do not faithfully replicate the analogous organs in humans. The local microenvironment of a developing organ tends to be much more dynamic, with various directions for organogenesis. Simulating this complex environment poses a challenge that, once surmounted, can help research scientists better simulate in-vitro organ growth. At the intersection of stem cell biology and engineering lies the solution: organoids-on-a-chip. The most important step in the development of organoids-on-a-chip is characterizing the structural organization of a given organ, the types of cells native to the tissue, and the organ-specific microenvironments (Park et al., 2019). For example, the lung contains alveolar epithelial cells and pulmonary microvascular endothelial cells separated by a thin wall called an interstitium. With this knowledge, two different cell types as well as the organ's general
structural organization can be identified and replicated. The local microenvironment of this organ contains air and blood flow, and the diaphragm stretches this unit mechanically. This general understanding of the characteristics of lung tissue can be leveraged to develop an organ-on-a-chip modeling the lung's alveolar-capillary unit. Here, microchannels are separated by a thin and flexible membrane. Further, epithelial and endothelial cells can be separated on either side, with the alveolar side exposed to air and the vascular side exposed to blood flow. Then, vacuums stretch and contract microchambers on either side of the culture channel, mimicking the deformation of the organ during regular breathing patterns.

The development of organoids-on-a-chip was made possible by advancements in microfluidics, 3D printing, and stem cells. Microfluidics is the study of the precise control and manipulation of fluids traveling through micro-channels. Microfluidics, when applied to cell culture, allows researchers to maintain and analyze cultures in more detail on the micro level; microfluidic cell cultures provide a clear understanding of the interplay between cell culture parameters and microenvironmental elements, which traditional cell cultures lack (Sosa-Hernandez et al., 2018). The manipulation of these fluids allows factors such as pressure, pH, flow rate, osmotic pressure, toxins, and nutrient contents to be tightly controlled in the cultured tissue or organoid (Sosa-Hernandez et al., 2018). Applying microfluidics to cell cultures creates a more realistic, functional organ model that is a defining feature of organoids-on-a-chip. Traditional monolayer cultures, in which a single layer of cells is grown on a petri dish, lack the architecture and complexity to emulate in vivo biological processes (Corrò et al., 2020). These new advancements in microfluidic technology
allow for three-dimensional cell cultures. The growth of complex, three-dimensional cell cultures requires properly timed activation of morphogenetic signaling pathways to induce cell fate decisions and form specific cell types. In traditional culture, the organization into three-dimensional structures is accomplished through exogenous morphogens, which diffuse and produce biochemical gradients in the local microenvironment of the cells. These gradients are not easily controlled and can fail to simulate in vivo tissue patterning. A microfluidic device addresses this shortcoming in tissue patterning by using microchannels as a source and sink for soluble factors to create stable morphogen gradients (Park et al., 2019). Similarly, microchannels have been used to mimic vasculature present during embryogenesis to provide the blood flow necessary for the later stages of organogenesis (Park et al., 2019).
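The source-and-sink idea can be sketched numerically. Under simplifying assumptions that are entirely invented here (one spatial dimension, fixed boundary concentrations, arbitrary units), relaxing the diffusion equation to steady state yields the stable linear gradient that microchannel sources and sinks are designed to maintain across the culture chamber.

```python
# Toy sketch: steady-state morphogen gradient between a source channel
# (held at concentration 1.0) and a sink channel (held at 0.0), via
# explicit finite-difference relaxation of 1D diffusion.
N = 21                       # grid points across the chamber (assumed)
c = [0.0] * N
c[0], c[-1] = 1.0, 0.0       # source and sink boundary conditions
for _ in range(20000):       # iterate until the profile stops changing
    c = [c[0]] + [(c[i - 1] + c[i + 1]) / 2 for i in range(1, N - 1)] + [c[-1]]
mid = c[N // 2]
print(round(mid, 3))         # ~0.5: midpoint of a stable linear gradient
```

The key property for tissue patterning is that the boundaries are held fixed by continuous flow in the channels, so the gradient does not decay over time the way an exogenous morphogen pulse would.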
Figure 1: The image depicts two different intestinal organoids grown under different conditions. The loose green net depicts an organoid grown by itself, which followed a random pattern, lacked structural integrity, and failed. The tight green net around the red net depicts a successfully grown organoid that was more intensely regulated by the researchers. Source: Flickr
Advancements in 3D printing have greatly assisted the development of organoids-on-a-chip. 3D printing has simplified the fabrication of microfluidic devices, allowing for the incorporation of multiple (bio)materials like living cells and growth factors, highly defined device generation, and defined distribution into the device (Sosa-Hernandez et al., 2018). Additionally, the success of organoid-on-a-chip technology depends on its ability to leave the laboratory, which puts pressure on ease of manufacturing and scalability. 3D printing can automate the necessary scale-up by allowing for rapid prototyping of devices with complex microarchitectures. One-step continuous printing shortens production time, improves reproducibility, and does not require manual handling (Zhang et al., 2018).
"Advancements in 3D printing have greatly assisted the development of organoids-ona-chip. 3D printing has simplified the fabrication of microfluidic devices, allowing for the incorporation of multi-(bio)-materials like living cells and growth factors, highly defined device Stem cells are paramount for organoids-on-a- generation, and chip development as a better understanding defined distribution of pluripotent stem cell biology enables better into the device." cultivation of clinically relevant cell lines (Zhang et al., 2018). For example, stem cells permit the culturing of cells that are limited in numbers in the human body, such as cardiomyocytes, which would otherwise be unattainable. Additionally, serial passaging of cell lines, a process where multiple generations of cells are produced from previous generatoins, and genetic changes made to immortalized cell lines can alter cell genotypes and phenotypes. However, pluripotent stem cells give rise to progenitor cells or mature cells; therefore, researchers can more easily adjust for disease conditions in a number of different cells. Furthermore, pluripotent stem cells can be
created from specific patients by reprogramming adult somatic tissue (induced pluripotent stem cells) and combining them with organoid-on-a-chip technology to create personalized devices for tasks such as optimizing drug doses for specific patients (Zhang et al., 2018). This individualization has stirred great interest in organoids-on-a-chip in the pharmaceutical industry, which has been invested in developing devices that predict the pharmacokinetics and pharmacodynamics of drugs (Zhang et al., 2018). The pharmaceutical industry is specifically interested in the development of multi-organoid systems that mimic the complex physiology of inter-organ interactions.

Figure 2: This flow chart outlines the production of a cerebral organoid from human pluripotent stem cells. By cultivating embryoid bodies that induce neuroectoderm, these cells are specifically destined to become a cerebral organoid; similar processes are undertaken to produce organoids of other cell types. Source: Wikimedia Commons
"In organoid-on-achip methodology, scientists culture the stem cell scaffold in a specific construction. The scaffolds are designed with porous membranes to allow for nutrient diffusion and growth; additionally, the scaffolds are designed with a chip to provide a solid, non-living platform on which the biomaterial can grow."
3D organoids, despite their utility, face a challenge due to the limitations of passive diffusion. Typically, 3D organoids rely on passive diffusion for basic metabolic processes like nutrient and oxygen intake and waste product disposal. However, as these organoids grow, passive diffusion is no longer sufficient for such a large tissue (Clevers, 2017). Research demonstrates that proper vascularization is necessary to deliver an adequate blood supply, which can help address some of the lifespan and nutrient delivery challenges that organoids face (Weinstein, 1999). Therefore, the ability of organoids-on-a-chip to support vascularization is critical to nutrient uptake and, more broadly, the extension of organoid lifespan.
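The scaling reason diffusion fails can be made concrete with a back-of-the-envelope sketch (a toy geometric argument, not a quantitative transport model, and the radii below are illustrative): supply by surface diffusion scales with surface area (~r²) while metabolic demand scales with volume (~r³), so supply per unit volume falls as 1/r as the organoid grows.

```python
import math

# Toy illustration: surface-to-volume ratio of a spherical organoid.
def surface_to_volume(radius_um):
    area = 4 * math.pi * radius_um ** 2            # diffusion supply ~ area
    volume = (4 / 3) * math.pi * radius_um ** 3    # metabolic demand ~ volume
    return area / volume                           # simplifies to 3 / radius_um

small, large = surface_to_volume(50), surface_to_volume(200)
print(round(small / large, 1))  # 4.0: quadrupling the radius quarters supply per unit volume
```

This is why vascularization (or perfused microchannels on a chip) becomes necessary beyond a certain organoid size: it restores supply that scales with volume rather than with surface area.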
Organoids-on-a-Chip
Today, organoids-on-a-chip are grown in 3D cultures from stem cells on an extracellular matrix (Yu, Hunziker, & Choudhury, 2019). Many types of stem cells may be used for this application, including primary, pluripotent, embryonic, and adult stem cells. Scientists use a scaffold that cells grow on top of in order to direct growth on the extracellular matrix. Stem cells naturally differentiate and (re)generate depending upon their environment, so organoids are engineered with extracellular matrix scaffolding that facilitates growth in shapes reminiscent of real organs. The exact properties of this scaffold's shape, as well as the intended cellular differentiation and distribution, vary depending on the organ being simulated. However, most matrices are constructed from "biomimetic" (also known as "biosynthetic") materials, which are designed to imitate the characteristics of the real organ environment. Biomimetic materials exhibit the improved mechanical properties typical of synthetic materials while maintaining biocompatibility through natural extracellular matrix components such as collagen, laminin, fibronectin, and silk. Biomimetic materials are engineered using a variety of methods, each of which manipulates natural raw materials into synthetic shapes. Soft lithography utilizes elastomeric stamps coated in extracellular matrix materials to create precise micropatterns on a substrate surface; electrospinning deposits a biopolymer solution and hydrogel fibers manipulated by electric charge to form nanofibrous networks; and 3D bioprinting deposits colloids and hydrogels in layers to gradually build up a 3D structure (Hussey, Dziki, & Badylak, 2018). In organoid-on-a-chip methodology, scientists culture the stem cell scaffold in a specific construction. The scaffolds are designed with porous membranes to allow for nutrient diffusion and growth; additionally, the scaffolds are designed with a chip to provide a solid, non-living platform on which the biomaterial can grow. Microfluidic platforms, for example, contain microchannels and micropillars within the chip or scaffold structure to enhance turnover of growth medium and facilitate diffusion of gases and other solutes in the fluid (Yu, Hunziker, & Choudhury, 2019). Many benefits of using organoid-on-a-chip technology emerge from the regulation and precision enabled by the structural "chip."
Applications of Organoids-on-a-Chip
One key distinction between organoids-on-a-chip and traditional organoid technologies is the combination of key structural and functional properties of an organ with the precisely engineered and highly tunable microenvironment afforded by a man-made chip (Park et al., 2019). In traditional (non-chipped) organoids, current culture methods allow only for randomly oriented stem cell growth. This random orientation makes it difficult to employ traditional organoids as models for drug efficacy or disease pathogenesis (Clevers, 2016). Traditional organoids also lack the complex array of chemical signals that guide organogenesis in the body, which hinders their ability to mirror their in-vivo counterparts (Rossi et al., 2018). Organoids-on-a-chip address these concerns by enabling ordered organogenesis within a highly tunable microenvironment (the chip). Recent advancements in organoid-on-a-chip technology have brought a myriad of physiological applications within reach, many of which enhance the accuracy and replicability of organoid growth. Such enhancements enable in-vitro modeling of important physiological processes, several examples of which are outlined in the following sections.
Nervous System
One prominent application of organoid-on-a-chip technology is in the modeling of neurodevelopmental disorders (Wang et al., 2018). Many brain disorders cannot be diagnosed until symptom onset, which often occurs after significant structural changes to the central nervous system; organoids-on-a-chip afford researchers an opportunity to observe how specific genetic or environmental changes impact human brain development and allow clinicians to better identify at-risk individuals. Animal models can play a similar role, though the transferability of results between species poses significant limitations to this type of research (Ryan et al., 2019).
In a recent study, Wang and colleagues at the Chinese Academy of Sciences utilized organoid-on-a-chip technology to investigate the effects of nicotine exposure on early neural development. The team developed a novel brain organoid-on-a-chip system using human induced pluripotent stem cells (hiPSCs). These hiPSCs displayed ordered organization in 3D cell culture and in-vivo-like neural differentiation, regionalization, and cortical organization—three key elements of early brain development that were monitored closely throughout the
study. The organoid-on-a-chip system enabled researchers to detect abnormal differentiation, regionalization, and cortical development in specific regions of the brain upon nicotine exposure, all while avoiding issues of interspecies transferability and prenatal imaging. This example from Wang and colleagues is one of many applications for organoid-on-a-chip technology in the nervous system. While current organoid-on-a-chip technology has not yet facilitated true in-vivo transplantation, an eventual application of organoids as a form of regenerative medicine has gained attention in recent years (Clevers et al., 2017; Nakamura et al., 2018). Researchers postulate that organoids-on-a-chip have the potential to help overcome two of the most prohibitive barriers to transplant technology—namely (1) the need for high-throughput, pre-transplantation analysis of the replacement organ, and (2) the incorporation of complex anatomical features (e.g., nerves) in synthetic transplant structures. The technological tunability of organoids-on-a-chip enables relatively efficient and inexpensive analyses of candidate organoids, and the ordered culture patterns of organoids-on-a-chip may also enhance the biocompatibility of traditional organoids by enabling the deposition of complex structures like nerves and blood vessels (Park et al., 2019). At present, the complexity of organoid-chip microfabrication poses significant limitations to large-scale adoption for in-vivo organoid transplantation. However, researchers are optimistic that continued improvements in the efficiency of microfabrication and the throughput capacity of culture systems will bring clinically relevant advancements in regenerative medicine and organ transplantation.
"One prominent application of organoid-on-achip technology is in the modeling of neurodevelopmental disorders. Many brain disorders cannot be diagnosed until symptom onset, which often occurs after significant structural changes to the central nervous system; organoids-on-a-chip afford researchers an opportunity to observe how specific genetic or environmental changes impact human brain development."
Gastrointestinal
Animal models of the gastrointestinal system face notable limitations. Humans develop gastric glands, which house the cells that produce hydrochloric acid in the stomach, in the first trimester of pregnancy, whereas mice develop them postnatally. Furthermore, mice cannot be infected by some viruses—like the norovirus—that infect humans, and mice can display different disease phenotypes than their human counterparts. Organoid technologies have been applied to the gastrointestinal system in two major forms: enteroids and human intestinal organoids (HIOs) (Singh et al., 2020). Enteroids, or epithelial organoids, are derived from adult stem cells (ASCs) with a limited range of differentiation. More specifically, enteroids are developed
from intestinal stem cells (ISCs) produced in the crypts. A crypt is a recess, composed of epithelial cells, next to the protruding villi. Due to the origin of the ISCs, enteroids consist only of epithelial cells, such as enterocytes, Paneth cells, and goblet cells (Sato et al., 2011; Dedhia et al., 2016). However, enteroids lack certain intestinal cell types, including mesenchymal cells, pericryptal fibroblasts, and neuronal cells, which poses significant limitations for in vivo use. Engraftment, for instance, is a measure of cell transplantation success: the number of transplanted cells that successfully enter the body and begin growing. A study out of the Tokyo Medical and Dental University observed that transplanting enteroids into mouse colons did not result in high rates of engraftment (Yui et al., 2012), suggesting that enteroids may have limited applications in modeling human intestinal development.
"Human intestinal organoids are derived from hPSCs and consist of both epithelial and mesenchymal cells."
Human intestinal organoids are derived from hPSCs and consist of both epithelial and mesenchymal cells. Thus, a major advantage of HIOs is that they reflect the overall mechanisms of the human intestinal system better than enteroids do. In the original model of HIO development, directed differentiation of hPSCs was conducted for 3 days in vitro using activin A, a member of the transforming growth factor β (TGF-β) family (McCracken et al., 2011). Activin A helps the hPSCs produce a larger proportion of endoderm cells, marked by FOXA2 and SOX17, that are critical to the development of the endoderm (Spence et al., 2011). Then, WNT3A and FGF4 help develop spherical units of mid- and hindgut tissue called spheroids, which develop into a rudimentary unit consisting of both epithelial and mesenchymal cells (Spence et al., 2011). Following Spence et al. (2011), various modifications and applications of the intestinal organoid development process have been made. One study changed the duration of exposure to growth factors like WNT3A, allowing for the production of human esophageal organoids from foregut spheroids (Trisno et al., 2018). In another, increasing exposure time to FGF4 and an inhibitor allowed HIOs to model the ileum, the third section of the small intestine (Tsai et al., 2017). Furthermore, recent studies have expanded beyond in vitro HIOs, since in vitro HIOs only reflect the development of the GI tract in the first trimester of the fetal stage. Watson et al. (2014) developed an in vivo model by transplanting an in vitro HIO, after 28 days in culture, into the vasculature of immunocompromised mice. This model yielded HIOs with improved functions: the structure of
the crypt and the villus had formed an axis, and the organization of smooth muscle cells indicated the presence of matured epithelial and mesenchymal cells (Watson et al., 2014). To further support the in vivo model, another study added an enteric (intestinal) nervous system (ENS) within the human intestinal organoids using neural crest cells (NCCs) derived from PSCs (Workman et al., 2017). This study provided a novel approach to studying conditions related to the intestinal ENS, such as Hirschsprung's disease (Workman et al., 2017). Extensions of human intestinal organoid technology have provided insight into the development of the gastrointestinal tract, and more innovative approaches will arise in the future. Interest in replicating the immune system of the intestines, such as with cultures composed of both antigens and microbes (Min et al., 2020), is an emerging field due to the ability to isolate individual and complex interactions of immune system components in the chip for study. Recent studies in neural cells may prompt further research on other stromal cells, or supporting cells of an organ, such as immune cells and fibroblasts (Min et al., 2020).
Urinary
Bladder organoids can be used for drug screening to develop individualized therapies and to help scientists map and explore bladder cancer pathogenesis. Currently, there are few models of the bladder epithelium—also known as the urothelium—to support bladder cancer research. Urothelial organoids can faithfully mimic the urothelium and be grown efficiently at low cost. One lab even created genetic knockouts of the Trp53 and Stag2 tumor suppressors in the urothelium to better model bladder cancer, demonstrating the bright future for urinary organoids in cancer research (Mullenders et al., 2020).
Respiratory
Organoids-on-a-chip can also be used to understand the respiratory system.
In the human body, gas exchange is a cyclic, mechanical process involving numerous cell types, culminating in the transfer of carbon dioxide and oxygen across the alveolar-capillary membrane of the lungs. Lung organoids, which are commonly grown from epithelial stem and progenitor cells, provide an organ-level method of studying this process. Through treatment with ECM proteins and differentiating growth factors, organoids can
develop into various cell types, including those that constitute the lung. The diversity of cell types accessible to stem cell differentiation is crucial to functional respiratory organoids, considering that respiration depends on the interaction of numerous types of epithelial and endothelial cells (Barkauskas et al., 2017). Additionally, organoids known as tracheospheres, alveolospheres, and bronchospheres have been developed from adult basal cells, offering opportunities for the multi-region study of respiration (Gkatzis et al., 2018). These organoids can more accurately model the cells' in vivo functions and interactions than traditional animal models or monolayer cell cultures. Respiratory organoids may also offer insight into the regenerative properties of the lung. Basal cells in the respiratory system act as progenitor cells and are able to differentiate and proliferate in response to injuries or exposure to infectious agents (Tata & Rajagopal, 2017). This cellular plasticity has prompted further study of the potential of regenerative therapeutics (Gkatzis et al., 2018). However, considerable uncertainty remains regarding the specifics of this process, a gap that organoids could help to fill. Respiratory diseases are a prevalent international health issue, with chronic obstructive pulmonary disease (COPD) currently the third leading cause of death worldwide. Organoids can be used to develop treatments for respiratory diseases such as COPD, asthma, and emphysema by modeling organ-level characteristics of the conditions. For example, organoids can be used to model goblet cell hyperplasia, inflammation, alveolar destruction, and other symptoms of diseases that involve interactions between multiple types of epithelial cells in the respiratory system.
Cell models developed from induced pluripotent stem cells of patients with diseases such as cystic fibrosis and chronic asthma provide an effective way of testing disease progression, treatment, and drug efficacy (Barkauskas et al., 2017). Although animal models for these diseases exist, differences in physiology may make these studies less clinically relevant. For example, neutrophils constitute 50-70% of circulating leukocytes in humans, and increased numbers are associated with COPD and asthma. In mice, neutrophils represent only 10-25% of leukocytes, so mouse models of COPD may not accurately reflect the disease in humans (Benam et al., 2016).
Endocrine
Organoids may also be used to mimic endocrine systems by replicating hormone production. For instance, a possible use of endocrine organoids is in beta cell replacement therapy for patients with diabetes. In patients with type 2 diabetes mellitus, the beta cells that make insulin fail to function due to overuse. One line of research out of the Hubrecht Institute in the Netherlands investigates the efficacy of islet transplantation. Currently, there are not enough donors to form a large enough sample size for a defensible conclusion. However, if organoids could supply new beta cells with properties that would allow them to restore insulin production, this could be an effective solution. Another challenge is that these cells must be grown within the 3D environment of tissue in order to develop the correct cell orientation (Loomans et al., 2018). However, scientists are developing strategies to circumvent these obstacles. In the same study, the 3D culture system yielded cells with fundamental beta cell characteristics, including high aldehyde dehydrogenase activity and pancreatic progenitor markers (markers associated with the renewable characteristics of the tissue). Many of these signals are also expressed in human fetal tissues, a popular stem cell material for biomedical analysis because their versatility makes them easier to study than adult stem cells. Stem cells may be the key to understanding organic cell characteristics because they replicate the qualities of a specific tissue type (Loomans et al., 2018).
"Organoids may also be used to mimic endocrine systems by replicating hormone production. For instance, a possible use of endocrine organoids is in beta cell replacement therapy for patients with diabetes."
Pancreatic ductal adenocarcinoma (PDAC) – a deadly, aggressive cancer of the pancreas – is another instance in which organoids can be useful. Organoid systems can reverse engineer the cell signaling pathways of development, and this reverse engineering is one of their most prominent functions. The usual treatment for PDAC requires chemotherapy in some combination with other therapeutic strategies, and a challenge for early detection is that scientists do not know all of the cancer-specific biomarkers. In fact, biomarkers can often differ between types of cancer. Diagnosis through genetic sequencing is also limited in cancers like PDAC because not all PDAC patients have identifiable mutations. Applying organoids in this context draws on genetically engineered mouse models, monolayer cell lines, conditionally reprogrammed cells, patient-derived
xenografts, and 3D ex vivo culture systems. Essentially, these different technologies can be implemented to grow the cells to maturity with full function, controlled by the researchers throughout experiments (Tiriac et al., 2019). Organoids create an opportunity to study both the healthy, non-transformed tissue of sick individuals and the development of the cancerous tissue. Once both the cancer and the healthy cells are developed as organoids, they can be compared using transcriptome analysis. Organoids create a way to reverse engineer cancer: for instance, one study found that in a particular PDAC cancer, IL-1 and TGF-β were major signals for cancer proliferation. Identifying these pro-proliferative factors in individualized organoids may give clinicians better tools to diagnose and treat cancers in the future (Tiriac et al., 2019).
"When it comes to generating tissue for transplantation, organoids have a few vital advantages over traditional tissue donations. Obtaining tissue from organoid cultures instead of harvesting tissue from a donor removes the risk of donor complications that may result from the harvesting procedure."
The pancreas has epithelial clusters called islets of Langerhans. These clusters are vital to the secretion of several hormones, including glucagon, insulin, somatostatin, and pancreatic polypeptide. Beta cells in the islets of Langerhans develop from stem cells. However, during development, beta cells are often co-cultured with other cells. For example, mesenchymal stem cells are cultured with stromal cells in order to increase insulin production. These cells were transplanted into an organoid-on-a-chip to test their effect in curbing hypoglycemia. In this study, the generated organoids had a high sensitivity to glucose treatment and a high calcium flow. Media was circulated through the cells in order to expose the organoids to fluid stress. The derived organoids had the phenotypic complexity of human islets and expressed markers of mature pancreatic beta cells (Dayem et al., 2019). Researchers also use organoid-on-a-chip technology with mammary cells to closely study the proliferation of tissue that cannot easily be studied in vivo. In one study, a growth signaling pathway was used to trace mammary stem cells via Lgr5, the R-spondin receptor in the Wnt (Wingless-related integration site) pathway, which regulates cell growth during embryonic development. Mammary organoids were grown from single Lgr5+ mammary cells. Organoids grown from a single Lgr5+ cell developed into four cells that eventually developed into viable tissue, as established by a successful response to estrogen (Zhang et al., 2017).
Adrenal organoids have also been investigated in endocrine applications. Over time, and especially during early life, the adrenal gland is remodeled. Scientists took advantage of the plasticity of young adrenal tissue to observe how the adrenal gland remodels during fetal development. By staging the model at different gestational stages, the scientists were able to observe developmental pathways, such as NOTCH1 and adrenal steroidogenic factor, that are also seen in other developing tissues (Park et al., 2019).
Industry Analysis and Economics of Organoid Development
As previously mentioned, organoids could be key to the future of regenerative medicine due to their potential as unlimited sources of tissue, though studies on transplantable organoids have primarily focused on animal populations due to transplantation safety concerns (Park et al., 2019). Despite the lack of clinical experiments to date, researchers like Rossi et al. (2018) remain optimistic that organoids might supply humans with both transplantable tissue and cells for cell therapy (a process in which specific stem cell types are used to repair damaged tissue). Demonstrations of tissue generation in animal models have been promising so far. In one experiment, mouse embryonic stem cells were used with an organoid protocol to address retinal degeneration in mice via transplantation. Mature photoreceptors developed from the transplanted tissue, and sensitivity to light was recovered in some of the mouse models (Rossi et al., 2018). Intestinal organoids (derived from colon stem cells) have also been successfully transplanted into the mouse colon epithelium, revealing a possible future treatment for inflammatory bowel diseases in humans (Sasai, 2013). When it comes to generating tissue for transplantation, organoids have a few vital advantages over traditional tissue donations. Obtaining tissue from organoid cultures instead of harvesting tissue from a donor removes the risk of donor complications that may result from the harvesting procedure. Additionally, stem cells in organoid cultures can be primed ex vivo using synthetic matrices with particular mechanical/biophysical properties, which can improve their regenerative capacity prior to transplantation. For instance, hydrogel substrates mimicking the stiffness of muscle were found to promote self-renewal in muscle stem cells, increasing their regeneration following transplantation. Overall, studies
Figure 3: The FDA process for drug development is long, arduous, and complex. Drugs can easily take over a decade from initial conception to final FDA approval, so any method that saves time for pharmaceutical companies receives attention. Organoids-on-a-Chip are one such technology coming to the foreground of discussion in Washington, as their use can decrease the timeline of drug development and, subsequently, its cost.
suggest that mechanical cues can promote organogenesis, meaning that in vitro treatments of organoid cultures might improve their in vivo capacity for organ replacement/repair (Vining & Mooney, 2017). Organoid cultures might also be used with in vitro genetic correction strategies to replace tissues impacted by an individual's genetic disorder (Rossi et al., 2018). In short, organoids have vast applications in the field of regenerative medicine, but current studies have yet to prove that organoid-derived transplantations are safe for widespread use in humans. Future research will need to transition from animal to clinical models.
Limitations
Organoids-on-a-chip, despite their large expected benefit, still have many hurdles to overcome. As of now, organoids-on-a-chip face regulations for commercial use, and this new form of technology must go through a complex validation process in order to be widely accepted. Validation requires the submission of reliable data showing that microphysiological organ systems produce results similar to those of long-standing approaches (Livingston et al., 2016). Not only is there a long process required to test validity, but rules and regulations must be set in place for patient consent, donor permission, and licensing if the technology is to be used as a clinical tool. Aside from the need for more testing, consultations with institutional and corporate legal representatives are necessary to protect patient confidentiality and intellectual property, including licensing and revenue sharing (Livingston et al., 2016). This is not a trivial matter, as these legal plans will set a precedent for the new technology, especially considering that both the scientific and engineering communities will be involved.
In fact, the National Center for Advancing Translational Science has created a Tissue Chip Program to regulate the exchange of patient information and experimental data between the several communities involved (Livingston et al., 2016). These regulations should also be flexible so that tissue chip technology can be modified to include additional cell types as they become available. As tissue chip teams continue to improve their platforms, there must be additional, ongoing feedback from the FDA and industry partners to researchers (Low et al., 2020). These partnerships are a necessary aspect of tissue chip technology. In the context of commercialization, there must be consistent input from the academic community of scientists and engineers, the biotechnology sector, and the pharmaceutical industry going forward. For example, multiple organizations and private companies hold thousands of cell samples from donors. If these cells and their associated data were made readily available to the tissue chip community, it would further advance the development of microphysiological organ systems and possibly address the lack of primary cells from donors (Low et al., 2020). As of now, organoid technology must account for the several communities involved and regulate communication between them.
Source: Wikimedia Commons
"Organoids-on-a-chip, despite their large expected benefit, still have many hurdles to overcome. As of now, organoids-on-achip face regulations for commercial use, and this new form of technology must go through a complex validation process in order to be widely accepted."
Communication between different groups surrounding organoids-on-a-chip is especially important when considering the necessity for FDA approval; having human organ chips approved by the FDA for diagnostic or treatment use will be a complex process. If there is a more widespread use of tissue technology, there will likely be more FDA applications and, in turn, more familiarity with the process of tissue chip technology. There are several other partnerships that are necessary going forward with microphysiological organ systems
aside from FDA input, such as identifying and documenting biomarkers for improved R&D in the pharmaceutical industry (Livingston et al., 2016). This communication is also necessary when thinking internationally; the European Union, for example, has distinct requirements related to animal testing. Aside from facilitating information exchange between the private and public sectors of the tissue chip community, there are a few additional improvements that can be built into organoids-on-a-chip when dealing with the integration of organ platforms. For example, the microphysiological organ systems program has partnered with ThermoFisher Inc. to tackle one problem: finding a universal medium for multiple cell types (Low et al., 2020). Additionally, microphysiological organ systems have run into problems when scaling tissue, as improper scaling can lead to faulty interactions between cells. Other biological limitations that must be vetted for the utility of tissue chips in toxicity screening applications include iPSC cell sourcing, vascularization of tissues, and inclusion of immune components (Low et al., 2020). In addition, there are still challenges involved in providing a uniform microfluidic platform design that can accommodate each diverse organ system. When connecting multiple organ systems, engineers must devise a solution to the varying flow rates in each organ system. Moreover, when connecting these organ systems, bubbles may still form in the fluids, ruining sterility and disrupting cell culture. Lastly, many researchers are investigating the plastic used in organs-on-a-chip, polydimethylsiloxane (PDMS), which is highly lipophilic and binds many drugs; this could be problematic when taking drug concentrations into consideration (Low et al., 2020). These engineering problems are being tackled with appropriate funding allocated to tissue chips.
Many smaller private-sector entities would be willing to contribute capital to move tissue chips further through SBIR/STTR funding or angel investors (Livingston et al., 2016). During this time, organoid technology must develop a marketing strategy to attract investors: public investors will be attracted to data demonstrating a developing organ system platform, while private-sector partners will look for increased product visibility (Livingston et al., 2016). Tissue chip technology is a powerful tool, but it must first undergo a complex, long-term validation
process to be widely accepted for commercial use. Additionally, there are several partnerships that need to be mediated between public and private sectors of science that can significantly assist the development of technology through the sharing of financial and intellectual capital. However, organizations like the National Center for Advancing Translational Science are set to address these problems. After completing a long-term commitment to validate data results, establishing patient privacy and consent, mediating commercial and educational use, and distributing the technology to industry and academic researchers, the future of organ chip technology seems promising. References Achberger, K., Probst, C., Haderspeck, J., Bolz, S., Rogal, J., Chuchuy, J., Nikolova, M., Cora, V., Antkowiak, L., Haq, W., Shen, N., Schenke-Layland, K., Ueffing, M., Liebau, S., & Loskill, P. (2019). Merging organoid and organ-on-a-chip technology to generate complex multi-layer tissue models in a human retina-on-a-chip platform. ELife, 8, e46188. https://doi. org/10.7554/eLife.46188 Aziz, A. U. R., Geng, C., Fu, M., Yu, X., Qin, K., & Liu, B. (2017). The Role of Microfluidics for Organ on Chip Simulations. Bioengineering, 4(2), 39. https://doi.org/10.3390/ bioengineering4020039 Barkauskas, C. E., Chung, M.-I., Fioret, B., Gao, X., Katsura, H., & Hogan, B. L. M. (2017). Lung organoids: Current uses and future promise. Development (Cambridge, England), 144(6), 986–997. https://doi.org/10.1242/dev.140103 Benam, K. H., Villenave, R., Lucchesi, C., Varone, A., Hubeau, C., Lee, H.-H., Alves, S. E., Salmon, M., Ferrante, T. C., Weaver, J. C., Bahinski, A., Hamilton, G. A., & Ingber, D. E. (2016). Small airway-on-a-chip enables analysis of human lung inflammation and drug responses in vitro. Nature Methods, 13(2), 151–157. https://doi.org/10.1038/nmeth.3697 Cho, K. W., Lee, W. H., Kim, B.-S., & Kim, D.-H. (2020). Sensors in heart-on-a-chip: A review on recent progress. Talanta, 219, 121269. 
Selected Review of Topics in Primary Care Affected by COVID-19 JENNY CHEN, ARSHDEEP DHANOA, ERIKA HERNANDEZ, BASILE MONTAGNESE, & CHRISTOPHER CONNORS
Cover: Patient on video call with physician Source: Wikipedia, Ceibos
Introduction
The COVID-19 pandemic has created (and revealed) several significant underlying issues in primary care. Primary care is a wide field concerning health promotion, disease prevention, health maintenance, counseling, diagnosis of chronic illnesses, and more. Primary care physicians, or ‘PCPs,’ are typically the first point of contact for patients: the clinicians patients see for routine check-ups and undiagnosed medical issues. As such, PCPs hold a powerful position in shaping a patient’s perception of medicine, of their condition, and of how the patient chooses to undergo necessary treatments. This review will outline problems in primary care that have been exposed by the onset and perpetuation of COVID-19 and provide insight into possible solutions that primary care systems can implement to mitigate such issues following the resolution of the pandemic.
Beginning with an overview of the rapid onset of telemedicine, we will outline how PCPs can leverage this technology moving forward to resolve underlying issues of healthcare accessibility. We will then investigate the limited role that PCPs have played in COVID-19 vaccine distribution, proposing solutions centered on local vaccine coordination centers. Afterwards, we will discuss how the pandemic has exacerbated food insecurity and nutritional deficiencies in some communities, and how PCPs can combat these problems by contributing to public health policies. Lastly, we will propose ways in which PCPs can work to alleviate the increase in mental health concerns brought on by the isolation and uncertainty of the pandemic.
Telemedicine in the Future
Telehealth, broadly defined as practicing medicine over a distance with technology, has seen a significant increase during the COVID-19
pandemic in providing distanced medical services in primary care. Telehealth offers several benefits for healthcare accessibility and is a powerful mitigating factor in the case of future limits on in-person consultations, so it is crucial that the healthcare system build the infrastructure to implement telehealth rapidly. To integrate telehealth into the current healthcare system successfully, changes in payment, privacy, and licensing must be critically analyzed so that telehealth can be accommodated in a sustainable manner (Schachar et al., 2020). Furthermore, incorporating the lessons of this pandemic regarding these proposed changes, the geographic distribution of services, internet accessibility, and physician training in telemedicine is essential to revealing which aspects of telemedicine work and which require further optimization. One necessary step in the right direction is a focus on integrating telehealth training into primary care education, providing patients with resources to easily perform at-home diagnostics, and developing protocols for the specific patients and situations in which telehealth works best. In the months since telehealth’s rapid adoption and expansion during the initial phases of the pandemic, new questions have arisen about its benefits and drawbacks and what its future should look like. Among the clearest benefits of telemedicine are its remote accessibility and the comfort of receiving professional care at home, but an inevitable drawback has been a reduction in the quality of care. One study based in Israel explored PCPs’ opinions on practicing telehealth and its prospective longevity post-pandemic. The study found that after the initial lockdowns ended, phone visits and most other telehealth practices returned to pre-pandemic levels, with the exception of telemedicine via photos and video clips, which remained in frequent use.
Moreover, physicians preferred traditional in-person primary care over telehealth, partly because they were trained in, and have long practiced, in-person care (Grossman et al., 2020). Because this preference was driven by what PCPs were trained to do, the perceived reduction in care could be a result of inadequate training rather than an inadequate method. In a similar vein, research has shown that PCPs have responded positively to the incorporation of telehealth education into their training, indicating that telehealth
could become a mainstay of medical practice if taught in physician education and improved to physicians’ standards (Fleming et al., 2009). In addition to investigations into the impact of telehealth from a PCP perspective, significant efforts have been made to understand telehealth from a patient’s perspective. For instance, a recent study found that having patients perform simple tasks such as monitoring their own blood pressure (BP) levels and using other simple hypertension diagnostic tools as part of telehealth was well-received by the subjects, who had an average overall contentment rating of 4.81/5.00. The participants also reported higher feelings of satisfaction, control, and support from screening themselves. In addition, many felt that they learned a great deal about their conditions by using the diagnostic tools themselves (Cottrell et al., 2012). This research supports the notion that providing patients with simple self-reporting diagnostic tools in telehealth can be a viable way to raise patient morale and monitor patient health in the event of a future pandemic. In another study, patients also responded favorably to telehealth visits because of their convenience and lower cost, highlighting another positive aspect of telemedicine (Powell et al., 2017). Moving forward, given these perceived positive effects on patient morale and patients’ preference for telehealth on cost and convenience grounds, it is important to investigate which circumstances allow video visits to be used in place of in-person visits. Results from such an investigation could be crucial not only in the wake of a future pandemic, for distinguishing which medical scenarios can be resolved via telemedicine, but also in ordinary circumstances as a means of expanding healthcare accessibility.
"A recent study found that having patients perform simple tasks such as monitoring their own blood pressure (BP) levels and using other simple hypertension diagnostic tools as part of telehealth was well-received by the subjects, who had an average overall contentment rating of 4.81/5.00. The participants also reported higher feelings of satisfaction, control, and support from screening themselves."
The Role of Primary Care Physicians in COVID-19 Vaccine Distribution
In late 2020 finally came the light at the end of the tunnel: the development of vaccines against COVID-19. Easily among the largest developments of the past few months, the COVID-19 vaccines were developed by pharmaceutical giants including Pfizer and BioNTech, Moderna, and later, Johnson and Johnson. Despite development and emergency authorization in late 2020, however, according to trade groups for small-practice doctors, most primary care offices still had not received doses of the vaccine by January 2021 (Khazan, 2021). This inefficiency reveals yet another PCP burden created by COVID-19: the administration and distribution of vaccines to patients and
Figure 1: Doctor holding a vial of the Pfizer-BioNTech COVID-19 vaccine. Source: Wikimedia Commons
"According to a survey by the Larry A. Green Center and the Primary Care Collaborative, only 34% of facilities in the U.S. have enough staff to administer the vaccine, and 1 in 5 facilities lack the ability to pay for the vaccine or its storage."
providers. As of February 2021, most COVID-19 vaccines were available only to frontline workers, teachers, and people over 65, and were distributed by a system operating on a first-come, first-served basis. Although physicians administered 46% of all vaccines, the largest share among the provider groups involved, many were against this system. Physicians argued that such a system perpetuates inequalities, as wealthier patients typically can sign up before those who may lack access to resources such as computers, the internet, and basic technological literacy (Westfall et al., 2021). Individuals living near health facilities are also at an advantage compared to those living in more rural areas (Emanuel et al., 2020). A solution could involve the broader use of primary care facilities, which may allow communities (including rural and remote ones) to access the COVID-19 vaccine in the coming months (Westfall et al., 2021). However, under the current administration’s policies, the involvement of primary care facilities in vaccine distribution is largely up to the states and thus varies widely across the country. As such, in the face of a rapidly evolving pandemic and decisions made at the policy level, the promise of primary care facilities as vaccination centers could be lost. For example, the Pennsylvania Department of Health recently announced that it would remove primary care physicians from the list of those permitted to administer the COVID-19 vaccine, a move that garnered protests from doctors’ groups like the Pennsylvania Academy of Family Physicians
(Laker, 2021). In countries such as Israel, where primary care plays a more central role in healthcare delivery, doctors have been more involved in the vaccination process. This may be why 30% of Israel’s population has received at least one dose of the vaccine, compared to 6% in the United States (Our World in Data, 2021). It is important to note, however, that such differences may also be attributable to differences in population size and in the number of vaccine doses each government has purchased. The difficulty does not lie only in distributing the vaccine. According to a survey by the Larry A. Green Center and the Primary Care Collaborative, only 34% of facilities in the U.S. have enough staff to administer the vaccine, and 1 in 5 facilities lack the ability to pay for the vaccine or its storage (Larry A. Green Center, 2021). The Pfizer-BioNTech vaccine requires storage at subzero temperatures, meaning that the freezers needed to keep it stable are generally available only in hospitals and labs. Unlike its competitors, however, the AstraZeneca COVID-19 vaccine can be stored at higher temperatures, which might reduce the barriers primary care facilities face in distribution. Regardless of any specific vaccine’s requirements, many primary care physicians are urging the development of local coordination centers to help distribute vaccine supplies and provide additional logistical support.
Figure 2: Woman with mask on looking out of the window Source: Wikimedia Commons
Addressing Nutritional Deficiencies and Strains on Food Systems Through a Primary Care Lens
The transition of the COVID-19 outbreak from epidemic to pandemic has strained public health systems and the way they distribute resources. In light of this, scientists writing in the European Journal of Clinical Nutrition stated that community resilience has emerged as the biggest line of defense against COVID-19 transmission, highlighting its role as arguably the most important human resource (Naja & Hamadeh, 2020). Among the largest issues that communities are faced with, beyond the consequences of contracting and spreading COVID-19, is food insecurity. As discussed in the same Clinical Nutrition article, lockdowns have greatly disrupted the flow of foods, whether on the national, regional, or even local scales. Additionally, isolating at home has dramatically shifted the lifestyles of people around the world: sedentary periods are likely to increase, physical exertion is likely to decrease, and dietary habits can become erratic, with frequent snacking. The same authors attribute a fraction of these increasingly erratic eating patterns to the anxieties that people may face, including stresses brought on by economic destabilization, rising unemployment, and the loss of loved ones (Naja & Hamadeh, 2020). Refining primary care to correct these nutritional deficits is therefore of particular salience.
Applying their holistic knowledge of the body and robust understanding of nutrition, primary care providers can be on the front line in addressing the acute and chronic nutritional needs of community members. In the future, PCPs can work with public health agencies and governments to advocate for food-insecurity legislation that prioritizes the foods most worthwhile to patients, and they can help tailor the contents of government food aid kits (Ong et al., 2020). Simply analyzing the food relief packages of countries like the Philippines reveals the limited micro- and macronutrients available, with insignificant levels of fiber, minerals, and vitamins; better cooperation between governments and PCPs could alleviate this. Hence, primary care providers could best prepare for the next pandemic by seeking larger roles in the public sphere and critiquing public health policies to keep nutritional food aid accessible despite community-wide lockdowns.
"Among the largest issues that communities are faced with, beyond the consequences of contracting and spreading COVID-19, is food insecurity. As discussed in the same Clinical Nutrition article, lockdowns have greatly disrupted the flow of foods, whether on the national, regional, or even local scales."
The Underlying Effects of the COVID-19 Pandemic and Isolation Period on Mental Health
In addition to causing the aforementioned strains on food systems, the COVID-19 pandemic has also introduced the risk of a “second pandemic” of mental health crises in health systems and communities (Choi et al., 2020). This “second pandemic” stems from the significant psychological distress of hospitalization for
patients, staff, and family members directly affected by COVID-19, along with social distancing measures that have increased the risks of loneliness, isolation, and anxiety in the general population (Choi et al., 2020). Four groups are especially at risk for mental health and psychosocial consequences of the COVID-19 pandemic: 1) those who have been directly or indirectly in contact with the virus; 2) those already vulnerable to biological or psychosocial stressors, including individuals with mental health issues; 3) health professionals (due to higher levels of exposure); and 4) even individuals following the news through media channels (Fiorillo & Gorwood, 2020). Primary care providers can help address these issues by taking a greater interest in mental health and by intervening with struggling patients, providing resources and referrals to specialists. Scholars and healthcare workers note that a comprehensive public health response to the pandemic should include 1) awareness of the psychological effects of hospitalization on patients, families, and staff affected by COVID-19; 2) planning for emergency and acute psychiatric patient care if hospitals become swamped with COVID-19 patients; and 3) innovative ways to provide mental health care in communities that require social distancing and have strained health system resources (Choi et al., 2020). These public health efforts can be better supported by primary care physicians through primary care psychiatry and referrals to telepsychiatry, which will limit the burden on mental health specialists (Rohilla et al., 2020). Scholars also recommend that health care practitioners, including primary care physicians, treat the pandemic as an opportunity to highlight how physical and mental health are interrelated and to avoid neglecting mental health, both in and out of a state of medical emergency like the COVID-19 pandemic (Grover et al., 2020).
They further recommend that all COVID-19 hospitals and units have WiFi facilities so that necessary psychological screenings and treatment can be carried out for patients with COVID-19 infections (Grover et al., 2020). Overall, it is expected that further expansion of health insurance coverage for mental health, following the beneficial effects of the Affordable Care Act (Baumgartner et al., 2020), combined with promoting primary care support and better integrating it with specialist care, will help sustain mental health care in the years following the pandemic (Moreno et al., 2020). Thus, primary care providers can play an important role in mitigating the mental health crises resulting from the pandemic.
Conclusion
The COVID-19 pandemic has illuminated both the strengths and weaknesses of the U.S. healthcare system, demonstrating how further incorporation of telemedicine will help PCPs adapt more quickly and effectively in the case of a future pandemic. Telemedicine is not without its drawbacks, however, and will require further optimization if it is to be continued post-pandemic. PCPs also have the potential to step up in future vaccine rollouts if allocated more resources and the infrastructure to do so, such as local coordination centers. Additionally, PCPs can play a crucial role in addressing some of the issues exacerbated by COVID-19, such as nutrition and mental health. Since primary care providers are often the first point of contact for patients, they have an important role to play in the current pandemic and will hopefully contribute to a more robust response to the next one.

References
Baumgartner, J. C., Aboulafia, G. N., & McIntosh, A. (2020, April 3). The ACA at 10: How has it impacted mental health care? Commonwealth Fund. https://www.commonwealthfund.org/blog/2020/aca-10-how-has-it-impacted-mental-healthcare
Choi, K. R., Heilemann, M. V., Fauer, A., & Mead, M. (2020). A second pandemic: Mental health spillover from the novel coronavirus (COVID-19). Journal of the American Psychiatric Nurses Association, 26(4), 340–343.
Cottrell, E., McMillan, K., & Chambers, R. (2012). A cross-sectional survey and service evaluation of simple telehealth in primary care: What do patients think? BMJ Open, 6, e001392. https://doi.org/10.1136/bmjopen-2012-001392
Emanuel, E. J., Persad, G., Upshur, R., Thome, B., Parker, M., Glickman, A., Zhang, C., Boyle, C., Smith, M., & Phillips, J. P. (2020). Fair allocation of scarce medical resources in the time of Covid-19. New England Journal of Medicine, 21, 2049–2055. https://doi.org/10.1056/nejmsb2005114
Fiorillo, A., & Gorwood, P. (2020).
The consequences of the COVID-19 pandemic on mental health and implications for clinical practice. European Psychiatry, 63(1).
Fleming, D. A., Riley, S. L., Boren, S., Hoffman, K. G., Edison, K. E., & Brooks, C. S. (2009). Incorporating telehealth into primary care resident outpatient training. Telemedicine and e-Health, 3, 277–282. https://doi.org/10.1089/tmj.2008.0113
Grossman, Z. (2020). The future of telemedicine visits after COVID-19: Perceptions of primary care pediatricians. Israel Journal of Health Policy Research. https://doi.org/10.1186/s13584-020-00414-0
Grover, S., Dua, D., Sahoo, S., Mehra, A., Nehra, R., & Chakrabarti, S. (2020). Why all COVID-19 hospitals should have mental health professionals: The importance of mental health in a worldwide crisis! Asian Journal of Psychiatry, 51, 102147.
Khazan, O. (2021, January 27). Why is vaccination going so slowly? The Atlantic. https://www.theatlantic.com/politics/archive/2021/01/why-vaccination-slow-primary-care-doctors/617823/
Laker, B. (2021, February 14). Pennsylvania is limiting who administers the COVID-19 vaccine. Doctors’ groups say that’s a mistake. The Philadelphia Inquirer. https://www.inquirer.com/news/covid-vaccine-pennsylvania-primary-care-physicians-20210214.html
Moreno, C., Wykes, T., Galderisi, S., Nordentoft, M., Crossley, N., Jones, N., ... & Arango, C. (2020). How mental health care should change as a consequence of the COVID-19 pandemic. The Lancet Psychiatry.
Naja, F., & Hamadeh, R. (2020). Nutrition amid the COVID-19 pandemic: A multi-level framework for action. European Journal of Clinical Nutrition, 74, 1117–1121. https://doi.org/10.1038/s41430-020-0634-3
Ong, M., et al. (2020). Addressing the COVID-19 nutrition crisis in vulnerable communities: Applying a primary care perspective. Journal of Primary Care and Community Health, 11, 1–4. https://doi.org/10.1177/2150132720946951
Our World in Data. (2021). Coronavirus (COVID-19) vaccinations.
Powell, R. E., Henstenburg, J. M., Cooper, G., Hollander, J. E., & Rising, K. L. (2017). Patient perceptions of telehealth primary care video visits. The Annals of Family Medicine, 3, 225–229. https://doi.org/10.1370/afm.2095
Rohilla, J., Tak, P., Jhanwar, S., & Hasan, S. (2020). Primary care physician’s approach for mental health impact of COVID-19. Journal of Family Medicine and Primary Care, 9(7), 3189.
Schachar, C., Engel, J., & Elwyn, G. (2020). Implications for telehealth in a post-pandemic future. JAMA. https://doi.org/10.1001/jama.2020.7943
The Larry A. Green Center. (2021). Quick COVID-19 Primary Care Survey [Data set].
Westfall, J., Wilkinson, E., Jetty, A., Petterson, S., & Jabbarpour, Y. (2021).
Primary Care’s Historic Role in Vaccination and Potential Role in COVID-19 Immunization Programs. My University. https://doi.org/10.7302/11
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE Hinman Box 6225 Dartmouth College Hanover, NH 03755 USA http://dujs.dartmouth.edu dujs@dartmouth.edu