DUJS
Dartmouth Undergraduate Journal of Science
SUMMER 2020 | VOL. XXII | NO. 3

EMERGING: FINDING A NEW NORMAL

Hunting for Planet Nine by its Influence on Neptune, p. 8
Tripping on a Psychedelic Revolution: A Historical and Scientific Overview, p. 144
The Psychology of the Pandemic, p. 314
Note from the Editorial Board

Journalism is a powerful endeavor, especially when practiced alongside science. It requires patience in data analysis and interpretation, dedication in learning the core scientific principles at the heart of the discipline, and spirit in the subject matter as a whole. With the investment of time and energy that these three ideals demand, writers are able to do justice to their subject while also making it accessible to an educated, albeit not specialized, audience.

This term, the Dartmouth Undergraduate Journal of Science saw another record number of submissions, exceeding expectations not only in quantity but also in breadth and diversity. There was a continuation of the traditional individual articles, in which students wrote about topics of their choice, completing a thorough literature review followed by multiple weeks of drafting. Additionally, we had multiple group print articles – doubling in number from the pilot term in the spring – in which students voted on contemporary science topics and tackled the literature review and drafting process together, led by a board member. The experience of learning how to read publications, understanding the scientific principles that dictate the subject, and learning how to translate both into a cohesive piece accessible to all remained a central focus of this journal.

Among the topics published in this edition of our journal are a timely investigation into the current state and future models of telemedicine by Chris Connors ’21, two original mathematical proofs by Evan Craft ’20 featuring derivations of Einstein’s and Schrödinger’s equations, as well as a look into cellular autophagy and its implication in the onset of cancerous tissue by Zoe Chen ’23. With this term being many writers’ second, third, or even fourth consecutive term writing for the journal, an emphasis was also placed on continuing lines of inquiry across multiple terms in order to better understand a general subject through a variety of different lenses. For instance, Dev Kapadia ’23 took a look into contract research organizations (CROs), which are responsible for nearly all clinical trials run in the United States – an extension of the economics of drug development article that he contributed to the term before.
This journal also features a whopping eleven group articles – one of which is an independent environmental analysis of trash accumulation as a result of currents and wind patterns by a group of junior and senior Dartmouth students: Ben Schelling ’21, Maxwell Bond ’20, Sarah Jennewein ’21, and Shannon Sartain ’21. Joining this article are an investigation of the genetic contribution to political affiliation, a critique of the relationship between capitalism and climate change, a look into the rapidly evolving genetic therapy landscape, and a powerful piece describing the use of computer algorithms and screening techniques to assist drug discovery and design, among others.

We as a board have been fortunate enough to see the growth of individual writers over their months of involvement at DUJS and are excited to witness their further growth as scientific journalists. Focus this term was placed on coherence of writing – ensuring a stepwise progression from background to a final explanation of the subject matter in a contemporary context, inclusive of recent scientific findings – in an effort to ensure that we as a journal act as advocates for our readers. This meant providing sufficient context for the topic, being up front about the implications of the findings, and then presenting those findings in a format that followed a logical line of reasoning. The summer allowed for additional time to develop this skillset among our ranks of writers, and its application to both group and individual articles allowed for powerful pieces to be produced across the board.

Writers and editors alike put a tremendous amount of effort into developing these articles and conveying essential STEM concepts and findings. We all sincerely hope that you enjoy reading our works as much as we enjoyed researching, collaborating, and editing.

Sincerely,
Nishi Jain
Editor-in-Chief

The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community and beyond by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EXECUTIVE BOARD
President: Sam Neff '21
Editor-in-Chief: Nishi Jain '21
Chief Copy Editors: Anna Brinks '21, Liam Locke '21, Megan Zhou '21

EDITORIAL BOARD
Managing Editors: Anahita Kodali '23, Dev Kapadia '23, Dina Rabadi '22, Kristal Wong '22, Maddie Brown '22
Assistant Editors: Aditi Gupta '23, Alex Gavitt '23, Daniel Cho '22, Eric Youth '23, Sophia Koval '21

STAFF WRITERS

Dartmouth Students:
Alex Gavitt '23, Anahita Kodali '23, Andrew Sasser '23, Anna Kolln '22, Anna Lehmann '23, Audrey Herrald '23, Ben Schelling '21, Bryn Williams '23, Carolina Guerrero '23, Chris Connors '21, Daniel Abate '23, Daniel Cho '22, Dev Kapadia '23, Dina Rabadi '22, Emily Zhang '23, Eva Legge '22, Evan Craft '20, George Shan '23, Gil Assi '22, Grace Lu '23, James Bell '21, Jenny Song '23, Jess Chen '21, Julia Robitaille '23, Kamila Zakowicz '22, Kamren Khan '22, Kay Vuong '22, Leandro Giglio '23, Maddie Brown '22, Maxwell Bond '20, Michael Moyo '22, Nephi Seo '23, Nina Klee '23, Roberto Rodriguez '23, Sam Hedley '23, Sarah Jennewein '21, Shannon Sartain '21, Sophia Arana '22, Sophia Koval '21, Sudharsan Balasubramani '22, Tara Pillai '23, Zoe Chen '23

Hampton High School (PA): Manya Kodali
Monta Vista High School (CA): Arushi Agastwar, Avishi Agastwar
Sage Hill School (CA): Michele Zheng
Waukee High School (WI): Sai Rayasam
University of Delaware (DE): Arya Faghri
University of Wisconsin-Madison (WI): Timmy Davenport
University of Lincoln (Lincoln, England, UK): Bethany Clarkson

SPECIAL THANKS
Dean of Faculty
Associate Dean of Sciences
Thayer School of Engineering
Office of the Provost
Office of the President
Undergraduate Admissions
R.C. Brayshaw & Company

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs.dartmouth.science@gmail.com
Copyright © 2020 The Trustees of Dartmouth College
Table of Contents

Individual Articles

Hunting for Planet Nine by its Influence on Neptune Alex Gavitt '23, pg. 8
Racial Bias Against Black Americans in the American Healthcare System Anahita Kodali '23, pg. 18
Mechanochemistry – A Powerful and “Green” Tool for Synthesis Andrew Sasser '23, pg. 24
Fast Fashion and the Challenge of Textile Recycling Arushi Agastwar, Monta Vista High School Senior, pg. 30
The Cellular Adhesion and Cellular Replication of SARS-CoV-2 Arya Faghri, University of Delaware, pg. 36
Gasotransmitters: New Frontiers in Neuroscience Audrey Herrald '23, pg. 44
The Science of Anti-Aging Avishi Agastwar, Monta Vista High School Senior, pg. 52
Algal Blooms and Phosphorus Loading in Lake Erie: Past, Present, and Future Ben Schelling '21, pg. 58
Differences in Microbial Flora Found on Male and Female Clusia sp. Flowers Bethany Clarkson, University of Lincoln (UK) Graduate, pg. 70
The Facial Expressions of Mice Can Teach Us About Mental Illness Bryn Williams '23, pg. 78
How Telemedicine Could Revolutionize Primary Care Chris Connors '21, pg. 84
Evidence Suggesting the Possibility of Regression and Reversal of Liver Cirrhosis Daniel Abate '23, pg. 90
CR-grOw: The Rise and Future of Contract Research Organizations Dev Kapadia '23, pg. 96
Preventative Medicine: The Key to Stopping Cancer in its Tracks Dina Rabadi '22, pg. 104
Challenges and Opportunities in Providing Palliative Care to COVID-19 Patients Emily Zhang '23, pg. 112
The Botanical Mind: How Plant Intelligence ‘Changes Everything’ Eva Legge '22, pg. 118
On the Structure of Field Theories I Evan Craft '20, pg. 128
On the Structure of Field Theories II Evan Craft '20, pg. 136
The Modernization of Anesthetics Gil Assi '22, pg. 138
Tripping on a Psychedelic Revolution: A Historical and Scientific Overview with Dr. Rick Strassman and Ken Babbs Julia Robitaille '23, pg. 144
The Functions and Relevance of Music in the Medical Setting Kamren Khan '23 and Yvon Bryan, pg. 152
Meta-analysis Regarding the Use of External-Beam Radiation Therapy as a Treatment for Thyroid Cancer Manya Kodali, Hampton High School, and Dr. Vivek Verma, pg. 158
The Role of Epigenetics in Tumorigenesis Michele Zheng, Sage Hill School Senior, pg. 164
Selective Autophagy and Its Potential to Treat Neurodegenerative Diseases Sam Hedley '23, pg. 172
The Role of Autophagy and Its Effect on Oncogenesis Zoe Chen '23, pg. 180
COVID-19 Response in Vietnam Kamila Zakowicz '22 and Kay Vuong '22, pg. 188

Group Articles

The Role of Ocean Currents and Local Wind Patterns in Determining Onshore Trash Accumulation on Little Cayman Island Ben Schelling '21, Maxwell Bond '20, Sarah Jennewein '21, Shannon Sartain '21, pg. 200
Astrobiology: The Origins of Life in the Universe Staff Writers: Sudharsan Balasubramani '22, Andrew Sasser '23, Sai Rayasam (Waukee High School Junior), Timmy Davenport (University of Wisconsin Junior), Avishi Agastwar (Monta Vista High School Senior); Board Writer: Liam Locke '21, pg. 206
Capitalism and Conservation: A Critical Analysis of Eco-Capitalist Strategies Staff Writers: Eva Legge '22, Jess Chen '21, Timmy Davenport (University of Wisconsin Junior), James Bell '21, Leandro Giglio '23; Board Writer: Anna Brinks '21, pg. 222
The Chemistry of Cosmetics Staff Writers: Anna Kolln '22, Anahita Kodali '23, Maddie Brown '22; Board Writer: Nishi Jain '21, pg. 242
Inoculation to Operation Warp Speed: The Evolution of Vaccines Staff Writers: Andrew Sasser '23, Anna Lehmann '23, Carolina Guerrero '23, Michael Moyo '22, Sophia Arana '22, Sophia Koval '21, Sudharsan Balasubramani '22; Board Writer: Anna Brinks '21, pg. 254
The Genetic Engineering Revolution Staff Writers: Bryn Williams '23, Dev Kapadia '23, Sai Rayasam (Waukee High School Junior), Sudharsan Balasubramani '22; Board Writer: Sam Neff '21, pg. 274
An Investigation into the Field of Genopolitics Staff Writers: Grace Lu '23, Jenny Song '23, Zoe Chen '23, Nephi Seo '23; Board Writer: Nishi Jain '21, pg. 290
Mastering the Microbiome Staff Writers: Anahita Kodali '23, Audrey Herrald '23, Carolina Guerrero '23, Sophia Koval '21, Tara Pillai '23; Board Writer: Sam Neff '21, pg. 298
The Psychology of the Pandemic Staff Writers: Daniel Cho '22, Timmy Davenport (University of Wisconsin Junior), Nina Klee '23, Michele Zheng (Sage Hill School Senior), Roberto Rodriguez '23, Jenny Song '23; Board Writers: Sam Neff '21 and Megan Zhou '21, pg. 314
Rational Drug Design: Using Biology, Chemistry, and Physics to Develop New Drug Therapies Staff Writers: George Shan '23, Carolina Guerrero '23, Samantha Hedley '23, Anna Kolln '22, Dev Kapadia '23, Michael Moyo '22, Sophia Arana '22; Board Writer: Liam Locke '21, pg. 332
The Rise of Regenerative Medicine Staff Writers: Bryn Williams '23, Sudharsan Balasubramani '22, Daniel Cho '22, George Shan '23, Jenny Song '23, Arushi Agastwar (Monta Vista High School Senior); Board Writers: Nishi Jain '21 and Megan Zhou '21, pg. 356
Hunting for Planet Nine by its Influence on Neptune

BY ALEX GAVITT '23

Cover image: Artist’s impression of Planet Nine. The yellow ring around the Sun is Neptune’s orbit. Source: Wikimedia Commons/nagualdesign and Tomruen, CC-BY-SA 4.0
Abstract

As more and more Kuiper belt objects have been discovered, astronomers have begun to notice that many of their orbits are aligned contrary to the expected random distribution around the Sun. Furthermore, some have extremely long or highly inclined orbits that cannot be explained by the gravitational influence of known objects. As a result, some astronomers have suggested that there may be a ninth planet, orbiting far beyond Neptune, that is influencing the orbits of these Kuiper belt objects. This paper provides scientific background for the Planet Nine hypothesis and describes a calculation of the transit timing variation (TTV) it would introduce in Neptune’s orbit. Ultimately, it finds that the TTV would be too small and occur over too long a timescale to be useful in finding Planet Nine.

What is a Planet?

Wanderers

The word “planet” comes from the ancient Greek word for “wanderer.” This definition offers a profound insight into humanity’s original understanding of the planets: they clearly move through the sky independent of the slow and fixed movement of the other stars. Under the geocentric model of the solar system, astronomers counted seven planets that fit this definition: Mercury, Venus, Mars, Jupiter, and Saturn, as well as the Sun and the Moon. The advent of the heliocentric solar system, then, was the first nail in the coffin of this definition, for it stripped the wandering Sun and Moon of the title “planet,” while also adding the Earth to the planetary ranks, despite its apparent lack of motion (Brown, 2010, pp. 18–21). What sealed the fate of the “wanderer” definition was the discovery of Uranus in 1781 by the astronomer William Herschel. While he initially assumed this new object was a comet, its near-circular orbit and lack of a tail quickly prompted the realization that it was, in reality, a new planet. As Uranus is not generally visible with the naked eye, it was not one of the “wanderers” known to the ancient Greeks.

Soon after the discovery of Uranus, astronomers realized that all was not well with their models of the solar system. Predictions of Uranus’ orbit from Newton’s Law of Gravitation differed noticeably from its observed orbit, suggesting that there was another object nearby, massive enough to influence Uranus’ orbit. Calculations narrowed down this object’s likely position, and astronomers soon found the planet Neptune, whose gravitational influence resolved the errors in Uranus’ orbit (Krajnović, 2016; Brown, 2010, pp. 21–24).

The Rise and Fall of Planets

Once Uranus had established the precedent for new planets, astronomers began finding many more. The first of these was Ceres, located between Mars and Jupiter; the discovery of Ceres was soon followed by the discovery of the nearby objects Pallas, Juno, and Vesta. Further discoveries of objects in this region eventually resulted in those four losing their status as planets and being reclassified as something new: asteroids, part of a vast “asteroid belt” between Mars and Jupiter. Undaunted by this reclassification, some astronomers continued to search for new planets. In 1930, the astronomer Clyde Tombaugh found the object Pluto, orbiting out past Neptune, which became the solar system’s ninth and final planet. Though Pluto is a bit irregular compared to the other planets—its orbit is more elliptical (so much so that it crosses Neptune’s orbit) and inclined from the plane of the other planets’ orbits, and it is smaller and much less massive than even Mercury—it was generally accepted as a planet. By the 1990s, however, astronomers began discovering other objects orbiting in the same region as Pluto, a region that became known as the Kuiper belt. The discovery of these objects raised the specter that Pluto might fall from planethood, just as Ceres and its companions had (Brown, 2010, pp. 21–27).

Pluto held on to its planetary status for about a decade and a half after the first Kuiper belt objects were discovered. In 2005, however, a team of astronomers led by Caltech professor Michael Brown discovered an object that forced the issue: a Kuiper belt object, now known as Eris, estimated at slightly larger than Pluto. Astronomers were left with a choice: either accept Eris—and possibly many more—as a planet or reclassify Pluto as something else. Eventually, in 2006, the International Astronomical Union (IAU) adopted a formal definition of a planet, marking the first time that explicit requirements for planethood had been laid down. Under the IAU definition, a planet in the solar system must (IAU, 2006):

1. Be in orbit around the Sun;
2. Have sufficient mass for its self-gravity to overcome rigid body forces so that it assumes a hydrostatic equilibrium (nearly round) shape; and
3. Have cleared the neighborhood around its orbit.

Pluto, by virtue of the surrounding Kuiper belt objects, has not cleared its neighborhood, and so failed the new definition. A footnote in the IAU definition stated clearly that “the eight planets are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, and Neptune,” and the matter was considered settled.

Past the eight planets are the trans-Neptunian objects, or TNOs. Originally, these objects were split into two main groups: the Kuiper belt and the Oort cloud. Kuiper belt objects (KBOs) orbit within about 20 astronomical units (AU) of Neptune, while the Oort cloud lies upwards of a thousand AU away. Kuiper belt objects are difficult to observe since they are so far away, which is why most were not discovered until the 1990s. The Oort cloud, on the other hand, while thought to be a repository of comets, is too far away to observe and remains theoretical.
Planet Nine

Recent observations have suggested that the matter of planets may not be as settled as astronomers believed in 2006. In 2016, astronomers Michael Brown—the same one who discovered Eris—and Konstantin Batygin published a paper laying out the evidence for a ninth planet orbiting far beyond Neptune. Unlike Pluto and the other Kuiper belt objects, this would be a true planet; its mass is currently estimated at five to ten times that of the Earth (Batygin et al., 2019, p. 36). Just as the existence of Neptune was originally inferred from its gravitational influence on Uranus, the evidence for Planet Nine comes from anomalous observations, this time of Kuiper belt objects, that would be explained by Planet Nine’s gravitational presence. In their initial paper, Batygin and Brown put forth three main arguments for Planet Nine: the presence of remote objects like Sedna, the clustering observed in the perihelia of many TNOs, and the presence of TNOs that orbit at a high inclination compared to the plane of the eight known planets (Batygin & Brown, 2016, p. 1).

Figure 1: Diagram showing the components of specifying an orbit, including the argument of perihelion (labeled as argument of periapsis, as perihelion refers specifically to objects that orbit the Sun) and the longitude of the ascending node. Source: Wikimedia Commons/Lasunncty, CC-BY-SA 3.0

Sedna and 2012 VP113

One of those remote objects that does not fit in the Kuiper belt is the TNO Sedna, which has a highly eccentric orbit with a perihelion, or minimum distance from the Sun, of 76 AU. While some other TNOs do travel that far from the Sun, at the time, no others were known that always stayed at least that far away—most objects that travel that far were launched outward as a result of a close encounter with Neptune’s gravitational field, but Sedna never even gets close to Neptune. When Sedna was first observed in 2004, its discoverers—Brown and fellow astronomers Chadwick Trujillo and David Rabinowitz—suggested a few possibilities for this strange orbit. They found that a star passing by the solar system perpendicular to the plane of the Earth’s orbit around the Sun, known as the ecliptic, at a speed of ~30 km/s and a distance of ~500 AU could lift Sedna from a more normal orbit to its observed one. However, they suggested it would be more likely that the Sun formed as part of a stellar cluster and the other stars in the cluster pushed Sedna into its orbit before the solar system moved away from the cluster. Ironically, they also suggested that an unseen planet could be responsible for Sedna’s orbit, though they considered that very unlikely and the planet they suggested was much smaller and closer than Planet Nine (Brown et al., 2004, p. 648).

Then, in 2014, Trujillo and Scott Sheppard found another object like Sedna: 2012 VP113, whose perihelion is 80 AU. Interestingly, they did not find any Kuiper belt objects with perihelia between 55 and 75 AU, despite the fact that such objects would be closer and therefore easier to detect. Based on that absence, Trujillo and Sheppard suggested that Sedna and 2012 VP113 may be better categorized as inner Oort cloud objects (Trujillo & Sheppard, 2014, p. 471).
Figure 2: Diagram showing some of the anomalous TNO orbits (left) and a theoretical orbit for Planet Nine (right), with the orbits of Saturn, Uranus, and Neptune in the center for reference. Source: Wikimedia Commons/nagualdesign, CC-0
Perihelion Clustering

In their paper, Trujillo and Sheppard also noticed that Sedna and 2012 VP113 have something else in common: their arguments of perihelion are very similar. The argument of perihelion, represented by the variable ω, is the angle formed between a line from the object’s perihelion to the Sun and the ecliptic. The gravitational influence of the giant planets (Jupiter, Saturn, Uranus, and Neptune) is supposed to randomize these values over the lifespan of the solar system (Batygin & Brown, 2016, p. 1). Instead, Trujillo and Sheppard found not only that Sedna and 2012 VP113 have similar arguments of perihelion but that every known Kuiper belt object with a semi-major axis greater than 150 AU and a perihelion distance greater than Neptune’s orbital distance (30 AU) has an argument of perihelion within 55° of about 340° (Trujillo & Sheppard, 2014, p. 472). They suggested that a planet of about 5ME orbiting the Sun with a semi-major axis of about 250 AU could be responsible for this alignment (Trujillo & Sheppard, 2014, pp. 472–473).

Batygin and Brown were intrigued by this possibility and set out to explore it in more detail. In their paper, they noted that Neptune’s gravity can affect objects with perihelion distances greater than its own, which would disturb any effects caused by Planet Nine’s gravity. After simulating the orbits of the Kuiper belt objects identified by Trujillo and Sheppard, they found that objects with perihelia less than 36 AU generally fall under Neptune’s influence. For the remaining objects, they found that the arguments of perihelion clustered around 318° ± 8°. Intriguingly, they also found that the longitudes of the ascending nodes (the angle Ω, measured in the ecliptic between a defined reference point and the intersection of the ecliptic with the plane of the object’s orbit) for these objects clustered around 113° ± 13°. Taken together, the clustering of these measurements indicates that the orbits of these objects are actually aligned in physical space and therefore are likely being influenced in the same way (Batygin & Brown, 2016, p. 2).

Highly-inclined TNOs

To explain this alignment, Batygin and Brown began running simulations of a ninth planet whose gravity would pull the orbits of the Kuiper belt objects into the proper shape. After some trial and error, they found a scenario that predicted both remote objects like Sedna and the perihelion clustering: a planet of about 10ME and a semi-major axis of about 700 AU, orbiting 180° away from the clustered perihelia (Batygin & Brown, 2016, p. 11). This “anti-alignment,” with the planet orbiting opposite the Kuiper belt objects, was initially puzzling, as such an orbit would generally be unstable and lead to collisions. Batygin and Brown, however, realized that such an orbit could be stable if Planet Nine had sufficient gravity to trap the Kuiper belt objects in mean-motion resonance. This resonance develops over time
as objects orbit and exchange energy. If one object is sufficiently more massive than the other, this effect ultimately results in the smaller body making an integer number of orbits for every orbit the massive body completes. This effect is seen elsewhere in the solar system, for example, between Neptune and Pluto, where Neptune makes three orbits for every two that Pluto completes, which prevents them from colliding despite the fact that Pluto crosses Neptune’s orbit. Likewise, under this model, the aligned Kuiper belt objects would be in mean-motion resonance with Planet Nine, allowing the planet to orbit opposite the Kuiper belt objects and still be stable (Batygin & Brown, 2016, p. 6; Batygin & Morbidelli, 2017). However, because of Planet Nine’s high eccentricity, many of these resonances are more complex ratios (e.g. 2:7), and therefore are not helpful for narrowing down Planet Nine’s location in the sky (Bailey et al., 2018).
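As a quick illustration of the Neptune-Pluto resonance mentioned above, the ratio of the two orbital periods can be checked directly. This is a minimal sketch; the period values are nominal figures from NASA's planetary fact sheets, not parameters used in this paper's calculation.

```python
# Check the Neptune-Pluto 3:2 mean-motion resonance:
# Pluto's period should be roughly 1.5x Neptune's.
p_neptune_yr = 164.8   # Neptune's orbital period (years)
p_pluto_yr = 247.9     # Pluto's orbital period (years)

ratio = p_pluto_yr / p_neptune_yr
print(f"Pluto/Neptune period ratio: {ratio:.3f}")  # ~1.504, close to 3/2
```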
Interestingly, this model made another prediction: that Planet Nine’s gravity would push some trans-Neptunian objects farther away and to a higher inclination, to the point where they would disappear from view, only to later bring them back into view in a highly inclined orbit, possibly even one perpendicular to the celestial plane. While Batygin and Brown were initially puzzled by that prediction, they soon realized there is such a population of trans-Neptunian objects: at the time, astronomers had already found several TNOs with highly inclined orbits, some of which are in fact roughly perpendicular to the celestial plane. The presence of these objects has never been explained, but the fact that the Planet Nine model offers an explanation for a phenomenon it was not trying to explain lends a measure of additional credence to its other predictions (Batygin & Brown, 2016, p. 10).

Further Developments

In their initial paper, Batygin and Brown noted that discovering additional Kuiper belt objects would be crucial to eliminating any sampling bias and refining the orbital parameters of Planet Nine. Since then, additional objects with orbits that fit the predictions of the Planet Nine hypothesis, such as 2015 TG387 and 2015 BP519, have been found (Sheppard et al., 2019, p. 10; Becker et al., 2018, pp. 10–11). One study found that Planet Nine’s gravity could explain the solar obliquity, an unexplained misalignment between the Sun’s axis of rotation and the celestial plane (Bailey et al., 2016), though later revisions to estimates of Planet Nine’s mass and orbit concluded that it cannot account for all of the solar obliquity (Batygin et al., 2019, p. 38). Some have suggested that the observed clustering of Kuiper belt objects may just be the result of observational biases (Shankman et al., 2017), but both subsequent observations and a detailed statistical analysis suggest that is not the case (Brown & Batygin, 2019). Since the clustering appears real, most alternate theories suggest that it is caused by a celestial body or bodies other than a planet, such as a ring of icy objects with an orbit and total mass similar to Planet Nine, or a black hole with a mass similar to Planet Nine that was captured by the Sun’s gravity (Sefilian & Touma, 2019; Scholtz & Unwin, 2019). Nevertheless, Planet Nine remains the simplest and most likely explanation for the anomalous orbits observed in TNOs, though attempts to locate it with a telescope have yet to succeed. Depending on how far away Planet Nine is and where it is in its orbit, it may be just at the limit of the observational abilities of powerful telescopes like the Subaru Telescope. If it is too far away for current telescopes to see, the Vera Rubin Observatory in Chile, which is projected to begin observations in late 2022 (LSST, n.d.), should be able to find it.

In 2019, Batygin and Brown published a thorough analysis of the evidence for Planet Nine thus far. Whereas the original paper suggested Planet Nine would be upwards of 10ME and have a semi-major axis of roughly 700 AU, the most recent data indicates it should be both smaller (mass between five and ten ME) and closer (semi-major axis between 500 and 800 AU); a higher mass implies a greater distance. Despite the decrease in mass, a 5ME Planet Nine would actually be easier to detect than the original 10ME proposal because it would be closer to Earth (Batygin et al., 2019, p. 39). However, while the decrease in mass refines the Planet Nine hypothesis to better fit its core prediction—the alignment of objects in the Kuiper belt—it also means that Planet Nine cannot explain the entirety of the solar obliquity (Batygin et al., 2019, p. 38).
Neptune’s TTV as a Result of Planet Nine

Introduction

To prove or disprove the existence of Planet Nine, scientists need predictions that can be tested and observed; one possible method for generating these predictions can be borrowed from the techniques of exoplanet hunters. One of the primary ways astronomers search for exoplanets is by looking for stars that periodically dim slightly, which can indicate that a planet is moving between the star and the Earth. This method is very good at finding large planets that orbit close to their star (since such planets will block more light) and is dependent on the planet’s period—if the planet completes an orbit every 10 days, for example, astronomers should see the star dim every 10 days, like clockwork. If the timing of these transits instead rises and falls over time, that suggests that there may be another planet that is gravitationally tugging on the first.

While stars are gravitationally dominant in their planetary system and generally keep their planets orbiting at a constant rate, they are not the only gravitationally noteworthy bodies. The planets themselves tug on each other, tweaking their orbits slightly. As a result, each time a planet goes around its host star, it takes a slightly different amount of time. The difference in these times is known as transit-timing variation (TTV). These variations increase and decrease over time, like a wave. As a result, astronomers generally talk about the amplitude of a TTV (the maximum amount, positive or negative, by which one planet changes the period of another planet) and the period of the TTV cycle (how long it takes a planet’s TTV to come back to the same value and slope, not to be confused with the period of the planet’s orbit). TTV observations don’t always establish an exact orbit for the second planet, but they can help prove its existence and narrow down the range of possible orbits. Since TTV effects are usually small (on the order of minutes), they are easier to notice in planets with short orbital periods—which, as planets orbit more quickly the closer they are to their star, are exactly the kind of planets that are easy to detect by the dimming of a star. Nevertheless, the general principle holds and can also be applied to more remote planets like Neptune and Planet Nine.

Figure 3: Plot of Neptune’s TTV over time in the 5ME (top) and 10ME (bottom) scenarios. Source: Created by the writer using the TTV2Fast2Furious code, CC-BY-SA 4.0

Calculating these orbital effects is complex and, in this paper, was accomplished with the
TTV2Fast2Furious code (available at https://github.com/shadden/TTV2Fast2Furious). This code takes in information about the orbital characteristics of at least two planets and runs that information through a matrix equation before outputting a graph of both planets’ TTV over time. The code was originally developed for calculations with exoplanets—specifically the ones that orbit close to their host star and are easier to detect. While the program is capable of integrating the data over a sufficient number of transits to show a complete TTV cycle for Neptune, even on its innermost orbit, Planet Nine completes only about one orbit over that timespan (giving it, at most, two transits). Increasing the timespan to include enough Planet Nine transits returned an error in the code as the integral exceeded its maximum number of subdivisions. Since the purpose of this calculation is to explore the feasibility of detecting a TTV in Neptune’s orbit assuming that Planet Nine exists, the TTV of Planet Nine is irrelevant anyway and was removed from the final graphs to avoid any confusion. Because of the extremely long orbital periods compared to those of the planets the code was designed for, the code was also adjusted to display time in years instead of days on the graphs.
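To make the amplitude and cycle-period terminology concrete, the sketch below generates a toy TTV series. This is not the TTV2Fast2Furious pipeline or its interface; it simply models a TTV signal as a sinusoid, an assumption made purely for illustration, using the amplitude and cycle period reported in the Results section for the 5ME scenario.

```python
import numpy as np

# Toy illustration of a TTV series (not the TTV2Fast2Furious pipeline):
# model the deviation of each transit from a perfectly periodic
# "linear ephemeris" as a sinusoid. Amplitude and cycle period are the
# values this paper reports for the 5ME scenario.
ttv_amplitude_min = 0.4      # TTV amplitude (minutes)
ttv_period_yr = 12_300       # TTV cycle period (years)
p_neptune_yr = 164.8         # Neptune's orbital period (years)

n = np.arange(200)                       # transit number
t_linear = n * p_neptune_yr              # unperturbed transit times (years)
ttv_min = ttv_amplitude_min * np.sin(2 * np.pi * t_linear / ttv_period_yr)

# Over 200 transits (~33,000 years) the signal completes ~2.7 cycles,
# never deviating more than 0.4 minutes from perfect periodicity.
print(f"Largest deviation: {np.abs(ttv_min).max():.2f} minutes")
```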
Calculation Details

In order to calculate a planet’s TTV, the TTV2Fast2Furious code needs to know the mass, period, eccentricity, and initial transit time of both the “target” planet and the planet perturbing that target planet’s orbit. This section details the steps taken to supply those parameters and calculate Neptune’s TTV as a result of Planet Nine’s gravitational influence for the minimum and maximum predicted mass of Planet Nine (5ME and 10ME, respectively). While the mass of Planet Nine could also lie somewhere between these values, by considering both extremes, these calculations find the maximum and minimum TTV values, essentially establishing the best and worst case scenarios for detecting Planet Nine’s influence on Neptune. Data for Neptune was taken from Williams (September 2018) and the full orbital parameters for both the 5ME and 10ME scenarios for Planet Nine were taken from Batygin et al. (2019). A complete list of adopted values is in Appendix A. The given mass values for Neptune and Planet Nine were converted to solar masses (the units expected by the code) using the values of MS and ME from NASA’s Sun and Earth data (Williams, February 2018, 2020). Given the period
of Neptune’s orbit in sidereal days, the period of Planet Nine, P9, can then be defined in terms of its semi-major axis (a9) and Neptune’s semi-major axis (aN) and period (PN), with Kepler’s Third Law, which yields:

P9 = PN × (a9 / aN)^(3/2)
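As a rough sanity check on this conversion (a sketch only, not the script used for the paper's results; aN and PN are nominal values from NASA's Neptune fact sheet, and a9 = 500 AU is taken as a representative semi-major axis for the closer, 5ME scenario):

```python
# Back-of-envelope check of the Kepler's Third Law conversion.
a_neptune_au = 30.07     # Neptune's semi-major axis (AU)
p_neptune_yr = 164.8     # Neptune's orbital period (years)
a_nine_au = 500.0        # assumed Planet Nine semi-major axis (AU)

# P^2 is proportional to a^3, so P9 = PN * (a9 / aN)^(3/2)
p_nine_yr = p_neptune_yr * (a_nine_au / a_neptune_au) ** 1.5
print(f"Planet Nine period: ~{p_nine_yr:,.0f} years")  # ~11,000 years
```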
The eccentricities of both Neptune and Planet Nine are given in the sources and required no further conversion. Since the initial transit time for the planets would depend on the location of a hypothetical alien observer, the code was tested with different values to see how much they would impact the results. This testing found that the value assigned to the initial transit time had only a small effect on Neptune’s initial TTV value and no noticeable effect on its TTV amplitude or period. In the final calculations, Neptune was assigned an initial transit time of half its period (indicating that it transits at aphelion, where the transit probability is higher) and Planet Nine an initial transit time of a quarter of its period. For the 10ME scenario, the code was also adjusted to calculate results out to 200 transits of Neptune, which, because Planet Nine is farther away and therefore has less of a gravitational impact on Neptune, was necessary to produce a full TTV cycle.

Results

Running the code resulted in a TTV amplitude of 0.4 minutes and a period of 12,300 years in the 5ME scenario and a TTV amplitude of 0.2 minutes and a period of 22,000 years in the 10ME scenario. For comparison, the first significant planet discovered by TTV, Kepler-19c, gives the transiting planet Kepler-19b a TTV amplitude of 5 minutes with a period of 316 days (Ballard et al., 2011, p. 15). Since its discovery, Neptune has completed just a little bit more than one orbit. As such, even in the best-case scenario of a 5ME Planet Nine, finding Planet Nine by looking at variations in Neptune’s orbit does not seem feasible.
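The scale of the problem can be made explicit with a little arithmetic (a sketch; the 1846 discovery date and orbital period are standard values, and the TTV cycle is the 5ME best case reported above):

```python
# How much of one TTV cycle has been observable since Neptune's
# discovery in 1846?
years_observed = 2020 - 1846     # ~174 years of observations
p_neptune_yr = 164.8             # Neptune's orbital period (years)
ttv_period_yr = 12_300           # TTV cycle period, 5ME best case (years)

orbits = years_observed / p_neptune_yr      # ~1.06 orbits completed
fraction = years_observed / ttv_period_yr   # ~1.4% of one TTV cycle

print(f"Neptune orbits since discovery: {orbits:.2f}")
print(f"Fraction of one TTV cycle observed: {fraction:.1%}")
```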
Conclusion

Indirect evidence for the existence of Planet Nine has grown over time, but astronomers have yet to actually observe it. The principal challenge is that astronomers have only had a few decades to observe the TNOs it influences, and they have periods of hundreds of years. Astronomically speaking, they have barely
moved since they were discovered. In contrast, when the only other planet to be discovered by calculation—Neptune—was found, astronomers had already known about Uranus for 65 years and had been able to observe most of its orbit. Indeed, seeing the perturbations over time in Uranus’ orbit is what enabled Le Verrier to predict Neptune’s orbit. While the goal that motivated this paper was to help narrow down the range of possible orbits for Planet Nine by calculating the TTVs expected for Neptune as a result of Planet Nine’s gravitational influence, the calculation ultimately found that they are likely too small and occur over too long a timescale to be of much use in finding Planet Nine. Nevertheless, the search for Planet Nine continues. If it is not found within the next few years, the Vera Rubin Observatory, which is scheduled to begin observations in 2022, should have the observational capacity to detect Planet Nine and will be surveying the whole sky every night that it can, which should finally resolve the search for Planet Nine.
Acknowledgments

This paper would not exist without Dartmouth College Professor Elisabeth Newton, who taught me quite a bit about astronomy, guided me from a vague notion of writing something about Planet Nine to the idea of doing a TTV analysis, helped me understand the TTV2Fast2Furious code, and provided feedback on the paper. This paper also wouldn’t read anywhere near as well as it does without the efforts of the editors who worked on it; thank you to Nathalie Korhonen Cuestas in my Astronomy 15 class and the DUJS editors Sam Neff, Madeline Brown, and Eric Youth for all of their suggestions and comments. And finally, I am enormously grateful to a few people whose influence helped me personally write this paper: Professor Mike Brown of the California Institute of Technology, whose work inspired my interest in Planet Nine, and the friends who kept me sane through the writing process—they know who they are.
Appendix A: Adopted Values
References

Bailey, E., Batygin, K., & Brown, M. E. (2016). Solar obliquity induced by Planet Nine. The Astronomical Journal, 152(5). https://doi.org/10.3847/0004-6256/152/5/126

Bailey, E., Brown, M. E., & Batygin, K. (2018). Feasibility of a resonance-based Planet Nine search. The Astronomical Journal, 156(2). https://doi.org/10.3847/1538-3881/aaccf4

Ballard, S., Fabrycky, D., Fressin, F., Charbonneau, D., Desert, J., Torres, G., Marcy, G., Burke, C. J., Isaacson, H., Henze, C., Steffen, J. H., Ciardi, D. R., Howell, S. B., Cochran, W. D., Endl, M., Bryson, S. T., Rowe, J. F., Holman, M. J., Lissauer, J. J., … Borucki, W. J. (2011). The Kepler-19 system: A transiting 2.2 RE planet and a second planet detected via transit timing variations. The Astrophysical Journal, 743(2). https://doi.org/10.1088/0004-637X/743/2/200

Batygin, K., Adams, F. C., Brown, M. E., & Becker, J. C. (2019). The Planet Nine hypothesis. Physics Reports, 805, 1-53. https://doi.org/10.1016/j.physrep.2019.01.009

Batygin, K. & Brown, M. E. (2016). Evidence for a distant giant planet in the solar system. The Astronomical Journal, 151(2). https://doi.org/10.3847/0004-6256/151/2/22

Batygin, K. & Morbidelli, A. (2017). Dynamical evolution induced by Planet Nine. The Astronomical Journal, 154(6). https://doi.org/10.3847/1538-3881/aa937c

Becker, J. C., Khain, T., Hamilton, S. J., Adams, F. C., Gerdes, D. W., Zullo, L., Franson, K., Millholland, S., Bernstein, G. M., Sako, M., Bernardinelli, P., Napier, K., Markwardt, L., Lin, H. W., Wester, W., Abdalla, F. B., Allam, S., Annis, J., Avila, S., … Walker, A. R. (2018). Discovery and dynamical analysis of an extreme trans-Neptunian object with a high orbital inclination. The Astronomical Journal, 156(2). https://doi.org/10.3847/1538-3881/aad042

Brown, M. E. & Batygin, K. (2019). Orbital clustering in the distant solar system. The Astronomical Journal, 157(2). https://doi.org/10.3847/1538-3881/aaf051

Brown, M. E., Trujillo, C., & Rabinowitz, D. (2004). Discovery of a candidate inner Oort cloud planetoid. The Astrophysical Journal, 617(1), 645-649. https://doi.org/10.1086/422095

Brown, M. E. (2010). How I killed Pluto and why it had it coming. Spiegel & Grau.

IAU. (2006, August 24). IAU 2006 General Assembly: Result of the IAU Resolution votes. IAU. Retrieved August 26, 2020, from https://www.iau.org/news/pressreleases/detail/iau0603/

Krajnović, D. (2016). The contrivance of Neptune. Astronomy and Geophysics, 57(5), 5.28-5.34. https://doi.org/10.1093/astrogeo/atw183

LSST. (n.d.). Fact sheets. Rubin Observatory. Retrieved August 26, 2020, from https://www.lsst.org/about/fact-sheets

Scholtz, J. & Unwin, J. (2019). What if Planet 9 is a primordial black hole? arXiv e-prints. https://arxiv.org/abs/1909.11090

Sefilian, A. A. & Touma, J. R. (2019). Shepherding in a self-gravitating disk of trans-Neptunian objects. The Astronomical Journal, 157(2). https://doi.org/10.3847/1538-3881/aaf0fc

Shankman, C., Kavelaars, J. J., Bannister, M. T., Gladman, B. J., Lawler, S. M., Chen, Y., Jakubik, M., Kaib, N., Alexandersen, M., Gwyn, S. D. J., Petit, J., & Volk, K. (2017). OSSOS. VI. Striking biases in the detection of large semimajor axis trans-Neptunian objects. The Astronomical Journal, 154(2). https://doi.org/10.3847/1538-3881/aa7aed

Sheppard, S. S., Trujillo, C. A., Tholen, D. J., & Kaib, N. (2019). A new high perihelion trans-Plutonian inner Oort cloud object: 2015 TG387. The Astronomical Journal, 157(4). https://doi.org/10.3847/1538-3881/ab0895

Trujillo, C. A. & Sheppard, S. S. (2014). A Sedna-like body with a perihelion of 80 astronomical units. Nature, 507, 471-474. https://doi.org/10.1038/nature13156

Williams, D. R. (2018, February 23). Sun fact sheet. NASA. Retrieved September 24, 2020, from https://nssdc.gsfc.nasa.gov/planetary/factsheet/sunfact.html

Williams, D. R. (2018, September 27). Neptune fact sheet. NASA. Retrieved September 24, 2020, from https://nssdc.gsfc.nasa.gov/planetary/factsheet/neptunefact.html

Williams, D. R. (2020, April 02). Earth fact sheet. NASA. Retrieved September 24, 2020, from https://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html
Racial Bias Against Black Americans in the American Healthcare System

BY ANAHITA KODALI '23

Cover image: Pictured above is a doctor drawing blood as part of the Tuskegee Syphilis Study. The men in the study were promised free healthcare as an incentive for their participation, but the doctors used a litany of placebos and diagnostic procedures (including blood tests) in place of actual medical procedures that could treat syphilis. The lasting impacts of the study on Black patients’ trust in the healthcare system are still felt today. Source: Wikimedia Commons
Introduction

There are deep historic inequities among America’s different demographic groups. Specifically, Black Americans are at a significant disadvantage in many areas, from wage gaps to discrimination in hiring practices to lack of access to education. The current global pandemic has highlighted yet another area in which Black Americans are underserved: the healthcare industry. Black Americans have historically, on average, had poorer health than their White counterparts. Many argue that the disparity in health outcomes is due to genetic differences, cultural practices, or socioeconomic issues. However, even when all factors are held constant, Black individuals consistently have more indicators of poor health than White individuals (Charatz-Litt, 1992). Understanding why necessitates a deeper look at the issues that Black Americans face in medicine. This paper reviews the racist past of American medicine as well as modern racism in physician attitudes toward Black people and in medical technologies. It is important to note that for the purposes of simplicity and clarity, much of this paper discusses the differences between the treatment of Black and White Americans as a dichotomy. However, in reality, racism in American healthcare affects other people of color as well. Rising racial tensions and protests across the United States highlight the struggles of Black Americans with predominantly White American institutions, which is why this paper chooses to focus on their experience with the healthcare system.

Medicine’s Ugly History

Slavery was the first major barrier to Black Americans receiving proper healthcare. The American Medical Association (AMA) was formed in 1847 as a result of years of conversations between White physicians wanting to protect their practice from homeopathic, or alternative, medicine. They established standards of science and professionalism while securing for themselves the elite status of physicians in society, resulting in a boom of White physician practice. Unfortunately, these standards did not apply to slaves (“AMA History,” 2018). Slave masters actually often tried to give slaves professional medical care when they fell ill or got injured; though perhaps counterintuitive at first, this proved to be more economically viable than having to pay for more slaves if one became too ill to work (Lander & Pritchett, 2009). However, slave masters had to contend with several issues. For one, physicians were less available in the South than they were in the North, meaning that slave masters often could not find a doctor close enough to treat their slaves quickly. Secondly, even though it made economic sense to treat slaves for illness, services were often too expensive for individual owners to give to slaves. Thirdly, even when slave masters had access to doctors and could afford to pay them, it was difficult to find a doctor willing to work on Black patients; those that did often did shoddy work, as the standards set by the AMA did not apply to slaves. Finally, even doctors who attempted to do adequate work on slaves would give slaves different treatments than they would a White patient because they did not understand how Black people presented different illnesses (Charatz-Litt, 1992).

Figure 1: Generally, people only consider the obvious effects of the segregation laws in the Jim Crow codes. However, one critical piece is often ignored: Black people were often denied access to the best hospitals in the country, forcing them to instead either go to subpar healthcare facilities or treat illnesses themselves at home. This resulted in overall poorer health for America’s Black demographic relative to America’s Whites. Source: Wikimedia Commons
Several doctors believed that Black people had less sensitivity to pain than White people, and some doctors still hold this belief today (Charatz-Litt, 1992). Others believed that Black people’s brains were smaller than those of other races, while their senses of smell, taste, and hearing were stronger. Additionally, physicians applied certain theories and disease models specifically to slaves, including “Drapetomania,” a theory that slaves ran away because of mental illness, and “Cachexia Africana,” a pathology in which Black people would eat dirt; this further worsened the level of care given to slaves (Sullivan, n.d.).
This lack of understanding of Black anatomy (as well as a general lack of knowledge about human biology) led to the horrific practice of experimentation on slaves. As medicine was quickly advancing, demands for human specimens for physician training and research increased. Slaves represented a very unique and attractive group for researchers; from a scientific perspective, they could be used as good research models, and from a legal perspective, they had no autonomy. Thus, slaves were often used as practice for surgeons, ways to study autopsy, and experimental models for new techniques (Savitt, 1982). Slaves had to endure amputations, electric shocks, tumor removals, and more terrible experiments, often without anesthesia (Kenny).

The improper treatment of Black Americans by doctors persisted even after the emancipation of slaves. Though constitutionally equal to White Americans, Black people were systemically denied access to proper healthcare due to the implementation of segregation and the Jim Crow laws (see Figure 1). As a result, Black people were unable to receive necessary medical treatments or go to the best medical practices in the nation (Hunkele, n.d.). At the same time, Black people were exploited by White doctors to advance medicine. One of the most well-known examples is the Tuskegee Syphilis Experiment (see Cover Figure). The US Public Health Service conducted the Tuskegee Syphilis Experiment from 1932 to 1972 in order to see the effects of untreated syphilis on 600 Black men; even when penicillin treatments (which cured syphilis) became widely available in the 1950s, doctors did not give the men adequate care. By the end, at least 100 Black men had died from syphilis-related complications (Brandt, 1978). The tale of Henrietta Lacks, a poor Black southern woman who was treated for cervical cancer at Johns Hopkins University in the 1950s, is another clear case of the American healthcare system failing its Black patients. Doctors took samples of the cells in Mrs. Lacks’s tumor without informed consent, which ultimately led to an immortal cell line that has opened the pathway for innovation in countless scientific and medical pursuits, including research into cancer, AIDS, radiation therapy, and genomic mapping. Mrs. Lacks’s family was not immediately informed and never received compensation for her contributions (Khan, 2011). These two stories highlight the historic improper care of Black Americans and helped cause Black Americans to lose trust in the healthcare system, but there are many more specific reasons for this lack of trust.

Unfortunately, as medicine improves and the inequities in healthcare for Black people persist, the level of disparity between White and Black Americans has stayed stagnant (National Center for Health Statistics, 2006). Though there are more legal protections in place for Black Americans today than there were decades ago, there are still significant barriers to Black people receiving adequate care.

Figure 2: Having a more diverse medical field allows doctors and medical researchers to understand better how diseases (like lupus) present themselves differently in different demographics of patients. This allows doctors to better and more precisely serve all of their patients. Source: Wikimedia Commons
Physician Attitudes Towards Black Patients

There still exists a significant degree of bias in the American healthcare system that causes physicians to treat Black patients unfairly. These issues begin in physician training and education. Despite receiving a more comprehensive education than their counterparts in the 1800s did, a significant number of physicians share a fundamental and dangerous misunderstanding
of pain and pain management for Black patients (see Figure 2). Half of White medical students and residents surveyed in 2016 held one or more incorrect beliefs about Black biology and pain tolerance, including that Black people have thicker skin than White people, that Black people’s nerves are less sensitive than White people’s nerves, and that Black people’s blood coagulates more quickly than White people’s. These beliefs have all been proven to be wrong (Hoffman et al., 2016). Issues in training and misunderstanding of Black anatomy and biology translate directly to problems in the hospital. Generally, physicians are much less likely to believe their Black patients’ descriptions of pain and therefore have skewed perceptions of their patients’ pain (Staton et al., 2007). This perception has resulted in White patients consistently receiving more pain medications than Black patients. Researchers found that 74% of White patients received pain medication for injuries related to bone fractures while only 50% of Black patients received pain medications for similar injuries (Josefson, 2000). The disparity can also be found in adolescent patients. For example, White children are much more likely than their Black counterparts to receive pain treatment for appendicitis (Goyal et al., 2015). Issues with pain management have led to significant trust issues between patients and physicians. Researchers have found that there are significant differences in physician-patient
trust that are related to racial differences. Interactions between Black patients and non-Black physicians are relatively unfavorable when compared to interactions between Black patients and Black physicians (Martin et al., 2013). Black patients tend to trust Black doctors more than White doctors; similarly, when Black men receive care from Black physicians, they are significantly more likely to receive effective care. Specifically, Black men are more likely to receive preventative care, with these findings most pronounced for men who had strong distrust of the medical system (Alsan et al., n.d.). Thus, an emphasis on creating more diversity in the medical field could help alleviate some of the issues caused by biased physician training.
Bias in Medical Technologies

Researchers and entrepreneurs alike have increasingly voiced their concerns about racial bias in technology over the past few years. Biotechnology in medicine is no exception. As the medical field becomes more technology driven, the existing issues with algorithms and biotechnology will continue to grow, which could aggravate the already dangerous racial disparities in medicine. Healthcare centers around the country use commercial algorithms to help guide clinical decisions and decide which patients receive additional care. These algorithms affect the healthcare that millions of Americans receive. Researchers studied one algorithm that is representative of the majority of popular algorithms employed by hospitals and found that Black patients were significantly sicker than their White counterparts for any given risk score (risk scores are the way that doctors decide what course of treatment to give to their patients). Unwell Black patients were given the same risk score as healthy White patients, resulting in a reduction of more than half in the number of Black patients identified for extra care. This bias results from the algorithms predicting costs of care instead of health needs directly. Costs and healthcare needs are directly correlated (sicker patients require more care and therefore more money), so at first glance, using cost as a proxy for needs makes sense. However, historically, the healthcare system spends significantly less money on Black patients because of several issues, including direct discrimination and challenges to the patient-physician relationship. Because less money is spent on Black patients, the algorithms conclude that Black patients are healthier than White patients. By solving the
The advent of precision medicine also presents a major problem for Black patients. While precision medicine has the potential to greatly improve care by allowing for hyper-individualized treatment, unchecked, it will also propagate a host of biases against Black patients. Precision medicine can acquire this bias in three main ways. The first is the collection of biased data: Black people are historically underrepresented in research datasets, and this underrepresentation results in wider variability and less accurate results, as well as a lack of understanding of the nuances in how Black patients present with certain diseases, leading to biased conclusions. The second is the integration of biased data into precision medicine algorithms: existing biases in datasets are reinforced when biased AI technologies are used to create clinical algorithms. The third is the influence of preexisting structural racism as precision medicine is adopted by hospitals: structural racism affects which hospitals adopt precision medicine and which patients receive access to it, and it will ultimately hurt Black patients, who are underprioritized relative to White patients because of both algorithmic bias and direct discrimination (Geneviève et al., 2020).
Conclusions and Future Directions
It is abundantly clear that Black people are not cared for properly in the current American healthcare system relative to their White counterparts. From historic inequities to issues with modern physician attitudes and growing concerns about the prevalence of biased technology, there are myriad problems that need to be solved to ensure Black people receive equal, accessible, and adequate healthcare. By bringing more diversity to medicine, the American healthcare system could see immediate benefits in physician-patient trust. In the long term, however, significant steps need to be taken to dismantle the systemic issues that are the root causes of unequal access to medicine.
References
Alsan, M., Garrick, O., & Graziani, G. C. (n.d.). Does Diversity Matter for Health? Experimental Evidence from Oakland. 56.
AMA History. (2018, November 20). Retrieved July 22, 2020, from https://www.ama-assn.org/about/ama-history/ama-history
Brandt, A. M. (1978). Racism and Research: The Case of the Tuskegee Syphilis Study. The Hastings Center Report, 8(6), 21. https://doi.org/10.2307/3561468
Charatz-Litt, C. (1992). A chronicle of racism: The effects of the White medical community on Black health. Journal of the National Medical Association, 84(8), 717–725.
Geneviève, L. D., Martani, A., Shaw, D., Elger, B. S., & Wangmo, T. (2020). Structural racism in precision medicine: Leaving no one behind. BMC Medical Ethics, 21(1), 17. https://doi.org/10.1186/s12910-020-0457-8
Goyal, M. K., Kuppermann, N., Cleary, S. D., Teach, S. J., & Chamberlain, J. M. (2015). Racial Disparities in Pain Management of Children With Appendicitis in Emergency Departments. JAMA Pediatrics, 169(11), 996. https://doi.org/10.1001/jamapediatrics.2015.1915
Hoffman, K. M., Trawalter, S., Axt, J. R., & Oliver, M. N. (2016). Racial bias in pain assessment and treatment recommendations, and false beliefs about biological differences between Blacks and Whites. Proceedings of the National Academy of Sciences, 113(16), 4296–4301. https://doi.org/10.1073/pnas.1516047113
Hunkele, K. L. (n.d.). Segregation in United States Healthcare: From Reconstruction to Deluxe Jim Crow. 51.
Josefson, D. (2000). Pain relief in US emergency rooms is related to patients' race. BMJ (Clinical Research Ed.), 320(7228), 139A.
Kenny, S. C. (2015). Power, opportunism, racism: Human experiments under American slavery. Endeavour, 39(1), 10–20. https://doi.org/10.1016/j.endeavour.2015.02.002
Khan, F. A. (2011). The Immortal Life of Henrietta Lacks. Journal of the Islamic Medical Association of North America, 43(2). https://doi.org/10.5915/43-2-8609
Lander, K., & Pritchett, J. (2009). When to Care: The Economic Rationale of Slavery Health Care Provision. Social Science History, 33(2), 155–182. https://doi.org/10.1215/01455532-2008-018
Martin, K. D., Roter, D. L., Beach, M. C., Carson, K. A., & Cooper, L. A. (2013). Physician communication behaviors and trust among Black and White patients with hypertension. Medical Care, 51(2), 151–157. https://doi.org/10.1097/MLR.0b013e31827632a2
National Center for Health Statistics. (2006). Health, United States, 2006, with chartbook on trends in the health of Americans. US Government Printing Office, Hyattsville, MD.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
Savitt, T. L. (1982). The Use of Blacks for Medical Experimentation and Demonstration in the Old South. The Journal of Southern History, 48(3), 331. https://doi.org/10.2307/2207450
Staton, L. J., Panda, M., Chen, I., Genao, I., Kurz, J., Pasanen, M., Mechaber, A. J., Menon, M., O'Rorke, J., Wood, J., Rosenberg, E., Faeslis, C., Carey, T., Calleson, D., & Cykert, S. (2007). When race matters: Disagreement in pain perception between patients and their physicians in primary care. Journal of the National Medical Association, 99(5), 532–538.
Sullivan, G. (n.d.). Plantation Medicine and Health Care in the Old South. 21.
Mechanochemistry – A Powerful and “Green” Tool for Synthesis BY ANDREW SASSER '23 Cover Image: This scientist is using a mortar and pestle to grind different reagents and facilitate a mechanochemical reaction. Unlike most synthetic reactions, mechanochemical reactions do not require any solvent. Source: Flickr.com
Introduction
Over the past two centuries, the field of synthetic chemistry has experienced remarkable growth. Prior to the 19th century, scientists accepted the doctrine of vitalism, which held that living organisms were fueled by a “vital force” separate from the physical and natural world and that organic compounds could not be synthesized from inorganic reagents. However, following Friedrich Wöhler's 1828 synthesis of the organic molecule urea from the inorganic salt ammonium cyanate, the field of synthetic chemistry exploded, with chemists producing compounds such as the anti-cancer drug Taxol and the pesticide strychnine from simple organic and inorganic reagents (Museum of Organic Chemistry, 2011). Modern synthetic techniques, however, still have some significant drawbacks. For one, synthetic schemes can be highly inefficient. On average, the synthesis of fine chemicals produces 5-50
kilograms (kg) of by-product per kg of product, and the synthesis of pharmaceuticals can generate over 100 kg of waste per kg of product. Second, most syntheses use toxic solvents, which often comprise the largest share of “auxiliary waste” (Li and Trost, 2008). Third, reactions can require a large amount of energy, especially if heating is necessary. As a result of these inefficiencies, the field of “Green Chemistry” has developed to maximize atom economy – the ratio of the mass of the atoms in the desired product to the mass of the atoms in the reagents – and thus promote efficiency (Chen et al., 2015). In an effort to minimize waste, some chemists have turned toward solventless reactions to promote higher product recovery. One class of solventless reactions, referred to as mechanochemistry, uses mechanical force to promote chemical reactions. This paper will demonstrate how mechanochemical reactions have not only reduced energy requirements and improved atom economy but have also led to new reaction pathways that are not achievable under conditions where solvent is present.
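As a worked illustration of the atom economy metric (the reaction is a textbook substitution chosen for simple arithmetic; it is not taken from the studies cited here), consider CH3Br + NaOH → CH3OH + NaBr:

```latex
% Worked example: atom economy of a simple substitution,
% CH3Br + NaOH -> CH3OH + NaBr (illustrative, chosen for easy arithmetic).
\[
\text{atom economy}
= \frac{M(\mathrm{CH_3OH})}{M(\mathrm{CH_3Br}) + M(\mathrm{NaOH})}\times 100\%
= \frac{32.0}{94.9 + 40.0}\times 100\%
\approx 23.7\%
\]
```

Nearly three quarters of the reactant mass leaves as NaBr by-product; reactions with atom economy close to 100% (and no solvent to discard) are exactly what green chemistry favors.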
History of Mechanochemistry
Although mechanochemical techniques have existed since the beginning of recorded history, their use in chemical synthesis has developed only recently. Prior to the late 20th century, mechanochemical techniques relied upon simple pestles and mortars. The famed English chemist Michael Faraday, for instance, used a mixture of zinc, tin, iron, and copper to reduce silver chloride (Takacs, 2013). In the 1880s, M. C. Lea discovered that mechanochemical processes can facilitate reactions not achievable by heating solutions: he found that mercuric chloride decomposes when ground in a mortar but sublimes (turns to gas) upon heating (Boldyreva, 2013). Research in mechanochemistry increased substantially over the 20th century as techniques evolved. As the mortar and pestle proved too cumbersome, scientists turned instead to ball-milling devices that could supply the large amounts of energy needed to initiate chemical reactions. First developed in 1923, small-scale ball mills became increasingly popular throughout the 1940s and 50s (Takacs, 2013). Mill development has since focused on increasing the motion and energy of the individual balls. For example, the planetary mill, first developed in 1961, makes use of a centrifuge to increase impact velocity. Similarly, Calka and Radlinski developed the uni-ball mill in 1991, which makes use of magnets to better control the rate of milling (Calka and Radlinski, 1991).

Modern Techniques
Further technical advances have enabled an even greater degree of control over mechanochemical reactions. Of particular importance are techniques that add catalytic material to a reaction mixture. One of these methods, called liquid-assisted grinding (LAG), has opened reaction pathways not traditionally accessible by neat (additive-free) grinding. LAG introduces a small amount of liquid into a reaction mixture, which greatly accelerates reaction rates while also promoting the formation of new isomeric products (Do & Friščić, 2017). LAG has also been shown to increase reaction yield; for example, when water was added in the oxidative-addition step of the halogenation of rhenium(I) complexes, yield increased from 82% to 94%. Additionally, the isomeric excess of the diagonal isomer increased from 45% to 99%, improving reaction selectivity (Hernández et al., 2014). Another modern technique used in mechanochemistry is twin screw extrusion (TSE). In contrast to traditional “batch” mills, TSE offers a “continuous” mode of production in which mechanical force is constantly applied to the material as it passes through a confined barrel driven by two rotating screws. The configuration of these screws can be modified to control the shear and compression forces on the material, and the feed rate can be adjusted to directly control the amount of force applied; this allows for greater temperature control than standard ball mills (Crawford et al., 2017). TSE has been particularly useful in synthetic processes; for instance, Medina et al. produced pharmaceutical cocrystals from caffeine and oxalic acid – a process previously achieved only by solution crystallization (Medina et al., 2010). As solution crystallization can lead to the formation of solvate by-products, TSE increases overall yield by eliminating the need for solvent.
Figure 1: Planetary ball mills in mechanochemistry take advantage of centrifugal force at high speeds. In planetary ball mills, “grinding jars” are arranged on a wheel and spin opposite to the direction of the wheel. The impact of the balls raises reaction rates by increasing the surface area of the reactants. Source: Wikimedia Commons
Figure 2: MOFs are comprised of metal ions coordinated to specific organic ligands. Normally synthesized under solution conditions, some MOFs have recently been produced through mechanical force. Source: Wikimedia Commons

Mechanics of Mechanochemical Reactions
In mechanochemical synthesis, the energy required to overcome activation-energy barriers is provided by the transformation of mechanical work into heat, often through deformation or friction forces. As suggested by P. G. Fox, prolonged direct contact between reactants is necessary for a solid-phase mechanochemical reaction to proceed (Fox, 1975). Evidence also suggests that mechanochemical reactions must occur in a liquid-like phase. The aldol condensation reaction, for instance, has been shown to produce at least partial melts, in which some of the reaction mixture is molten, in 67% of ketone/aldehyde mixtures (Rothenberg et al., 2001). Notably, when reactants are ground together, the melting point of the mixture drops significantly. For example, when alpha-glycine and beta-malonic acid were milled together, a melt was observed at 60˚C, significantly lower than the standard 112˚C melting point of the salt product. This reduction in melting point significantly lowers the thermal requirement for reactions and generally increases reaction rates (Michalchuk et al., 2014). While mechanochemical reactions occur at significantly lower temperatures than solvent reactions, they are still governed by many of the same principles. For instance, the success of any reaction depends heavily on the nucleophilic or electrophilic character of the reagents involved. As described by Machuca and Juaristi, the yield of the mechanochemical dipeptide-catalysed
aldol condensation reaction increased from 7% for p-methoxybenzaldehyde to 84% for o-chlorobenzaldehyde. The increased electron-withdrawing effects associated with o-chlorobenzaldehyde raised the reaction rate through increased pi-stacking, as the electron-rich naphthyl ring on the catalyst formed a more rigid transition state with the electron-poor benzaldehyde (Machuca and Juaristi, 2015). Similarly, as reported by Štrukil, the mechanochemical synthesis of thioureas required 9 hours of milling when a 4-methoxy thiocyanate derivative was used as the electrophile. In contrast, only 3 hours of milling were required when the methoxy group was replaced with the more electron-withdrawing nitro group, as the aromatic ring became more electron deficient and thus more susceptible to nucleophilic attack by the aniline (Štrukil, 2017). Mechanochemical reactions are also more sensitive than their solvent counterparts to changes in the stoichiometric ratio of reactants and in temperature. As reported by Užarević et al., the dry milling of cyanoguanidine and cadmium chloride produced a 1-dimensional polymer when the reagents were ground in a 1:1 ratio, whereas a 3-dimensional polymer formed when they were ground in a 1:2 ratio (Užarević et al., 2016). Notably, the group found that when the temperature of the milling reaction
Figure 3: Reaction mechanism for Suzuki-Miyaura cross-coupling, a common organometallic reaction used to produce carbon-carbon single bonds. Source: Wikimedia Commons
was raised to 60˚C, the 3-dimensional polymer formed almost immediately, followed by rapid conversion to the 1-dimensional polymer once the cadmium chloride had been consumed. In contrast, at room temperature, an amorphous intermediate phase persisted for almost 20 minutes of ball milling (Užarević et al., 2016).
Applications of Mechanochemical Synthesis
Thanks to modern technical achievements, mechanochemistry has proven to be a versatile synthetic tool. Recent studies have demonstrated the potential use of mechanochemistry in metal-catalysed reactions. For instance, Fulmer et al. found that the Sonogashira coupling reaction could be replicated via high-speed ball milling without the need for a traditional copper iodide catalyst (Fulmer et al., 2009). Instead, the group used a copper vial and copper balls as a catalytic surface for the reaction, which produced yields of up to 89% – comparable to the solvent-based reaction. Similarly, Chen et al. found silver foil to be an effective catalyst in the cyclopropanation of alkenes with diazoacetates. Compared to the standard catalysts for the reaction (dirhodium(II) salts), silver foil is both significantly more abundant and recyclable (Chen et al., 2015).
Another important application of mechanochemical reactions is the synthesis of metal-organic frameworks, or MOFs. As porous materials made up of metal ions and organic ligands, MOFs have proven to be promising candidates for fuel storage, carbon capture, and catalysis (Furukawa et al., 2013). Pichon et al. demonstrated the potential for solvent-free synthesis of MOFs through the reaction of copper(II) acetate monohydrate, Cu(O2CCH3)2·H2O, with isonicotinic acid under ball-milling conditions (Pichon et al., 2006). Similarly, Friščić & Fábián synthesized zinc fumarate, an MOF commonly used in functional materials, from zinc oxide via liquid-assisted grinding (Friščić & Fábián, 2009). Significantly, the simple metal oxides used in the mechanochemical synthesis of MOFs can replace the more expensive and toxic metal nitrates commonly used in solution-based MOF synthesis (Do & Friščić, 2017).
Benefits of Mechanochemistry over Solvent Reactions
Although mechanochemistry is a still-emerging field, it has already demonstrated significant advantages over traditional solution-synthesis methods. Some of these advantages are explored below.
Reaction Energetics and Efficiency: Compared to common methods of microwave irradiation and solution heating,
mechanochemical reactions may be significantly more efficient at transferring energy for comparable yield. Experimental studies suggest that ball mills can deliver anywhere between 95.4 and 117.7 kJ mol⁻¹ (McKissic et al., 2014). Given that ball mills can deliver such a large quantity of energy, it is not surprising to see a corresponding increase in reaction rate compared to standard methods. A study by Rodríguez et al. suggests that, on average, high-energy ball milling can reduce the reaction time of an enantioselective aldol reaction by over 50%, with yields and enantiomeric excesses similar to those of the conventionally stirred reaction (Rodríguez et al., 2006). Similarly, a study by Schneider et al. suggests that ball milling has significant energy-efficiency advantages over traditional microwave processes in Suzuki-Miyaura coupling reactions. Traditional microwave irradiation was found to require 7.6 kWh mol⁻¹ to produce 5 mmol of product, whereas planetary mills generated 100 mmol of product using just 5 kWh mol⁻¹ of electrical energy (Schneider et al., 2009).
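Taking the quoted figures at face value (the units as printed are ambiguous; they are read here as energy intensities per mole of product), the comparison works out as follows:

```latex
% Back-of-envelope comparison of the two energy intensities quoted above.
\[
\frac{E_{\text{mill}}}{E_{\text{microwave}}}
= \frac{5\ \mathrm{kWh\,mol^{-1}}}{7.6\ \mathrm{kWh\,mol^{-1}}}
\approx 0.66
\]
```

That is roughly a one-third reduction in energy per mole of product, achieved while producing a batch twenty times larger (100 mmol versus 5 mmol).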
Reaction Selectivity: Mechanochemical reactions can also enhance the selectivity for desired products compared to solution conditions. Products with a higher degree of thermodynamic stability are generally preferred because of the different thermodynamic environment associated with solid-state mechanochemical reactions. For instance, Balema et al. observed that the mechanochemical Wittig reaction preferentially selected for the more thermodynamically stable E-stilbenes; in contrast, solution environments normally produce mixtures of Z- and E-stilbene isomers (Balema et al., 2002). Similarly, Belenguer et al. observed that mechanochemical conditions drove the thermodynamic equilibration of a disulfide dimerization reaction from two homodimers (reactants) to heterodimers (products). The group suggested that this may be because forcing one of the homodimers into an energetically unstable lattice structure raises its lattice energy by 12.7 kJ mol⁻¹ relative to an isolated molecule in solution, thus reducing the effective activation energy and raising the reaction rate (Belenguer et al., 2011).
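To put that 12.7 kJ mol⁻¹ in perspective: under the simplifying assumption that destabilizing the reactant lowers the activation barrier by the same amount (an assumption made here purely for illustration, not a claim from Belenguer et al.), transition-state theory predicts a room-temperature rate enhancement of

```latex
% Illustrative Arrhenius-type estimate, assuming the full 12.7 kJ/mol
% of lattice destabilization translates into a lower activation barrier.
\[
\frac{k_{\text{milled}}}{k_{\text{solution}}}
\approx \exp\!\left(\frac{\Delta E}{RT}\right)
= \exp\!\left(\frac{12{,}700\ \mathrm{J\,mol^{-1}}}
{8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right)
\approx 1.7 \times 10^{2}
\]
```

so even a modest destabilization of a reactant lattice can plausibly translate into a roughly 170-fold rate increase.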
De Novo Synthesis: Finally, some mechanochemical reactions have enabled the synthesis of products that could not be synthesized using other methods. One of these products is a dimer of C60, the molecule better known as fullerene. Although the dimer was previously believed to be thermally impossible to produce because of its highly strained and electron-deficient double bonds, Wang et al. successfully produced the C120 dimer using a high-speed vibration mill and potassium cyanide as a catalyst (Wang et al., 1997). In addition to circumventing the problems of hindered stereochemistry, mechanochemical pathways also avoid the common problem of solvent interference. As reported by Rightmire et al., the synthesis of tris(allyl)aluminum complexes was only possible via ball milling; attempts to conduct the synthesis in hexane solution were entirely unsuccessful due to solvent interference (Rightmire et al., 2014).
Conclusion
While mechanochemical techniques have been used for thousands of years, only recently have they evolved into a practical tool for synthesis. Thanks to modern technological innovations, synthetic chemists can more easily control the energy input of reactions while also drastically increasing reaction rates. Additionally, mechanochemistry has promoted greater atom economy and reduced solvent waste in modern syntheses, making it an excellent example of green chemistry. Furthermore, because of the different chemical environment associated with solid-state chemistry, synthetic pathways can be fine-tuned to raise reaction speed, change selectivity, and even synthesize “impossible” molecules. Though the physical underpinnings of mechanochemical techniques are still being explored, they evidently present a viable alternative to traditional reaction pathways.
References
Balema, V. P., Wiench, J. W., Pruski, M., & Pecharsky, V. K. (2002). Mechanically Induced Solid-State Generation of Phosphorus Ylides and the Solvent-Free Wittig Reaction. Journal of the American Chemical Society, 124(22), 6244–6245. https://doi.org/10.1021/ja017908p
Belenguer, A. M., Friščić, T., Day, G. M., & Sanders, J. K. M. (2011). Solid-state dynamic combinatorial chemistry: reversibility and thermodynamic product selection in covalent mechanosynthesis. Chemical Science, 2(4), 696–700. https://doi.org/10.1039/C0SC00533A
Boldyreva, E. (2013). Mechanochemistry of inorganic and organic systems: what is similar, what is different? Chemical Society Reviews, 42(18), 7719–7738. https://doi.org/10.1039/C3CS60052A
Calka, A., & Radlinski, A. P. (1991). Universal high performance ball-milling device and its application for mechanical alloying. Materials Science and Engineering: A, 134, 1350–1353. https://doi.org/10.1016/0921-5093(91)90989-Z
Cano-Ruiz, J. A., & McRae, G. J. (1998). Environmentally conscious chemical process design. Annual Review of Energy and the Environment, 23(1), 499–536. https://doi.org/10.1146/annurev.energy.23.1.499
Chen, L., Bovee, M. O., Lemma, B. E., Keithley, K. S. M., Pilson, S. L., Coleman, M. G., & Mack, J. (2015). An Inexpensive and Recyclable Silver-Foil Catalyst for the Cyclopropanation of Alkenes with Diazoacetates under Mechanochemical Conditions. Angewandte Chemie International Edition, 54(38), 11084–11087. https://doi.org/10.1002/anie.201504236
Crawford, D. E., Miskimmin, C. K. G., Albadarin, A. B., Walker, G., & James, S. L. (2017). Organic synthesis by Twin Screw Extrusion (TSE): continuous, scalable and solvent-free. Green Chemistry, 19(6), 1507–1518. https://doi.org/10.1039/C6GC03413F
Do, J.-L., & Friščić, T. (2017). Mechanochemistry: A Force of Synthesis. ACS Central Science, 3(1), 13–19. https://doi.org/10.1021/acscentsci.6b00277
Fox, P. G. (1975). Mechanically initiated chemical reactions in solids. Journal of Materials Science, 10(2), 340–360. https://doi.org/10.1007/BF00540358
Friščić, T., & Fábián, L. (2009). Mechanochemical conversion of a metal oxide into coordination polymers and porous frameworks using liquid-assisted grinding (LAG). CrystEngComm, 11(5), 743–745. https://doi.org/10.1039/B822934C
Fulmer, D. A., Shearouse, W. C., Medonza, S. T., & Mack, J. (2009). Solvent-free Sonogashira coupling reaction via high speed ball milling. Green Chemistry, 11(11), 1821–1825. https://doi.org/10.1039/B915669K
Furukawa, H., Cordova, K. E., O'Keeffe, M., & Yaghi, O. M. (2013). The Chemistry and Applications of Metal-Organic Frameworks. Science, 341(6149). https://doi.org/10.1126/science.1230444
Hernández, J. G., Macdonald, N. A. J., Mottillo, C., Butler, I. S., & Friščić, T. (2014). A mechanochemical strategy for oxidative addition: remarkable yields and stereoselectivity in the halogenation of organometallic Re(I) complexes. Green Chemistry, 16(3), 1087–1092. https://doi.org/10.1039/C3GC42104J
Kinne-Saffran, E., & Kinne, R. K. H. (1999). Vitalism and Synthesis of Urea. American Journal of Nephrology, 19(2), 290–294. https://doi.org/10.1159/000013463
Li, C.-J., & Trost, B. M. (2008). Green chemistry for chemical synthesis. Proceedings of the National Academy of Sciences of the United States of America, 105(36), 13197–13202. https://doi.org/10.1073/pnas.0804348105
Machuca, E., & Juaristi, E. (2015). Organocatalytic activity of α,α-dipeptide derivatives of (S)-proline in the asymmetric aldol reaction in absence of solvent. Evidence for non-covalent π–π interactions in the transition state. Tetrahedron Letters, 56(9), 1144–1148. https://doi.org/10.1016/j.tetlet.2015.01.079
McKissic, K. S., Caruso, J. T., Blair, R. G., & Mack, J. (2014). Comparison of shaking versus baking: further understanding the energetics of a mechanochemical reaction. Green Chemistry, 16(3), 1628–1632. https://doi.org/10.1039/C3GC41496E
Medina, C., Daurio, D., Nagapudi, K., & Alvarez-Nunez, F. (2010). Manufacture of pharmaceutical co-crystals using twin screw extrusion: a solvent-less and scalable process. Journal of Pharmaceutical Sciences, 99(4), 1693–1696. https://doi.org/10.1002/jps.21942
Michalchuk, A. A. L., Tumanov, I. A., Drebushchak, V. A., & Boldyreva, E. V. (2014). Advances in elucidating mechanochemical complexities via implementation of a simple organic system. Faraday Discussions, 170(0), 311–335. https://doi.org/10.1039/C3FD00150D
Pichon, A., Lazuen-Garay, A., & James, S. L. (2006). Solvent-free synthesis of a microporous metal–organic framework. CrystEngComm, 8(3), 211–214. https://doi.org/10.1039/B513750K
Rightmire, N. R., Hanusa, T. P., & Rheingold, A. L. (2014). Mechanochemical Synthesis of [1,3-(SiMe3)2C3H3]3(Al,Sc), a Base-Free Tris(allyl)aluminum Complex and Its Scandium Analogue. Organometallics, 33(21), 5952–5955. https://doi.org/10.1021/om5009204
Rodríguez, B., Rantanen, T., & Bolm, C. (2006). Solvent-Free Asymmetric Organocatalysis in a Ball Mill. Angewandte Chemie, 118(41), 7078–7080. https://doi.org/10.1002/ange.200602820
Rothenberg, G., Downie, A. P., Raston, C. L., & Scott, J. L. (2001). Understanding Solid/Solid Organic Reactions. Journal of the American Chemical Society, 123(36), 8701–8708. https://doi.org/10.1021/ja0034388
Schneider, F., Szuppa, T., Stolle, A., Ondruschka, B., & Hopf, H. (2009). Energetic assessment of the Suzuki–Miyaura reaction: a curtate life cycle assessment as an easily understandable and applicable tool for reaction optimization. Green Chemistry, 11(11), 1894–1899. https://doi.org/10.1039/B915744C
Štrukil, V. (2017). Mechanochemical synthesis of thioureas, ureas and guanidines. Beilstein Journal of Organic Chemistry, 13(1), 1828–1849. https://doi.org/10.3762/bjoc.13.178
Takacs, L. (2013). The historical development of mechanochemistry. Chemical Society Reviews, 42(18), 7649–7659. https://doi.org/10.1039/C2CS35442J
Taxol – The Drama behind Total Synthesis. (2011, July 27). https://web.archive.org/web/20110727152818/http://www.org-chem.org/yuuki/taxol/taxol_en.html
Užarević, K., Štrukil, V., Mottillo, C., Julien, P. A., Puškarić, A., Friščić, T., & Halasz, I. (2016). Exploring the Effect of Temperature on a Mechanochemical Reaction by in Situ Synchrotron Powder X-ray Diffraction. Crystal Growth & Design, 16(4), 2342–2347. https://doi.org/10.1021/acs.cgd.6b00137
Wang, G.-W., Komatsu, K., Murata, Y., & Shiro, M. (1997). Synthesis and X-ray structure of dumb-bell-shaped C120. Nature, 387(6633), 583–586. https://doi.org/10.1038/42439
Fast Fashion and the Challenge of Textile Recycling BY ARUSHI AGASTWAR, MONTA VISTA HIGH SCHOOL SENIOR Cover Image: Mixed textiles and clothing recycling bin alongside bins for plastic, glass, and aluminum bottles. Source: Wikimedia Commons
Introduction
In recent decades, the textile industry has seen tremendous growth due to an increase in general population size, a growing middle class in particular, and higher disposable incomes. Alongside these broader trends, growth has also been driven by a new surge in textile and apparel consumption referred to as Fast Fashion. Consumer fashion brands like Zara, Forever 21, and H&M are disrupting the seasonal design cycles of clothing. Traditional legacy brands like Tommy Hilfiger, Levi's, and Dolce and Gabbana used to release around two or three seasonal lines a year; now, an entirely new collection of apparel is put out on a biweekly basis. Fast Fashion brands leverage the urgency created by short supply cycles, pushing people to either purchase immediately or miss out on the latest trends. The business model of these brands is to swiftly make clothes that are inexpensive and expendable - and the quick turnaround
of raw material to finished apparel has been facilitated greatly by social media platforms (Caro & Martínez-de-Albéniz, 2015). Social media influencers constantly fill feeds with new styles, creating an apparent psychological need to keep up with new trends. Unfortunately, this Fast Fashion model is neither cost-effective nor environmentally sustainable. Today, the average individual purchases more clothes than ever before. Due to a steady rise in clothing consumption over the past three decades, more than 80 billion garments were produced worldwide in 2019 (Thomas, 2019). For the American market, this translates to the average consumer purchasing more than 63 pieces of clothing per year, or 1.2 garments a week - more than 6 times the global average (Bick et al., 2018). Such large-scale consumption of clothing has led to significant problems, such
Figure 1: The long-term environmental impacts of the textile industry are often overlooked. Source: Flickr
as unwanted garments and textiles being prematurely discarded. Because consumers hold onto clothing for only half as long as they used to, the average American throws away around 82 pounds of clothing each year. Of this, around 85 percent ends up in landfill; of the remaining 15 percent, half is reused and half is recycled (Bick et al., 2018; Wicker, 2016).
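In absolute terms, those percentages break down one year of discarded clothing as follows:

```latex
% Arithmetic on the figures quoted above (82 lb discarded per year).
\[
\underbrace{82 \times 0.85}_{\text{landfilled}} \approx 69.7\ \text{lb},
\qquad
\underbrace{82 \times 0.15 \times 0.5}_{\text{reused}} \approx 6.2\ \text{lb},
\qquad
\underbrace{82 \times 0.15 \times 0.5}_{\text{recycled}} \approx 6.2\ \text{lb}
\]
```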
Why We Need Recycling
According to the Ellen MacArthur Foundation, the textile production process is highly resource-intensive. In 2015, total greenhouse gas (GHG) emissions from the textile and apparel industry came to 1,200 million tons. That same year, the industry used 93 billion cubic meters of water, 8 million tons of fertilizer for cotton growth, and 200,000 tons of pesticides for cotton (Ellen MacArthur Foundation, 2017). The textile industry uses dozens of types of fabrics. Synthetic fabrics like polyester, nylon, rayon, acrylic, and elastane are based on plastics and designed to last, which causes them to take hundreds of years to decompose. This is also the case with natural fabrics such as cotton, linen, wool, and silk: during manufacturing, the fibers go through processes like dyeing and finishing, which render these fabrics difficult to decompose (Mukhopadhyay, n.d.). Once in a landfill, apparel made of natural fibers degrades several hundred times more slowly
compared to unprocessed raw fiber that has not been converted to yarn or fabric. Additionally, its decomposition can release greenhouse gases into the atmosphere, contributing to rising global temperatures. Recycling textiles can have a substantial environmental benefit, including a 53% reduction in greenhouse gas emissions, a 45% reduction in chemical pollution, and a 95% reduction in water pollution (ECOSIGN, 2017). Textile recycling also substantially reduces dependence on virgin natural resources and allows us to extract the maximum value out of fibers. This matters because rising population projections place competing demands on land for settlement and sustenance: societies will increasingly have to choose between using land for living, for food cultivation, for growing cotton, and for grazing sheep for wool.
Figure 2: Width of a synthetic fiber as compared to other materials. Source: Wikimedia Commons
Figure 3: How mechanical recycling compares to other forms of municipal waste. Source: Flickr
Textile recycling is a major way to ease this pressure: for every piece of cloth recycled, the amount of resources needed to produce virgin fabric is significantly reduced (Watson, 2017).
The processes leading up to recycling include the collection of textiles from various sources, consolidation of the textiles, classification according to fabric type and color, and transportation from the collection sites to the sorting facilities. As each of these steps is highly labor-intensive, they generate a multitude of employment opportunities. This, in addition to the environmental benefits of reduced landfilling and incineration (which helps reduce air pollution), is a further positive aspect of textile recycling (Tai et al., 2011).
The Recycling Process
Textiles are recycled in two prominent ways today: mechanically and chemically. During mechanical recycling, fabrics are shredded into shorter fibers. These shorter fibers yield yarn of lower quality and strength and are difficult to turn back into woven fabrics; shredded material is instead best suited to “non-woven” fabrics, which are formed directly from fibers with no need for yarn formation, weaving, or knitting. As shown by the data in Figure 3, the percentage of municipal waste handled by mechanical recycling has been increasing over the past few decades; the most recent data point shows it at a level similar to the share of municipal
waste that is landfilled (Payne, 2013). During chemical recycling, natural fibers such as cotton, linen, or bamboo, which are made of cellulose, can be dissolved much like paper. However, with each successive dissolution, the cellulosic polymer chains get shorter, resulting in weaker yarn and substandard fabric. Even after the first round of recycling, cotton fibers are not long enough to be properly knitted or woven; to obtain higher fabric strength, textile producers need to blend these fibers with virgin fibers (Payne, 2013).
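The need for blending can be made concrete with a toy calculation. All numbers below are hypothetical (chosen for illustration, not taken from Payne or any other cited source); the model simply assumes yarn strength tracks average chain length and asks how much virgin fiber must be blended in after each dissolution cycle:

```python
# Toy model of chain shortening across chemical-recycling cycles.
# All constants are invented for illustration.
VIRGIN_LENGTH = 1.0          # normalized chain length of virgin fiber
RETENTION_PER_CYCLE = 0.6    # assumed fraction of length kept per cycle
MIN_SPINNABLE = 0.5          # assumed threshold for a usable yarn

def blend_needed(cycles: int) -> float:
    """Virgin-fiber fraction needed so the blend reaches MIN_SPINNABLE."""
    recycled = VIRGIN_LENGTH * RETENTION_PER_CYCLE ** cycles
    if recycled >= MIN_SPINNABLE:
        return 0.0
    # Solve v*VIRGIN_LENGTH + (1 - v)*recycled = MIN_SPINNABLE for v.
    return (MIN_SPINNABLE - recycled) / (VIRGIN_LENGTH - recycled)

for c in range(4):
    print(f"cycle {c}: virgin share needed = {blend_needed(c):.0%}")
# Under these assumptions the required virgin share climbs with every
# cycle, which is why recycled cotton is bundled with virgin fiber.
```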
Challenges in Textile Recycling
Textile fibers are often blended together to achieve special optical effects or to improve performance by maximizing the beneficial properties of each fiber type. However, these blends pose significant challenges for recycling, since each fiber requires a different recycling process. Because blending is done at the fiber stage, no mechanical process has yet been developed to separate blended fibers, making blended fabrics difficult to recycle (Ellen MacArthur Foundation, 2017). Some other issues in recycling are: 1. The processing, dyes, and finishes on raw textile fabric make the material very difficult to recycle, because dyes and finishes are chemically bonded to the surface of the fabric. The chemical bonds between the dyes, the finishes, and fiber polymers like cellulose, polyester,
and nylon are specifically engineered for each type of dye and fabric. This allows fabrics to hold onto dyes and finishes for a long time, preventing damage from successive washing or from sunlight; as a result, the dyes and finishes are particularly hard to separate from the fabrics (Marconi et al., 2018). 2. Some apparel is now supplemented with metal wires and various polymer materials, as is the case with technical textiles. This blending of electronic components occurs at the weaving or knitting stage of fabric production and poses a serious challenge to recycling: mechanical recycling, which works by shredding fabric, cannot separate these wires from the fabric (Tai et al., 2011). 3. The fibrous material collected after mechanically shredding fabric is often inconsistent, and it is difficult to change the process parameters for each new batch that comes in, because the machine settings used to spin yarn from recycled fiber depend on fiber length. For the recycling process to run seamlessly and for a facility to be profitable, there has to be a consistent stream of acceptable textile waste as input (Xiao et al., 2018).
Discussion
Consumers are becoming increasingly aware of the consequences of human consumption. As a result, clothing brands are introducing new and more 'conscious' lines to cater to this market segment. However, these eco-friendly
brands and clothing lines are often priced at a premium and are therefore largely inaccessible to the vast majority of the population, who have less disposable income to spend on clothes. Some companies are even criticized for 'greenwashing', or misleading eco-conscious consumers into thinking that their products are environmentally friendly when they are really no more so than other fabrics on the market.
Figure 4: Public recycling bin for textiles and shoes. Source: Geograph
Several brands have implemented approaches that do not directly address the issue of textile biodegradation and recycling. H&M stores, for instance, have collection bins where people can discard their clothes and receive H&M cash coupons in exchange.
To be more environmentally sustainable, the textile industry needs to make significant changes in how it uses resources. In the past, the environmental impacts of textile production were given inadequate consideration. With fast fashion changing the way people purchase and interact with their clothes, the dangers textile production poses to the environment have become increasingly pertinent.
References
Bick, R., Halsey, E., & Ekenga, C. C. (2018). The global environmental injustice of fast fashion. Environmental Health, 17. https://doi.org/10.1186/s12940-018-0433-7
Caro, F., & Martínez-de-Albéniz, V. (2015). Fast Fashion: Business Model Overview and Research Opportunities. https://link.springer.com/chapter/10.1007/978-1-4899-7562-1_9
Cuc, S., & Vidović, M. (n.d.). Environmental Sustainability through Clothing Recycling. https://www.semanticscholar.org/paper/Environmental-Sustainability-through-Clothing-Cuc-Vidovi%C4%87/81069e26c688be1d475a1337f40fe68e6414969f
ECOSIGN. (2017). Textile recycling as a contribution to circular economy and production waste enhancement. http://www.ecosign-project.eu/news/textile-recycling-as-a-contribution-to-circular-economy-and-production-waste-enhancement/
Elander, M., & Ljungkvist, H. (2016). Critical aspects in design for fiber-to-fiber recycling of textiles. http://mistrafuturefashion.com/wp-content/uploads/2016/06/MFF-report-2016-1-Critical-aspects.pdf
Ellen MacArthur Foundation. (2017, November 28). Circular Fashion - A New Textiles Economy: Redesigning fashion's future. https://www.ellenmacarthurfoundation.org/publications/a-new-textiles-economy-redesigning-fashions-future
H&M. (n.d.). Women's Clothing & Fashion - shop the latest trends: H&M GB. https://www2.hm.com/en_gb/ladies.html
Marconi, M., Landi, D., Meo, I., & Germani, M. (2018, April 18). Reuse of Tires Textile Fibers in Plastic Compounds: Is this Scenario Environmentally Sustainable? Procedia CIRP. https://www.sciencedirect.com/science/article/pii/S2212827117308508
Mukhopadhyay, S. (n.d.). Textile Engineering - Textile Fibres. NPTEL. https://nptel.ac.in/courses/116/102/116102026/
Payne, A. (2015). Open- and closed-loop recycling of textile and apparel products. In Handbook of Life Cycle Assessment (LCA) of Textiles and Clothing. https://www.sciencedirect.com/science/article/pii/B978008100169100006X
Tai, J., Zhang, W., Che, Y., & Feng, D. (2011). Municipal solid waste source-separated collection in China: A comparative analysis. Waste Management (New York, N.Y.). https://pubmed.ncbi.nlm.nih.gov/21504843/
Thomas, D. (2019, August 29). The High Price of Fast Fashion. The Wall Street Journal. https://www.wsj.com/articles/the-high-price-of-fast-fashion-11567096637
Watson, D., Gylling, A., Andersson, T., & Heikkilä, P. (2017). Textile-to-textile recycling: Ten Nordic brands that are leading the way. VTT's Research Information Portal. https://cris.vtt.fi/en/publications/textile-to-textile-recycling-ten-nordic-brands-that-are-leading-t
Wicker, A. (2016, September 1). The earth is covered in the waste of your old clothes. Newsweek. https://www.newsweek.com/2016/09/09/old-clothes-fashion-waste-crisis-494824.html
Zara. (n.d.). JOIN LIFE | ZARA United States. https://www.zara.com/us/en/sustainability-l1449.html
The Cellular Adhesion and Cellular Replication of SARS-CoV-2 BY ARYA FAGHRI, UNIVERSITY OF DELAWARE '24 Cover Image: A depiction of the extracellular components of the SARS-CoV-2 virion. The membrane proteins have specific functions in both entry into the human cell and activity within the human cell. Source: Wikimedia Commons
Introduction
COVID-19 is a viral disease caused by infection with the Severe Acute Respiratory Syndrome Coronavirus 2, or SARS-CoV-2. The coronavirus itself is a sphere-shaped virus with a plasma membrane carrying various proteins that protect its single-stranded RNA genome (Patel, 2020). The virus is zoonotic, meaning that it initially jumped to humans from another species, most likely a pangolin or a bat (Patel, 2020). Once in the human body, the coronavirus enters cells so that it can multiply. It enters via the process of cellular adhesion, in which it attaches its surface-level proteins to a specific human receptor enzyme called Angiotensin Converting Enzyme II (ACE-2) (Patel, 2020). ACE-2 is a type I integral membrane enzyme categorized as a carboxypeptidase, responsible for converting angiotensin II to Angiotensin (1-7) (Warner, 2004). Once inside the human cell, the coronavirus hijacks the biosynthetic
machinery within that cell to reproduce its genomic RNA and proteins to form new copies of itself. Having undergone this cellular replication, the newly produced coronavirus is then released into the extracellular fluid and can travel to other cells to complete this same process.
Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) and its mechanics
Coronaviruses are positive-strand (5' to 3') RNA viruses that have a variety of proteins embedded within their plasma membrane, a set of intracellular proteins, and a 30,000-nucleotide-long RNA genome packed within the viral particle (Schelle et al., 2005). Proteins within the plasma membrane bind ACE-2 receptor enzymes on human cells to facilitate viral uptake (National Institute of Health NIAD Team, 2020). Spike (S) glycoproteins and
hemagglutinin esterase dimers specifically facilitate this binding, while membrane (M) proteins form much of the protein coat that contains the virus interior. Envelope (E) proteins regulate viral assembly once the virus is inside the cell. Lastly, nonstructural proteins (nsps) are very small proteins that play an assisting role in coronavirus replication and assembly (Schelle et al., 2005, Zeng et al., 2008, Boopathi et al., 2020). The coronavirus genome is highly protected within the viral capsid (protein coat); one set of protective proteins are the nucleocapsid (N) proteins, which form the intracellular component of the virus (Schelle et al., 2005). The viral RNA genome is constantly replicated, transcribed, and translated so the virus can multiply and spread throughout the human body. The virus can be compared to other viruses such as influenza; however, the timing and degree of its effects on organ systems are distinctive. In particular, the coronavirus suppresses the activity of organs associated with the immune system, which allows it to remain mobile and capable of severe harm.
Angiotensin Converting Enzyme II (ACE-2) and its mechanics
Angiotensin Converting Enzyme II, more commonly known as ACE-2, is related to Angiotensin Converting Enzyme (ACE), an enzyme in the human body that, among other roles, helps maintain blood pressure (Warner, 2004, Turner, 2015). The main cellular function of ACE-2 is to convert the peptide Angiotensin II to Angiotensin (1-7) (R&D Systems, 2015).
3.1 Structure: ACE-2 is a wrench-shaped receptor protein consisting of an N-terminal signal sequence, a transmembrane domain, a single catalytic domain, and a C-terminal cytosolic domain (Turner, 2020). The single catalytic domain is the outermost portion of the protein; the N-terminal signal sequence is also found outside the cell (extracellular), while the C-terminal sequence is found inside the cell (intracellular) (Clarke et al., 2011).
3.2 Location: ACE-2 is a receptor protein located on the plasma membranes of lung cells, skin cells, stomach cells, liver cells, kidney cells, bone marrow cells, and much of the respiratory
tract (Hamming et al., 2004). Recent studies have found that ACE-2 is also present in the hypothalamus, brainstem, and cerebral cortex, raising the concern that brain cells may also be vulnerable to the coronavirus (Kabbani et al., 2020).
3.3 Biological Function: The primary function of ACE-2 is to cleave the peptide Angiotensin II into Angiotensin (1-7), which serves to block organ damage and helps control blood pressure (R&D Systems, 2015, Hiremath, 2020, Sriram et al., 2020). The cleavage occurs in the extracellular fluid and also acts in an anti-inflammatory capacity by reducing the activity of bradykinin, a peptide that instigates inflammation (Cyagen Newsletter, 2020, UniProt, 2020). Furthermore, ACE-2 regulates the human gastrointestinal microbiome, primarily by defending the digestive and metabolic processes of the large intestine from inflammation (Kuba, 2013).
"The primary functino of ACE-2 is to split the protein Angiotensin II into the protein Angiotensin(1-7), which serves to block organ damage and helps control blood pressure.�
Affinity of the Coronavirus for ACE-2
The coronavirus is most likely to enter the body through the respiratory system, because respiratory droplets from afflicted people contain viral particles (Ghose, 2020). These droplets can travel a remarkable distance through the air and enter a healthy person's body (Atkinson, 2009, Ghose, 2020). Within several hours, coronavirus particles cross the mucous membranes of the respiratory system and enter the cells of the human body (Anderson et al., 2020). When the coronavirus enters through the respiratory tract, it makes its way toward healthy cells bearing ACE-2 receptors to begin the attachment process. Why does the coronavirus have such a high affinity for ACE-2? The structural features of ACE-2 make it a favorable target: its single catalytic domain adopts a relatively compact conformation, and the virus carries an S protein whose binding interface is structurally complementary to ACE-2 (Shang, 2020). Once the coronavirus has identified a host cell, a series of reactions allows it to move intracellularly, beginning with action by the S protein (Zeng et al., 2008, Boopathi et al., 2020). The S protein itself consists of a receptor-binding domain known as the S1 subunit, an intracellular backbone, and a transmembrane anchor known as the S2 subunit (Li, 2016).
Figure 1: The initial translation process completed to produce pp1a and pp1ab. Image created by author. Inspiration derived from: (Sino Biological, 2006) (Burkard, 2014) (Smirnov, 2018)
The S1 subunit makes up the outer surface of the S protein, and it is the face presented to ACE-2 during attachment (Lee, 2020). Fusion begins when the N-terminal domain of the S1 subunit binds the ACE-2 receptor in a complementary fashion (Verdecchi et al., 2020, Lee, 2020). Once that initial binding occurs, a structural change in the S protein takes place, which activates a protease known as transmembrane protease serine 2 (TMPRSS2); TMPRSS2 sits very close to ACE-2 on the plasma membrane and plays a large role in viral entry in general (Hiremath, 2020, Verdecchi et al., 2020, Lee, 2020). TMPRSS2 cleaves the S protein between the S1 and S2 subunits (Verdecchi et al., 2020). Once the S protein is cleaved, the S2 subunit completes the attachment process by exposing a fusion peptide that attaches to the phospholipid bilayer (Verdecchi et al., 2020, Lee, 2020, Di Giorgio et al., 2020). Once fusion occurs, the coronavirus passes through the phospholipid bilayer of the cell and is ready to go through the series of reactions and changes required to replicate (Lee, 2020).
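The ordered dependencies in this cascade can be summarized schematically. The sketch below is only a sequence diagram in code form (hypothetical structure and names, not a biochemical model): it encodes the rule that S1-ACE-2 binding must precede TMPRSS2 cleavage, which must precede membrane fusion.

```python
# Schematic of the SARS-CoV-2 entry cascade described above.
# Purely illustrative: ordered steps with two gating conditions.
from dataclasses import dataclass

@dataclass
class HostCell:
    has_ace2: bool = True      # receptor required for S1 binding
    has_tmprss2: bool = True   # protease required for S1/S2 cleavage
    infected: bool = False

def attempt_entry(cell: HostCell) -> list:
    steps = []
    if not cell.has_ace2:
        return steps + ["no ACE-2: S1 cannot bind, entry fails"]
    steps.append("S1 receptor-binding domain binds ACE-2")
    steps.append("S protein undergoes a conformational change")
    if not cell.has_tmprss2:
        return steps + ["no TMPRSS2: S protein not cleaved, entry stalls"]
    steps.append("TMPRSS2 cleaves S between the S1 and S2 subunits")
    steps.append("S2 exposes its fusion peptide; membranes fuse")
    cell.infected = True
    steps.append("viral RNA is released into the cytosol")
    return steps

for step in attempt_entry(HostCell()):
    print("-", step)
```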
Post-fusion, ACE-2 remains bound by the S protein and is therefore unable to perform its designated function (Han et al., 2006, Xu et al., 2017, National Heart, Lung, and Blood Institute, 2020). This is dangerous because ACE-2 protects the pulmonary system from inflammation (Zhang, 2020, Povlsen et al., 2020). Specifically, ACE-2 plays a protective role in acute lung failure, so the loss of ACE-2 function can instigate inflammatory lesions in the lungs and along the respiratory tract (Kuba, 2005, Imai, 2005, Verdecchi et al., 2020).
The replication process of the coronavirus
The coronavirus has a pre-programmed objective of growing and generating an abundant supply of exact copies of itself in a process known as viral replication (Wessner, 2010). Viral replication allows the virus to survive and thrive, eventually invading organ systems. Because the coronavirus cannot divide and replicate on its own, it hijacks the organelles, molecular reactions, and metabolic processes of a human host cell to synthesize the necessary nucleic acids and proteins (Boundless Microbiology, 2020). This allows the coronavirus to exploit host cell capabilities in membrane fusion, translation, proteolysis, replication, and transcription (Shereen et al., 2020) - the operations through which its RNA is modified, edited, and copied into new strands. The coronavirus also utilizes host organelles and compartments: the rough endoplasmic reticulum, which works with ribosomes in protein synthesis; the Golgi apparatus, which modifies newly produced proteins and ships them to the appropriate locations in the cell; vesicles, which transport material across the cell; and the cytosol, the fluid between the plasma membrane and the organelles (Shereen et al., 2020). The coronavirus spends most of its time and effort in the ribosomes and the cytosol, with the rough endoplasmic reticulum, Golgi apparatus, and vesicles mostly serving to prepare for exocytosis, or exit from the cell. As a whole, the coronavirus replication process can be broken down into four steps: attachment and fusion, as described in the preceding section; virus disassembly; viral biosynthesis; and virus assembly (Hammer, 2020).
Virus Disassembly:
Figure 2: The replicase/transcriptase complex, which produces the mRNA templates. Image created by author. Inspiration derived from: (Sino Biological, 2006), (Burkard, 2014), (Smirnov, 2018)
Once in the cytosol, the coronavirus, consisting of all its proteins and RNA, is ready to be disassembled via membrane uncoating (Haywood, 2010). Membrane uncoating is carried out by lysosomes (enzyme-filled organelles that hydrolyze and recycle excess material in the cell) within the host cell; it separates the viral membrane from its contents and exposes the single-stranded RNA for use throughout the replication process (Haywood, 2010). With this process complete, the 30,000-nucleotide genomic RNA is uncoated and ready for the viral biosynthesis step of replication.
Viral Biosynthesis:
The positive genomic RNA strand (5' to 3') now begins a series of biosynthetic steps to produce new viral RNA and viral proteins. First, the RNA undergoes translation. Translation begins with the coronavirus RNA attaching to a nearby ribosome in the cytosol (Sino Biological, 2006). Then ORF1a, one region of the viral RNA genome, is translated into polyprotein 1a (pp1a) before translation is halted by an RNA pseudoknot located just before the termination codon of ORF1a (Ziebuhr et al., 1999, Fehr et al., 2015). The RNA pseudoknot, acting at the slippery sequence, alters the reading frame by making the translating ribosome shift one nucleotide in the negative (-1) direction (Lim et al., 2016). This process is known as programmed -1 ribosomal frameshifting (-1 PRF); the one-nucleotide shift allows the ribosome to bypass the ORF1a termination codon and continue translating through ORF1b (Plant et al., 2008). The
translation process is then complete at the end of ORF1b and results in a larger hybrid protein spanning the two open reading frames, known as polyprotein 1ab (pp1ab) (Ziebuhr et al., 1999). With pp1a and pp1ab now produced through translation, the two polyproteins undergo proteolysis, a process in which proteins are cleaved into smaller component polypeptides, yielding the fifteen nonstructural proteins (nsps) (Wit et al., 2016, Nature, 2020).
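The frameshift itself is easy to see in a toy model. The sketch below uses a made-up mini-genome rather than real SARS-CoV-2 sequence: read in frame, translation hits a stop codon (the end of "ORF1a"); if the ribosome slips back one nucleotide at a designated codon first, the stop is never read in frame and translation continues into "ORF1b".

```python
# Toy model of programmed -1 ribosomal frameshifting (-1 PRF).
# Illustrative only: the sequence and slip position are invented.
from typing import List, Optional

STOPS = ("UAA", "UAG", "UGA")

def translate(rna: str, slip_after: Optional[int] = None) -> List[str]:
    """Return the codons read by the ribosome; slip_after is the number
    of codons read before a -1 frameshift occurs (None = no shift)."""
    codons, pos = [], 0
    while pos + 3 <= len(rna):
        codon = rna[pos:pos + 3]
        if codon in STOPS:                 # in-frame stop: terminate
            codons.append(codon + "(stop)")
            break
        codons.append(codon)
        pos += 3
        if slip_after is not None and len(codons) == slip_after:
            pos -= 1                       # the -1 slip itself
    return codons

rna = "AUGGCUUUUUAAGGGCCC"                 # stop "UAA" sits in frame 0
print("no slip:", translate(rna))          # terminates at the stop codon
print("-1 PRF: ", translate(rna, slip_after=3))  # reads through it
```

In the real genome the slip happens at a specific slippery heptanucleotide stabilized by the pseudoknot, but the reading-frame arithmetic is the same.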
Second, the initial positive genomic coronavirus RNA strand (5' to 3') goes through a replicase-transcriptase complex, which begins with the initial coronavirus RNA traveling down the cytosol to link up with the fifteen nonstructural proteins produced in the translation phase (Wit et al., 2016). With the initial RNA coated with nonstructural proteins, it replicates itself to produce a negative genomic RNA strand (3' to 5'), complementary and in the opposite orientation to the initial positive strand (Sino Biological, 2006, Wit et al., 2016). The first molecular process of replication uses this negative strand to produce a positive genomic RNA strand (5' to 3') very similar to the RNA of the coronavirus that originally fused into the cell (Sino Biological, 2006, Wit et al., 2016). This newly produced positive genomic RNA strand will make up the intracellular component of the new coronavirus (Sino Biological, 2006); it travels further down the cytosol, where it waits until the assembly stage.
Figure 3: The final translation process that produces the viral proteins, which are assembled along with the RNA to form the new coronavirus particle. Image created by the author. Inspiration derived from: (Sino Biological, 2006)
“Assembly of the coronavirus occurs largely in the endoplasmic reticulum-Golgi intermediate compartment (ERGIC) as follows...”
molecular process is discontinuous transcription, in which the viral RNA polymerase transcribes the negative genomic RNA strand to produce a set of subgenomic mRNA strands (Wit et al., 2016; Smirnov et al., 2018). The negative genomic RNA strand contains a leader transcription-regulatory sequence (TRS) and around nine body transcription-regulatory sequences (body TRSs), spaced at small distances from one another near the 5' end of the strand (Marle et al., 1999). RNA polymerase transcribes from the 5' end until it arrives at the first body TRS (Sawicki et al., 2003). There it stops and jumps directly to the leader TRS located at the 3' end (Komissarova et al., 1996). The segment of RNA that was jumped over is not included in the mRNA transcript (Wit et al., 2016). Once transcribed, the small mRNA strand is added to a growing set just below the negative genomic RNA strand in the cytosol (Komissarova et al., 1996; Sawicki et al., 2003). The RNA polymerase then transcribes the RNA again from the 5' end until it arrives at the second body TRS, which lies farther along than the first (Marle et al., 1999; Sawicki et al., 2003). RNA polymerase again stops, jumps to the leader TRS, performs transcription, and produces another mRNA strand that joins the set (Komissarova et al., 1996). The process is identical to that at the first body TRS; however, since the second body TRS lies farther along, RNA polymerase transcribes more RNA before reaching it.
This makes the second mRNA strand slightly longer than the first. The same process repeats at the next several body TRSs in succession, with each mRNA transcript coming out slightly longer than the previous one. Discontinuous transcription ultimately yields a nested set of nine subgenomic mRNAs that gradually increase in size and exert their functions in the next phase of viral biosynthesis (Sino Biological, 2006).
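The nested-mRNA logic of this model is easy to mimic in a few lines. The sketch below follows the description above: copy from the 5' end to each body TRS, jump to the leader, and omit the skipped segment, so each farther TRS yields a slightly longer transcript. Sequences, lengths, and TRS positions are placeholders, not real coronavirus coordinates.

```python
# Toy rendering of the discontinuous-transcription model described above.
# All sequences and TRS offsets are illustrative placeholders.

LEADER = "LLLLL"                           # stands in for the leader sequence
TEMPLATE = "N" * 50                        # placeholder 50-nt template strand
BODY_TRS_POSITIONS = (10, 18, 26, 34, 42)  # hypothetical body TRS offsets

def subgenomic_mrnas(template, trs_positions, leader):
    """One transcript per body TRS; the segment jumped over is omitted."""
    return [template[:pos] + leader for pos in trs_positions]

for i, mrna in enumerate(subgenomic_mrnas(TEMPLATE, BODY_TRS_POSITIONS, LEADER), 1):
    print(f"subgenomic mRNA {i}: length {len(mrna)} nt")
# Each successive transcript is slightly longer, giving the graded set of
# subgenomic mRNAs described in the text (five of the nine shown here).
```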
Virus Assembly: Assembly of the coronavirus occurs largely in the endoplasmic reticulum-Golgi intermediate compartment (ERGIC), as follows (Fehr et al., 2015; Lodish, 2020). M protein: The M protein is the chief organizer of virus assembly in the ERGIC (Siu et al., 2008). To begin, the M protein binds the E protein, forming complexes that organize, mature, and ultimately produce the viral envelope of the new coronavirus (Siu et al., 2008; Lim et al., 2016). With the viral envelope produced, there is an opening for the extracellular viral proteins to attach and take up their positions on the envelope. Next, the M protein binds the S protein, configuring the proper locations and number of S proteins to be placed on the viral membrane of the new coronavirus (de Haan et al., 2005; Fehr et al., 2015). If there is an oversupply of S proteins, the M proteins release the extras into the cytosol, where lysosomes digest them. E protein: The E protein manipulates the membrane to produce curvature and ensure that the coronavirus is a stable, sphere-shaped virus (Raamsman et al., 2000; Fehr et al., 2015). The membrane curvature is particularly valuable because it provides the virion with the mobility and transportability it needs to travel through the body. The E protein also monitors the M protein to ensure that it is accurately completing its functions and advancing the assembly process (Boscarino et al., 2008). S protein: The S protein, while extremely active in fusion, has a very limited role in the assembly of the coronavirus, functioning only as a trafficking and regulating agent in the ERGIC (Fehr et al., 2015). N protein: The N protein delivers virus-like proteins to the ERGIC, promoting a more stable viral envelope when it binds with the other viral proteins to stabilize the newly constructed virus (Lim et al., 2016). The N protein is also responsible for placing the genomic RNA strand in the correct position within the new coronavirus, the final step in viral assembly (Krijnse-Locker et al., 1994). Altogether, the four viral proteins complete their functions and, slowly but accurately, finish the assembly of the coronavirus. A few steps remain, however, before the virus can leave the human cell. The Golgi apparatus packages the newly produced coronavirus into Golgi vesicles (Sino Biological, 2006; Antibodies Online, 2020). The Golgi vesicles act as transporters of the coronavirus from the Golgi apparatus to the plasma membrane of the human cell. When a vesicle arrives at the plasma membrane, the virus is released into the extracellular fluid, a process known as exocytosis (Antibodies Online, 2020). Exocytosis completes the process of coronavirus replication and yields another virulent particle that can travel to other organ systems and continue hijacking cells to repeat this process.
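The division of labor among the four structural proteins can be summarized compactly. The mapping below simply restates the roles described above in code form; it is a mnemonic, not a data model from any real bioinformatics library.

```python
# The assembly roles described above, restated as a simple mapping.

ASSEMBLY_ROLES = {
    "M": "organizes assembly; binds E to build the envelope and positions S",
    "E": "curves the membrane into a stable, sphere-shaped virion",
    "S": "limited role: trafficking and regulation within the ERGIC",
    "N": "binds the genomic RNA and places it inside the new virion",
}

def assembly_complete(present):
    """Toy check: the assembly described requires all four proteins."""
    return set(ASSEMBLY_ROLES) <= set(present)

print(assembly_complete({"M", "E", "S", "N"}))  # True
print(assembly_complete({"M", "E", "S"}))       # False: no N, no packaged RNA
```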
The coronavirus may be novel, but the biological sophistication and resilience on display throughout this replication cycle mark it as a virus capable of gaining rapid, sweeping control over an entire organism.
References
Anderson, E. L., Turnham, P., Griffin, J., & Clarke, C. (2020, May 1). Consideration of the Aerosol Transmission for COVID-19 and Public Health. PubMed. https://pubmed.ncbi.nlm.nih.gov/32356927/
Antibodies Online. (2020, March 31). SARS-CoV-2 Life Cycle: Stages and Inhibition Targets. https://www.antibodies-online.com/resources/18/5410/sars-cov-2-life-cycle-stages-and-inhibition-targets/
Atkinson, J. (2009). Respiratory droplets - Natural Ventilation for Infection Control in Health-Care Settings - NCBI Bookshelf. National Institutes of Health (NIH). https://www.ncbi.nlm.nih.gov/books/NBK143281/
Boopathi, S., Poma, A., & Kolandaivel, P. (2020, April 30). Novel 2019 coronavirus structure, mechanism of action, antiviral drug promises and rule out against its treatment. Taylor & Francis. https://www.tandfonline.com/doi/full/10.1080/07391102.2020.1758788
Boscarino, J. A., Logan, H. L., Lacny, J. J., & Gallagher, T. M. (2008, January 9). Envelope Protein Palmitoylations Are Crucial for Murine Coronavirus Assembly. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2258982/
Burkard, C. (2014, November 6). Coronavirus Entry Occurs through the Endo-/Lysosomal Pathway in a Proteolysis-Dependent Manner. PLOS Pathogens. https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1004502
Clarke, N., & Turner, A. (2011, November 10). Angiotensin-Converting Enzyme 2: The First Decade. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3216391/
Cyagen Newsletter. (2020, April 1). What Roles Does ACE2 Play? Cyagen. https://www.cyagen.com/us/en/community/technical-bulletin/ace2.html
de Haan, C. A. M., & Rottier, P. J. M. (2005, August 31). Molecular Interactions in the Assembly of Coronaviruses. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7112327/
Di Giorgio, S., Martignano, F., Torcia, M. G., Mattiuz, G., & Conticello, S. G. (2020). Evidence for host-dependent RNA editing in the transcriptome of SARS-CoV-2 in humans. Science Advances. https://advances.sciencemag.org/content/advances/early/2020/05/15/sciadv.abb5813.full.pdf
Fehr, A., & Perlman, S. (2015, February 12). Coronaviruses: An Overview of Their Replication and Pathogenesis. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4369385/
Ghose, T. (2020, April 7). How are people being infected with COVID-19? Live Science. https://www.livescience.com/how-covid-19-spreads-transmission-routes.html
Hammer, S. M. (2020). Viral Replication. Columbia University. http://www.columbia.edu/itc/hs/medical/pathophys/id/2004/lecture/notes/viral_rep_Hammer.pdf
Hamming, I., & Timens, W. (2004, June). Tissue distribution of ACE2 protein, the functional receptor for SARS coronavirus. A first step in understanding SARS pathogenesis. PubMed. https://pubmed.ncbi.nlm.nih.gov/15141377/
Han, D., Penn-Nicholson, A., & Cho, M. (2006, February 28). Identification of critical determinants on ACE2 for SARS-CoV entry and development of a potent entry inhibitor. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7111894/
Haywood, A. M. (2010, November 1). Membrane Uncoating of Intact Enveloped Viruses. Journal of Virology. https://jvi.asm.org/content/84/21/10946
Hiremath, S. (2020, March 14). Should ACE-inhibitors and ARBs be stopped with COVID-19? NephJC. http://www.nephjc.com/news/covidace2
Imai, Y. (2005, July 7). Angiotensin-converting enzyme 2 protects from severe acute lung failure. Nature. https://www.nature.com/articles/nature03712
Jia, H. P., Look, D., & Tan, P. (2009, May 1). Ectodomain shedding of angiotensin converting enzyme 2 in human airway epithelia. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2711803/
Kabbani, N., & Olds, J. (2020, March 27). Does COVID19 Infect the Brain? If So, Smokers Might Be at a Higher Risk. Molecular Pharmacology. http://molpharm.aspetjournals.org/content/molpharm/97/5/351.full.pdf
Komissarova, N., & Kashlev, M. (1996, December 26). RNA Polymerase Switches between Inactivated and Activated States By Translocating Back and Forth along the DNA and the RNA. Journal of Biological Chemistry. https://www.jbc.org/content/272/24/15329.full.pdf
Krijnse-Locker, J., Ericcson, M., Rottier, P., & Griffiths, G. (1994, January 1). Characterization of the budding compartment of mouse hepatitis virus: evidence that transport from the RER to the Golgi complex requires only one vesicular transport step. Journal of Cell Biology. Rockefeller University Press. https://rupress.org/jcb/article/124/1/55/56282/
Kuba, K. (2005, July 10). A crucial role of angiotensin converting enzyme 2 (ACE2) in SARS coronavirus-induced lung injury. Nature Medicine. https://www.nature.com/articles/nm1267
Kuba, K. (2013, January 18). Multiple functions of angiotensin-converting enzyme 2 and its relevance in cardiovascular diseases. PubMed. https://pubmed.ncbi.nlm.nih.gov/23328447/
Lee, J. (2020, April 1). How the SARS-CoV-2 Coronavirus Enters Host Cells and How To Block It. Promega Connections. https://www.promegaconnections.com/how-the-coronavirus-enters-host-cells-and-how-to-block-it/
Li, F. (2016, September 29). Structure, Function, and Evolution of Coronavirus Spike Proteins. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5457962/
Lim, Y. X., Ng, Y. L., & Tam, J. P. (2016, July 25). Human Coronaviruses: A Review of Virus-Host Interactions. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5456285/
Lodish, H. (2020, August). Overview of the Secretory Pathway - Molecular Cell Biology - NCBI Bookshelf. National Institutes of Health (NIH). https://www.ncbi.nlm.nih.gov/books/NBK21471/
Marle, G., Dobbe, J., Gultyaev, A., Luytjes, W., Spaan, W., & Snijder, E. (1999, October 12). Arterivirus discontinuous mRNA transcription is guided by base pairing between sense and antisense transcription-regulating sequences. PNAS. https://www.pnas.org/content/96/21/12056.figures-only
National Heart, Lung, and Blood Institute. (2020). Respiratory Failure. National Institutes of Health. https://www.nhlbi.nih.gov/health-topics/respiratory-failure
Nature. (2020, August 21). Cell density-dependent proteolysis by HtrA1 induces translocation of zyxin to the nucleus and increased cell survival. https://www.nature.com/subjects/proteolysis
Patel, N. (2020, April 15). How does the coronavirus work? MIT Technology Review. https://www.technologyreview.com/2020/04/15/999476/explainer-how-does-the-coronavirus-work/
Plant, E., & Dinman, J. (2008, May 1). The role of programmed-1 ribosomal frameshifting in coronavirus propagation. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2435135/
Povlsen, A. L., Grimm, D., Wehland, M., Infranger, M., & Kruger, M. (2020, January 18). The Vasoactive Mas Receptor in Essential Hypertension. MDPI. https://www.mdpi.com/2077-0383/9/1/267/htm
Raamsman, M. J. B., Locker, J. K., de Hooge, A., de Vries, A. A. F., Griffiths, G., Vennema, H., & Rottier, P. J. M. (2000, March 1). Characterization of the Coronavirus Mouse Hepatitis Virus Strain A59 Small Membrane Protein E. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC111715/
R&D Systems. (2015). ACE-2: The Receptor for SARS-CoV-2. https://www.rndsystems.com/resources/articles/ace-2-sars-receptor-identified
Sawicki, S., Sawicki, D., & Siddell, S. (2003, August 23). A Contemporary View of Coronavirus Transcription. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1797243/
Schelle, B., Karl, N., Ludewig, B., Siddell, S., & Thiel, V. (2005, June 1). Selective Replication of Coronavirus Genomes That Express Nucleocapsid Protein. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1112145/
Shang, J. (2020, March 30). Structural basis of receptor recognition by SARS-CoV-2. Nature. https://www.nature.com/articles/s41586-020-2179-y
Shereen, M., Khan, S., Kazmi, A., Bashir, N., & Siddique, R. (2020, July 1). COVID-19 infection: Origin, transmission, and characteristics of human coronaviruses. ScienceDirect. https://www.sciencedirect.com/science/article/pii/S2090123220300540
Sigma Aldrich. (2020). Coronavirus (SARS-CoV-2) Viral Proteins. https://www.sigmaaldrich.com/technical-documents/protocols/biology/ncov-coronavirus-proteins.html
Sino Biological. (2006). Coronavirus Replication. https://www.sinobiological.com/research/virus/coronavirus-replication
Siu, Y. L., Teoh, K. T., Lo, J., Chan, C. M., Kien, F., Escriou, N., Tsao, S. W., Nicholls, J. M., Altmeyer, R., Pieris, J. S. M., Bruzzone, R., & Nal, B. (2008, November 15). The M, E, and N Structural Proteins of the Severe Acute Respiratory Syndrome Coronavirus Are Required for Efficient Assembly, Trafficking, and Release of Virus-Like Particles. Journal of Virology. https://jvi.asm.org/content/82/22/11318
Smirnov, E., Hornacek, M., Vacik, T., Cmarko, D., & Raska, I. (2018, January 30). Discontinuous transcription. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5973254/
Sriram, K., Insel, P., & Loomda, R. (2020, May 14). What is the ACE2 receptor, how is it connected to coronavirus and why might it be key to treating COVID-19? The experts explain. The Conversation. https://theconversation.com/what-is-the-ace2-receptor-how-is-it-connected-to-coronavirus-and-why-might-it-be-key-to-treating-covid-19-the-experts-explain-136928
Turner, A. (2015, January 1). ACE2 Cell Biology, Regulation, and Physiological Functions. ScienceDirect. https://www.sciencedirect.com/science/article/pii/B9780128013649000250
Turner, A. J. (2020, April). ACEH/ACE2 is a novel mammalian metallocarboxypeptidase and a homologue of angiotensin-converting enzyme insensitive to ACE inhibitors. PubMed. https://pubmed.ncbi.nlm.nih.gov/12025971/
UniProt. (2020). ACE2 - Angiotensin-converting enzyme 2 precursor - Homo sapiens (Human) - ACE2 gene & protein. https://www.uniprot.org/uniprot/Q9BYF1
Verdecchi, P., Cavallini, C., Spanevello, A., & Angeli, F. (2020). The pivotal link between ACE2 deficiency and SARS-CoV-2 infection. European Journal of Internal Medicine. https://www.ejinme.com/action/showPdf?pii=S0953-6205%2820%2930151-5
Viral Replication | Boundless Microbiology. (2020). Lumen Learning. https://courses.lumenlearning.com/boundless-microbiology/chapter/viral-replication/
Warner, F. J. (2004, November). Angiotensin-converting enzyme-2: a molecular and cellular perspective. PubMed. https://pubmed.ncbi.nlm.nih.gov/15549171/
Wessner, D. (2010). Origin of Viruses. Scitable, Nature. https://www.nature.com/scitable/topicpage/the-origins-of-viruses-14398218/
Wit, E., van Doremalen, N., Falzarano, D., & Munster, V. (2016, June 27). SARS and MERS: recent insights into emerging coronaviruses. Nature Reviews Microbiology. https://www.nature.com/articles/nrmicro.2016.81
Xu, J., Fan, J., & Wu, F. (2017, May 8). The ACE2/Angiotensin-(1-7)/Mas Receptor Axis: Pleiotropic Roles in Cancer. Frontiers. https://www.frontiersin.org/articles/10.3389/fphys.2017.00276/full
Zeng, Q., Langereis, M., van Vliet, A. L. W., Huizinga, E., & de Groot, R. J. (2008, July 1). Structure of coronavirus hemagglutinin-esterase offers insight into corona and influenza virus evolution. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2449365/
Zhang, H. (2020, March 3). Angiotensin-converting enzyme 2 (ACE2) as a SARS-CoV-2 receptor: molecular mechanisms and potential therapeutic target. Intensive Care Medicine. https://link.springer.com/article/10.1007/s00134-020-05985-9
Ziebuhr, J., & Siddell, S. (1999, January). Processing of the Human Coronavirus 229E Replicase Polyproteins by the Virus-Encoded 3C-Like Proteinase: Identification of Proteolytic Products and Cleavage Sites Common to pp1a and pp1ab. PubMed Central (PMC). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC103821/
Gasotransmitters: New Frontiers in Neuroscience BY AUDREY HERRALD '23 Cover Image: A three-dimensional representation of the nitric oxide synthase (NOS) protein, a critical player in the synthesis of nitric oxide (NO), the first identified gasotransmitter. Created by Jawahar Swaminathan and MSD staff at the European Bioinformatics Institute, from Wikimedia Commons
Introduction The average adult brain comprises about 86 billion neurons (Herculano-Houzel, 2009). Communication between the neurons in this vast network is important for cognition, behavior, and even physiological function; in short, neural signaling is a critical element of life (Neural Signaling, 2001). This signaling generally occurs through one or a combination of two mechanisms: electrical transmission and chemical transmission. The "all-or-nothing" action potential is a relatively well-known example of electrical transmission, and endogenous signaling molecules called neurotransmitters are equally familiar. However, a lesser-known but profoundly important class of molecules regulates our cardiovascular, nervous, gastrointestinal, excretory, and immune systems—in addition to many cellular functions, including apoptosis, proliferation, inflammation, cellular metabolism, oxygen sensing, and gene
transcription (Handbook of Hormones, 2016). Their name? Gasotransmitters. The term "gasotransmitter" was first coined in 2002, when a team of researchers identified hydrogen sulfide (H2S) as the third gaseous signaling molecule of its kind (Wang et al., 2002). The rotten-egg-smelling molecule joined nitric oxide (NO) and carbon monoxide (CO) in the group of molecules referred to as gasotransmitters. Since then, advances in understanding of the cellular signaling process have led to the proposed identification of other gasotransmitters, like ammonia (NH3). These molecules dictate a wide variety of physiological processes, and their mechanisms of effect are accordingly varied. Section 1 of this paper addresses the various functional mechanisms of the three broadly accepted gasotransmitters, in addition to providing a clearer profile of what, exactly, these gasotransmitters look like and how they mediate neural connectivity. Section 2 addresses the role of gasotransmitters in a number of neurological diseases and psychiatric conditions, along with potential avenues for gasotransmitter-related treatment.
Form and Function
What are gasotransmitters? Gasotransmitters are gaseous signaling molecules produced within the body. Their characterization as such is recent, and discussions regarding the molecules' official designation are still ongoing; some suggest the term "gaseous messengers," while others advocate for "small-molecule signaling species" (Wang, 2018). The term "gasotransmitter," however, refers to a specific set of criteria that do not necessarily apply to the alternate names—hence the term's selection for this article. Gasotransmitters, for example, are endogenous. This means that oxygen (O2), which can be classified as both a "gaseous messenger" and a "small-molecule signaling species," cannot be a gasotransmitter (Wang, 2018). The proposal of alternate terms invites ambiguity. However, these proposals also point to an important truth: interest in gasotransmitters is quickly growing within the scientific community. Accordingly, the synthesis, form, and function of gasotransmitters are emerging with increasing clarity. Currently, the three molecules widely recognized as gasotransmitters are nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S). Each of these primary gasotransmitters has a series of chemical derivatives (such as nitrite (NO2−), nitrate (NO3−), nitrous oxide (N2O), and nitroxyl (HNO) for NO). These derivatives are classified within the gasotransmitter family, even though some exist in non-gaseous forms, because they often perform signaling functions in place of their primary molecule or help to buffer fluctuations in gasotransmitter levels (Ida et al., 2014). The derivatives play important physiological roles, which are addressed in subsequent sections. First, though, it is important to understand the general functional mechanisms of the primary gasotransmitters. All molecules classified as gasotransmitters must fulfill six defining criteria (Wang, 2002):
Figure 1: Representation of the enzyme soluble guanylate cyclase (sGC), the only known target of the NO gasotransmitter. Created by Audrey Herrald; reference: Elango, 2017
(i) They are small molecules of gas.
(ii) They are freely permeable through membranes and therefore act without specified membrane receptors.
(iii) They are generated endogenously.
(iv) They have well-defined, specific functions at physiologically relevant, generally exceptionally low, concentrations.
(v) Their functions can be mimicked by exogenous counterparts.
(vi) Their cellular effects may or may not be mediated by secondary messengers, but specific final targets are implicated.
The first three criteria refer to the molecules themselves, while the last three refer to their physiological effects. First, there is the size and state requirement. "Small" here is defined as having a molecular mass between 28 and 34 amu, which excludes a vast array of endogenous species (Wang, 2002). The requirement of a gaseous state is also important; gasotransmitters exist either in gaseous form or are dissolved in circulation, intracellular fluid, and/or interstitial (between cells) fluid (Wang, 2018). Next comes membrane permeability. Gasotransmitters do not require cognate membrane receptors to interact with cells; their gaseous state enables them to 'slip through the gate' without the need for a gatekeeper. This is where gasotransmitters differ from well-known transmitters like hormones, neurotransmitters, and drugs, all of which require and interact extensively with cellular receptors. Third, gasotransmitters must be produced endogenously. Thus far, all identified gasotransmitters are not only produced endogenously, but produced specifically through highly regulated enzymatic processes. The careful bioregulation of these molecules
“Currently, the three molecules widely recognized as gasotransmitters are nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S).”
Figure 2: Molecular orbital diagram for NO Source: Wikimedia Commons
makes sense, as CO, NO, and H2S can all be highly toxic in unregulated conditions (CO and NO interfere with oxygen exchange and H2S is a respiratory tract irritant) (Wareham et al., 2018).
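Criterion (i) is easy to check by hand. The snippet below totals standard atomic masses for the three accepted gasotransmitters, treating the 28-34 amu window as approximate after rounding (an assumption of this sketch, since H2S sits right at the window's upper edge).

```python
# Checking criterion (i): a molecular mass of roughly 28-34 amu.
# Atomic masses are standard values rounded to two decimals; treating
# the window as inclusive after rounding is this sketch's assumption.

ATOMIC_MASS = {"N": 14.01, "O": 16.00, "C": 12.01, "H": 1.01, "S": 32.06}

def molecular_mass(formula):
    """Sum atomic masses for a composition given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

for name, formula in {
    "NO": {"N": 1, "O": 1},
    "CO": {"C": 1, "O": 1},
    "H2S": {"H": 2, "S": 1},
}.items():
    m = molecular_mass(formula)
    print(f"{name}: {m:.2f} amu, in window: {28 <= round(m) <= 34}")
```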
“The possibility for mimicry allows scientists to selectively manipulate certain qualities (like concentration, for example) of the gasotransmitters, and then observe their physiological effects.”
Now come the function-related stipulations for molecules in the gasotransmitter family. The fourth criterion has to do with both concentration and physiological effect: gasotransmitters must play a role that is both well-defined and specific to that molecule, and this specific molecular function must take place at a known (and generally low) endogenous concentration. This means that manipulating the concentration of gasotransmitters within the body should evoke specific physiological changes, in line with the identified role of the molecule. The fifth criterion has particularly noteworthy implications for the study of gasotransmitters. It states that exogenous (created outside of the body) substances can effectively mimic the activity of endogenous gasotransmitters. This possibility for mimicry allows scientists to selectively manipulate certain qualities (like concentration, for example) of the gasotransmitters, and then observe their physiological effects. Studies like these help researchers understand more about which characteristics of gasotransmitters hold particular significance. An often-utilized tool for these types of studies is a class of substances called NO-releasing compounds, which solve the issue of NO administration (high reactivity in air and concentration-dependent function) by reversibly storing and releasing NO under specific conditions (Cheng, 2019). The recent engineering of these “NO donors” is just one example of the many scientific developments
elicited by our emerging understanding of gasotransmitters. Finally, the sixth criterion refers to both signaling mechanism and signaling result. All gasotransmitters have certain physiological effects, from regulation of the immune, cardiovascular, and nervous systems to initiation of cellular events like apoptosis, inflammation, and proliferation (Handbook of Hormones, 2016). These physiological effects depend on a series of interactions between gasotransmitters and proteins. The sixth gasotransmitter qualification criterion clarifies that even though gasotransmitters must have specific physiological effects, these effects need not result from direct interaction with a gasotransmitter. In other words, the series of events from gasotransmitter to end-result can be long, provided that the path between the pair can be clearly traced. One simple example of this principle involves the dilation of blood vessels via NO signaling: while NO can induce vasodilation directly by binding to oxygen-carrying myoglobin (thus enabling oxygen-dependent muscle cells in vasculature walls to dilate), NO might also interact with the enzyme guanylyl cyclase (GC) to initiate a chain reaction with a series of intermediaries prior to inducing blood vessel dilation (Ormerod et al., 2011). Either way, according to the six generally accepted criteria listed above, NO passes the test and is classified as a gasotransmitter. Meet the threesome Though technological advancements and new research keep leading to the proposal
Figure 3: Electron configuration for Fe2+ Created with MEL VR Virtual Chemistry by Audrey Herrald.
of additional gasotransmitters, the family of signaling molecules currently consists of three definite members: nitric oxide (NO), carbon monoxide (CO), and hydrogen sulfide (H2S) (Shefa et al., 2017; Wang et al., 2020). Each of these molecules interacts differently with its various physiological targets, and their mechanisms of synthesis are unique. First, the knowns: scientists believe that they have successfully identified many of the biological targets for NO, CO, and H2S. The mechanisms of interaction with these targets, however, remain somewhat ambiguous, and researchers still hope to uncover the identities of the proteins that modulate gasotransmitter function, the effect of each gasotransmitter's production on the other gasotransmitters, and the characteristics of gasotransmitter sensor proteins, among other questions. Future research will build in part upon two key areas of gasotransmitter understanding, as outlined by field pioneer Rui Wang: interactions with producers, and interactions with targets (Wang, 2018). First, for producer interactions: the identification criteria stipulate that gasotransmitters must be created inside the body, which makes synthesis an important element of gasotransmitter function. Each of the three main gasotransmitters, H2S, NO, and CO, is produced through enzymatic processes. Hydrogen sulfide production depends primarily upon three enzymes: cystathionine γ-lyase (CSE), cystathionine β-synthase (CBS), and 3-mercaptopyruvate sulfurtransferase (MST) (Zhu, 2017). Enzymatic production of NO is catalyzed by three subtypes of nitric oxide synthase (NOS): endothelial NOS (eNOS), inducible NOS (iNOS), and neuronal NOS (nNOS) (Wang, 2018). Lastly, endogenous CO is produced through a catalytic process involving heme oxygenase (HO).
In these enzymatic processes, different amino acids—either obtained through consumption of protein-rich foods or produced through the endogenous breakdown of other consumed proteins—bind to one of the enzymes mentioned above. When this binding occurs, slow-moving chemical reactions accelerate greatly (more than a millionfold), resulting in the production of new substances (Castro, 2014). Variations of this enzymatic synthesis generate H2S, NO, and CO. Upon production, gasotransmitters begin interacting with their physiological targets. Importantly, though one might read that gasotransmitters regulate the immune system or increase breathing rate (Handbook of Hormones, 2016), gasotransmitters (like other bodily signaling molecules) seldom act directly on these systems. In reality, newly synthesized molecules of H2S, NO, and CO tend to exert small effects on specific targets, the results of which trigger physiological processes that might indeed lead to improvements in immune function or an uptick in respiration. Each gasotransmitter has a wide range of initial targets and a wider range of physiological functions, many of which likely remain to be discovered. However, a few key mechanisms for physiological function have been thoroughly outlined. The next section provides a demonstrative example of one such pathway: the reduction of tension in blood vessel walls, also known as "vasorelaxation," initiated by the gasotransmitter NO (Wang, 2018).
“... though one might read that gasotransmitters regulate the immune system or increase breathing rate, gasotransmitters (like other bodily signaling molecules) seldom act directly on these systems.”
Nitric Oxide and Vasorelaxation: From synthesis to final effect Nitric oxide (NO) has only one known target, called soluble guanylate cyclase (sGC). sGC
is an enzyme (encoded by three different genes) that is composed of two subunits. Both subunits of the sGC enzyme have four domains, each involved in different elements of enzyme function. Figure 1 provides a visual representation of the two subunits and eight domains. One of the sGC subunits, the "beta" subunit, includes a small adjunct molecule called a ligand. This specific ligand is histidine, a common amino acid, coordinated to an iron-containing group known as a "heme." The Fe2+ at the heme's center is the specific target of NO.
“Due to its many roles, loss of NO bioactivity contributes to disease in many conditions - making restoration of NO bioavailability an attractive therapeutic avenue for many diseases and disorders."
Nitric oxide binds to the Fe2+ heme with particularly high affinity. Why? The answer lies in the arrangement of electrons associated with each species. In NO, electrons from the nitrogen and oxygen atoms come together in such a way that one "molecular orbital," called the pi* orbital, remains only partially filled. (Molecular orbitals are mathematical functions that describe the probability of finding electrons in certain regions around the molecule.) The partially filled pi* orbital of NO appears in Figure 2 as lines without arrows (without "electrons"), which helps visualize the outermost electrons associated with NO. The heme on the beta subunit of sGC contains Fe2+, which can be visualized with a representation like the one in Figure 3. As in Figure 2, the arrows represent electrons. In general, for two species to bond by sharing electrons, they must possess valence electrons at comparable energy levels, and the orbitals—the regions where these electrons are most likely to be found—must match up in a particular way. In the case of NO and Fe2+, if both species were to retain the electron arrangements depicted in Figures 2 and 3, the electrons wouldn't quite match up. However, a phenomenon known as "back bonding" enables electrons from Fe2+ to move from their highest-energy atomic orbitals (see the "3d" label in Figure 3) into the pi* molecular orbital of NO (see the empty lines in the center of Figure 2) (Cotton et al., 1999). Nitric oxide, with one unpaired electron, has a particular affinity for back bonding in appropriate situations (Cotton et al., 1999). A combination of back bonding, electrostatic attraction, and a number of related forces (all driving toward a reduction of the system's free energy) allows the gasotransmitter to settle tightly into a molecular complex with the Fe2+ heme and the amino acid ligand—all just one element of the much larger sGC enzyme.
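A quick electron count makes the Figure 2 picture quantitative. Assuming the standard molecular-orbital ordering for a diatomic like NO (eleven valence electrons, one of them alone in the pi* orbital), the bond order works out as follows:

```latex
% NO valence configuration (standard diatomic MO ordering):
%   (sigma_2s)^2 (sigma*_2s)^2 (pi_2p)^4 (sigma_2p)^2 (pi*_2p)^1
\[
\text{bond order}
  = \frac{N_{\mathrm{bonding}} - N_{\mathrm{antibonding}}}{2}
  = \frac{8 - 3}{2} = 2.5
\]
% The lone pi* electron is the unpaired electron that makes NO such a
% willing partner for back bonding with the Fe2+ heme.
```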
Upon binding, the presence of NO leads to more than a 200-fold increase in sGC activity (Offermanns et al., 2008). Activation of the heme on the beta subunit of sGC (see Figure 1) initiates activity in the catalytic region of the enzyme, where sGC catalyzes the conversion of a common nucleotide, guanosine 5′-triphosphate (GTP), into an important messenger compound, cyclic guanosine monophosphate (cGMP). Remarkably, sGC is one of only two enzymes that produce cGMP. This means that sGC (and thereby NO, its key activator) is intimately involved in many signal transduction pathways. One of these pathways capitalizes on the rise in cGMP concentration (a result of NO-initiated sGC activation) to set off a signaling cascade: cGMP-dependent protein kinases, along with cation channels that open and close in accordance with cGMP concentration, promote the activity of myosin phosphatase. Finally, myosin phosphatase signaling drives smooth muscle cells to lower their levels of free calcium—thereby relaxing the muscles and leading to NO-initiated vasodilation (Denninger et al., 1999). The dilation of blood vessels is one physiological result of NO signaling, but the molecule has a wide variety of additional functions. Due to its many roles, loss of NO bioactivity contributes to disease in many conditions—making restoration of NO bioavailability an attractive therapeutic avenue for many diseases and disorders (Helms & Kim-Shapiro, 2013). And even though NO signaling affects regions throughout the body, one of its most prominent areas of impact, together with CO and H2S, is the nervous system. The next section outlines several relations between gasotransmitters and neurological conditions.
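The chain from NO binding to relaxed vessel walls can be sketched as a pipeline of simple functions. The 200-fold activation factor comes from the text; every other number and the functional forms below are toy assumptions chosen only to show the direction of each step, not real kinetics.

```python
# Schematic NO -> sGC -> cGMP -> vasorelaxation pipeline (toy model).

BASELINE_ACTIVITY = 1.0   # arbitrary units of GTP -> cGMP turnover
NO_FACTOR = 200           # >200-fold sGC activation on NO binding (text)

def sgc_activity(no_bound):
    """sGC output, amplified when NO occupies the Fe2+ heme."""
    return BASELINE_ACTIVITY * (NO_FACTOR if no_bound else 1)

def cgmp_level(activity, gtp_pool=100.0):
    """cGMP produced from the GTP pool, proportional to sGC activity."""
    return min(gtp_pool, 0.01 * activity * gtp_pool)

def vessel_tension(cgmp):
    """More cGMP -> more myosin phosphatase signaling -> lower tension."""
    return 1.0 / (1.0 + cgmp)   # arbitrary decreasing function

for no in (False, True):
    c = cgmp_level(sgc_activity(no))
    print(f"NO bound={no}: cGMP={c:.1f}, relative tension={vessel_tension(c):.3f}")
```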
Gasotransmitters and Neurological Conditions Gasotransmitters have been implicated in a number of neurological conditions, including Alzheimer's disease, autism spectrum disorder, Parkinson's disease, and multiple sclerosis (MS) (Steinert et al., 2010). These disorders all have in common the degeneration or disruption of neural plasticity. Neural plasticity, or synaptic plasticity, is the ability of neurons in the brain and spinal cord to adapt in response to changes in the environment or to damage to neural tissue (Sharma et al., 2013). This plasticity is critical for proper communication among neurons, for maintenance of bodily homeostasis, and for the regulation of neural transmission (Shefa et al., 2018). It makes sense,
then, that disruptions to neural plasticity can harbor devastating effects; from psychiatric disorders like schizophrenia and bipolar disorder to neurodegenerative disorders like Alzheimer's disease, the link between synaptic plasticity and many common neurological conditions is becoming increasingly clear (Shefa et al., 2018; Kumar et al., 2017). Now that gasotransmitters—NO in particular—have emerged as important regulators of synaptic plasticity, the transmitters' roles in neurological conditions are an increasingly common subject of investigation (Shefa et al., 2018). Among common neurodegenerative diseases, Alzheimer's disease is one of the most closely associated with synaptic plasticity (Kumar et al., 2017). While the loss of synaptic plasticity—closely related to working memory—is not the cause of Alzheimer's disease, it may be both a symptom and a pre-diagnosis warning sign; recent research provides in vivo evidence for reduced synaptic plasticity in the brain's frontal lobe among early- and mid-stage Alzheimer's disease patients (Kumar et al., 2017). Why might reduced plasticity be a problem? Dr. Sanjeev Kumar, lead author of a recent study on Alzheimer's disease and synaptic plasticity, explains that healthy neuronal plasticity supports the brain's "cognitive reserve," the protection that offsets poorer functioning in other brain areas and shields against the development of neurodegenerative disease (Kassam, 2017). Thus, any physiological process with a significant effect on synaptic plasticity would likely affect neurodegenerative diseases like Alzheimer's as well—and gasotransmitters fit precisely this description. The upregulation or downregulation of gasotransmitters may be a cause of neurodegenerative disorders, and targeted enhancement of gasotransmitter function may prove therapeutic by restoring synaptic plasticity (Shefa et al., 2018). Another facet of the relationship between gasotransmitters and neurological disorders is the effect of gasotransmitters on oxidative stressors. The term "oxidative stress" describes an imbalance between harmful free radicals and their ameliorating counterparts, antioxidants, within the brain (Pizzino et al., 2017). This imbalance, when present, leaves neural cells vulnerable to attack from unchecked free radicals—the results of which include protein misfolding, glial cell over-activation, mitochondrial dysfunction, and (in the worst cases) subsequent cellular destruction (Kim et al., 2015). Enter
gasotransmitters. The molecules boast a variety of defenses against neurodegeneration-inducing oxidative stress. NO, perhaps the most ubiquitous of the gasotransmitters, can activate a subset of brain receptors known as NMDA receptors (NMDARs), which help defend against the harmful effects of free radicals (Hoque et al., 2010). NMDARs also initiate a signaling pattern that leads to the generation of more endogenous NO. The increased levels of NO drive further NMDAR activation, and this positive feedback loop can serve as a critical natural defender against oxidative stress (Shefa et al., 2018). The other gasotransmitters, too, can help protect the brain from neurological disorders. H2S appears to have a role in defending against major depressive disorder; studies have shown robust anti-depressive effects via a signaling pathway involving tropomyosin receptor kinase B and the mammalian target of rapamycin (the TrkB-mTOR pathway) (Hou et al., 2017). CO has been shown to restore synaptic function in both Alzheimer's disease and schizophrenia, two vastly different disorders that nonetheless share the characteristic of neural deterioration (McGlashan, 1998; Magalingam, 2018). Lastly, NO emerges again, this time as an upregulator of a messenger molecule called cyclic AMP (cAMP), whose signaling activates a restorative transcription factor called CREB and subsequently reduces the influx of Ca2+ during neural signaling; this action has been linked to reduced symptomatology in both schizophrenia and major depressive disorder.
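The NO-NMDAR loop is a positive feedback process, which a few lines of iteration can illustrate: each round of NMDAR signaling raises NO, which raises NMDAR signaling in turn, until some saturating limit is reached. The update rule, gain, and ceiling below are illustrative assumptions, not measured neurochemistry.

```python
# Toy iteration of the NO <-> NMDAR positive feedback loop.

def feedback_step(no_level, gain=0.5, ceiling=10.0):
    """One round: more NO -> more NMDAR signaling -> more NO, capped so
    the loop settles at a plateau instead of diverging."""
    return min(ceiling, no_level * (1 + gain))

no = 1.0
for _ in range(8):
    no = feedback_step(no)
print(f"NO level after eight rounds of feedback: {no:.2f}")  # hits the cap
```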
“The other gasotransmitters, too, can help protect the brain from neurological disorders. H2S appears to have a role in defending against major depressive disorder.”
The possibilities for gasotransmitter-related treatment of neurodegenerative and neuropsychiatric disorders are undoubtedly numerous. The next hurdle to cross will be the development of a more thorough understanding of the signaling mechanisms associated with these gaseous transmitters (Wang, 2018). The identified links between gasotransmitters and synaptic plasticity, as well as between gasotransmitters and oxidative stress, are both promising signs that further work on gasotransmitters may improve treatments for currently incurable conditions.
Conclusion The term “gasotransmitter” is truly only as old as a typical first-year college student. A keyword-search reveals just four published books on the subject, undoubtedly preceding a host of additional works. Since one of the first defining papers on gasotransmitters was
published 18 years ago, the molecular family has garnered significant scientific attention, and knowledge on the subject will only continue to grow. As it does, among the most exciting prospects is the development of new treatments for neurodegenerative diseases and other neurological disorders. Two of the most recent insights—that gasotransmitters seem to mediate neural repair and harbor protective effects against oxidative stress—are certain to generate subsequent investigation, and they might just offer insight into the most effective mechanisms for managing neurodegenerative conditions (Shefa et al., 2017). In the meantime, researchers will likely continue to investigate the signaling mechanisms of these transmitters—particularly the identity of sensor proteins and the interactions between gases. New molecules are bound to join ammonia in working their way towards gasotransmitter classification. The field is new and growing, with the potential for lifesaving clinical applications inching ever closer.
References
Cheng, J., He, K., Shen, Z., Zhang, G., Yu, Y., & Hu, J. (2019). Nitric Oxide (NO)-Releasing Macromolecules: Rational Design and Biomedical Applications. Frontiers in Chemistry, 7. https://doi.org/10.3389/fchem.2019.00530
Declines in plasticity reveals promising treatments for Alzheimer's disease. (n.d.). Drug Target Review. Retrieved July 24, 2020, from https://www.drugtargetreview.com/news/26979/plasticity-alzheimers/
Donald, J. A. (2016). Chapter 103 - Gasotransmitter Family. In Y. Takei, H. Ando, & K. Tsutsui (Eds.), Handbook of Hormones (pp. 601–602). Academic Press. https://doi.org/10.1016/B978-0-12-801028-0.00103-3
Herculano-Houzel, S. (2009). The Human Brain in Numbers: A Linearly Scaled-up Primate Brain. Frontiers in Human Neuroscience, 3. https://doi.org/10.3389/neuro.09.031.2009
Hoque, K. E., Indorkar, R. P., Sammut, S., & West, A. R. (2010). Impact of dopamine-glutamate interactions on striatal neuronal nitric oxide synthase activity. Psychopharmacology, 207(4), 571–581. https://doi.org/10.1007/s00213-009-1687-0
Hou, X.-Y., Hu, Z.-L., Zhang, D.-Z., Lu, W., Zhou, J., Wu, P.-F., Guan, X.-L., Han, Q.-Q., Deng, S.-L., Zhang, H., Chen, J.-G., & Wang, F. (2017). Rapid Antidepressant Effect of Hydrogen Sulfide: Evidence for Activation of mTORC1-TrkB-AMPA Receptor Pathways. Antioxidants & Redox Signaling, 27(8), 472–488. https://doi.org/10.1089/ars.2016.6737
Ida, T., Sawa, T., Ihara, H., Tsuchiya, Y., Watanabe, Y., Kumagai, Y., Suematsu, M., Motohashi, H., Fujii, S., Matsunaga, T., Yamamoto, M., Ono, K., Devarie-Baez, N. O., Xian, M., Fukuto, J. M., & Akaike, T. (2014). Reactive cysteine persulfides and S-polythiolation regulate oxidative stress and redox signaling. Proceedings of the National Academy of Sciences of the United States of America, 111(21), 7606–7611. https://doi.org/10.1073/pnas.1321232111
Kim, G. H., Kim, J. E., Rhie, S. J., & Yoon, S. (2015). The Role of Oxidative Stress in Neurodegenerative Diseases. Experimental Neurobiology, 24(4), 325–340. https://doi.org/10.5607/en.2015.24.4.325
Kumar, S., Zomorrodi, R., Ghazala, Z., Goodman, M. S., Blumberger, D. M., Cheam, A., Fischer, C., Daskalakis, Z. J., Mulsant, B. H., Pollock, B. G., & Rajji, T. K. (2017). Extent of Dorsolateral Prefrontal Cortex Plasticity and Its Association With Working Memory in Patients With Alzheimer Disease. JAMA Psychiatry, 74(12), 1266–1274. https://doi.org/10.1001/jamapsychiatry.2017.3292
Magalingam, K. B., Radhakrishnan, A., Ping, N. S., & Haleagrahara, N. (2018, March 8). Current Concepts of Neurodegenerative Mechanisms in Alzheimer's Disease [Review Article]. BioMed Research International; Hindawi. https://doi.org/10.1155/2018/3740461
McGlashan, T. H. (1998). The profiles of clinical deterioration in schizophrenia. Journal of Psychiatric Research, 32(3–4), 133–141. https://doi.org/10.1016/s0022-3956(97)00015-0
Ormerod, J. O. M., Ashrafian, H., Maher, A. R., Arif, S., Steeples, V., Born, G. V. R., Egginton, S., Feelisch, M., Watkins, H., & Frenneaux, M. P. (2011). The role of vascular myoglobin in nitrite-mediated blood vessel relaxation. Cardiovascular Research, 89(3), 560–565. https://doi.org/10.1093/cvr/cvq299
Pizzino, G., Irrera, N., Cucinotta, M., Pallio, G., Mannino, F., Arcoraci, V., Squadrito, F., Altavilla, D., & Bitto, A. (2017). Oxidative Stress: Harms and Benefits for Human Health. Oxidative Medicine and Cellular Longevity, 2017. https://doi.org/10.1155/2017/8416763
Purves, D., Augustine, G. J., Fitzpatrick, D., Katz, L. C., LaMantia, A.-S., McNamara, J. O., & Williams, S. M. (2001). Neural Signaling. Neuroscience, 2nd Edition. https://www.ncbi.nlm.nih.gov/books/NBK10882/
Sharma, N., Classen, J., & Cohen, L. G. (2013). Neural plasticity and its contribution to functional recovery. Handbook of Clinical Neurology, 110, 3–12. https://doi.org/10.1016/B978-0-444-52901-5.00001-0
Shefa, U., Yeo, S. G., Kim, M.-S., Song, I. O., Jung, J., Jeong, N. Y., & Huh, Y. (2017, March 12). Role of Gasotransmitters in Oxidative Stresses, Neuroinflammation, and Neuronal Repair [Review Article]. BioMed Research International; Hindawi. https://doi.org/10.1155/2017/1689341
Steinert, J. R., Chernova, T., & Forsythe, I. D. (2010). Nitric oxide signaling in brain function, dysfunction, and dementia. The Neuroscientist, 16(4), 435–452. https://doi.org/10.1177/1073858410366481
Wang, R. (2018). Chapter 1: Overview of Gasotransmitters and the Related Signaling Network. 1–28. https://doi.org/10.1039/9781788013000-00001
Wareham, L. K., Southam, H. M., & Poole, R. K. (2018). Do nitric oxide, carbon monoxide and hydrogen sulfide really qualify as 'gasotransmitters' in bacteria? Biochemical Society Transactions, 46(5), 1107–1118. https://doi.org/10.1042/BST20170311
Yakovlev, A. V., Kurmasheva, E. D., Giniatullin, R., Khalilov, I., & Sitdikova, G. F. (2017). Hydrogen sulfide inhibits giant depolarizing potentials and abolishes epileptiform activity of neonatal rat hippocampal slices. Neuroscience, 340, 153–165. https://doi.org/10.1016/j.neuroscience.2016.10.051
Zabłocka, A. (2006). [Alzheimer's disease as neurodegenerative disorder]. Postepy Higieny i Medycyny Doswiadczalnej (Online), 60, 209–216.
The Science of Anti-Aging BY AVISHI AGASTWAR, MONTA VISTA HIGH SCHOOL SENIOR Cover Image: An old man sitting on the banks of a river. Source: Image by Isakarakus from Pixabay
Introduction to Aging Anti-aging medicine has long been a topic of discussion among biogerontologists – scientists who study the process of aging and apply their understanding to develop methods and medicines that delay it. Anti-aging is the process of delaying old age or the onset of the ailments and medical conditions that accelerate aging and eventually lead to death (Juengst, 2013). Research in this domain is highly sought after by individuals looking for ways to live longer and healthier lives. Scientific research and development companies like Calico, AgeX, BioAge, BioViva, and the Longevity Fund are working to address the large gaps in our understanding of the science of aging. This research involves interventions like DNA repair, enhancement of the immune system, synthesis and degradation of proteins, cell replacement, and cell regeneration.
From a scientific perspective, aging is defined as the gradual decline in the body's capacity for cellular repair and an increased risk of age-related pathology. At the cellular level, aging manifests as nuclear DNA mutations, changes in the rates of cell synthesis, and heightened levels of protein degradation (Hayflick, 2004). Although bodily functions decline across the board, the speed of cellular aging varies between individuals, and even between different cells within a particular individual (Rattan, 2014). From an evolutionary perspective, this gradual degradation is explained by the notion that rapid, efficient repair of bodily functions is essential only until a species procreates. After procreation, there is decreased need for efficient functioning to continue indefinitely. In other words, evolution selects for a species' capacity to reproduce, not for a very long life.
Figure 1. Life expectancy at birth by world region from 1950 to 2050 Source: Wikimedia Commons
Cellular repair mechanisms are monitored by genes called the Longevity Assurance Genes, which oversee the maintenance and repair functions of our cells. DNA repair ability, the capacity to counteract stress, and prevention of unregulated cell proliferation are positively correlated with an individual's lifespan. These abilities are deeply encoded in the genetics of the individual. This is supported by the fact that, until very recently in the evolutionary timeline of our species – a timeline that spans millions of years – Homo sapiens' life expectancy was close to 30 years (Rattan, 2004). In 1900, the average life expectancy was 49 years; today, as a result of antibiotics and vaccination, it has jumped to 73 years, surpassing the evolutionarily essential lifespan (Hayflick, 2004). However, biotechnology and medical science have yet to find a solution to the problem of chronic diseases.
Approaches to Counter-Aging Anti-aging research is multifaceted and must be approached from multiple angles. Consequently, scientists frame their research within three general paradigms. The first is to address the chronic ailments that arrive with old age (Laberge et al., 2019). Scientists operating under this paradigm do not intend to increase the lifespan of the human body; instead, they work to improve the quality of life of people as they age. As a result, people not only live longer but live healthier and remain a contributing part of society. This is valuable to society in that it has the potential to dramatically lower healthcare and
social security expenditures. The second anti-aging approach is to work towards a longer lifespan by delaying the processes that cause aging. Scientists working under this paradigm argue that an average life expectancy of 112 years is a reasonable prediction (Juengst, 2013). Finally, the third and most ambitious paradigm is to attempt to reconfigure and revitalize the basic cellular processes that take place in the human body, slowing the aging process and potentially extending the human lifespan. The goal is to reverse the damage already done by aging in adults and preemptively prevent such damage in young people, accomplished by making targeted DNA changes at sites encoding specific amino acids to alter the operation of a person's body (Juengst, 2013).
“DNA repair ability, counteracting stress, and prevention of unregulated proliferation of cells are positively correlated with an enhanced lifespan of an individual."
Figure 2. The skin is part of the integumentary system and consists of multiple layers that play a role in anti-aging. Source: Open Learning Initiative
Figure 3. Cryonics chambers at the Cryonics Institute. Source: Wikimedia Commons
Current Scientific and Commercial Research in Anti-Aging “Another research firm named AgeX is working on induced tissue regeneration that can harness the regenerative capability of human tissue and repair it.”
As of right now, there are no robust anti-aging medicines on the market that have yielded substantial benefits for the human body. With that said, there is a great deal of ongoing research in the field, as the following examples demonstrate. In Brisbane, California, scientists at Unity Biotechnology are tackling the issue of cellular senescence. Senescence occurs when cells stop dividing, contributing to age-associated illnesses like inflammation and degradation of tissues in the body and of the extracellular environment that surrounds them. Researchers at Unity Biotechnology work to remove these senescent cells from the bodies of their patients (Unity Biotechnology, n.d.). Cellular senescence is irreversible: there is currently no known process that can cause a senescent cell to begin dividing again. Cellular senescence does serve a purpose – it is known to suppress cancer development at an early stage – but it also contributes to many age-related pathologies later in life (Campisi, 2013). Another research firm, AgeX, is working on induced tissue regeneration that can harness and restore the regenerative capability of human tissue. This research examined genes that switch on or off at the time regenerative potential is lost; the company has identified the gene COX7A1 as a marker for cells that have lost regenerative potential and has subsequently formulated a technique to suppress the expression of this gene (AgeX Therapeutics, n.d.).
Scientists at the Cryonics Institute are working to preserve the body after death by cooling it to liquid-nitrogen temperatures. The procedure rests on the idea that a recently deceased body still has its tissues intact, and that placing it in cryo-sleep preserves those tissues (Cryonics Institute, 2013). The hope is that future scientific advancements will have the potential to restore such a body to a healthy state (Best, 2008). Meanwhile, CRISPR biomedical technology offers the ability to locate, edit, and manipulate DNA and, as a result, influence the functions of any organism; this capability has accelerated biotechnology and medicinal research (Hsu et al., 2014). Research into the phenomenon of hormesis has also gained prominence in the field of anti-aging. These efforts rest on the principle that the functional integrity and life of a cell can be maintained without radical alterations to the mechanism controlling its reproductive lifespan. In general, hormesis involves the regulated administration of microdoses of an otherwise harmful stimulus, which has been shown to yield an increase in the lifespan of an organism (Rattan, 2004). The theory behind these microdoses is that a limited amount of stress, applied for a controlled duration, can cause a spike in a cell's stress response, which facilitates tissue maintenance and therefore counteracts the process of tissue aging (Rattan, 2004).
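The hormesis idea (benefit at low dose, harm at high dose) corresponds to a biphasic dose-response curve, which the toy function below makes visible. The quadratic form and its coefficients are illustrative assumptions, not fitted biological data.

```python
# Toy biphasic (hormetic) dose-response: stimulation at low doses of a
# stressor, inhibition at high doses. Purely illustrative parameters.

def hormetic_response(dose, benefit=2.0, harm=1.0):
    """Net effect relative to a zero baseline."""
    return benefit * dose - harm * dose ** 2

for d in (0.0, 0.25, 0.5, 1.0, 2.0, 3.0):
    print(f"dose={d:<4} net effect={hormetic_response(d):+.2f}")
# Small doses help (peak at dose = benefit / (2 * harm)); large doses hurt.
```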
Figure 4. Food and Drug Administration’s (FDA) Typical Drug Development and Approval Process Source: Wikimedia Commons
nicotinamide adenine dinucleotide (NAD+), which has emerged as a promising molecule for hormesis therapy. NAD+ concentration in the tissues gradually decreases as the human body ages. These researchers have used another compound, nicotinamide mononucleotide (NMN), to synthesize NAD+ in the body and thus increase its concentration in the tissues. NMN is made from vitamin B, occurs naturally in the body, and is a precursor to NAD+ that promotes its production. NMN is delivered to cells through the small intestine and converted to NAD+ by the NMN transporter Slc12a8 (Sinclair Lab, n.d.).
Process of Regulatory Approval The field of anti-aging medicine is not devoid of false or misguided claims. One common example is the advice to take high doses of certain vitamins and hormones for anti-aging purposes, often without a prescription from a medical professional. Companies market these supplements directly, mostly on the basis of their supposed anti-aging properties, yet these approaches are neither data-driven nor sufficiently backed by scientific research. Although some of these techniques have been used to treat certain diseases in elderly patients, they do not alter the aging process as a whole. Before a medical treatment can be put on the market, it must be approved by the US Food and Drug Administration (FDA). This process is rigorous and involves multiple steps, including discovery and research, pre-clinical trials, clinical trials, FDA review, and FDA post-market safety assessment, all of which increase the cost and time needed to develop a new treatment.
The FDA itself does not conduct trials for a drug but oversees the validation process. The FDA also inspects the facilities where drugs will be manufactured to ensure that production quality is maintained. Nevertheless, the approval process is important because it helps establish the effectiveness and validity of a treatment and makes consumers aware of the risks involved (U.S. Food and Drug Administration).
Ethical Concerns Regarding Anti-Aging Research The topic of anti-aging treatments also provokes many ethical questions regarding the science and technology involved. One concern is the actual benefit of living longer in the first place - a longer life does not necessarily mean more years in good health. In addition to these concerns about quality of life, a longer lifespan would also mean an added burden on the healthcare system (Juengst, 2013). That being said, many companies working in this field are working towards improving healthspan (i.e., the number of years that a person lives a healthy life, free of chronic ailments), not just extending lifespan (Crimmins, 2015). A further ethical consideration is that researchers and companies must adhere to FDA guidelines and not omit necessary steps in the drug development process in order to cut costs or save time. Another ethical concern to be addressed is ensuring widespread, non-discriminatory access to any drugs or procedures that are developed.
“One concern is the actual benefit of living longer in the first place - a longer life does not necessarily mean more years in good health.”
Conclusion Biogerontologists have classified aging as the
decline in molecular repair that leads to age-related diseases. Factors like DNA mutation, inflammation, cellular senescence, and protein degradation determine the onset and extent of age-related diseases. Over the past few centuries, advances in medical science have increased average human life expectancy by nearly 50 years; this has led to an increasingly large elderly population and, at the same time, increased the incidence of pathology related to old age. Different groups of scientists follow varied approaches to tackling aging, from lengthening the healthspan to restoring and revitalizing organ functionality. This is a principle that anti-aging scientists ought to consider carefully when proposing new therapeutics - improving lifespan while neglecting to preserve or improve quality of life would be a negative development from both a humanitarian and an economic perspective.
References
AgeX Therapeutics. (n.d.). Technology. https://www.agexinc.com/technology/
Best, B. P. (2008, April). Scientific justification of cryonics practice. Rejuvenation Research. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4733321/
Campisi, J. (2013). Aging, Cellular Senescence, and Cancer. Annual Reviews. https://www.annualreviews.org/doi/10.1146/annurev-physiol-030212-183653
Center for Drug Evaluation and Research. (n.d.). Drug Development & Approval Process. U.S. Food and Drug Administration. https://www.fda.gov/drugs/development-approval-process-drugs
Crimmins, E. M. (2015, November 10). Lifespan and Healthspan: Past, Present, and Promise. The Gerontologist. https://academic.oup.com/gerontologist/article/55/6/901/2605490
Cryonics Institute. (n.d.). The Case for Cryonics. https://www.cryonics.org/about-us/the-case-for-cryonics/
Hayflick, L. (2004, June 1). Aging: The Reality: "Anti-Aging" Is an Oxymoron. The Journals of Gerontology. https://doi.org/10.1093/gerona/59.6.B573
Hsu, P. D., Lander, E. S., & Zhang, F. (2014). Development and Applications of CRISPR-Cas9 for Genome Engineering. Cell. https://www.cell.com/cell/fulltext/S0092-8674(14)00604-7
Juengst, E. T., Binstock, R. H., Mehlman, M. J., & Post, S. G. (2003, February 28). Antiaging Research and the Need for Public Dialogue. Science. https://science.sciencemag.org/content/299/5611/1323.full
Rattan, S. I. (2004). Aging, anti-aging, and hormesis. Mechanisms of Ageing and Development. https://pubmed.ncbi.nlm.nih.gov/15063104/
Sinclair Lab. (n.d.). Research. Harvard Medical School, Department of Genetics. https://genetics.med.harvard.edu/sinclair/research.php
Stem cells and anti-aging genes: double-edged sword - do the same job of life extension. Stem Cell Research & Therapy. https://doi.org/10.1186/s13287-017-0746-4
Unity Biotechnology. (n.d.). The Science. https://unitybiotechnology.com/the-science/
Yohn, C., O'Brien, R., Lussier, S., Berridge, C., Ryan, R., Guermazi, A., … An, M. Senescent Synoviocytes in Knee Osteoarthritis Correlate with Disease Biomarkers, Synovitis, and Knee Pain. ACR Meeting Abstracts. https://acrabstracts.org/abstract/senescent-synoviocytes-in-knee-osteoarthritis-correlate-with-disease-biomarkers-synovitis-and-knee-pain/
Algal Blooms and Phosphorus Loading in Lake Erie: Past, Present, and Future BY BEN SCHELLING '21 Cover Photo: Lake Erie Beach Sign Source: Wikimedia Commons
History “You’re glumping the pond where the Humming-Fish hummed! No more can they hum, for their gills are all gummed. So I’m sending them off. Oh their future is dreary. They’ll walk on their fins and get woefully weary in search of some water that isn’t so smeary. I hear things are just as bad up in Lake Erie” (Dr. Seuss, 1971) Lake Erie has long held a reputation as the most polluted of the Great Lakes. One of the most infamous environmental disasters in history took place on the Cuyahoga River in Cleveland, Ohio. In June of 1969, the river was so polluted from the surrounding industrial city that it caught on fire, with flames reaching over five stories high (Ohio History Connection). While this was not the first time a fire broke
out on the river (it had happened over a dozen times in recorded history), it did spark a movement that eventually led to the formation of the United States Environmental Protection Agency (EPA). While there has not been a fire on Lake Erie water since, there have been algal blooms, a much more complex environmental concern. In the 1960s, the population along the coast of Lake Erie was growing, and so was the pollution entering the lake. Laundry detergents with high phosphorus concentrations were common at this time as well. Along with the phosphorus-rich detergent, tons of human sewage made its way into the lake, unnaturally increasing the phosphorus concentration (Hasler, 1969). The excess phosphorus increased the phytoplankton biomass in the lake (Burns and Ross, 1972). This process is called eutrophication. Eutrophic simply means a body of water with abundant nutrients to support
Figure 1: A map of the bathymetry of Lake Erie. Red areas represent shallow water and blue areas represent deep water. Note that the westernmost basin is the shallowest. Source: NOAA, 2020
life within it. The shallower the lake, the more eutrophic it tends to be (RMB Environmental Laboratories). This is because in shallow lakes the littoral zone, the portion of the lake where light can reach the bottom, is greater. When light reaches the bottom, photosynthetic life proliferates. Lake Erie is the shallowest of the Great Lakes, so it is the most susceptible to eutrophication. The western basin of the lake is extremely shallow, so it is even more susceptible than the rest of the lake (Figure 1). When phytoplankton die, they sink to the bottom and decompose, a process that requires oxygen. When large masses of phytoplankton decompose, they consume so much oxygen that they decrease the dissolved oxygen level of the lake and can create “dead zones”: portions of water with dissolved oxygen concentrations so low that they cannot support life (NOAA, 2020). Phosphorus loading is highlighted as a key driver of eutrophication. Unlike other nutrients, such as nitrogen, phosphorus has no gaseous cycle. Once phosphorus is in the water or sediments, it remains there much longer than other nutrients (Schindler, 1977). The low oxygen conditions in Lake Erie caused a trophic cascade, and consequently, the abundance of benthic invertebrate species decreased. Mayfly nymphs, a benthic invertebrate, were once found at an abundance of 500 per square meter, but in the late 1960s they were found at densities of only five per square meter, a 99% decrease (Hasler, 1969). At the same time, the populations of popular commercial fish such as herring, walleye, and blue pike
were decreasing as well. Lake Erie’s reputation was so poor that reporters named it “the dead lake” (Ashworth, 1982). Notably, Dr. Seuss mentions the lake’s poor health in his book The Lorax (Figure 2). While popular media slammed Lake Erie, dedicated scientists worked hard to find the root of the issue and develop a solution. In 1969, Arthur Hasler published a compelling paper titled “Cultural Eutrophication is Reversible.” Hasler highlighted the main issue – the point-source pollution from detergents and sewage – and emphasized that “it is of the greatest urgency to prevent further damage to water resources and to take corrective steps to reverse present damages.” In 1972, the newly formed EPA released an extensive report on the health of Lake Erie titled “Project Hypo.” Project Hypo investigated the causes of the low dissolved oxygen conditions in the lake’s hypolimnion (the deepest and coldest water) that caused the trophic cascade. The report suggests that if the phosphorus loading into the lake in 1970 had been much smaller, the total phytoplankton biomass in 1970 would have been much smaller as well (Burns and Ross, 1972).
“Unlike other nutrients, phosphorus has no gaseous cycle. Once phosphorus is in the water or sediments, it remains there much longer than other nutrients.”
Shortly after the publication of Project Hypo, the United States and Canadian governments jointly created the Great Lakes Water Quality Agreement (GLWQA) of 1972. This agreement set a goal of 6.0 mg/L of dissolved oxygen and a limit of 1 mg/L of phosphorus. To achieve these goals, the agreement set a total annual limit of 11,000 metric tons (MT) of phosphorus discharge into
Figure 2: The Lorax, written by Dr. Seuss in 1971, contains a verse about Lake Erie and its dismal water conditions:
"You're glumping the pond where the Humming-Fish hummed! No more can they hum, for their gills are all glummed So I'm sending them off. Oh, their future is dreary They'll walk on their fins and get woefully weary in search of some water that isn't so smeary I hear things are just as bad up in Lake Erie." Source: Available under the Creative Commons License at Pixy.org; Book written by Dr. Seuss (1971)
“In the early 1990s, it seemed that the GLWQA had sufficiently reduced phosphorus loading. But as early as the mid-1990s, phytoplankton masses began to reappear."
Lake Erie. While the GLWQA of 1972 focused on point sources from sewage, the 1978 amendment also mandated the reduction of phosphorus-heavy detergents (United States Government and Canadian Government, 1972). For the most part, the GLWQA was a success. From 1972 to 1975, the nearshore phytoplankton biomass decreased by 42% (Nicholls et al., 1977). This was the first evidence to suggest that Lake Erie was making a comeback. Eventually, the total annual phosphorus discharge into the lake fell below the 11,000 MT limit and the health of Lake Erie improved. An assessment of phosphorus and phytoplankton from 1970 to 1987 suggested “cautious optimism on the phosphorus-reduction program” (Makarewicz and Bertram, 1991). Phosphorus levels were reduced from 15,260 MT per year to 2,445 MT, an 84% reduction. Similarly, the phytoplankton biomass across the whole lake was reduced by 89% relative to its size before the GLWQA (Makarewicz et al., 1993; Makarewicz and Bertram, 1991). The water quality in the lake improved so much that Dr. Seuss decided to remove the portion of The Lorax that mentions the poor ecosystem health of Lake Erie (Figure 2; Fortner, 2019).
Soluble Reactive Phosphorus In the early 1990s, it seemed that the GLWQA had sufficiently reduced phosphorus loading.
But as early as the mid-1990s, phytoplankton masses began to reappear (Bridgeman et al., 2013). On average, there was no increase in total phosphorus (TP), and the total annual discharge was under the 11,000 MT limit. So, what could have caused this increase in phytoplankton biomass? The answer is soluble reactive phosphorus (SRP). Phosphorus is found in many different forms. The biggest distinction is between bioavailable phosphorus (soluble reactive phosphorus, or “SRP,” sometimes called dissolved reactive phosphorus, “DRP”) and non-bioavailable phosphorus (non-reactive phosphorus, “NRP”). Bioavailable phosphorus is in a form that is readily available for uptake by any organism. Non-bioavailable phosphorus, such as iron-bound phosphates, is transported as particles into the lake, settles in the sediment, and is never used by algae or bacteria (Hecky et al., 2004). In the 1990s, the fraction of TP made up by SRP increased (Figure 3).
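The SRP:TP ratio plotted in the bottom panel of Figure 3 is a straightforward derived quantity. A minimal sketch in Python of how such a ratio could be computed, using made-up placeholder loads rather than the actual Heidelberg tributary data:

```python
import pandas as pd

# Placeholder annual loads (MT) for illustration only; the real values
# come from the National Center for Water Quality Research download.
loads = pd.DataFrame({
    "year":   [1995, 2005, 2015],
    "tp_mt":  [2400.0, 2100.0, 2600.0],   # total phosphorus
    "srp_mt": [180.0, 420.0, 640.0],      # soluble reactive phosphorus
})

# Non-reactive phosphorus is the remainder of TP after subtracting SRP.
loads["nrp_mt"] = loads["tp_mt"] - loads["srp_mt"]

# The fraction of TP made up by SRP (the bottom panel of Figure 3).
loads["srp_frac"] = loads["srp_mt"] / loads["tp_mt"]

print(loads)
```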
Dreissenid Mussels Around the time Lake Erie was making a comeback, invasive Dreissenid mussels were introduced into the Great Lakes by vessels from the Ponto-Caspian region (Zhang et al., 2011). They were most likely transported in ballast water: water carried by a ship to help control buoyancy. It is estimated that 80% of the bottom of Lake Erie was colonized by the mussels by 1992 (Makarewicz et al., 2000). There are two types of Dreissenid mussels, zebra and quagga. Both species are benthic feeders and play a major role in nutrient cycling within the lake. Some organisms, such as Dreissenid mussels, can alter the chemistry of non-bioavailable phosphorus into a bioavailable form. This process is known as nutrient remineralization: the transformation of particulate nutrients (non-bioavailable) that have settled out of the water column into soluble forms (bioavailable) that can be transported through the water column and made available to growing algae and bacteria (Conroy et al., 2005). Dreissenid mussels can filter 30% of the suspended matter each day, which is much faster than other organisms (Makarewicz et al., 2000). Benthic detritivores eat the feces of Dreissenid mussels, which furthers the nutrient remineralization process and produces even more bioavailable phosphorus (Hecky et al., 2004). The influence of Dreissenid mussels on a lake ecosystem depends on the physical conditions of the lake. In deeper water, Dreissenid mussels decrease the SRP concentration, but in shallower water, they enhance it (Steinmetz, 2015). A study of 61 lakes across Michigan found that lakes with more zebra mussels have smaller populations of phytoplankton, because zebra mussels may graze on phytoplankton (Raikow et al., 2004). Interestingly, lakes with low nutrient concentrations showed an increase in Microcystis, a genus of cyanobacteria, with an increase in zebra mussel abundance. This may be due to the anti-grazing properties of the Microcystis strain. Similarly, in Saginaw Bay, the introduction of Dreissenid mussels led to an outbreak of Microcystis (Bierman et al., 2005). This highlights the complex relationship between invasive Dreissenid mussels and their
ecosystem. Lake Erie, as the shallowest of the Great Lakes, meets the criteria for an ecosystem in which the introduction of Dreissenid mussels increases SRP. While TP levels decreased, SRP levels increased, partially due to the colonization of Lake Erie by invasive Dreissenid mussels (Conroy et al., 2005; Hecky et al., 2004; Kleinman et al., 2015; Steinmetz 2015; Zhang et al., 2011).
Tillage Another explanation for the increase in SRP is the adoption of no till and conservation tillage. Tilling is the agricultural preparation of soil by mechanical agitation such as digging, stirring, and overturning. The 1983 revision of the GLWQA, focused on reducing TP and preventing erosion, suggested no till and conservation tillage as methods to reduce TP run off and erosion (United States Government and Canadian Government, 1972). No till practices also increase biological activity in the soil, which leads to better soil structure and improved water storage (Ulén et al., 2010). In the Maumee and Sandusky watersheds (the watersheds that drain into the western basin of Lake Erie), half of corn and soybean farmers practiced conservation tillage (Richards et al., 2002). At first, this practice appeared to work, as TP loads decreased. No till practices were a major driver in reducing particulate phosphorus (another term for NRP) run off into the western basin of Lake Erie (Richards et al., 2009). While reducing TP run off, however, no till practices were also enhancing SRP run off (Baker et al., 2017; Kleinman et al., 2011; Smith et al., 2015a; Ulén et al., 2010).
“Around the time Lake Erie was making a comeback, invasive Dreissenid mussels were introduced into the Great Lakes from the Ponto-Caspian region...”
This is because no till and conservation tillage practices allow phosphorus to vertically stratify in the soil (Baker et al., 2017; Duiker and Beegle 2006; Jarvie et al., 2017). Soil stratification is the process of soil particles separating based on their physical properties, resulting in visible banding. Phosphorus stratification concentrates SRP near the soil surface, which heightens the possibility of run off (Baker et al., 2017; Duiker and Beegle 2006). Another potential pathway for SRP run off is through macropores (Smith et al., 2015b), large pores in the soil in which SRP concentrates. Reduced tillage systems promote the development of macropores. In the watersheds of Lake Erie’s western basin, no till practices decreased TP losses by 69% but doubled the
Figure 3: The top figure displays total phosphorus (TP) discharged, measured in metric tons (MT), from the Maumee River from 1991 to 2019. The red portion represents the non-reactive phosphorus (NRP) and the blue represents the soluble reactive phosphorus (SRP). The bottom figure represents the ratio of soluble reactive phosphorus (SRP) to total phosphorus (TP). Data is from the National Center for Water Quality Research. Source: Heidelberg University, 2020. Image created by author in JMP 14.2.0.
loss of SRP (Smith et al., 2015a). In Norway, a study found that no till practices reduced NRP run off but increased SRP run off by a factor of 4 (Ulén et al., 2010). Once thought of as a conservation practice to reduce phosphorus loading, reduced tillage provided a new phosphorus source that stimulated algal bloom growth into the new millennium.
Toledo Water Crisis “From August 2nd to 4th, 2014, the water supply for the city of Toledo was contaminated.”
From August 2nd to 4th, 2014, the water supply for the city of Toledo was contaminated. On August 2nd, at 2 a.m., a no-drink order was issued for the water supply (WTOL, 2019). The water supply was contaminated with microcystin, a toxin produced by the cyanobacterium Microcystis aeruginosa. When microcystin is found at a concentration over 1 µg/L, the water is considered unsafe to drink (Kasich et al., 2016). Toledo was not prepared to supply
400,000 people with alternative drinking water. There was no secondary water supply for emergencies (Jetoo et al., 2015). By 12 p.m., the governor had declared a state of emergency (WTOL, 2019). It would be three whole days before the water in Toledo was safe to drink again. The most common freshwater bloom-forming cyanobacterium is Microcystis aeruginosa. M. aeruginosa releases a toxin called microcystin that has many adverse health effects. Exposure to microcystin during recreational activities can lead to acute illness, causing abdominal pain, headache, sore throat, vomiting, diarrhea, blistering around the mouth, and pneumonia (United States EPA, 2020a). Ingestion through drinking water can cause more serious health effects, such as liver or kidney failure, and may even increase the risk of ALS (Banack et al., 2015). While microcystin poisoning is rare, it can cause death. Dogs that encounter contaminated water are at high risk, and there
are many reported deaths due to M. aeruginosa ingestion in dogs (Stewart et al., 2008). There are reports of human and dog illnesses caused by the toxin in 50 countries and 27 US states (United States EPA, 2020a). While M. aeruginosa naturally occurs in freshwater systems, the high abundance seen today is almost entirely anthropogenic, or due to human activity. As with other phytoplankton, an unnatural amount of phosphorus can promote excessive growth (Haney and Ikawa 2001; Michalak et al., 2013). Increasing water temperatures as a result of climate change also promote M. aeruginosa growth (Liu et al., 2011). When water temperatures are higher for longer periods of time, lake stratification is more pronounced for longer. Lake stratification occurs when the lake separates into different temperature zones. When the air is warm enough, the top layer of the lake warms while the bottom remains cold. This separation reduces mixing between temperature zones, allowing M. aeruginosa, which is more buoyant than water, to ascend to the surface, where blooms form (Paerl and Huisman, 2009; Wood et al., 2017). Higher atmospheric CO2 concentrations are also advantageous for M. aeruginosa growth. M. aeruginosa displays phenotypic plasticity: a change in morphology, behavior, or physiology based on environmental conditions. The phenotypic plasticity of M. aeruginosa allows it to fix carbon from the atmosphere at a greater rate than other aquatic organisms (Ji et al., 2020).
Current Legislation After the 2014 Toledo water crisis, the state of Ohio created new legislation to prevent a similar event in the future. The first piece of legislation passed was Senate Bill 150. This bill requires certification by the Ohio Department of Agriculture (DOA) for anyone who applies fertilizer to more than 50 acres of land (Cera et al., 2014). The certified card holder must record the type of fertilizer being applied and who is applying it, as well as the time, place, and rate of application. This information must be available for audit by the DOA. Currently, Ohio is the only state with legislation of this kind (Snyder, 2018). Violation of the law can result in the loss of fertilizer registration or refusal to register someone for fertilizer application. This law took effect on August 21, 2014. The second piece of legislation was Senate Bill 1,
effective July 3, 2015. This bill set restrictions on fertilizer application methods to reduce SRP discharge into the lake. Senate Bill 1 states that no person in the watershed of the western basin of Lake Erie may apply fertilizer or manure on snow-covered or frozen soil, if the top two inches of soil are saturated from precipitation, or if the weather forecast calls for a greater than 50% chance of precipitation exceeding 1 inch in a 12-hour period (Gardner and Peterson, 2015). It may seem odd, but December often has the greatest SRP loading of any month (Figure 4). The average discharge from the Maumee River varied significantly by month for TP (Figure 4: top, ANOVA: P<0.0001, F1,12=14.38) and for SRP (Figure 4: bottom, ANOVA: P<0.0001, F1,12=12.03). According to Dr. Baker, it is important to avoid winter application for two reasons. The first is that SRP is concentrated at the surface by no till practices and by the crop residue left to decompose at the surface. Crop residue is the plant biomass that remains in the ground after a harvest; it is often made up of the stems of plants. The second reason Dr. Baker highlights is that rainfall in December is elevated, causing large discharges of water carrying SRP. Senate Bill 1 also mandates that fertilizer and manure be injected into the ground, as opposed to broadcast application, in which fertilizer is applied to the soil surface. According to Dr. Baker, injection prevents phosphorus run off in two ways. First, it reduces the buildup of phosphorus at the surface, which is already enhanced by phosphorus stratification. This type of phosphorus loss is known as an acute loss: the loss of phosphorus from immediate rainfall before it interacts with the soil. Second, the physical intrusion of injection disrupts the formation of macropores that act as a “pipeline” for SRP to travel from agriculture to the water. This type of phosphorus loss is known as a chronic loss: the loss of phosphorus from the soil itself.
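Senate Bill 1’s application restrictions boil down to a short decision rule. The hypothetical helper below encodes the conditions described above in Python purely as an illustration; the function name and parameters are invented here, and this is of course not legal or agronomic guidance:

```python
def may_apply_fertilizer(snow_covered: bool,
                         frozen_soil: bool,
                         top_two_inches_saturated: bool,
                         chance_of_precip: float,
                         forecast_inches_12h: float) -> bool:
    """Illustrative check of Ohio Senate Bill 1's conditions for
    applying fertilizer or manure in the western Lake Erie basin."""
    if snow_covered or frozen_soil:
        return False
    if top_two_inches_saturated:
        return False
    # No application when the forecast calls for a greater than 50%
    # chance of precipitation exceeding 1 inch in a 12-hour period.
    if chance_of_precip > 0.50 and forecast_inches_12h > 1.0:
        return False
    return True

# A wet December forecast: 60% chance of 1.5 inches within 12 hours.
print(may_apply_fertilizer(False, False, False, 0.60, 1.5))  # False
```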
“After the 2014 Toledo water crisis, the state of Ohio created new legislation to prevent a similar event in the future.”
The GLWQA saw great success in only a matter of years, so should we expect the 2015 Senate Bill 1 to cause rapid change as well? Using phosphorus data from 2000 to 2019, there is no significant difference before and after the legislation for TP (ANOVA: P=0.26, F1,12=1.28) or for SRP (ANOVA: P=0.40, F1,12=0.71). Before Senate Bill 1, the average TP load by month was 89.6 MT; after, it is 108 MT (Figure 5). Before, the average SRP load by month was 24.4 MT; after, it is 26 MT (Figure 5).
Figure 4: The average phosphorus loads from the Maumee River by month, 2000-2019. The top displays TP and the bottom displays SRP; both are measured in MT. Error bars represent 1 SE around the mean. Data is from the National Center for Water Quality Research. Source: Heidelberg University, 2020. Image created by author in JMP 14.2.0.
“Agricultural practices are not the only factors influencing run off. The weather has a large impact on the amount of phosphorus discharge.”
Does this mean that Ohio Senate Bill 1 was a failure? It is hard to say. Agricultural practices are not the only factors influencing run off. The weather has a large impact on the amount of phosphorus discharge. For example, the phosphorus loads in 2019 were extremely large (Figure 3). This was not a result of farmers violating the law. Rather, as suggested by Dr. Baker, it was the result of an unusual amount of rainfall in 2018 and 2019. The large amount of rain washed loads of phosphorus into the lake. While the load was large, SRP levels were one third lower than expected from this amount of rainfall. Dr. Baker predicts that this is because farmers had fewer opportunities to apply fertilizer due to the heavy rain. The result of the high loads in 2019 was one of the largest blooms on record. It was so large that it covered an area six times the size of Cleveland and could be seen from space (Johnston, 2019). Although it occurred before the 2015 legislation, the record-setting 2011 bloom was also a result of extreme meteorological events and long-term agricultural practices, not short-term phosphorus loads that year (Michalak et al., 2013). The 2011 bloom biomass reached 40,000 MT, and the 2019 bloom biomass reached a predicted 46,300 MT, topping the previous record (Obenour et al., 2014; Scavia et al., 2019). The 2014 bloom was only 22,000 MT, less than half that of 2011 and 2019. Neither the 2011 nor the 2019 bloom caused a water crisis like the much smaller 2014 bloom.
Toledo was unlucky in 2014. The bloom formed around the intake for the water treatment plant. The M. aeruginosa-rich water that collected in Maumee Bay was blown east along the southern shoreline and past the Toledo water intake (Steffen et al., 2017). This assessment of the 2014 crisis is correct, according to Dr. Baker. He also notes that, along with prevailing winds and biomass movement at the surface, the location of the mass within the water column contributed to the drinking water contamination. The water treatment plant intake is not at the surface, where blooms are most prevalent. When it is windy, as it was in August 2014, the bloom is stirred up and brought down to the intake. Research suggests a viral outbreak among the M. aeruginosa population in the bay caused an unusual toxin concentration (Erickson 2017; Steffen et al., 2017). A DNA analysis of the 2014 bloom reveals that a virus was spreading among the M. aeruginosa, causing the cells to lyse, or break open. The lysing of the cells released an unusual amount of microcystin toxin into the water, elevating the toxin concentration entering the water intake (Steffen et al., 2017). While the media labels the crisis as “unlucky,” a combination of winds and viral outbreak like this is likely to happen again.
Figure 5: The average monthly phosphorus loading for the Maumee River, 2000-2019 (top: TP, bottom: SRP) before and after the passing of Ohio Senate Bill 1 in 2015. The error bars represent 1 SE from the mean and the values reported are the means. Data is from National Center for Water Quality Research Source: Heidelberg University, 2020. Image created by author in JMP 14.2.0.
The question still remains: was Ohio Senate Bill 1 successful at reducing the risk of water contamination? The answer is not as clear as the success of the GLWQA, which showed relatively rapid results. This is an understandable difference, since reducing point-source pollution is simpler than reducing non-point-source pollution. Ohio Senate Bill 1 may be the best mitigation tactic given what is known. Unfortunately, the most influential factor in bloom formation is one that cannot be regulated by law – it is governed by weather. The frequency of extreme weather events is currently much greater than it used to be, increasing the chances of another crisis. Therefore, a comparison between the two pieces of legislation is unfair. Today, it is an entirely different ballgame.
Future Legislation It is important to remember what is at stake when considering future policies for Lake Erie. There is a $15.1 billion tourism industry on the coast of Lake Erie, most of which is concentrated in the western basin (Briscoe, 2019). The closure of beaches due to health concerns from algal blooms hurts this extremely lucrative tourism industry that many Ohioans rely on. More importantly, algal
blooms threaten the water supply for over 11 million people (United States EPA, 2020b). The 2014 water crisis contaminated the water supply of over 400,000 people and cost an estimated $65 million (NOAA, 2015). The cost of damage from algal blooms is great, but the potential health crisis from contaminated water is much greater. In general, public health is most important. If scientists have identified agriculture as the major source of the phosphorus causing blooms, then why not eliminate the pollution source entirely (Briscoe, 2019)? Why did the state of Ohio not ban the use of fertilizers if public health is the number one priority? It is not that simple. The Ohio agriculture industry is massive. There are over 75,000 farms in Ohio, contributing $105 billion to the annual state economy and directly accounting for 13% of the state’s total business (Ohio Department of Agriculture, 2018). A law that completely restricts fertilizer application would destroy this massive industry, leaving thousands unemployed. The Ohio economy would take a huge hit, and the absence of products from Ohio’s agriculture industry in in-state and out-of-state markets would be disastrous. While public health still comes before economics, the downsides of
“The 2014 water crisis contaminated the water supply of over 400,000 people and cost an estimated $65 million.”
eliminating fertilizer application are too great to ignore. The solution must protect clean drinking water supplies without majorly disrupting the largest Ohio industry. This is what Senate Bill 1 has started to do. There are restrictions on application methods to reduce drinking water contamination concerns, while still allowing the agriculture industry to move forward. There are still possibilities for new policies that satisfy both public health and the agriculture industry. One possibility is the improvement of riparian zones. Riparian zones are vegetated areas that act as buffers between agricultural run off and waterways. They have been successful in reducing phosphorus run off (Jaynes et al., 2014). So, why have we not increased the abundance and size of these buffer zones? Riparian buffers do not catch the run off under the tile drainage system currently in place on the majority of Ohio farms. Tile drainage is a drainage system that collects excess water underground, which is then transported through pipes that bypass the important riparian buffer zones. Currently, 49% of Ohio farms use tile drainage systems, much greater than the 14% national average (Zulauf and Brown, 2019). One possible solution is to redirect the drainage systems so the riparian buffers can catch excess nutrients. A tile drainage redirection method developed in Iowa has proven to catch all nitrate runoff (Jaynes et al., 2014; Jaynes and Isenhart 2014). While this has not been proven to work for phosphorus run off, it is likely to do so, since riparian zones generally reduce all types of run off. This is an excellent opportunity for future research and holds great promise for reducing phosphorus run off in Ohio.
“The most promising opportunity to decrease Ohio run off, one not mentioned in any current legislation, is blind inlets.”
The most promising opportunity to decrease Ohio run off, one not mentioned in any current legislation, is blind inlets. Blind inlets are mat drainage structures that act as a buffer at the lowest point of a field – a buffer that interferes with water flow before it enters a drainage pipe or passes through a riparian buffer zone. In contrast to the limited research on tile drainage redirection, there has been significant research on blind inlets documenting reductions in phosphorus loading in comparison to tile drainage methods. Compared to tile drainage, blind inlets can reduce TP loads by 60-71.9% and SRP by 50-79.4% (Feyereisen et al., 2015; Smith et al., 2015b; Smith and Livingston 2013). These numbers are extremely promising, and the implementation of these drainage systems, while a significant structural change, is relatively simple. A combination of eliminating tile drainage, allowing for passage
of water through riparian buffers and the inclusion of blind inlets is the ideal next step in reducing run off while supporting Ohio farms (See Table 1 for a breakdown of the potential improvements to Ohio agriculture legislation).
Statistical Methods The phosphorus data was retrieved from the National Center for Water Quality Research for the Maumee River (Heidelberg University, 2020). Negative concentrations and discharges were excluded from the data because they make no physical sense. All tests were performed on Log(X+1)-transformed data. All statistics were done in JMP Pro 14.2.0.
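The analyses were run in JMP, but the same pipeline (drop negative values, apply a Log(X+1) transform, run a one-way ANOVA across months) can be sketched in Python. Everything here is illustrative: the column names and the handful of load values are placeholders rather than the Heidelberg dataset, and the natural logarithm is assumed since the base is not stated:

```python
import numpy as np
import pandas as pd
from scipy import stats

# Placeholder monthly TP loads (MT); the real data are the Maumee River
# tributary records from Heidelberg University (2020).
df = pd.DataFrame({
    "month":   [1, 1, 1, 6, 6, 6, 12, 12, 12],
    "load_mt": [80.0, 95.0, 110.0, 20.0, 35.0, 25.0, 150.0, 170.0, 140.0],
})

# Negative concentrations/discharges make no physical sense, so drop them.
df = df[df["load_mt"] >= 0]

# Log(X+1) transform before testing (natural log assumed).
df["log_load"] = np.log(df["load_mt"] + 1)

# One-way ANOVA: do mean transformed loads differ by month?
groups = [g["log_load"].to_numpy() for _, g in df.groupby("month")]
f_stat, p_val = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, P = {p_val:.4f}")
```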
Acknowledgements Special thanks to Dr. David Baker, a professor at Heidelberg University. Dr. Baker has 72 publications and has been cited more than 3,000 times. He has been at the forefront of research on algal blooms and phosphorus loading in Lake Erie since the 1960s. His professional and informed opinions appear throughout the text. Thank you to Melissa Desiervo, a PhD candidate at Dartmouth College in the Ecology, Evolution, Environment and Society graduate program, who advised the statistical analysis. This paper would not have been possible without the general advising of Professor Michael Cox, Dartmouth College. Your guidance is greatly appreciated.
Key Terms
Bio-available: The form of a nutrient that is available for a living organism to use. This is when the nutrient is not bound to an organic compound.
Dreissenid mussels: A type of mussel (zebra and quagga) from the Ponto-Caspian region. They are invasive in Lake Erie and alter aquatic phosphorus cycling.
Eutrophication: The excessive growth of algae in a body of water due to an oversaturation of nutrients.
Great Lakes Water Quality Agreement (GLWQA): The 1972 agreement between the United States and Canadian governments that sets phosphorus loading limits and targets the reduction of point source pollution.
Macropores: Relatively large pores in the soil that can form under no till or conservation tillage.
Microcystin: The toxin produced by many cyanobacteria, most notably Microcystis aeruginosa.
Microcystis aeruginosa: The scientific name for the cyanobacterium that dominates the recent algal blooms in Lake Erie.
Mineralization: The decomposition of chemical compounds in organic matter, releasing nutrients in a soluble inorganic form that is bioavailable.
Trophic Cascade: Powerful indirect interactions that can control an entire ecosystem when a trophic level in a food web is suppressed.
Non-Reactive Phosphorus (NRP): The portion of TP that is non-bioavailable.
Ohio Senate Bill 1: Addresses agricultural regulations and the application of fertilizer. Establishes Lake Erie water quality protections.
Ohio Senate Bill 150: Sets the requirement of a permit for anyone who applies fertilizer to a plot of land greater than 50 acres.
Phytoplankton: All aquatic autotrophic life between 50 and 60 microns in length.
Stratification:
Soil Stratification: The separation of soil types by their physical properties, resulting in abrupt porosity changes at various depths.
Phosphorus Stratification: The accumulation of phosphorus at the soil surface, where it can easily be transported by precipitation into a water source.
Lake Stratification: The separation of water into three layers based on temperature and density: epilimnion (shallowest layer, warmest, least dense), metalimnion (middle layer), and hypolimnion (deepest layer, coldest, most dense).
Soluble Reactive Phosphorus (SRP): Also called dissolved reactive phosphorus (DRP); the portion of TP that is bioavailable.
Tillage: The agricultural preparation of soil by
mechanical agitation such as digging, stirring, and overturning.
Conservation tillage: The practice of reducing the amount of tilling.

References
Ashworth, W. (1982). The Late, Great Lakes: An Environmental History.
Baker, D., Johnson, L., & Confesor, R. (2017). Vertical Stratification of Soil Phosphorus as a Concern for Dissolved Phosphorus Runoff in the Lake Erie Basin. Journal of Environmental Quality, 46, 1287–1295.
Banack, S., Caller, T., & Henegan, P. (2015). Detection of Cyanotoxins, β-N-methylamino-L-alanine and Microcystins, from a Lake Surrounded by Cases of Amyotrophic Lateral Sclerosis. Toxins, 7, 322–336.
Bierman, V., Kaur, J., & DePinto, J. (2005). Modeling the Role of Zebra Mussels in the Proliferation of Blue-green Algae in Saginaw Bay, Lake Huron. Journal of Great Lakes Research, 31, 32–55.
Bridgeman, T., Chaffin, J., & Filbrun, J. (2013). A novel method for tracking western Lake Erie Microcystis blooms, 2002–2011. Journal of Great Lakes Research, 39, 83–89.
Briscoe, T. (2019). The shallowest Great Lake provides drinking water for more people than any other. Algae blooms are making it toxic – and it's getting worse. Chicago Tribune.
Burns, N., & Ross, C. (1972). Project Hypo: An Intensive Study of the Lake Erie Central Basin Hypolimnion and Related Surface Water Phenomena. United States Environmental Protection Agency.
Conroy, J., Edwards, W., & Pontius, R. (2005). Soluble nitrogen and phosphorus excretion of exotic freshwater mussels (Dreissena spp.): potential impacts for nutrient remineralisation in western Lake Erie. Freshwater Biology, 50, 1146–1162.
Dr. Seuss. (1971). The Lorax.
Duiker, S., & Beegle, D. (2006). Soil fertility distributions in long-term no-till, chisel/disk and moldboard plow/disk systems. Soil and Tillage Research, 88, 30–41.
Erickson, J. (2017). Virus infection may be linked to 2014 Toledo water crisis. Michigan News.
Feyereisen, G., Francesconi, W., & Smith, D. (2015). Effect of Replacing Surface Inlets with Blind or Gravel Inlets on Sediment and Phosphorus Subsurface Drainage Losses. Journal of Environmental Quality, 44, 594–604.
Fortner, R. (2019). There's Nothing Smeary About Lake Erie Anymore. Ohio Sea Grant.
Haney, J., & Ikawa. (2001). A Survey of 50 NH Lakes for Microcystins (MCs). University of New Hampshire Scholar's Repository, 127.
Hasler, A. (1969). Cultural Eutrophication Is Reversible. BioScience, 19(5), 425–431.
Hecky, R. E., Smith, R. E. H., & Barton, D. R. (2004). The nearshore phosphorus shunt: a consequence of ecosystem engineering by dreissenids in the Laurentian Great Lakes. The Canadian Journal of Fisheries and Aquatic Sciences, 61, 1285–1293.
Heidelberg University. (2020). National Center for Water Quality Research. Tributary Data Download.
Jarvie, H., Johnson, L., & Sharpley, A. (2017). Increased Soluble Phosphorus Loads to Lake Erie: Unintended Consequences of Conservation Practices? Journal of Environmental Quality, 46, 123–132.
Jaynes, D., & Isenhart, T. (2014). Reconnecting Tile Drainage to Riparian Buffer Hydrology for Enhanced Nitrate Removal. Journal of Environmental Quality, 43, 631–638.
Jaynes, D., Isenhart, T., & Parkin, T. (2014). Reconnecting riparian buffers with tile drainage (2). Leopold Center Completed Grant Reports.
Jetoo, S., Grover, V., & Krantzberg, G. (2015). The Toledo Drinking Water Advisory: Suggested Application of the Water Safety Planning Approach. Sustainability, 7, 9787–9808.
Ji, X., Verspagen, J., & Van de Waal, D. (2020). Phenotypic plasticity of carbon fixation stimulates cyanobacterial blooms at elevated CO2. Science Advances.
Johnston, L. (2019). How big did the 2019 Lake Erie harmful algal bloom get? See the progression. Cleveland.com. https://www.cleveland.com/news/g66l-2019/09/204be17826609/how-big-did-the-2019-lake-erie-harmful-algal-bloom-get-see-the-progression.html
Kasich, J., Taylor, M., & Butler, G. (2016). Public Water System Harmful Algal Bloom Response Strategy. Ohio Environmental Protection Agency.
Kleinman, P., Sharpley, A., & Johnson, L. (2015). Implementing agricultural phosphorus science and management to combat eutrophication. AMBIO, 44, 293–310.
Liu, X., Lu, X., & Chen. (2011). The effects of temperature and nutrient ratios on Microcystis blooms in Lake Taihu, China: An 11-year investigation. Harmful Algae, 10, 337–343.
Makarewicz, J. (1993). Phytoplankton Biomass and Species Composition in Lake Erie, 1970 to 1987. Journal of Great Lakes Research, 19(2).
Makarewicz, J., & Bertram, P. (1991). Evidence for the Restoration of the Lake Erie Ecosystem. Oxford University Press, 41(4), 216–223.
Makarewicz, J., Bertram, P., & Lewis, T. (2000). Chemistry of the Offshore Surface Waters of Lake Erie: Pre- and Post-Dreissena Introduction (1983–1993). Journal of Great Lakes Research, 21(1), 82–93.
Michalak, A., Anderson, E., & Chaffin, J. (2013). Record-setting algal bloom in Lake Erie caused by agricultural and meteorological trends consistent with expected future conditions. PNAS, 110(16), 6448–6452.
National Oceanic and Atmospheric Administration. (n.d.). Bathymetry of Lake Erie & Lake Saint Clair. https://www.ngdc.noaa.gov/mgg/greatlakes/erie.html
National Oceanic and Atmospheric Administration. (2015). Harmful Algal Blooms (HABs) in the Great Lakes (p. 3).
National Oceanic and Atmospheric Administration. (2020). What is a Dead Zone? https://oceanservice.noaa.gov/facts/deadzone.html
Nicholls, K. H., Standen, D. W., Hopkins, G. J., & Carney, E. C. (1977). Declines in the Near-Shore Phytoplankton of Lake Erie's Western Basin Since 1971. Journal of Great Lakes Research, 3, 72–78.
Obenour, D., Gronewold, D., & Stow, C. (2014). 2014 Lake Erie Harmful Algal Bloom (HAB) Experimental Forecast: This product represents the first year of an experimental forecast relating bloom size to total phosphorus load.
Ohio Department of Agriculture. (2018). Ohio Agriculture. Farm Flavor. https://www.farmflavor.com/ohio-agriculture/
Ohio History Connection. (n.d.). Cuyahoga River Fire. Ohio History Central. https://ohiohistorycentral.org/w/Cuyahoga_River_Fire
Ohio Senate Bill 1 (2015). Addresses agricultural regulations and application of fertilizer.
Ohio Senate Bill 150 (2014). To revise the law governing the abatement of agricultural pollution, to require a person that applies fertilizer for the purposes of agricultural production to be certified to do so by the Director of Agriculture, to make other changes to the Agricultural Additives, Lime, and Fertilizer Law.
Paerl, H., & Huisman, J. (2009). Climate change: a catalyst for global expansion of harmful cyanobacterial blooms. Environmental Microbiology Reports, 1(1), 27–37.
Raikow, D., Sarnelle, O., & Wilson, A. (2004). Dominance of the noxious cyanobacterium Microcystis aeruginosa in low-nutrient lakes is associated with exotic zebra mussels. Limnology and Oceanography, 49(2), 482–487.
Richards, R. P., Baker, D., & Crumrine, J. P. (2009). Improved water quality in Ohio tributaries to Lake Erie: A consequence of conservation practices. Journal of Soil and Water Conservation, 64(3).
Richards, R. P., Baker, D., & Eckert, D. (2002). Trends in Agriculture in the LEASEQ Watersheds, 1975–1995. Journal of Environmental Quality, 31.
RMB Environmental Laboratories Inc. (n.d.). Lake Eutrophication. Lakes Monitoring Program. https://www.rmbel.info/primer/lake-eutrophication/
Scavia, D., Manning, N., & Bertani, I. (2019). 2019 Western Lake Erie Harmful Algal Bloom (HAB) Forecast.
Schindler, D. W. (1977). Evolution of Phosphorus Limitation in Lakes: Natural Mechanisms Compensate for Deficiencies of Nitrogen and Carbon in Eutrophied Lakes. Science, 195, 260–262.
Smith, D., Francesconi, W., & Livingston, S. (2015a). Phosphorus losses from monitored fields with conservation practices in the Lake Erie Basin, USA. AMBIO, 22, 319–331.
Smith, D., King, K., & Johnson, L. (2015b). Surface Runoff and Tile Drainage Transport of Phosphorus in the Midwestern United States. Journal of Environmental Quality, 44, 495–502.
Smith, D., & Livingston, S. J. (2013). Managing farmed closed depressional areas using blind inlets to minimize phosphorus and nitrogen losses. Soil Use and Management, 29, 94–102.
Snyder, L. (2018). Overview: Senate Bill 150, Senate Bill 1. Together with Farmers.
Steffen, M., Davis, T., & Bullerjahn, G. (2017). Ecophysiological Examination of the Lake Erie Microcystis Bloom in 2014: Linkages between Biology and the Water Supply Shutdown of Toledo, OH. Environmental Science and Technology, 51, 6745–6755.
Steinmetz, M. (2015). Dreissenid Mussels Impact on Phosphorus Levels in the Laurentian Great Lakes. The Duluth Journal of Undergraduate Biology.
Stewart, I., Seawright, A., & Shaw, G. (2008). Cyanobacterial poisoning in livestock, wild mammals and birds – an overview.
Ulén, B., Aronsson, H., & Bechmann, M. (2010). Soil tillage methods to control phosphorus loss and potential side-effects: a Scandinavian review. Soil Use and Management, 26, 94–107.
United States Environmental Protection Agency. (2020a). Human Health Effects Caused by the Most Common Toxin-producing Cyanobacteria. Health Effects from Cyanotoxins.
United States Environmental Protection Agency. (2020b). Lake Erie. The Great Lakes. https://www.epa.gov/greatlakes/lake-erie
United States Government, & Canadian Government. (1972). The Great Lakes Water Quality Agreement.
Wood, S., Borges, H., & Puddick, J. (2017). Contrasting cyanobacterial communities and microcystin concentrations in summers with extreme weather events: insights into potential effects of climate change. Hydrobiologia, 785, 71–89.
WTOL Staff. (2019). 5 years since the Toledo water crisis: A timeline of what happened. WTOL 11. https://www.wtol.com/article/news/local/protecting-our-water/5-years-since-the-toledo-water-crisis-a-timeline-of-what-happened/51271a2414b-a34d-4b4a-9632-58e1c212d098
Zhang, H., Culver, D., & Boegman. (2011). Dreissenids in Lake Erie: an algal filter or a fertilizer? Aquatic Invasions, 6(2), 175–194.
Zulauf, C., & Brown, B. (2019). Use of Tile, 2017 US Census of Agriculture. Farm Doc Daily.
Differences in microbial flora found on male and female Clusia sp. flowers BY BETHANY CLARKSON, UNIVERSITY OF LINCOLN (UK) GRADUATE Cover Image: A snapshot of the Santa Lucia Cloud Forest Reserve. This image was captured at the Santa Lucia Ecolodge, GPS coordinates N00°8.193 W078°36.458. Photograph captured by author.
Abstract Specialized rewards with known antimicrobial properties, such as resin, are increasingly being used to develop understanding of plant-pollinator relationships, though knowledge of how these rewards vary between male and female dioecious plants, such as Clusia, is still somewhat limited. This research, carried out on a Clusia sp. at the Santa Lucia cloud forest reserve in Ecuador, aimed to explore differences in microbial growth between male and female Clusia sp. flowers. In total, 16 flowers were collected and stored in 100ml of sterile water for 60 minutes so that any antimicrobial compounds would infuse into the water. The infusion was then used for further testing. Spread plating, Petrifilm culturing, and biochemical testing were performed on the sixteen samples, which came from four different Clusia plants (two males and two females) and comprised a mixture of open flowers and young buds. Microbial investigation revealed that female flowers yielded a lower density and diversity of
bacterial and fungal colonies on Petrifilms than male flowers did. Notably, there was a significant decrease in the number of fungal colonies present on female flowers. A possible reason is that female flowers have a shorter lifespan and therefore support increased antimicrobial activity to make themselves more attractive to possible pollinators.
Keywords Pollinator reward systems, Clusia, Ecuador, Antimicrobial, Resin, Bees.
Introduction Antibiotic resistance is not only on the rise but has been named “the greatest risk to public health” (Kumarasamy et al., 2010); antibiotic-resistant infections are projected to be responsible for an estimated 10 million deaths annually by 2050 (Davies et al., 2013). With increasing
Figure 1: A makeshift laboratory within the Ecolodge. This table was where most of the work was carried out. Equipment includes disposable 5ml pipettes, sterile water, ethanol, pH testing strips and samples of Clusia sp. flowers suspended in sterile water. Photograph captured by author.
resistance comes a rising urgency not only to find new sources of antibiotics but also to make antibiotic treatment as efficacious as possible. Current research explores many potential sources of antimicrobials, from cockroaches to deep-sea fungi (Zhang et al., 2013; Ali et al., 2018). Plants are a common source of antimicrobial compounds; ongoing research shows that plant species including Calamus aromaticus, Croton lechleri, and many others possess significant antimicrobial potential, so research into understanding these specimens is of the utmost importance (Salam and Quave, 2018; Khan et al., 2018). Indeed, a study by Kumar et al. (2006) of over sixty species of plants used in traditional Indian medicine showed that substantial sources of antimicrobials are found in flowers, leaves, and resin. This phenomenon underlies natural relationships like pollinator reward systems: mutually beneficial interactions in which pollinators, such as bees, receive ‘rewards’ from plants (primarily in the form of pollen or nectar) and, in return, the plants are pollinated (Simpson and Neff, 1981). In rarer cases, pollinator reward systems have been observed in nature using alternative rewards such as resins, particularly those with antimicrobial properties. Though pollinator reward systems using resin are fairly rare, the genus Clusia, from the family Clusiaceae, has several species that maintain these relationships with male euglossine bees (de Faria et al., 2006). The genus Clusia contains around 250 species native to tropical America, many of which produce a viscous resin containing high
amounts of polyprenylated benzophenones named guttiferones. Guttiferones have shown significant antifungal, antimicrobial, and anti-human immunodeficiency virus (HIV) effects (Gustafson et al., 1992; Cruz and Teixeira, 2004). When used in pollinator reward systems, the resin has many different uses depending on the species of bee collecting it and the current needs of the hive. For example, Apis bees use collected resin to seal their hive, whereas Eulaema and Euglossa bees use it in the construction of their nests. Antimicrobial resin, such as the resin produced by Clusia, can be used to protect food and larvae from bacteria and fungi (Patiny, 2012).
“The genus Clusia contains around 250 species native to tropical America, many of which produce a viscous resin containing high amounts of polyprenylated benzophenones named guttiferones ”
Interestingly, almost all known Clusia are dioecious, meaning that individual plants are either male or female and can differ in secondary sex characteristics – which could include the properties of these resins (Luján, 2019). Abe (2001) showed that in another dioecious species, Aucuba japonica, there are significant differences between male and female inflorescences in the proportion of open flowers and in the period of time for which flowers open. Not only did male inflorescences generally flower for longer, but they also had a higher proportion of flowers open at the peak of the flowering period. Therefore, it is possible that female flowers show higher antimicrobial activity to compensate for the shorter time frame in which they are able to become pollinated, thus making them more desirable to pollinators. Using an array of microbial culturing techniques, including spread plating, culturing of Petrifilms, and biochemical testing, this project aims to
Figure 2: A schematic representation of the Petrifilm grid. The areas highlighted are an example of four randomly selected squares, which would then be counted and multiplied up to get the total number of colonies on the Petrifilm.
“A rudimentary laboratory was set up with all the microbial culturing carried out close to a Bunsen burner to maintain as close to an aseptic environment as possible...”
Figure 3: Female sample number two. This image was captured as part of the cataloguing of the samples. This particular female flower measured 2cm in diameter and weighed 6.9g. Photograph captured by Samuel Shaw and used with permission.
develop a further understanding of the differences in microbial flora between male and female flowers of Clusia sp., in order to predict possible differences in antimicrobial efficacy between the sexes.
Materials and Methods
This research was carried out in the Santa Lucia cloud forest reserve, located at the far south of the Choco-Andean Rainforest Corridor within the Pichincha province in Ecuador, in July 2018 (Figure 1). In this study, sixteen flowers were collected from four trees, two male and two female, of an unidentified species from the genus Clusia, family Clusiaceae. Female sample 1 was collected at N00°07.084 W078°36.706, female sample 2 at N00°06.944 W078°36.114, and male samples 1 and 2 at N00°07.110 W078°36.507 and N00°06.964 W078°36.155, respectively. A rudimentary laboratory was set up with all the microbial culturing carried out close to a Bunsen burner to maintain as close to an aseptic environment as possible, so as to avoid contamination. All equipment was cleaned with ethanol and sterile water between uses, and a negative control of sterile water was used to ensure that the cultured colonies were representative of the microbial growth present on the flowers rather than a result of contamination. Before testing, each flower was photographed, catalogued, stored in a plastic collection pot with 100ml of sterile water, and left to infuse for sixty minutes. After infusion, the flowers were weighed using a 10g spring balance. Next, the water pH was measured (with pH strips), as was
the sugar concentration (with a PAL-1 Atago digital refractometer, USA). The infused water was then put aside for microbial culturing. The microbial load was measured using a combination of spread plating and 3M Petrifilm Rapid Aerobic Count, Rapid Yeast and Mould, and Rapid Coliform/E. coli plates (3M United Kingdom PLC, Bracknell, England). Spread plating was executed under aseptic technique using 1ml of infused water; 1ml of sterile water was also cultured as a negative control. Petrifilms for samples 1-4 of both sexes were prepared using 1ml of the infused water pipetted directly onto the films. Upon initial examination, many of the male films had too many colonies to count. To account for this, samples 5-8 were prepared with 0.5ml of the sample water diluted with 0.5ml of sterile water. For each set of Petrifilms cultured, a control plate of 1ml of sterile water was also prepared. To calculate the total colony count per plate, four equal sections of the Petrifilm grid were chosen at random; the colonies in those sections were counted, averaged, and scaled up to the whole film (Fig. 2). All cultures were incubated for 24 hours in a field incubator made from a tin foil-lined box containing a mixture of bubble wrap and temperature-controlled hot water bottles, which maintained a temperature of 18-40°C, optimal for encouraging microbial growth. Initially, some Petrifilms had too many colonies to count; when this happened, the samples were diluted as described above. However, this methodology did not fully overcome the excess of colonies, so it was decided that quantifiable plates would be those with colony counts from 0-999; any higher count was estimated as 1000 colonies for statistical purposes.
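To make the quadrant-sampling estimate concrete, the sketch below mirrors the procedure described above and in the Figure 2 caption: count four randomly chosen grid squares, average them, and multiply up to the whole film. The grid size and per-square counts here are hypothetical placeholders, not data from this study.

```python
import random

def estimate_colony_count(square_counts, n_sampled=4):
    """Estimate the total colonies on a Petrifilm from a random subset of grid squares.

    square_counts: mapping of grid square -> colonies counted in that square
                   (hypothetical data; the real films were counted by eye).
    n_sampled:     number of randomly chosen squares, four in this study.
    """
    squares = random.sample(list(square_counts), n_sampled)
    mean_per_square = sum(square_counts[s] for s in squares) / n_sampled
    # Multiply the per-square average up to the full grid, as in Figure 2.
    return mean_per_square * len(square_counts)

# Hypothetical 20-square film with roughly 30 colonies per square:
film = {i: random.randint(20, 40) for i in range(20)}
print(round(estimate_colony_count(film)))
```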
Table 1: Key findings from biochemical testing. Means and standard deviations of key measurements, including mass, height, pH, and sugar concentration, for samples taken from male and female Clusia sp. flowers.
Differences between colony counts of samples taken from male and female flowers were first tested for normality using the Anderson-Darling test. When the data were normally distributed, a 2-sample t-test was used to test for significance between the number of female and male colonies formed on the Petrifilms; when the data were not normally distributed, a Mann-Whitney test was used instead.
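The statistical pipeline described above can be summarized in a short script. This is a minimal sketch assuming the colony counts are available as Python lists and using SciPy's implementations of the tests; the example counts are illustrative stand-ins, not the study's raw data.

```python
from scipy import stats

def is_normal(data):
    """Anderson-Darling normality check at the 5% significance level."""
    result = stats.anderson(data, dist='norm')
    return result.statistic < result.critical_values[2]  # index 2 = 5% level

def compare_sexes(female_counts, male_counts):
    """Normality-gated comparison of colony counts, as described in the text."""
    if is_normal(female_counts) and is_normal(male_counts):
        # Both groups look normally distributed: 2-sample t-test.
        stat, p = stats.ttest_ind(female_counts, male_counts)
        return "2-sample t-test", stat, p
    # Otherwise fall back to the nonparametric Mann-Whitney test.
    stat, p = stats.mannwhitneyu(female_counts, male_counts)
    return "Mann-Whitney", stat, p

# Illustrative counts only (loosely echoing the reported means):
print(compare_sexes([13, 4, 30, 5, 1, 0, 22, 29],
                    [815, 1000, 640, 1000, 900, 520, 1000, 645]))
```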
Results
The Clusia flowers used in this experiment had red and black petals and measured, on average, 2cm in diameter. Their weights differed slightly between the sexes, with male inflorescences weighing slightly more (8.75±1.94g) than those of females (7.01±1.18g). Male trees also grew on average 7.5m taller than female trees (22.5±3.54m vs 15±4.24m). Measurements of sugar levels revealed only trace amounts, with concentrations low in both males (0.05±0.15) and females (0.034±0.05). Female samples
showed a minimally lower pH level than the males (6.06±0.90 vs 6.13±0.35), but this difference is negligible and not statistically significant (Table 1). Spread plating produced a variety of results, as seen in Figure 4. The presence of aerobic bacteria on the Petrifilms is shown with a blue-green stain, which spread much more extensively on the male samples. All of the plates had fewer fungal colonies than bacterial colonies, and all fungal growths were seen exclusively on male samples. This correlates well with the recorded Petrifilm data, and the negative controls showed no clear microbial growth, suggesting that contamination did not play a significant role in these results. There was a large difference between the number of colonies of yeast and mould specimens which grew on samples taken from female flowers (13±16.52) compared to those taken from male flowers (815±342.10) (Fig. 5). Most male samples had too many colonies to count (>1000 colonies) and, after quantifying these results (at 1000 colonies per plate), it was found that males had significantly more colonies, around 802 more on average, than female
“The presence of aerobic bacteria on the Petrifilms is shown with a blue-green stain, which spread much more extensively on the male samples.”
Figure 4: A: A representative sample of the spread plates which were produced. A wide selection of different morphologies is seen within the colonies grown on the different samples; one fungal growth can be seen in the bottom right-hand plate. B: All the cultured Petrifilms for aerobic bacteria are shown, with both female and male samples included. Above, the negative controls for all three of the Petrifilms are also visible. Female samples are labelled as F1-8 and males as M1-9. Photographs captured by the author.
Figure 5: Mean and standard deviation of the number of yeast and mold colonies grown on Petrifilm samples taken from male and female Clusia sp. flowers.
“Female flowers had lower colony counts across all findings, with a 60% decrease in the mean number of fungal colonies per female sample compared to male samples.”
samples (Fig. 5; P<0.01, W=93.0, n=4). Aerobic bacteria appeared to grow well on all samples (Fig. 6), though male samples did show a higher number of colonies than female samples (508±422.76 in males as opposed to 346±436.92 in females). The spread-plated samples yielded similar results (Fig. 4). This equates to an average of 72 more colonies per Petrifilm on male samples, though this difference was not statistically significant (T=0.75, P=0.464, DF=14). The findings shown in Figure 7 were consistent with other results, in which colonies of coliforms were found on all the samples cultured, though there was no statistically significant difference between the number of colonies growing on male (143±142.88) and female (56±112.45) Petrifilm samples (Fig. 7; P=0.712, W=72.0, n=4). There was relatively little growth on the Escherichia coli Petrifilms, though samples taken from male flowers showed some growth, with a mean of 2±3.94 colonies per plate. No colonies were observed on the female samples (Fig. 8); therefore, these data were not sufficient for statistical analysis.
Discussion
Figure 6: Mean and standard deviation of the number of colonies of aerobic bacteria grown on Petrifilm samples taken from male and female Clusia sp. flowers.
Female flowers had lower colony counts across all findings, with a 60% decrease in the mean number of fungal colonies per female sample compared to male samples. This could be due to intrinsic antifungal properties in the female flowers. A study by Fernandes et al. (2016) on Clusia hilariana showed that female plants not only have higher levels of specific essential oils than male plants, but also possess some oils that male plants do not have at all. One of these components is alpha-cadinol, a powerful
fungicide, which is around four times more prevalent in females (2.2%) than in males (0.5%) (Ho et al., 2012). Furthermore, Cruz and Teixeira (2004) showed that Clusia tends to produce a large quantity of polyprenylated benzophenones, which have been proven to exhibit a wide variety of pharmacological effects, including anti-inflammatory, anti-HIV, fungicidal, and antimicrobial activity. This may suggest that the fungicidal action is not in fact due to discrepancies between the components of the resin itself, but rather to a difference in the amount of resin present in males and females. This is supported by the work of Porto et al. (2000), which found no divergence between the male and female versions of the major chemical structures of the resin but did find that the concentrations of resin varied between males and females. Though these findings may offer an explanation as to how these specimens produce an antimicrobial effect, it is also important to understand why this occurs; one popular theory involves pollinator reward systems. Pollinator reward systems are symbiotic relationships between plants and pollinators like birds and insects. Usually, pollinator reward systems offer pollinators nectar in exchange for pollinating the plants; however, some use non-sugar rewards, such as resin. This has been observed previously between several species of Clusia and several species of bees; the bees even select specific plants depending on the purpose for which the resin is required. Resin which hardens quickly can be used by Apis bees to cement together structures for the hive or to seal the outside, whereas a softer and more malleable resin can be used for structuring cells within the hive, which may have to be remolded (Patiny, 2012). Antimicrobial resins may be used to defend structures from pathogens, therefore protecting food stores,
larvae, and the colony itself from attack (Farias et al., 2011; Ribeiro et al., 2011). In addition, Lloyd and Webb (1977) showed that, in a wide range of species, not only do male inflorescences tend to appear earlier in life and more frequently than females, but male plants also tend to produce a higher number of inflorescences than females. If the plants are using the resin produced by the inflorescences to attract bees and allow for pollination, it could be that the females had a lower bacterial or fungal load due to an intrinsically higher antibacterial activity to compensate for the lower number of inflorescences (Evers et al., 2018). This outside pressure of selective pollinators choosing plants which offer specific rewards, such as antimicrobial resins, may explain why only trace levels of sugar were found; this particular species of Clusia may produce large amounts of resin rather than an abundance of nectar. On average, female samples also had a 32% lower sugar concentration than male samples, which may likewise be due to higher amounts of resin produced as part of this compensatory mechanism. Another explanation for the differences between male and female inflorescences could be the antifungal effect of the female Clusia. More than 90% of terrestrial plants form mycorrhizal associations, in which the roots of the plant form a symbiotic relationship with a species of fungus. This relationship increases the phosphorus available to the plant, largely because the arbuscular mycorrhizal fungal hyphal network increases the effective surface area of the root system (Bolduc, 2011). Though literature surrounding Clusia-mycorrhiza associations is fairly limited, it has been documented that Clusia develops much better when mycorrhized. This has also been shown to be the
case in a range of other species, leading to larger plants with an increased number of inflorescences, so it is possible this could also apply to Clusia (Lüttge, 2007; Scagel, 2003). The antifungal activity of a female may disrupt this relationship, potentially explaining the average decrease in height of 7.5m compared to the male trees and the lower number of inflorescences. A similar effect has been seen in species growing close to Alliaria petiolata, which also shows antifungal activity (Stinson et al., 2006). This relationship could be investigated in future research by staining sections of male and female root systems to search for the presence of fungi. Plant tissue analysis could also be used to look for a difference in phosphorus levels between male and female leaves, which would indicate a mycorrhizal association.
Figure 7: Mean and standard deviation of the number of colonies of coliforms grown on Petrifilm samples taken from male and female Clusia sp. flowers.
Several outliers were observed in this study, with certain flowers showing abnormal bacterial counts. One possible cause of these outliers is age. For example, male sample 2—a closed bud—showed no fungal growths. It is possible that because the bud was sealed shut, fungi were unable to enter, hence none were present. Another outlier is female sample 4, which—despite the low microbial load of most female flowers—showed an extremely high bacterial count. Research by Tawaha and Hudaib (2012) on Thymus capitatus examined essential oils (which can show antibacterial effects) from buds and flowers and found that mature flowers yield 0.93% more oil. Though this research was not carried out on Clusia, it was carried out on members of the Lamiaceae family, which are also eudicots. This suggests the plant may produce more oil as it gets older; if the resin follows the same pattern, it may be that the buds in female sample
“If the plants are using the resin produced by the inflorescences to attract bees and allow for pollination, it could be that the females had a lower bacterial or fungal load due to an intrinsically higher antibacterial activity to compensate for the lower number of inflorescences.”
Figure 8: Mean and standard deviation of the number of colonies of E. coli grown on Petrifilm samples taken from male and female Clusia sp. flowers.
“The resin produced by the female samples of this species of Clusia may also be able to undergo toxicity testing, such as on cell lines and LD50 testing, as it is a potent antifungal and may well be able to be used in human or veterinary medicine in the future.”
4 were not old enough to produce as much antimicrobial resin as the mature flowers, resulting in higher levels of bacteria. Future studies should therefore analyze flowers from a range of ages to test for differences in the level of antifungal activity.
This research is an exciting start to what is a relatively understudied area of science. Future work could focus on obtaining a larger amount of data, with an increased number of repeats, to increase the validity of any findings. This could also be achieved by using more precise equipment, such as laboratory-grade incubators and automated colony counters, to obtain more accurate results and a higher throughput of data. The next step would be to analyze the resin produced by both female and male samples of this species, looking for differences between the sexes in the compounds produced (such as alpha-cadinol) and in the concentrations of resin. This could then definitively establish why females show a stronger antimicrobial effect than males. Treatment of the resin with diazomethane allows methylation of the resin, which would prevent it from decomposing and allow it to later be purified using silica column chromatography; this technique has been successfully used to analyze resin produced by Clusia flowers in the past (Porto et al., 2000).
The resin produced by the female samples of this species of Clusia may also be able to undergo toxicity testing, such as on cell lines and LD50 testing, as it is a potent antifungal and may well be able to be used in human or veterinary medicine in the future. It is important to consider all manners of toxicity in testing, as another species of Clusia, Clusia alata, has previously been shown to have mutagenic properties at very high doses (2000 mg/kg) (Moura et al., 2008). Currently, there appear to be no treatments available using Clusia extract, though previous testing for a potential antibiotic has been carried out using the stem, with poor results (Suffredini et al., 2006). It is possible that extracts using resin would be much more potent.
Conclusion
Female Clusia sp. flowers were found to have a statistically significantly lower number of fungal colonies than males (p<0.009); female samples also had fewer bacterial colonies than males, though this difference was not statistically significant. This may be due to a symbiotic relationship developed with bees as a pollinator reward system using antimicrobial resin. It is also possible that this inherent antimicrobial activity is due to the presence of compounds such as polyprenylated benzophenones or alpha-cadinol.
Acknowledgements
Many thanks to my fantastic project leaders Adrian Goodman and Catrin Gunther for all their support, advice, and patience with me on this project, both in the field and afterwards. Thanks also to my fellow academic, Sophie Ellis, for her continued advice and help throughout the duration of this research and its editing, and to Clare Miller for her support and guidance. Finally, thanks to the University of Lincoln funding department for funding received towards travel so this research could be undertaken.
References
Abe, T. (2001). Flowering phenology, display size, and fruit set in an understory dioecious shrub, Aucuba japonica (Cornaceae). American Journal of Botany, 88(3), 455-461.
Ali, S., Siddiqui, R. and Khan, N. (2018). Antimicrobial discovery from natural and unusual sources. Journal of Pharmacy and Pharmacology, 70(10), 1287-1300.
Armbruster, W. (1997). Exaptations link evolution of plant-herbivore and plant-pollinator interactions: A phylogenetic inquiry. Ecology, 78(6), 1661-1672.
Bolduc, A. (2011). The Use of Mycorrhizae to Enhance Phosphorus Uptake: A Way Out the Phosphorus Crisis. Journal of Biofertilizers & Biopesticides, 02(01).
Brundrett, M. (2002). Coevolution of roots and mycorrhizas of land plants. New Phytologist, 154(2), 275-304.
Davies, S., Fowler, T., Watson, J., Livermore, D. and Walker, D. (2013). Annual Report of the Chief Medical Officer: infection and the rise of antimicrobial resistance. The Lancet, 381(9878), 1606-1609.
de Faria, A., Matallana, G., Wendt, T. and Scarano, F. (2006). Low fruit set in the abundant dioecious tree Clusia hilariana (Clusiaceae) in a Brazilian restinga. Flora - Morphology, Distribution, Functional Ecology of Plants, 201(8), 606-611.
Evers, H., Jackson, B. and Shaw, S. (2018). Overseas Field Course Report. University of Lincoln.
Farias, J., Ferro, J., Silva, J., Agra, I., Oliveira, F., Candea, A., Conte, F., Ferraris, F., Henriques, M., Conserva, L. and Barreto, E. (2011). Modulation of Inflammatory Processes by Leaves Extract from Clusia nemorosa Both In Vitro and In Vivo Animal Models. Inflammation, 35(2), 764-771.
Fernandes, C., Cruz, R., Amaral, R., Carvalho, J., Santos, M., Tietbohl, L. and Rocha, L. (2016). Essential Oils from Male and Female Flowers of Clusia hilariana. Chemistry of Natural Compounds, 52(6), 1110-1112.
Gustafson, K., Blunt, J., Munro, M., Fuller, R., McKee, T., Cardellina, J., McMahon, J., Cragg, G. and Boyd, M. (1992). The
guttiferones, HIV-inhibitory benzophenones from Symphonia globulifera, Garcinia livingstonei, Garcinia ovalifolia and Clusia rosea. Tetrahedron, 48(46), 10093-10102.
Ho, C., Hua, K., Hsu, K., Wang, E. and Su, Y. (2012). Composition and antipathogenic activities of the twig essential oil of Chamaecyparis formosensis from Taiwan. Natural Product Communications, 7(7), 933-936.
Khan, B., Bakht, J. and Khan, W. (2018). Antibacterial potential of a medically important plant Calamus aromaticus. Pakistan Journal of Botany, 50(6), 2355-2362.
Kumar, V., Chauhan, N., Padh, H. and Rajani, M. (2006). Search for antibacterial and antifungal agents from selected Indian medicinal plants. Journal of Ethnopharmacology, 107(2), 182-188.
Kumarasamy, K., Toleman, M., Walsh, T., Bagaria, J., Butt, F., Balakrishnan, R., Chaudhary, U., Doumith, M., Giske, C., Irfan, S., Krishnan, P., Kumar, A., Maharjan, S., Mushtaq, S., Noorie, T., Paterson, D., Pearson, A., Perry, C., Pike, R., Rao, B., Ray, U., Sarma, J., Sharma, M., Sheridan, E., Thirunarayan, M., Turton, J., Upadhyay, S., Warner, M., Welfare, W., Livermore, D. and Woodford, N. (2010). Emergence of a new antibiotic resistance mechanism in India, Pakistan, and the UK: a molecular, biological, and epidemiological study. The Lancet Infectious Diseases, 10(9), 597-602.
Lloyd, D. and Webb, C. (1977). Secondary sex characters in plants. The Botanical Review, 43(2), 177-216.
Luján, M. (2019). Playing the Taxonomic Cupid: Matching Pistillate and Staminate Conspecifics in Dioecious Clusia (Clusiaceae). Systematic Botany, 44(3), 548-559.
Lüttge, U. (2007). Clusia: A Woody Neotropical Genus of Remarkable Plasticity and Diversity. Berlin: Springer, 235-239.
Moura, A., Perazzo, F. and Maistro, E. (2008). The mutagenic potential of Clusia alata (Clusiaceae) extract based on two short-term in vivo assays. Genetics and Molecular Research, 7(4), 1360-1368.
Patiny, S. (2012). Evolution of plant-pollinator relationships. Cambridge: Cambridge University Press, 51-54.
Peres, M., Monache, F., Cruz, A., Pizzolatti, M. and Yunes, R. (1997). Chemical composition and antimicrobial activity of Croton urucurana Baillon (Euphorbiaceae). Journal of Ethnopharmacology, 56(3), 223-226.
Ribeiro, P., Ferraz, C., Guedes, M., Martins, D. and Cruz, F. (2011). A new biphenyl and antimicrobial activity of extracts and compounds from Clusia burlemarxii. Fitoterapia, 82(8), 1237-1240.
Salam, A. and Quave, C. (2018). Opportunities for plant natural products in infection control. Current Opinion in Microbiology, 45, 189-194.
Scagel, C. (2003). Inoculation with arbuscular mycorrhizal fungi alters nutrient allocation and flowering of Freesia x hybrida. Journal of Environmental Horticulture, 21(4), 196.
Simone-Finstrom, M., Gardner, J. and Spivak, M. (2010). Tactile learning in resin foraging honeybees. Behavioral Ecology and Sociobiology, 64(10), 1609-1617.
Simpson, B. and Neff, J. (1981). Floral Rewards: Alternatives to Pollen and Nectar. Annals of the Missouri Botanical Garden, 68(2), 301.
Stinson, K., Campbell, S., Powell, J., Wolfe, B., Callaway, R., Thelen, G., Hallett, S., Prati, D. and Klironomos, J. (2006). Invasive Plant Suppresses the Growth of Native Tree Seedlings by Disrupting Belowground Mutualisms. PLoS Biology, 4(5), 140.
Suffredini, I., Paciencia, M., Nepomuceno, D., Younes, R. and Varella, A. (2006). Antibacterial and cytotoxic activity of Brazilian plant extracts - Clusiaceae. Memórias do Instituto Oswaldo Cruz, 101(3).
Tawaha, K. and Hudaib, M. (2012). Chemical composition of the essential oil from flowers, flower buds and leaves of Thymus capitatus hoffmanns. Journal of Essential Oil Bearing Plants, 15(6), 988-996.
Zhang, X., Zhang, Y., Xu, X. and Qi, S. (2013). Diverse Deep-Sea Fungi from the South China Sea and Their Antimicrobial Activity. Current Microbiology, 67(5), 525-530.
The Facial Expressions of Mice Can Teach Us About Mental Illness BY BRYN WILLIAMS '23 Cover Image: Highlighted neural connections and pathways in the brain for various brain regions. These connections can be related to various emotions in humans and other animals. Source: Wikimedia Commons
Introduction
Emotions play a large role in our lives, yet scientists know very little about their neurobiological roots. If researchers can identify regions of the brain that correlate to certain emotions, the medical profession would be one step closer to providing better treatments for mental illness, which affects one in five Americans over the course of their lives (NIH, 2017). To determine this connection, emotional expressions need to be objectively analyzed, but this can be especially difficult in animals, given how much their expressions differ from those of humans (Dolensek et al., 2020). Almost 150 years ago, Darwin posited that animals have facial expressions, similar to humans, that can provide insight into their emotions; however, researchers did not have the proper tools to analyze these minute facial changes until recently. Using optogenetics and two-photon imaging, scientists deciphered the complex facial expressions of mice, taking the
first step in an attempt to explain emotions in neurobiological terms (Abbott, 2020).
The Importance of Emotions in Both Humans and Animals
The definition of the term “emotion” is widely disputed and differs between fields, but most researchers would agree that the term includes, but is not limited to, expressive behaviors connected with internal brain states that are classified as “feelings.” In humans, these expressive behaviors include actions like smiling or frowning, as well as vocal expressions like laughing or crying. These expressions arise from an internal central (nervous system) state that is activated by certain stimuli, and this internal state consists of neural pathways that produce a certain result when triggered (Anderson et al., 2014). Researchers currently know very little about these neural pathways, but analyzing emotions across
phylogeny, or evolutionarily distinct species, may provide answers (Encyclopedia Britannica, 2020). Animals and humans may share the same “primitive emotions,” which are considered evolutionary building blocks of emotion (Anderson et al., 2014). Even single-celled organisms have ways to detect and respond to certain events. Darwin asserted that emotions evolve just as other biological features do. Humans across the globe use similar expressions of emotion, and animals do as well (LeDoux, 2012). It is plausible that “primitive emotions” are conserved across different types of animals given their proposed evolutionary advantage. The events that elicit emotional responses have occurred many times over evolutionary history, so responses to these events can be selected for evolutionarily. Emotions help animals, including humans, process external stimuli by sorting out various brain states or “feelings” (Adolphs, 2017). Though different animals may have similar emotional building blocks, that does not necessarily mean they are homologous to human emotional states. This is where many studies have fallen short in the past (Anderson et al., 2014). Controversy still surrounds the extent to which emotional states are conserved across phylogeny. To determine the extent to which these emotional states are conserved, scientists sought to analyze the facial expressions of mice in response to certain stimuli. If an observable external facial change occurs in response to a stimulus, researchers can attempt to link the external changes to internal neural circuits (Dolensek et al., 2020).
Figure 1: Plate 1 from Charles Darwin’s The Expression of the Emotions in Man and Animals. Source: Wikimedia Commons
Varying Facial Expressions in Response to Stimuli
A study conducted by Nejc Dolensek et al. at the Max Planck Institute of Neurobiology sought to explore the question of conserved emotional states in animals, using mice to map out a connection between emotional expressions and neural circuits. The researchers exposed mice to wide-ranging stimuli assumed to result in an emotional response: tail shocks for pain, sucrose for sweetness, quinine for bitterness, and lithium chloride for malaise, as well as contexts that elicit escape or freezing behaviors. The mice’s heads were fixed in place, and video was recorded as they responded to the stimuli. The facial expressions in response to the stimuli were clearly noticeable, even to naïve, untrained observers, yet the type of underlying emotion was not immediately recognizable. To maintain an objective reading of the mice’s facial expressions, the “histogram of oriented gradients” (HOG) feature of machine vision was used. These HOG descriptors measure and track facial movements objectively by providing a numerical vector for each video frame. The facial expressions were recorded for each stimulus and corresponding emotion. Each stimulus exposure was repeated three times, and HOGs from before and after exposure were compared. Two main clusters of facial expressions emerged: an expression from before the exposure and an expression from either during or immediately after exposure. The group then analyzed the data from after exposure to determine whether there were distinct facial expressions resulting from certain stimuli. They observed distinct facial expressions in response to each stimulus type. These unique expressions suggest that there were underlying emotions associated with the facial expressions.
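As a rough illustration of the HOG approach, the snippet below converts grayscale frames to HOG vectors with scikit-image and compares them by Pearson correlation, one common way to quantify how similar two expressions are. The descriptor parameters and random frames are stand-ins, not the study's actual settings or data.

```python
import numpy as np
from skimage.feature import hog

def frame_to_hog(frame):
    """Convert one grayscale video frame to a HOG descriptor vector."""
    return hog(frame, orientations=8, pixels_per_cell=(32, 32),
               cells_per_block=(1, 1), feature_vector=True)

def expression_similarity(frame_a, frame_b):
    """Pearson correlation between two frames' HOG vectors."""
    a, b = frame_to_hog(frame_a), frame_to_hog(frame_b)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical 256x256 grayscale frames from before and after a stimulus:
before = np.random.rand(256, 256)
after = np.random.rand(256, 256)
print(expression_similarity(before, after))
```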
“Animals and humans may share the same 'primitive emotions', which are considered evolutionary building blocks of emotion.”
To test this theory, researchers trained a “random forest classifier,” an unbiased third party, to predict the event or stimulus behind the observed facial expression. This classifier could predict the underlying emotion with 90% accuracy. Given that the stimulus could be predicted from the expression, the team explored similarities between the emotional categories of mice and humans. To test this, Dolensek et al. constructed the typical HOGs from each stimulus and then correlated each stimulus to
Figure 2: A patient undergoing transcranial magnetic stimulation, a type of brain stimulation that can be used to treat mental illnesses. Source: Wikimedia Commons
an emotion. Quinine would correspond to disgust, sucrose to pleasure, tail shock to pain, lithium chloride to malaise, escape to active fear, and freezing to passive fear. They found that each facial expression (or its HOG numerical equivalent) was unique to an emotional state.
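A classifier along these lines can be sketched with scikit-learn. The feature vectors below are synthetic stand-ins generated so the classes are separable; only the overall structure, training on per-frame descriptors and predicting the stimulus, reflects the study's approach.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
stimuli = ["sucrose", "quinine", "tail_shock", "LiCl"]

# Synthetic stand-in data: 50 HOG-like vectors per stimulus, each class
# shifted so it is separable (the real study used descriptors of face video).
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(50, 128))
               for i, _ in enumerate(stimuli)])
y = np.repeat(stimuli, 50)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validated accuracy: how often the classifier recovers the stimulus
# behind an expression (the paper reports roughly 90% on real data).
print(cross_val_score(clf, X, y, cv=5).mean())
```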
â&#x20AC;&#x153;A new technology called optogenetics allows scientists to control events within specific cells of the brain.â&#x20AC;?
These emotional states may correspond to a central internal state, or they may just be reflex-like reactions. To determine which, the research team scaled the intensity of the stimuli to see if the expressions also scaled. They discovered that facial expressions during higher-intensity events were closer to the prototypical expression for that emotion, meaning the expressions scaled with more intense stimuli. Additionally, the team tested the underlying emotions of the expressions by providing corresponding alternative stimuli that should elicit similar emotions, yielding evidence that mice, like humans, have facial expressions tied to different categories of emotions (Dolensek et al., 2020).
Connection of External Expressions to Internal Neural Connections
Even though researchers can test facial expressions in response to stimuli, the connection between emotions and the cells inside the brain is largely unknown. As Nobel laureate Francis Crick observed, the major challenge facing neuroscience is the need to observe and control individual cells while leaving others unaffected. A new technology called optogenetics allows scientists to control events within specific cells in
the brain. Optogenetics works by transfecting certain cells with genes that confer light responsiveness and then using technologies that allow light to reach those specific cells in vivo (Deisseroth, 2010). The study conducted by Dolensek et al. at the Max Planck Institute of Neurobiology used optogenetics in the brains of the mice to determine whether facial expressions would result from the activation of certain brain regions. Regions known to be associated with emotions were activated, and facial expressions were observed. These regions were located in the insular cortex (IC), the portion of the brain which has been shown to evoke emotional sensations and their related behaviors in both animals and humans. Additionally, the researchers activated neurons in the ventral pallidum (VP), which processes the rewarding properties of pleasure stimuli. Each of these activations resulted in strong facial expressions. Again, the unbiased random forest classifier was used to analyze the facial expressions and was able to match them with the previously analyzed HOGs corresponding to certain emotions. When the anterior IC and VP were activated using optogenetics, the mice showed the facial expression associated with pleasure. For every brain region tested, the mice responded with an expression similar to the prototypes from the earlier stimuli. Projections from the IC to the amygdala were found to be able to influence emotional reactions to the stimulants. For example, the anterior IC to basolateral amygdala pathway is associated with pleasure, and when the
pathway was activated internally, the reaction to the bitter quinine was attenuated. The data resulting from optogenetic studies suggest that facial expressions are sensitive reflections of internal emotional states, and that these emotional states correspond to brain states (Dolensek et al., 2020). The connection between facial expressions and emotional states suggests that facial expressions have neuronal connections in brain regions associated with emotion, like the IC and VP. To further study this concept, facial videography was combined with two-photon imaging in the posterior IC (Dolensek et al., 2020). Two-photon imaging is used to study the activity of specific neurons in vivo: a calcium-sensitive fluorescent dye is loaded into neurons, and when a neuron is active, it lights up with fluorescence and is therefore easily identifiable (Mitani et al., 2018). Two types of neurons were identified in the posterior IC: those which responded to the sensory stimuli and those which corresponded to facial expressions. The neurons receiving the sensory inputs were multisensory, but the neurons responsible for facial expression were highly segregated (Dolensek et al., 2020).
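In calcium imaging analyses of this kind, a neuron's activity is typically quantified as the relative fluorescence change ΔF/F. The sketch below shows one common way to compute it and flag active frames; the baseline choice, threshold, and simulated trace are illustrative assumptions, not the study's processing pipeline.

```python
import numpy as np

def dff(trace, baseline_percentile=20):
    """Compute ΔF/F for one neuron's fluorescence trace.

    The baseline F0 is taken as a low percentile of the trace, a common
    choice when no explicit pre-stimulus window is available.
    """
    f0 = np.percentile(trace, baseline_percentile)
    return (trace - f0) / f0

def active_frames(trace, threshold=0.5):
    """Flag frames where ΔF/F exceeds a threshold, i.e. the neuron lights up."""
    return np.where(dff(trace) > threshold)[0]

# Hypothetical trace: baseline noise with a calcium transient around frame 100.
trace = np.random.normal(1.0, 0.05, 300)
trace[100:120] += 1.0
print(active_frames(trace)[:5])
```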
Improved Mental Illness Diagnoses and Treatments
By identifying neural connections and neural regions associated with certain emotions, scientists can better diagnose and treat mental illnesses. The study at the Max Planck Institute of Neurobiology by Dolensek et al. provides an early but profound insight into how emotions originate from the activation of specific neural pathways. If physicians can identify abnormalities associated with brain regions and neural pathways early on, more effective preventative treatments can begin. In addition to aiding in preventative treatments, this knowledge can help researchers discover more targeted medical interventions, which could aid physicians in providing more personalized treatments (Wojtalik et al., 2018). For example, brain stimulation is currently being tested as a targeted treatment for mental illnesses such as depression. Multiple variations of brain stimulation treatment can be further explored and made more effective by understanding more about how targeting specific brain regions affects certain emotions. By targeting more
specific regions, the side effects of this type of treatment can be significantly reduced (NIMH, 2019). In the future, a better understanding of the psychopathology of varying emotions and their associated diseases would allow scientists to better identify and treat patients suffering from mental illness (Etkin et al., 2013).
Conclusion
Recent studies have identified the nature of facial expressions in mice and connected these expressions with emotional states and brain states. Additionally, these expressions and their related internal emotions can be linked to certain neurons (Dolensek et al., 2020). This connection between expressions, emotions, and neurons is a first step in uncovering the neurobiological basis of emotions. Given the evidence that emotions are highly conserved across phylogeny due to their evolutionary advantage, researchers can learn from the emotions and associated brain states of mice and apply the findings to human brains. As scientists learn more about emotions and how they are actuated in the brain, many medical applications could arise. Physicians could create more effective diagnoses and treatments for the mental illnesses that many people currently suffer from.
“The connection between facial expressions and emotional states suggests that facial expressions have neuronal connections in brain regions associated with emotion like the IC and VP.”
References
Abbott, A. (2020). Artificial intelligence decodes the facial expressions of mice. Nature. https://doi.org/10.1038/d41586-020-01002-7
Adolphs, R. (2017). How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Social Cognitive and Affective Neuroscience, 12(1), 24–31. https://doi.org/10.1093/scan/nsw153
Anderson, D. & Adolphs, R. (2014). A Framework for Studying Emotions Across Phylogeny. Retrieved August 5, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4098837/
Deisseroth, K. (n.d.). Optogenetics: Controlling the Brain with Light [Extended Version]. Scientific American. Retrieved August 5, 2020, from https://www.scientificamerican.com/article/optogenetics-controlling/
Dolensek, N., Gehrlach, D. A., Klein, A. S., & Gogolla, N. (2020). Facial expressions of emotion states and their neuronal correlates in mice. Science, 368(6486), 89–94. https://doi.org/10.1126/science.aaz9468
Etkin, A., Gyurak, A., & O’Hara, R. (2013). A neurobiological approach to the cognitive deficits of psychiatric disorders. Dialogues in Clinical Neuroscience, 15(4), 419–429.
LeDoux, J. E. (2012). Evolution of Human Emotion. Progress in Brain Research, 195, 431–442. https://doi.org/10.1016/B978-0-444-53860-4.00021-0
Mitani, A., & Komiyama, T. (2018). Real-Time Processing of Two-Photon Calcium Imaging Data Including Lateral Motion Artifact Correction. Frontiers in Neuroinformatics, 12. https://doi.org/10.3389/fninf.2018.00098
NIMH » Discover NIMH: Personalized and Targeted Brain Stimulation Therapies. (n.d.). Retrieved August 5, 2020, from https://www.nimh.nih.gov/news/media/2019/discover-nimh-personalized-and-targeted-brain-stimulation-therapies.shtml
NIMH » Mental Illness. (n.d.). Retrieved August 5, 2020, from https://www.nimh.nih.gov/health/statistics/mental-illness.shtml
Phylogeny | biology | Britannica. (n.d.). Retrieved August 5, 2020, from https://www.britannica.com/science/phylogeny
Wojtalik, J. A., Eack, S. M., Smith, M. J., & Keshavan, M. S. (2018). Using Cognitive Neuroscience to Improve Mental Health Treatment: A Comprehensive Review. Journal of the Society for Social Work and Research, 9(2), 223–260. https://doi.org/10.1086/697566
How Telemedicine could Revolutionize Primary Care BY CHRIS CONNORS '21 Cover Image: A physician and patient engaging in a telemedicine consult Source: Wikimedia Commons
Introduction
The United States primary healthcare system is broken. As stated by the American College of Physicians, “primary care, the backbone of the nation’s health care system, is at grave risk of collapse” (American College of Physicians, 2006). There is an alarming shortage of primary care physicians, and current practitioners are overworked and face administrative challenges. Fortunately, there are feasible solutions on the horizon, one of which is telehealth. Since the beginning of the 20th century, there has been discussion of how telehealth could be integrated to supplement the primary healthcare system (Lustig and Nesbitt, 2012). Yet the utilization of telehealth significantly lagged behind its intended use until the current COVID-19 pandemic, as healthcare systems in the United States and around the world have entered an unprecedented era of telemedicine in response to widespread quarantine and social distancing
measures. While it is expected that the prevalence of telehealth use will subside once the pandemic ends, there is hope that the momentum of telehealth will carry forward, as telehealth has the potential to revolutionize the healthcare system by improving access to care. While telehealth impacts all specialties, this paper focuses on the realm of primary care, specifically investigating the potential of telehealth to ameliorate issues in rural primary care.
Overview of the Current Primary Health Care System and its Limitations
Importance of Primary Care
Primary care is often the first point of contact a patient has with a healthcare system. Primary care physicians (PCPs) are responsible for treating common conditions such as diabetes, hypercholesterolemia, arthritis, and depression
Figure 1: An illustrative map showing Health Professional Shortage Areas in Primary Care. Source: Wikimedia Commons
or anxiety, as well as routine health maintenance, and they act as a point of continuing care for patients (Finley et al., 2018). If necessary, a PCP refers the patient to a specialist for additional treatment. Thus, primary care is considered the backbone of any healthcare system. It has been shown that even small increases in access to primary care significantly improve the health of a community’s population. In areas where primary care is insufficient, communities have higher death and disease rates, higher rates of emergency department visits, and generally worse health outcomes than areas with better primary care access. For example, in northern, rural Canada, access to primary care is one of the largest reasons that there still exists a healthcare gap between indigenous and non-indigenous peoples (Jong et al., 2019).
Limitations of the Current Primary Health Care System in the United States
There is a variety of limitations in the current primary care system that can be partially, if not fully, resolved by telemedicine. Arguably the largest issue in the United States regarding healthcare is the immense shortage of PCPs. In fact, over 65 million Americans are said to live in primary care shortage areas (Bodenheimer and Pham, 2010). Furthermore, current primary care practitioners experience a whole host of inconveniences, including burdensome administrative tasks, distracting
work environments, and crammed schedules (Bodenheimer and Pham, 2010). These problems are also amplified by the fact that medical students are avoiding primary care. In fact, only 7% of medical students plan to go into primary care today, a figure that has declined steadily from 40% in 1997 (Bodenheimer and Pham, 2010). The reason for this decline is in part the significantly higher salaries offered by many specialties. This monetary incentive becomes particularly attractive when one considers the steadily rising cost of medical school tuition that leaves many recent graduates in significant debt.
“Evidently, the U.S. primary care system as a whole has many shortcomings, and these problems are significantly worse when considering rural primary care.”
Evidently, the U.S. primary care system as a whole has many shortcomings, and these problems are significantly worse when considering rural primary care. While the ratio of primary care physicians to population in urban areas of the United States is 100 per 100,000, in rural areas the ratio is just 46 per 100,000 (Bodenheimer and Pham, 2010). The situation looks even more dire when we take into account that 21% of the U.S. population lives in rural areas. Effectively, 10% of PCPs are currently responsible for providing for 21% of Americans (Bodenheimer and Pham, 2010). In more absolute terms, over 16,000 additional PCPs are needed to meet the demand in these rural areas (Rieselbach et al., 2010). The effects of these shortages in rural regions are substantial. Rural populations face significant health disparities, have less health insurance,
Figure 2: An example of a common synchronous telemedicine appointment conducted from a patient’s home. Source: Shutterstock
and have higher rates of chronic conditions such as diabetes and obesity when compared to urban populations (Marcin et al., 2016).
“The first known mention of using technology to facilitate distance healthcare dates back to 1879, when The Lancet noted using a telephone to help reduce unnecessary office visits.”
Ultimately, the United States has a precarious geographic maldistribution of primary care that is being exacerbated by ever-worsening PCP shortages (Bodenheimer and Pham, 2010). However, the natural solution of producing more primary care physicians and incentivizing them to practice in rural areas may not work. It has been tried before unsuccessfully and, to put it simply, it is just very difficult to motivate physicians to work in rural areas. This is because rural physicians face additional challenges and complications when compared to their urban or suburban counterparts, including feelings of professional isolation, reduced access to continuing medical education, and a lack of collaboration with other specialists or support services (Anderson et al., 1994). Given this acute situation and the shortcomings of previous solutions, there is a gap to be filled by a novel solution, and telemedicine might be the perfect fit.
Overview of Telemedicine
Introduction to Telemedicine
Telemedicine, which is synonymous with telehealth and virtual care, is defined as the “use of electronic information and communications technologies to provide and support healthcare when distance separates the participants” (Marcin et al., 2016). Generally, there are three
types of applications: live videoconferencing between patient and provider (synchronous), transmission of information and medical images (store-and-forward/asynchronous), and remote patient monitoring (Marcin et al., 2016). The most common form today, synchronous telemedicine, consists of patients using a videoconferencing platform to communicate with a physician at a scheduled time. During the call, verbal communication about symptoms and visual tests, such as displaying a skin condition or demonstrating range of motion, can be used by the physician to confirm a diagnosis. However, telemedicine has come a long way since its inception.
History of Telemedicine
The first known mention of using technology to facilitate distance healthcare dates back to 1879, when The Lancet noted using a telephone to help reduce unnecessary office visits (Lustig and Nesbitt, 2012). While telemedicine in its early stages was seldom used, there are instances of the radio being used throughout the 1920s to diagnose patients or communicate with clinics on ships (Lustig and Nesbitt, 2012). However, medical video communications did not truly begin in the United States until 1959 (Institute of Medicine, 1996).
Applications of Telemedicine Today
Today, telemedicine is much more advanced and complex, as it can now facilitate a
Figure 3: Image of a surgeon operating remotely on a patient. Source: Wikimedia Commons
patient and provider meeting over a video communications platform at a scheduled time in lieu of an in-person visit. Outside of primary care, telemedicine has been used extensively in certain specialties. For example, teleradiology has been used by physicians for over 60 years (Lustig and Nesbitt, 2012), likely because the highly image-based practice is easily transferable to an online format. Similarly, dermatology, another highly visual specialty, has been well integrated with telemedicine. Nevertheless, telemedicine has also been experimented with for primary care. For example, primary care telehealth has been used often throughout rural regions of northern Canada. In fact, in some areas of Nunavut and Labrador, telehealth is routine, almost entirely replacing in-person visits (Jong et al., 2019).
Future of Telemedicine
While telemedicine has made significant advances since its early days in the late 19th century, there are still large improvements to be implemented. For example, future research into telemedicine plans to explore avenues in which patients could be provided with small physiological monitoring devices in their own homes. These wearable devices would be able to function as a stethoscope, accelerometer, and electrocardiogram, among other functions such as measuring heart rate or blood pressure, in order to allow a physician to perform these tests from a distance in real
time (Lustig and Nesbitt, 2012). This would be a significant advance in telehealth, as it would eliminate one of the biggest barriers: the inability to perform diagnostic tests from a distance. In fact, more basic versions of these devices have already been successfully used by some physicians and have been an effective method of measuring vital signs from a distance. Successful implementation of these devices would significantly reduce the need for in-person appointments. Additionally, while not in the realm of primary care, some telemedicine advancements seem like science fiction. For example, telesurgery, in which a patient is operated on remotely by a physician, has been performed successfully several times. The first such operation occurred in 2001, when a New York surgeon successfully operated on a patient in Strasbourg, France (Choi et al., 2018). In this case, the surgeon used remote-controlled, robot-assisted laparoscopic surgery to remove the gall bladder of a patient on the other side of the Atlantic. Overall, it is clear that telemedicine has made enormous strides since its inception and has a promising future; what is more important, however, is the success of its implementation, particularly in primary care.
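To illustrate how home-monitoring readings might feed a remote review workflow, here is a minimal sketch of a vital-signs record with a simple flagging rule. The record fields and thresholds are arbitrary examples for illustration only, not clinical guidance or any system described in the sources.

```python
from dataclasses import dataclass

@dataclass
class VitalReading:
    patient_id: str
    heart_rate_bpm: int
    systolic_mmhg: int
    diastolic_mmhg: int

def flag_for_review(reading: VitalReading) -> bool:
    """Flag a home reading for physician review using illustrative thresholds.

    The cutoffs are simplified placeholders; a real monitoring service would
    use physician-set, patient-specific limits.
    """
    return (reading.heart_rate_bpm < 50 or reading.heart_rate_bpm > 110
            or reading.systolic_mmhg > 160 or reading.diastolic_mmhg > 100)

print(flag_for_review(VitalReading("patient-001", 72, 150, 95)))   # False
print(flag_for_review(VitalReading("patient-002", 120, 130, 85)))  # True
```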
“... future research into telemedicine plans to explore avenues in which patients could be provided with small physiological monitoring devices in their own homes.”
Healthcare Outcomes of Telemedicine in Primary Care
Outcomes
Perhaps the most important aspect of
“Since many chronic and preventative care appointments (which account for the majority of primary care visits) can effectively be managed using virtual care, more clinic time becomes available for those who need in-person visits”
telemedicine when assessing its ability to supplement a healthcare system in rural areas is evaluating its convenience for patients and patients’ healthcare outcomes. Overall, most outcome studies involving telehealth use in primary care have shown positive results. One study showed that patients receiving synchronous, remote care demonstrated outcomes similar to those receiving in-person care for various conditions (Portnoy et al., 2020). Of course, telemedicine has not been without its weaknesses. Some patients participating in telehealth have indicated that they prefer traditional office care. Additionally, the current lack of testing equipment available in patients’ homes means that many diagnostic tests that PCPs typically perform in the office cannot take place. The lack of medical technology and equipment at home also necessitates that some procedures take place in person. Furthermore, many in-office procedures that traditionally occur the same day as the diagnosis have to be scheduled at a later date during an in-office visit, inconveniencing patients. However, these downsides are counterbalanced by reports in which telemedicine has been shown to have even better healthcare outcomes than traditional in-person care. For example, a survey conducted in Canada concerning telemedicine for primary care found that the use of telehealth in rural regions resulted in improved patient care, reduced transfers, and better collaboration between patients and providers when compared to in-person care (Jong et al., 2019). Similarly, some monitoring programs have demonstrated better management of certain chronic conditions such as diabetes, hypertension, and congestive heart failure (Lustig and Nesbitt, 2012). Fortunately, it appears that telemedicine utilization does not imply less effective healthcare in many cases.
Additional Benefits
Outside of health outcomes, telemedicine has several other advantages, such as the ability to streamline clinics, allowing PCPs to be more efficient. Since many chronic and preventative care appointments (which account for the majority of primary care visits) can effectively be managed using virtual care, more clinic time becomes available for those who need in-person visits (Bodenheimer and Pham, 2010). Similarly, if more sick, contagious patients stay at home using telemedicine, the potential for these patients to expose others to their illness is minimized. Thus, widespread use of telemedicine has the added benefit of minimizing patient exposure
to infectious diseases within a clinic. Given the successful outcomes of telemedicine as well as a host of other benefits, it appears that implementing telehealth in rural communities to supplement primary care is a feasible and effective solution. However, successful healthcare outcomes will not be the only factor weighed when deciding whether to implement telemedicine infrastructure.
Economic Evaluation of Telemedicine
Economic Overview
What is equally important–and in policy makers’ eyes, perhaps more important–in the push towards implementing telemedicine in rural primary care is the economics of implementing such a system. This is particularly relevant in rural primary care due to the extreme financial burden that healthcare puts on governments in these areas. For example, health care per capita in rural Canada costs more than double what it costs in the rest of Canada (Jong et al., 2019). Clearly, if a government could alleviate this expenditure while maintaining the same quality of care, it may be interested in the opportunity.
Economic Evaluation of Telemedicine
Financially, telemedicine has had mixed success among different specialties. For example, in dermatology, telemedicine was found not to be cost-effective when compared to conventional care (Delgoshaei et al., 2017). However, integrating telemedicine into primary care has shown significant economic benefits, for a multitude of reasons including cost-saving opportunities concerning access, unnecessary in-office visits, and patient travel. First, a lack of access to primary care has become an enormous financial drain on healthcare systems. For example, emergency rooms have been overwhelmed by patients who could have otherwise been treated by a PCP. In fact, a 2006 California survey estimated that about 46% of emergency room visits could have been addressed by a family medicine practitioner (Bodenheimer and Pham, 2010). Seeing that emergency room visits tend to be more expensive than primary care appointments and that they overwhelm hospitals, a lack of access to primary care imposes a significant financial problem for healthcare systems. Improved primary care access through telemedicine has already been shown to reduce hospitalizations (Bodenheimer and Pham, 2010). Thus, if telemedicine is implemented widely in order to improve health care access, health care
systems stand to benefit from substantial savings. Second, telemedicine offers an enormous cost-saving benefit by reducing unnecessary office visits. In fact, about 75% of American healthcare expenditures are linked to the presence of chronic disease (Lustig and Nesbitt, 2012). A majority of these chronic diseases are managed by PCPs, and since many chronic disease visits could be transferred to a telemedicine appointment, the American healthcare system could benefit greatly from increased use of virtual care (Bashshur et al., 2014). Third, telemedicine would eliminate patient travel, which has been shown to provide significant cost savings for most patients (Marcin et al., 2016). In fact, in rural locations, almost 50% of in-office appointments require significant travel for patients (Jong et al., 2019). This travel not only results in transportation expenses but also lost time from work and family, all of which can impose a financial burden on patients. Telemedicine has the potential to drastically reduce patient travel and, in turn, provide financial savings to patients. In fact, a recent study of 47 cancer patients demonstrated that 27,000 miles of travel were saved due to telepharmacy (Lustig and Nesbitt, 2012). Evidently, using telemedicine in rural primary care stands to offer many economic benefits to both healthcare systems and patients.
Conclusion

The need for improvement in our primary health care system is dire, especially in rural areas where a lack of patient access is having a detrimental effect on health outcomes. With past measures failing to remedy this situation, it is time for an innovative solution: telehealth. Like any solution, telehealth has its limitations, but its potential to drastically improve healthcare outcomes in rural areas and to deliver economic benefits is far-reaching. Since the COVID-19 pandemic has sparked the widespread implementation of telemedicine, it will hopefully give health care systems and policy makers an unprecedented chance to evaluate this technology on a large scale. However, it is imperative that this trial run is not abandoned once the pandemic subsides. The implementation of telehealth in rural healthcare systems has the potential to improve millions of lives, and policy makers must take full advantage of this opportunity.
References

American College of Physicians. (2006, Jan 30). The impending collapse of primary care medicine and its implications for the state of the nation's health care. American College of Physicians.

Anderson, E. A., Bergeron, D., & Crouse, B. J. (1994). Recruitment of family physicians in rural practice. Minn Med, 8, 29–32.

Bashshur, R. L., Shannon, G. W., Smith, B. R., Alverson, D. C., Antoniotti, N., Barsan, W. G., Bashshur, N., Brown, E. M., Coye, M. J., Doarn, C. R., Ferguson, S., Grigsby, J., Krupinski, E. A., Kvedar, J. C., Linkous, J., Merrell, R. C., Nesbitt, T., Poropatich, R., Rheuban, K. S., Sanders, J. H., … Yellowlees, P. (2014). The empirical foundations of telemedicine interventions for chronic disease management. Telemedicine Journal and e-Health, 20(9), 769–800. https://doi.org/10.1089/tmj.2014.9981

Bodenheimer, T., & Pham, H. H. (2010). Primary care: Current problems and proposed solutions. Health Affairs, 29(5), 799–805. https://doi.org/10.1377/hlthaff.2010.0026

Choi, P. J., Oskouian, R. J., & Tubbs, R. S. (2018). Telesurgery: Past, present, and future. Cureus, 10(5), e2716. https://doi.org/10.7759/cureus.2716

Delgoshaei, B., Mobinizadeh, M., Mojdekar, R., Afzal, E., Arabloo, J., & Mohamadi, E. (2017). Telemedicine: A systematic review of economic evaluations. Medical Journal of the Islamic Republic of Iran, 31, 113. https://doi.org/10.14196/mjiri.31.113

Finley, C. R., Chan, D. S., Garrison, S., Korownyk, C., Kolber, M. R., Campbell, S., Eurich, D. T., Lindblad, A. J., Vandermeer, B., & Allan, G. M. (2018). What are the most common conditions in primary care? Systematic review. Canadian Family Physician, 64(11), 832–840.

Institute of Medicine: Committee on Evaluating Clinical Applications of Telemedicine. (1996). Telemedicine: A guide to assessing telecommunications in health care. Washington: National Academies Press.

Jong, M., Mendez, I., & Jong, R. (2019). Enhancing access to care in northern rural communities via telehealth. International Journal of Circumpolar Health, 78(2), 1554174. https://doi.org/10.1080/22423982.2018.1554174

Lustig, T. A., & Nesbitt, T. S. (2012). The role of telehealth in an evolving health care environment: Workshop summary. Washington: National Academies Press.

Marcin, J. P., Shaikh, U., & Steinhorn, R. H. (2016). Addressing health disparities in rural communities using telehealth. Pediatric Research, 79(1), 169–176. https://doi.org/10.1038/pr.2015.192

Portnoy, J., Waller, M., & Elliott, T. (2020). Telemedicine in the era of COVID-19. The Journal of Allergy and Clinical Immunology: In Practice, 8(5), 1489–1491. https://doi.org/10.1016/j.jaip.2020.03.008

Rieselbach, R. E., Crouse, B. J., & Frohna, J. G. (2010). Teaching primary care in community health centers: Addressing the workforce crisis for the underserved. Annals of Internal Medicine, 152, 118–122. https://doi.org/10.7326/0003-4819-152-2-201001190-00186
Evidence Suggesting the Possibility of Regression and Reversal of Liver Cirrhosis BY DANIEL ABATE '23 Cover Image: Comparison of a liver from a healthy individual with the liver of a person affected by liver cirrhosis. The irregular surface on the cirrhotic liver is the scar tissue that accumulates on the liver after extensive damage over a long period of time. Source: healthdirect
Introduction & Statistics

Liver cirrhosis, or simply cirrhosis, refers to damage that the liver accumulates over a long period of time. This damage may have multiple causes, including excessive consumption of alcohol (known as alcoholic liver cirrhosis), fatty liver disease, chronic hepatitis B, and chronic hepatitis C (NIH, 2014). As the liver sustains damage, it forms scar tissue to repair cells in a process known as liver fibrosis. Liver cirrhosis may be thought of as an advanced stage of liver fibrosis, as it involves the accumulation of scar tissue to such an extent that the normal functioning of the liver is impaired. The disease is asymptomatic during the early stages, and symptoms only begin to surface after the liver has sustained significant damage. Some of the common symptoms of liver cirrhosis are fatigue, lack of appetite accompanied by weight loss, nausea, itching, and easy bruising of the skin (Mayo, 2018).
To diagnose liver cirrhosis, a liver biopsy is performed, which involves the removal of a small amount of liver tissue using a needle for laboratory analysis. Liver cirrhosis is a major health concern with a high mortality rate. According to the World Health Organization (WHO), liver cirrhosis is the third leading cause of non-communicable deaths in Africa, with 174,420 people dying from the disease in 2016. More broadly, liver cirrhosis was the ninth leading cause of death in lower-middle-income countries (WHO, 2018), and in the United States, 41,743 deaths were attributed to the disease in 2017, as reported by the National Vital Statistics Report (Kochanek et al., 2019). The consensus among physicians and researchers is that the effects of liver cirrhosis are largely irreversible. In general, medical practitioners employ preventative measures
Figure 1: A surgeon performing a liver biopsy. A syringe and needle are used to extract a small piece of liver tissue, which is then sent to the laboratory for analysis. This is the main process by which liver fibrosis and cirrhosis are detected and diagnosed. Source: Flickr
such as vaccination against hepatitis A and B, as well as advising patients to minimize consumption of acetaminophen and alcohol. However, in cases involving advanced liver cirrhosis, a liver transplant is usually required for survival. Unsurprisingly, a liver transplant has many risks and can be problematic. As with any transplantation procedure, there is always the possibility of transplant rejection, in which the body's immune system attacks the transplanted liver in a self-destructive effort to protect itself from foreign tissue. According to the United Network for Organ Sharing, there are currently 12,420 people in the United States waiting for a liver transplant, with the median national waiting time being 149 days (Organ Procurement, 2020). Unfortunately, this long wait often contributes to the development of clinical depression in patients waiting for a liver transplant, and some even die before an organ becomes available (Mandal, 2019).
Liver Fibrosis Regression

Contrary to contemporary views on the permanence of liver cirrhosis, research in recent years seems to indicate that reversal may be at least partly possible. Most of the research has focused on the process of liver fibrosis, the repair of damaged liver tissue, which is a precursor to liver cirrhosis. In particular, the process of hepatic fibrogenesis involves the accumulation of myofibroblasts in the liver. The process occurs when hepatic stellate cells (HSCs) are activated from their inactive forms and then themselves initiate an immune response that ultimately results in the accumulation of collagen and
other extracellular matrix (ECM) materials in the liver (Jung & Yim, 2017). HSCs are pericytes found between the sinusoids of the liver and the hepatocytes; while the role of their inactive form is not clearly understood, their role in the production of collagen scar tissue has been well documented. ECM, on the other hand, refers to the complex grid of water, proteins, and proteoglycans that anchors and supports the cells in the liver, maintaining the organ's structure. In normal quantities, ECM plays a vital role in maintaining the liver's normal functioning by, among other things, maintaining hydrolysis and homeostasis. However, various liver infections not only cause increased and unregulated production of ECM but also change the structure and components of the ECM such that the normal functioning of hepatocytes is impaired (Arriazu et al., 2014). Since the activation of HSCs is integral to the formation of ECM, agents of this activation have become a target for researchers looking for medication or treatment that could lead to regression or even reversal of liver cirrhosis. Hepatic inflammation is usually the main process that results in the activation of HSCs, and it is common among various liver diseases like viral hepatitis and alcoholic hepatitis (Bataller & Brenner, 2005). Hepatic inflammation is caused by oxidative stress, an imbalance between free radicals and antioxidants in the liver with the former in excess. Oxidative stress has also been shown to play a vital role in fibrogenesis, and it is prompted by the release of reactive oxygen species (ROS) by Kupffer cells (Nieto, 2006). Additionally, cytokines like TGF-β1, as well
Figure 2: The above diagram illustrates the process by which quiescent Hepatic Stellate Cells (HSCs) are activated. Activated HSCs play a major role in the accumulation of extracellular matrix (ECM) in the liver, which ultimately leads to liver fibrosis and cirrhosis. Source: ResearchGate
as some growth factors such as platelet-derived growth factor, have been shown to play a role in the activation of HSCs (Hellerbrand et al., 1999).
The previous consensus that liver fibrosis is irreversible has been challenged by a handful of studies. First, experimental models have indicated that the apoptosis (cell death) and clearance of HSCs leads to fibrosis regression (Kisseleva & Brenner, 2007). This mainly occurs in early stages of fibrosis; if ECM has not been deposited in the liver to any significant degree, the liver may revert almost completely to its previous structure after the cause of the liver damage has been removed. Additionally, enzymes known as matrix metalloproteinases (MMPs) may play a role in fibrosis reversal. MMPs are responsible for the degradation of ECM in the liver, while their tissue inhibitor of metalloproteinases (TIMP-1) may promote liver fibrosis (Guimarães et al., 2010). Studies of liver fibrosis in rats have indicated that when the agent causing injury was removed, internal TIMP-1 levels decreased and ECM was degraded (Iredale et al., 1998). Restoration of macrophages in the liver has also been shown to mediate degradation of ECM, though the mechanism used to achieve this is not clearly understood (Duffield et al., 2005). Regression of liver cirrhosis also involves the metabolic processes that regulate liver fibrosis. It should be noted that in order for activated HSCs to maintain production of ECM, a continuous supply of intracellular energy is required;
inhibition of metabolic pathways needed to provide activated HSCs with energy may ultimately lead to regression of fibrosis. In one study, researchers found that a sublethal dose of the energy blocker 3-bromopyruvate (3BrPA) transformed activated LX-2 cells (a human HSC line) into a less-active form, thus blocking the progression of liver fibrosis. Regression of the fibrosis was detected using biomarkers such as increased levels of MMPs and decreased collagen mRNA (Karthikeyan et al., 2016). Some clinical evidence also supports the regression of liver fibrosis when it is caused or facilitated by certain diseases and infections. For example, the standard treatment for chronic hepatitis B involves a protein known as interferon type I (IFN-α), which has been shown to exhibit antifibrotic activity by inhibiting the action of the cytokine transforming growth factor beta (TGF-β), decreasing the activation of HSCs and encouraging their death (Chang et al., 2005). The same is true for hepatitis C, where patients who received IFN-associated treatment demonstrated a reduced risk of developing cirrhosis, among other liver complications (Everson, 2005). As for liver disease associated with alcoholism, there is little evidence suggesting possible regression. For example, a colchicine treatment of alcoholic liver fibrosis showed no statistically significant improvements in liver health (Rambaldi et al., 2005). However, abstinence in those who were diagnosed with alcoholic liver cirrhosis did improve the prospect of long-term
Figure 3: Illustration of the Hepatitis C virus attacking a human liver. The virus is spread through blood-to-blood contact, and while treatments such as antiviral medications exist, Hepatitis C currently has no vaccine. Source: Wikimedia Commons
survival, suggesting that addiction treatment may be an avenue for reducing alcoholic liver cirrhosis mortality rates. Most pharmacological substances that could be used as antifibrotic agents have targeted TGF-β1. However, a difficulty of this approach is that systemic inhibition of TGF-β1 has been shown to increase inflammation in the liver, which is closely associated with activation of HSCs (Samarakoon et al., 2013). Researchers have sidestepped this issue by shifting their focus to specific steps in the activation of TGF-β1. Alternative targets that indirectly influence TGF-β1 activation include the integrin αvβ6 and connective tissue growth factor (CTGF), against which anti-αvβ6 and anti-CTGF monoclonal antibodies have respectively demonstrated efficacy (Patsenker et al., 2008; Wang et al., 2011). The inhibition of a cannabinoid receptor known as CB1 using a CB1 antagonist has been linked to apoptosis of HSCs (Giannone et al., 2012). Additionally, NOX inhibitors have been shown to reduce oxidative stress by reducing the production of ROS by Kupffer cells (Jiang et al., 2012). Melatonin is another chemical that has been studied for its potential antifibrotic effects. It is a hormone secreted by the pineal gland in the brain, and it is mostly known for its role in regulating the sleep-wake cycle. However, scientists are beginning to look into the hormone's latent role in reducing excessive fibrosis due to its antioxidant and anti-inflammatory properties. In mice that were
given ethanol to induce acute liver disease, melatonin was found to reduce oxidative stress by downregulating MMPs and upregulating the tissue inhibitor TIMP-1 (Mishra et al., 2011). Moreover, in a study involving rats with liver fibrosis induced by carbon tetrachloride, melatonin was found to decrease levels of substances such as transforming growth factor β1 and α-smooth muscle actin, which were initially increased through exposure to carbon tetrachloride (Choi et al., 2015). Researchers have also found that melatonin prevented liver fibrosis by inhibiting an inflammatory signaling pathway associated with necroptosis, the programmed death of inflammatory cells (Zhang et al., 2017). Indeed, the relationship between liver cirrhosis and melatonin is further cemented by the fact that imbalances in melatonin levels have been observed in patients with liver cirrhosis. Furthermore, melatonin was found to ameliorate the condition of rat livers affected by thioacetamide-induced liver
Figure 4: A ball-and-stick model of the molecule melatonin. Melatonin is a hormone that is produced in the pineal gland and is mainly involved in the regulation of the sleep cycle. Moreover, research suggests that it may have a role to play in liver fibrosis regression. Source: Wikimedia Commons
cirrhosis by mitigating destructive changes caused by oxidative stress (Ellis et al., 2012).
Limitations

While contemporary research into regression of fibrosis and cirrhosis is quite promising, these studies are not without limitations. For example, most of the research has focused on reversing the effects of liver fibrosis rather than liver cirrhosis, and while the two are linked, they are not the same. There is not much information in the literature about the reversal of liver cirrhosis, but this may be attributed to the lack of a clear boundary between liver fibrosis and cirrhosis. Another major barrier to the study of liver fibrosis regression is the methodology used to assess the level of regression. Recall that the main way doctors diagnose liver cirrhosis (and fibrosis) is by performing a liver biopsy. However, the liver is a large organ (the largest internal organ in the body), and the inconsistency of biopsy samples must be considered; due to the liver's size, a tiny piece of tissue extracted for a biopsy may not be representative of the state of the liver as a whole (Celli & Zhang, 2015). Results from clinical studies that report a statistically significant decrease in liver fibrosis must therefore be evaluated carefully. It would immensely benefit the research community to develop an alternative strategy for assessing liver fibrosis development and regression that is more representative of the entire liver. Additionally, increasing the sample size of research studies would help ensure that results are more meaningful, since many of the studies on regression have involved only a handful of patients.

Conclusion

Despite the commendable progress that has been made in the study of liver fibrosis and cirrhosis, scientists and researchers still have a long way to go before they develop effective therapies that mitigate or even reverse the damage caused to a liver by cirrhosis. While some substances hold promising implications, their effects are still highly localized to specific cells of the liver, and their efficacy has largely been demonstrated only in rodent models. That being said, a growing body of research gives hope for a future where liver cirrhosis is manageable without a transplant.

References

National Institutes of Health. (2014, April 23). What causes cirrhosis? National Institute of Diabetes and Digestive and Kidney Diseases. https://web.archive.org/web/20150609090212/http://www.niddk.nih.gov/health-information/health-topics/liver-disease/cirrhosis/Pages/facts.aspx

Mayo Clinic. (2018, December 7). Symptoms of cirrhosis. Mayo Foundation for Medical Education and Research. https://www.mayoclinic.org/diseases-conditions/cirrhosis/symptoms-causes/syc-20351487

World Health Organization. (2019, March 26). Disease burden and mortality estimates. https://www.who.int/healthinfo/global_burden_disease/estimates/en/

World Health Organization. (2018, May 24). The top 10 causes of death. https://www.who.int/news-room/fact-sheets/detail/the-top-10-causes-of-death

Kochanek, K. D., Murphy, S. L., Xu, J. Q., & Arias, E. (2019). Deaths: Final data for 2017. National Vital Statistics Reports, 68(9). Hyattsville, MD: National Center for Health Statistics. https://www.cdc.gov/nchs/data/nvsr/nvsr68/nvsr68_09-508.pdf

Organ Procurement and Transplantation Network. (2020, August 16). Number of patients on the waitlist by organ. US Department of Health and Human Services, Health Resources and Services Administration. https://optn.transplant.hrsa.gov/data/view-data-reports/national-data/#

Mandal, A. (2019, April 22). Waiting for a liver transplant. News Medical Life Sciences. https://www.news-medical.net/health/Waiting-for-a-liver-transplant.aspx

Jung, Y. K., & Yim, H. J. (2017). Reversal of liver cirrhosis: Current evidence and expectations. The Korean Journal of Internal Medicine, 32(2), 213–228. https://doi.org/10.3904/kjim.2016.268

Arriazu, E., Ruiz de Galarreta, M., Cubero, F. J., Varela-Rey, M., Pérez de Obanos, M. P., Leung, T. M., Lopategi, A., Benedicto, A., Abraham-Enachescu, I., & Nieto, N. (2014). Extracellular matrix and liver disease. Antioxidants & Redox Signaling, 21(7), 1078–1097. https://doi.org/10.1089/ars.2013.5697

Bataller, R., & Brenner, D. A. (2005). Liver fibrosis. Journal of Clinical Investigation, 115(2), 209–218. https://doi.org/10.1172/JCI24282

Nieto, N. (2006). Oxidative-stress and IL-6 mediate the fibrogenic effects of Kupffer cells on stellate cells. Hepatology, 44(6), 1487–1501. https://doi.org/10.1002/hep.21427

Hellerbrand, C., Stefanovic, B., Giordano, F., Burchardt, E. R., & Brenner, D. A. (1999). The role of TGFbeta1 in initiating hepatic stellate cell activation in vivo. Journal of Hepatology, 30(1), 77–87. https://doi.org/10.1016/s0168-8278(99)80010-5

Kisseleva, T., & Brenner, D. A. (2007). Role of hepatic stellate cells in fibrogenesis and the reversal of fibrosis. Journal of Gastroenterology and Hepatology. https://doi.org/10.1111/j.1440-1746.2006.04658.x

Guimarães, E. L., Empsen, C., Geerts, A., & van Grunsven, L. A. (2010). Advanced glycation end products induce production of reactive oxygen species via the activation of NADPH oxidase in murine hepatic stellate cells. Journal of Hepatology, 52(3), 389–397. https://doi.org/10.1016/j.jhep.2009.12.007

Iredale, J. P., Benyon, R. C., Pickering, J., et al. (1998). Mechanisms of spontaneous resolution of rat liver fibrosis: Hepatic stellate cell apoptosis and reduced hepatic expression of metalloproteinase inhibitors. Journal of Clinical Investigation, 102(3), 538–549. https://doi.org/10.1172/JCI1018

Duffield, J. S., Forbes, S. J., Constandinou, C. M., et al. (2005). Selective depletion of macrophages reveals distinct, opposing roles during liver injury and repair. Journal of Clinical Investigation, 115(1), 56–65. https://doi.org/10.1172/JCI22675

Karthikeyan, S., Potter, J. J., Geschwind, J. F., Sur, S., Hamilton, J. P., Vogelstein, B., Kinzler, K. W., Mezey, E., & Ganapathy-Kanniappan, S. (2016). Deregulation of energy metabolism promotes antifibrotic effects in human hepatic stellate cells and prevents liver fibrosis in a mouse model. Biochemical and Biophysical Research Communications, 469(3), 463–469. https://doi.org/10.1016/j.bbrc.2015.10.101

Chang, X. M., Chang, Y., & Jia, A. (2005). Effects of interferon-alpha on expression of hepatic stellate cell and transforming growth factor-beta1 and alpha-smooth muscle actin in rats with hepatic fibrosis. World Journal of Gastroenterology, 11(17), 2634–2636. https://doi.org/10.3748/wjg.v11.i17.2634

Everson, G. T. (2005). Management of cirrhosis due to chronic hepatitis C. Journal of Hepatology, 42(Suppl 1), S65–S74. https://doi.org/10.1016/j.jhep.2005.01.009

Rambaldi, A., & Gluud, C. (2005). Colchicine for alcoholic and non-alcoholic liver fibrosis and cirrhosis. Cochrane Database of Systematic Reviews, (2), CD002148. https://doi.org/10.1002/14651858.CD002148.pub2

Samarakoon, R., Overstreet, J. M., & Higgins, P. J. (2013). TGF-β signaling in tissue fibrosis: Redox controls, target genes and therapeutic opportunities. Cellular Signalling, 25(1), 264–268. https://doi.org/10.1016/j.cellsig.2012.10.003

Patsenker, E., Popov, Y., Stickel, F., Jonczyk, A., Goodman, S. L., & Schuppan, D. (2008). Inhibition of integrin alphavbeta6 on cholangiocytes blocks transforming growth factor-beta activation and retards biliary fibrosis progression. Gastroenterology, 135(2), 660–670. https://doi.org/10.1053/j.gastro.2008.04.009

Wang, Q., Usinger, W., Nichols, B., et al. (2011). Cooperative interaction of CTGF and TGF-β in animal models of fibrotic disease. Fibrogenesis & Tissue Repair, 4(1), 4. https://doi.org/10.1186/1755-1536-4-4

Giannone, F. A., Baldassarre, M., Domenicali, M., et al. (2012). Reversal of liver fibrosis by the antagonism of endocannabinoid CB1 receptor in a rat model of CCl4-induced advanced cirrhosis. Laboratory Investigation, 92(3), 384–395. https://doi.org/10.1038/labinvest.2011.191

Jiang, J. X., Chen, X., Serizawa, N., et al. (2012). Liver fibrosis and hepatocyte apoptosis are attenuated by GKT137831, a novel NOX4/NOX1 inhibitor in vivo. Free Radical Biology and Medicine, 53(2), 289–296. https://doi.org/10.1016/j.freeradbiomed.2012.05.007

Mishra, A., Paul, S., & Swarnakar, S. (2011). Downregulation of matrix metalloproteinase-9 by melatonin during prevention of alcohol-induced liver injury in mice. Biochimie. https://www.sciencedirect.com/science/article/pii/S0300908411000630

Choi, H.-S., Kang, J.-W., & Lee, S.-M. (2015). Melatonin attenuates carbon tetrachloride-induced liver fibrosis via inhibition of necroptosis. Translational Research. https://www.sciencedirect.com/science/article/pii/S1931524415001097

Zhang, J. J., Meng, X., Li, Y., Zhou, Y., Xu, D. P., Li, S., & Li, H. B. (2017). Effects of melatonin on liver injuries and diseases. International Journal of Molecular Sciences, 18(4), 673. https://doi.org/10.3390/ijms18040673

Ellis, E. L., & Mann, D. A. (2012). Clinical evidence for the regression of liver fibrosis. Journal of Hepatology. https://www.sciencedirect.com/science/article/pii/S016882781200044X

Celli, R., & Zhang, X. (2015). Pathology of alcoholic liver disease. Journal of Clinical and Translational Hepatology, 2, 103–109. https://doi.org/10.14218/JCTH.2014.00010

ResearchGate has made the Figure 2 image available for use via license (i.e. it may be copied and reproduced in any medium). Commercialization of the image, however, is prohibited. For more information, please visit https://www.researchgate.net/figure/Hepatic-Stellate-Cell-HSC-activation-Both-alcohol-and-inflammation-damage-the-liver_fig2_281745019
CR-grOw: The Rise and Future of Contract Research Organizations BY DEV KAPADIA '23 Cover Image: Because of the increasing resistance of bacteria to antibiotics, the pharmaceutical industry has seen steep increases in research and development costs every year. One of the solutions to combat these increasing prices is to outsource some of the operations of the industry, including the clinical trials process. This outsourcing trend is how the Contract Research Organization industry gained a foothold in the research and development process, and it has since expanded to be just as much a part of the process as the pharmaceutical and medical device companies. Source: Wikimedia Commons
Introduction

One of the biggest problems in the pharmaceutical industry today is the exorbitant cost of researching, developing, and testing a new drug. A recent study at the Tufts Center for the Study of Drug Development reported that, as of 2016, the estimated cost to drug makers to produce a drug that receives market approval is $2.6 billion, of which $1.4 billion are estimated cash costs (DiMasi et al., 2016). This number is even more concerning within the context of Eroom's Law. In 1965, Gordon Moore, then at Fairchild Semiconductor and later a co-founder of Intel, coined Moore's Law, which states that the number of transistors that can fit on a microchip doubles every two years, doubling computing power along with it. Conversely, Eroom's Law, with "Eroom" being "Moore" spelled backwards, was first observed in the 1980s, when it was documented that the cost of research and development (R&D) for pharmaceutical drugs doubles every nine years. While the doubling time has since extended slightly, the
growth continues to threaten pharmaceutical companies, and the increasingly high-cost phenomenon is now seen across the medical product segment, not just pharmaceuticals (Ringel et al., 2020). There are several methods, such as machine learning and drug re-design, that the pharmaceutical, biotechnology, and academic research industries use to expedite timelines and lower the high costs they incur as a result of R&D. However, one of the most popular solutions, used across nearly every laboratory research industry, is the Contract Research Organization (CRO). CROs provide outsourced research services to the pharmaceutical, biotechnology, and academic research industries, as well as to virtually any other industry that requires the development and testing of life science research. These services can come in several forms, including biologic assay development, clinical trials management, toxicology reports, research models, and much
more. Outsourcing these services has become common practice in the medical product pipeline and shows no signs of slowing its expansion throughout the development process.
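To make the cost trajectory concrete, here is a minimal Python sketch of the exponential growth that Eroom's Law describes. The $2.6 billion baseline and nine-year doubling period come from the figures cited above; treating them as a clean starting point and projecting fixed horizons is an illustrative assumption, not a forecast.

```python
# A minimal sketch of the exponential growth Eroom's Law describes:
# R&D cost per approved drug doubles every nine years. The baseline and
# doubling period come from the text; the horizons are illustrative.

def projected_cost(baseline_usd_b: float, years_elapsed: float,
                   doubling_years: float = 9.0) -> float:
    """Cost after years_elapsed years under a fixed doubling period."""
    return baseline_usd_b * 2 ** (years_elapsed / doubling_years)

baseline = 2.6  # $B per approved drug (DiMasi et al., 2016)
for years in (9, 18, 27):
    print(f"+{years} years: ${projected_cost(baseline, years):.1f}B per approval")
```

Under an unchanged nine-year doubling, the cost per approval would quadruple within two decades, which is the pressure driving the outsourcing trend discussed next.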
The History of CROs

CROs sprouted in the wake of the Cold War. Increased regulation and competition in pharmaceutical research brought sudden increases in R&D timelines and subsequent increases in cost; these factors caused the R&D costs for pharmaceutical companies to double every five years (Dimachkie Masri et al., 2012). The CRO industry was popularized as an easy way to cut down on the costs and headaches caused by supply chain and clinical trial management in the drug discovery process (Mirowski & Van Horn, 2005). Eventually, the benefits of the industry were recognized far beyond clinical trials testing in the pharmaceutical industry alone, and these organizations spread like wildfire. It is estimated that CROs make the clinical trials process 30% faster and save research teams more than $150 million per development process (A. Miller, 2019). Now, not only do CROs provide cost-effective services, but, for many organizations like PPD, Charles River Laboratories, and Covance, their industry experience allows them to claim more effective services than would be performed in-house. In 2010, it was estimated that CROs were involved
in about 64% of studies. It is now estimated that this number has risen to 80% and shows no signs of stopping (A. Miller, 2019). Even better for the industry, despite the increased competition of recent years, revenues and profits continue to rise due to a sharp increase in demand and the incorporation of strategic partners that consolidate the supply available to these CROs. Profit margins (net profit as a percentage of revenue) have increased from 17.2% in 2014 to 24.1% in 2019, meaning that CROs are charging more for their services relative to their costs than they have in the past (A. Miller, 2019). Clearly, this industry is doing very well and growing. However, because CROs are expected to be a big part of the research industry in the upcoming years, it is important to assess the factors affecting the industry's business as well as the services and innovation happening in-house at many of these companies.
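The margin arithmetic is simple, and the hedged sketch below spells it out: profit margin is net profit divided by revenue. The revenue figure here is a hypothetical $100 so the output reads directly in dollars kept; only the two margins come from the cited industry report.

```python
# Margin arithmetic behind the figures above: margin = net profit / revenue.
# Revenue is a hypothetical $100; only the margins (17.2% in 2014, 24.1%
# in 2019) come from the cited report (A. Miller, 2019).

def net_profit(revenue: float, margin: float) -> float:
    """Net profit implied by a revenue figure and a profit margin."""
    return revenue * margin

for year, margin in ((2014, 0.172), (2019, 0.241)):
    print(f"{year}: ${net_profit(100.0, margin):.2f} kept per $100 of revenue")
```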
The Development Process

In order to truly understand the dynamics of the CRO industry, it is extremely important to learn about their specialty: the medical product development process. While drug and medical device development is an increasingly costly and time-consuming endeavor that is always changing due to regulation, the overall structure of the
Figure 1: The traditional medical product timeline begins with the exploratory and preclinical phases, then moves into the clinical trials phase, and finally the manufacturing phase. While CROs started with outsourcing services to manage the clinical trials process, many have since expanded to include software and animal models for the exploratory and preclinical testing phases, as well as operational optimization services and more for the manufacturing of medical products. Although COVID-19 has caused an expedited and overlapping timeline, as shown below the traditional one, CROs are still needed, if not needed more, for their expertise, experience, and ability to accelerate the timeline of clinical trials. In the future, CROs might expand their offerings even further or innovate on the offerings already discussed to further integrate across the medical product industry. Source: Flickr
process is constant. The average time for a drug or device to be developed and gain FDA approval is about twelve years, not including the years of prior research needed to finalize the idea and secure internal approval to pursue development (Van Norman, 2016). The development process consists of four stages: discovery, preclinical research, clinical trials, and FDA review. The discovery stage consists of actually developing the product; preclinical research consists of researchers conducting internal tests to estimate the efficacy and safety of their product. Once the researchers see signs of adequate safety and efficacy within a laboratory environment, the product is tested on real individuals in the clinical trials stage. Lastly, once the product has been tested on a sufficient number of individuals to prove its efficacy and safety, it is submitted for FDA review. Given the multiple steps and countless hours of research and analysis in each stage, the entire development process can often take over a decade to complete. Moreover, the vast majority of drugs never make it to FDA approval. For example, say that 10,000 drugs enter the discovery stage; on average, only about 250 of those will be cleared to enter preclinical research, in which researchers conduct pharmacology and toxicology reports. Of those 250 drugs, only a portion enter clinical trials, with about five reaching the final clinical trials stage. Eventually, only one, on average, will gain FDA approval (Dimachkie Masri et al., 2012).
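The attrition these averages imply is easiest to see computed stage by stage. A minimal sketch, using only the counts quoted above (the phase-III figure of roughly five drugs is approximate):

```python
# Stage-to-stage attrition in the development funnel, using the average
# counts quoted above (Dimachkie Masri et al., 2012). The ~5 drugs at the
# final clinical stage is an approximate figure from the text.

funnel = [
    ("Discovery", 10_000),
    ("Preclinical", 250),
    ("Final clinical stage", 5),
    ("FDA approval", 1),
]

for (stage, n), (next_stage, next_n) in zip(funnel, funnel[1:]):
    print(f"{stage} -> {next_stage}: {next_n / n:.2%} advance")

overall = funnel[-1][1] / funnel[0][1]
print(f"Overall: {overall:.3%} of discovery-stage drugs reach approval")
```

Only about 0.01% of discovery-stage candidates survive to approval, which is why every percentage point of efficiency a CRO adds matters so much.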
While CROs now operate across every stage of the development process, their specialty has always been facilitating the clinical trials phases. The clinical trials process itself can be broken up into three different phases. Phase I clinical trials are intended to ensure the safety of the product to permit further testing and usually take around one year to complete. Phase I trials use only about 15 to 80 patients, a smaller group relative to other phases, as researchers don't want to expose too many participants to the product if it causes adverse effects. For this reason, participants in every trial phase are selected extremely carefully: if there are any potential signs of characteristics or health conditions that might cause adverse effects from use of the medical product, volunteers are rejected from participation in the trial. Next, phase II trials can use anywhere from 100 to 500 subjects, with the goal of assessing efficacy, necessary concentration, biological interactions, and more. Phase II trials usually last around two
years because researchers must determine how effective the drug is at different doses, what parts of the body are affected and in what way, and much more, in order to build a clearer picture of the product's efficacy. Lastly, the product enters phase III trials, a comprehensive review of both the safety and efficacy of the drug intended to build a strong case for FDA approval. Because this is the last step before regulatory approval, groups can consist of anywhere from 1,000 to 5,000 subjects, and trials can take one to four years depending on the complexity of the product and the conclusiveness of the initial results (Mohs & Greig, 2017).
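For quick reference, the phase parameters just described can be collected into a single structure. All ranges below are those given in the text (Mohs & Greig, 2017); the representation itself is just one illustrative layout.

```python
# The clinical trial phase parameters quoted above, gathered into one
# structure. All ranges come from the text (Mohs & Greig, 2017).

from dataclasses import dataclass

@dataclass
class TrialPhase:
    name: str
    subjects: tuple        # (min, max) enrollment
    duration_years: tuple  # (min, max) typical length
    goal: str

PHASES = [
    TrialPhase("Phase I", (15, 80), (1, 1), "safety"),
    TrialPhase("Phase II", (100, 500), (2, 2),
               "efficacy, dosing, biological interactions"),
    TrialPhase("Phase III", (1_000, 5_000), (1, 4),
               "comprehensive safety and efficacy for FDA approval"),
]

for p in PHASES:
    lo, hi = p.subjects
    y0, y1 = p.duration_years
    yrs = f"{y0}" if y0 == y1 else f"{y0}-{y1}"
    print(f"{p.name}: {lo}-{hi} subjects, ~{yrs} yr, goal: {p.goal}")
```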
Offerings of CROs

The development process is similar across drugs, devices, and other medical products that require FDA approval. Although the process seems straightforward at this high-level description, there are actually many complexities in each of the steps before and during the process, and CROs can assist with a variety of them. At first, CROs operated simply as a means of outsourcing clinical trials management, but, as they saw more demand for outsourcing across the process, they continued to expand their offerings (Getz et al., 2014). Many international CROs include the following offerings: exploratory research, animal and cellular models, clinical trial management, and manufacturing processes. The first offering is exploratory research. Before researchers can even think of testing a drug or medical device, they must first conceive and produce it in the discovery stage. This takes an immense amount of research and planning, and this is where CROs can come in to assist, helping research teams validate the intended method of bioanalysis of the product. CROs can also help with determining the intended stability of the product and ensuring that proper Investigational New Drug filing studies are planned and filed with the FDA. Because these processes deal with the development, or exploration, of the product, this is the "exploration" process (A. Miller, 2019). The next major offering that CROs can provide is animal and cellular models in the preclinical phase. Especially for drug development, researchers must test their products on animals and human cells before they can test them on human subjects; because this process can be cumbersome and infeasible for many laboratories, CROs took the opportunity to offer
Figure 2: With the rise of the pandemic, telehealth is an industry that has garnered a great deal of attention throughout 2020. The technology allows physicians to consult, monitor, and potentially diagnose their patients remotely. Because of the widespread attention, telehealth has now been proposed to supplement many industries, including the CRO industry. By allowing CROs to monitor clinical trials participants remotely, CROs can conduct trials even more efficiently and comprehensively than before. Source: Wikimedia Commons
to outsource the production of these animal models. Using gene editing techniques, CROs can alter the genomes of rabbits, rats, mice, human stem cells, and other models. The end goal of this editing is to simulate human phenotypes and the conditions of the diseases studied. If the genes are not accurately altered to model the real conditions of a disease, researchers cannot use the models to predict the efficacy of proposed medical products (Huang, 2019). The service that has been the "bread-and-butter" of CROs is clinical trials management. There are many complexities in the clinical trials process that make it extremely frustrating for researchers at pharmaceutical companies, academic institutions, and biotechnology companies to conduct trials themselves. Challenges include finding patients who meet the many eligibility requirements for the study, staying in contact with each patient throughout the duration of the trial, navigating the various regulations involved in running clinical trials, and paying the high costs of running these trials. Because CROs specialize in the management of clinical trials, they can run them more efficiently at lower costs. Through patient networks, they can match patients with clinical trials much more quickly, and they have even begun to develop methods of remote monitoring to better maintain connection with patients for the duration of the trial. Further, CROs are also starting to provide data analysis services that give researchers instant, actionable insight on the results of
their trials (A. Miller, 2019). Lastly, many CROs provide services to assist in the manufacturing process of product development. These services can entail an evaluation and optimization of a firm's entire drug or device discovery protocol, the design of machines to produce these products, consulting services to identify best practices for the firm to follow, and much more. Manufacturing services can also be applied to individual projects when a firm faces a particular challenge that could use the expertise of those who specialize in the discovery process. Whatever the method, the end goal is to ensure that the most time- and cost-effective path is taken from FDA approval to introduction of the product to the market (Huang, 2019).
The Future of CROs

Much of the work that CROs do seems routine – as if it has not changed in past decades and isn't expected to in the near future. However, this would be an oversimplification of the large amount of innovation that CROs are pursuing both in-house and through partnerships and acquisitions (Huang, 2019). One of the largest sources of innovation in the CRO industry is in animal models. These animal models often require mutations in order to fit the needs of the research team, meaning that CROs need to change the gene expression of many of the models they send to researchers. This requirement makes CROs one of the primary testers of innovation in the gene editing
landscape, particularly in CRISPR technology (Labant, 2020). For instance, Charles River Laboratories, one of the largest global CROs, announced in 2016 the introduction of in vivo and in vitro genome editing through a licensing partnership with the Broad Institute of MIT and Harvard. These editing techniques allow the firm to use the power of gene knock-outs and knock-ins, processes that respectively remove and add genome sequences in the target cell, to alter the phenotypic expression of the models much more effectively than before (Charles River Laboratories, 2016).
Along with innovation in current service offerings, CROs are also looking to expand their reach beyond services they have offered in the past. Beyond clinical trial recruitment, company organizational optimization, and the other services traditionally demanded by researchers, the industry is looking ahead to what it expects will be needed. For instance, many CROs have now expanded their offerings to include data analytics services and the integration of EHR data into clinical trial management (Landhuis, 2018). The opportunity for at-home clinical trials has been present for years, even before the inception of telehealth platforms, but CROs have long been resistant to the change. The reasons included the fact that there was never a commonly accepted way to facilitate these types of trials, and the use of telehealth came with regulatory hurdles that would have taken large cash outlays and time to overcome. Now, a combination of the desire to ensure the safety of participants, improve efficiency, and increase patient monitoring capability could bring the entrance of telehealth far more quickly than expected (Lahiri, 2013). Telehealth is the technology that enables medical healthcare professionals to connect virtually, via video or other virtual engagement platforms, with their patients. By allowing researchers this remote communication capability, clinical trials can be conducted on a larger scale and much more efficiently. Just before the pandemic, IQVIA, one of the largest CROs in the world, introduced its Avacare Clinical Research Network, which connects members of clinical trials to research teams while also providing a patient engagement platform that allows for remote patient monitoring. Integrated with artificial intelligence, the network automatically matches patients with clinical trial leaders while also allowing for remote monitoring of progress, greatly improving the efficiency of the
entire clinical trials process, from recruitment to analysis. Now, with the effects of the pandemic, IQVIA expects to grow this technology in-house and through acquisitions in the telehealth industry, in hopes of creating an entire home-trial environment that allows recruitment, instruction, monitoring, and reporting all to be done from the comfort of a participant's home (Adams, 2020). Given the concerns regarding safety and ethical conduct in the industry, particularly in the way that clinical trials are planned and managed, many of these problems could be addressed through greater transparency in the CRO process. Because of the important position that CROs will occupy in the future, along with increased government attention to the industry due to a refocusing of the political landscape on health, firms are projected to be far more open about their gene editing techniques and clinical trials protocols, along with a myriad of other steps in the process, through public reports to the FDA and other health organizations (Roberts et al., 2016).
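At its core, patient-trial matching of the kind described above reduces to eligibility filtering. The sketch below is a hypothetical, deliberately simplified illustration of that idea; every field name and criterion here is invented for the example, and nothing in it reflects IQVIA's actual, proprietary system.

```python
# Hypothetical sketch of rule-based eligibility filtering for patient-
# trial matching. All fields and criteria are invented for illustration;
# this does not describe any real CRO's matching system.

from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    diagnosis: str
    conditions: set  # comorbidities that may exclude participation

@dataclass
class Trial:
    name: str
    diagnosis: str
    min_age: int
    max_age: int
    exclusions: set

def eligible(p: Patient, t: Trial) -> bool:
    """A patient matches when diagnosis, age, and exclusions all pass."""
    return (p.diagnosis == t.diagnosis
            and t.min_age <= p.age <= t.max_age
            and not (p.conditions & t.exclusions))

patient = Patient(age=62, diagnosis="NSCLC", conditions={"hypertension"})
trial = Trial("TRIAL-001", "NSCLC", 18, 75, exclusions={"hepatitis B"})
print(eligible(patient, trial))  # True: age in range, no excluded condition
```

Production systems layer networks of such criteria, plus ranking and remote-monitoring hooks, on top of this basic filter.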
Industry Outlook

There are several key growth drivers of the CRO industry: growth in research and development expenditure; demand for outsourcing solutions; demand for drugs, medical devices, and other biotechnology; and the growing number of elderly individuals. Historically, the industry has done well and has enjoyed robust, stable growth, and the outlook continues to be favorable for a number of reasons (A. Miller, 2019). First, the CRO industry is expected to benefit greatly from the ever-increasing discovery pipeline costs of products in the medical industry. These rising costs not only benefit the industry by increasing research and development expenditure, thereby increasing the commissions that CROs receive for their services, but they also put pricing pressure on research teams. This pressure encourages teams to find ways of increasing efficiency while decreasing costs, such as outsourcing certain activities to CROs (Foster & Malik, 2012). Therefore, the projected increase in costs, driven by the increasing complexity of diseases and the antimicrobial resistance of bacteria, will factor into the success of the CRO industry (A. Miller, 2019). Second, the number of elderly individuals
in the world is expected to increase in the coming decades. Because baby boomers are living to ages beyond the past median age of the population, thanks to more advanced healthcare services and increased standards of living worldwide, the number of elderly individuals is expected to rise. In fact, by 2060, the median age of the United States is expected to be 43, a five-year increase from the current median age of 38 (Bureau, 2018). This shift in the median age is not specific to America; it is clearly reflected in global demographics as well. Because the number of health problems an individual sustains increases with age, there will likely be a surge in worldwide demand for drugs, medical devices, and other biotechnology to treat these health problems. This will in turn increase the number of products entering the discovery and clinical trial process, benefitting CROs (J. Miller, 2012).
Further, while the coronavirus pandemic has put many industries in jeopardy, the CRO industry is one that has suffered only temporarily and is expected to benefit from long-term trends. Because of the closing of many businesses, including pharmaceutical companies and laboratories, CROs experienced a brief hiccup in revenue that is expected to hurt their total 2020 revenue and lead to a decrease in total growth from 2019 (A. Miller, 2019). However, the expected increase in attention to health and treatment going forward, relating not only to the coronavirus but to all aspects of health in general, bodes very well for the CRO industry in the long run. This shift is expected to increase not only the demand for medical products but also the demand for grants and other funding for research, which will increase research and development expenditures. Therefore, while the coronavirus pandemic spelled trouble for many industries, it likely signals opportunity for CROs (Margherita & Valeria, 2017).

Conclusion

As seen, the world of CROs is not only complex but extremely dynamic. As the price of research and development of medical products rises, the demand for outsourcing solutions will only go up. Coupled with the increased attention that the health sector as a whole will receive in the future, it will be shocking if companies like PPD, Charles River Laboratories, PRA Health Sciences, and others don't take a more dominant position in the healthcare sector (Margherita & Valeria, 2017). However, with the saturation of competitors in the market, it will be up to these firms to continue to innovate their current offerings and expand in order to avoid irrelevancy. This comes in the form of expanding into the digital sector with data analytics and telehealth, as well as further research on genome editing and embryonic stem cells. Regardless, it seems as though CROs have a winning strategy that will last the industry for years to come.

References

Adams, B. (2020, February 27). IQVIA launches new research network to better match patients to trials. FierceBiotech. https://www.fiercebiotech.com/cro/iqvia-launches-new-research-network-to-better-match-patients-to-trials

Bureau, U. C. (2018, March 13). Older people projected to outnumber children. The United States Census Bureau. https://www.census.gov/newsroom/press-releases/2018/cb18-41-population-projections.html

Charles River Laboratories. (2016, December 1). Charles River Laboratories demonstrates expertise in CRISPR/Cas9 genome engineering technology. https://ir.criver.com/news-releases/news-release-details/charles-river-laboratories-demonstrates-expertise-crisprcas9/

Dimachkie Masri, M., Ramirez, B., Popescu, C., & Reggie, E. M. (2012). Contract research organizations: An industry analysis. International Journal of Pharmaceutical and Healthcare Marketing, 6(4), 336–350. https://doi.org/10.1108/17506121211283226

DiMasi, J. A., Grabowski, H. G., & Hansen, R. W. (2016). Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, 20–33. https://doi.org/10.1016/j.jhealeco.2016.01.012

Foster, C., & Malik, A. Y. (2012). The elephant in the (board) room: The role of contract research organizations in international clinical research. The American Journal of Bioethics, 12(11), 49–50. https://doi.org/10.1080/15265161.2012.719267

Getz, K. A., Lamberti, M. J., & Kaitin, K. I. (2014). Taking the pulse of strategic outsourcing relationships. Clinical Therapeutics, 36(10), 1349–1355. https://doi.org/10.1016/j.clinthera.2014.09.008

Huang, J. (2019). Contract research organizations are seeking transformation in the pharmaceutical value chain. ACS Medicinal Chemistry Letters, 10(5), 684–686. https://doi.org/10.1021/acsmedchemlett.9b00046

Labant, M. (2020, April 1). As needs change, the CRO industry adapts. GEN – Genetic Engineering and Biotechnology News. https://www.genengnews.com/insights/as-needs-change-the-cro-industry-adapts/

Lahiri, K. (2013). Telemedicine, e-health and health related IT enabled services: The Indian situation. Globsyn Management Journal, 7(1/2), 1–16.

Landhuis, E. (2018). Outsourcing is in. Nature, 556(7700), 263–265. https://doi.org/10.1038/d41586-018-04163-8

Margherita, B., & Valeria, L. (2017). The increasing role of contract research organizations in the evolution of the biopharmaceutical industry. African Journal of Business Management, 11(18), 478–490. https://doi.org/10.5897/AJBM2017.8360

Miller, A. (2019, April). Contract research organizations. IBISWorld. https://my.ibisworld.com/us/en/industry-specialized/od5708/about

Miller, J. (2012). Contract services in 2012. BioPharm International, 25(1), 18–19.

Mirowski, P., & Van Horn, R. (2005). The contract research organization and the commercialization of scientific research. Social Studies of Science, 35(4), 503–548. https://doi.org/10.1177/0306312705052103

Mohs, R. C., & Greig, N. H. (2017). Drug discovery and development: Role of basic biological research. Alzheimer's & Dementia: Translational Research & Clinical Interventions, 3(4), 651–657. https://doi.org/10.1016/j.trci.2017.10.005

Ringel, M. S., Scannell, J. W., Baedeker, M., & Schulze, U. (2020). Breaking Eroom's Law. Nature Reviews Drug Discovery. https://doi.org/10.1038/d41573-020-00059-3

Roberts, D. A., Kantarjian, H. M., & Steensma, D. P. (2016). Contract research organizations in oncology clinical research: Challenges and opportunities. Cancer, 122(10), 1476–1482. https://doi.org/10.1002/cncr.29994

Van Norman, G. A. (2016). Drugs, devices, and the FDA: Part 1: An overview of approval processes for drugs. JACC: Basic to Translational Science, 1(3), 170–179. https://doi.org/10.1016/j.jacbts.2016.03.002
Preventative Medicine: The Key to Stopping Cancer in its Tracks BY DINA RABADI '22 Cover Image: The field of medicine is on the cusp of a paradigm shift towards preventative medicine. This paper examines a potential model of prevention and analysis for cancer, minimizing patient costs and improving patient response to the disease. Source: Nick Youngson, obtained from Alpha Stock Images, original at https://www.picpedia.org/medical/p/preventivemedicine.html
Introduction

"For the next quantum leap, fundamentally different strategies have to be developed. The two immediate steps should be a shift from studying animals to studying humans and a shift from chasing after the last cancer cell to developing the means to detect the first cancer cell." – Dr. Azra Raza, The First Cell: And the Human Costs of Pursuing Cancer to the Last (Raza, 2019, 48).

According to the World Health Organization (WHO), cancer is the second leading cause of death globally, causing about one out of every six deaths. Between 1975 and 2015, cancer mortality rates stagnated around 40% (Falzone et al., 2018). Why is it that, despite great advancements in the field like immunotherapy, cancer still accounted for approximately 9.6 million deaths in 2018, making it one of the leading causes of death worldwide (World Health Organization)? Why is the triad of
surgery, radiotherapy, and chemotherapy, also known as the "slash, burn, poison" method, still the standard of treatment (Hunter, 2017)? Perhaps one reason lies in the fact that much of medicine is reactive rather than preventative. Physicians often find themselves in a position where a cancer has already become malignant, and even when the cancer is not malignant, the intricate and heterogeneous nature of a tumor presents a challenge in and of itself. This suggests a needed shift in the way researchers and clinicians view cancer, from reactive treatment to preventative interventions, such as using biosensors that are informed by an international biomarker database.
Peto's Paradox and How Cancer Affects Animals

Why are humans so susceptible to cancer? Considering that humans have trillions of cells, and that each cell has billions of base pairs of DNA, mistakes are unavoidable (Tollis et al., 2017).
Figure 1: This figure displays the global burden of disease, with cancer causing the second highest number of deaths worldwide. Source: Max Roser and Hannah Ritchie (2015). Published online at OurWorldInData.org.
Therefore, it would make sense that organisms even larger than humans, such as elephants and whales, would be even more likely to get cancer. Surprisingly, elephants have about a 5% risk of developing cancer, much lower than the roughly 16% lifetime risk for humans reported by the WHO (Tollis et al., 2017). This contradiction is known as Peto's Paradox, named after epidemiologist Richard Peto, who studied tumor progression after carcinogen exposure in mice. He quickly realized that despite the fact that humans have one thousand times more cells than mice, the rates of cancer incidence between the two organisms were very similar (Tollis et al., 2017). There are several possible explanations of Peto's Paradox. One of the possible mechanistic answers to this question is evolution. First, one must examine p53, also known as the "guardian of the genome." Humans have only one copy of p53 in their genome, and a mutation in a single TP53 allele indicates a 90% chance of developing cancer (Tollis et al., 2017). Interestingly, elephants have twenty copies of p53, which, when cells are exposed to ionizing radiation, switch on the apoptotic pathway to destroy mutated cells (Tollis et al., 2017). When the number of copies of p53 was increased in mice, cancer risk was significantly minimized, suggesting that the number of copies of p53 an organism has plays a critical role in determining its risk for cancer (Tollis et al., 2017). However, whales do not have any extra copies of any tumor suppressor gene, and whales have an even lower rate of cancer than elephants. This fact contradicts
the possible solution that extra copies of p53 are necessary for preventing cancer (Tollis et al., 2017). There are other exceptions to Peto’s Paradox: some smaller organisms, specifically naked mole rats and blind mole rats, have very low incidences of cancer (Tollis et al., 2017; Tidwell et al., 2017). This is due, respectively, to a highly sensitive tumor-suppressor pathway and to overproliferation that triggers necrotic cell death (Tollis et al., 2017). Another, perhaps more satisfying, explanation of Peto’s Paradox is the Warburg effect, which describes the metabolic transition of a cancerous cell. Essentially, a cancerous cell alters its methods of metabolism, which permits the epigenetic regulation of gene expression and causes rapid turnover of metabolic substrates (Tidwell et al., 2017). Perhaps, then, larger organisms like whales developed slower metabolic rates to compensate for their incredibly high cell numbers. A slower metabolism reduces reactive oxygen species, which are very damaging to DNA and a major cause of cancer (Tidwell et al., 2017). Indeed, it was found that elephants and whales have slower metabolisms than smaller animals (Tidwell et al., 2017). An additional factor that allows these organisms to survive for so long with low incidences of cancer is the lack of predation and other causes of death, which allows energy to be directed toward maintaining cells (Tidwell et al., 2017). It is also critical to acknowledge that humans are the only species to have significantly extended their lifespan well beyond reproductive years. Interestingly, organisms in captivity, such as tigers, also have a lifespan far past
“... it would make sense that organisms that are even larger than humans, such as elephants and whales, are even more likely to get cancer. Surprisingly, elephants have about a 5% risk of developing cancer, much lower than the 16% that a human will develop cancer according to the WHO."
reproductive years, and it has been found that they develop cancer at higher rates when compared to their wild relatives (Tidwell et al., 2017). This extension of life beyond reproductive years is another possible solution to Peto’s Paradox, since cancer is strongly associated with aging.
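The pull of the paradox is easiest to feel with a toy calculation. The sketch below is a deliberately naive model: the per-cell probability and the cell counts are illustrative assumptions, not measured values. It computes the chance that at least one cell in an organism transforms, assuming every cell fails independently at the same tiny rate; under those assumptions a whale should almost certainly develop cancer in its lifetime, which is exactly what is not observed.

```python
import math

# Naive, illustrative model behind Peto's Paradox: if every cell had the
# same small, independent lifetime chance of turning cancerous, risk should
# climb steeply with body size. All numbers below are assumptions chosen
# only to make the arithmetic visible, not measured biological values.

def naive_cancer_risk(n_cells: float, p_cell: float) -> float:
    """P(at least one of n cells transforms) = 1 - (1 - p)^n, computed via logs."""
    return -math.expm1(n_cells * math.log1p(-p_cell))

P_CELL = 1e-15  # assumed per-cell lifetime transformation probability

for species, n_cells in [("mouse", 3e10), ("human", 3e13), ("whale", 1e17)]:
    print(f"{species:>5}: naive lifetime risk = {naive_cancer_risk(n_cells, P_CELL):.6f}")
```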
How does cancer work? “There are three main factors that play a major role in tumor biology and development: genetic mutations, the immune system, and epigenetics."
There are three main factors that play a major role in tumor biology and development: genetic mutations, the immune system, and epigenetics. There are two classifications of genes that regulate cancer growth and are critical in preventing cancer – proto-oncogenes and tumor suppressor genes. Proto-oncogenes provide the ‘gas’ that fuels the cell’s growth. However, when mutated, a proto-oncogene becomes an oncogene, causing the cell to grow out of control, much like a gas pedal stuck to the floor. Tumor suppressor genes provide the ‘brake’ for cell growth, meaning they stop cell growth when necessary and initiate apoptosis if there are any errors in the cell. If there is a mutation in a tumor suppressor gene, cell division will be out of control, also causing cancer. Mutations in either or both types of genes are major causes of cancer. There are many other ways by which gene expression is regulated. For example, microRNAs, which are small non-coding RNAs that repress protein translation at specific targets, can act as oncogenes or tumor suppressors under different conditions, and they are found to be highly dysfunctional in cancer (Peng and Croce, 2015). MicroRNAs affect many characteristics of cancer growth and spread, so recently there have been more studies exploring their role (Peng and Croce, 2015). The immune system is another critical aspect of how to treat and prevent cancer. Cancer-associated inflammation contributes to genomic instability, epigenetic modification, enhancement of anti-apoptotic pathways, and other methods by which a tumor successfully establishes itself within the body (Gonzalez et al., 2018). Initially, it may seem that chronic inflammation associated with tumor progression draws the immune system in to kill the cancer. In reality, inflammatory immune cells, such as macrophages, neutrophils, dendritic cells, and myeloid-derived suppressor cells (MDSCs), can be manipulated to play a tumor-promoting role in the tumor microenvironment, thereby protecting the tumor from the immune system (Gonzalez et al., 2018). Another way to describe this phenomenon is that the cancer creates its own micro-immune system by manipulating
immune cells, allowing the tumor to evade the body’s immune system. While weaker cancer cells are in fact eliminated by the immune system, the stronger ones survive and give rise to further generations that can avoid immune detection and build immune tolerance (Gonzalez et al., 2018). Examining just one of these cell types that leads to immune tolerance sheds light on how the others generally work when transformed from effector cells to tumor-protecting cells. For example, macrophages play a major role in the innate, or immediate, immune response and are among the first responders to infection and injury. However, if recruited by the tumor, macrophages transform into tumor-associated macrophages (TAMs), which suppress the body’s anti-tumor immune cells. Unsurprisingly, higher levels of TAMs are associated with poorer prognosis and overall survival rates, so TAMs have become a potential clinical target (Li et al., 2019). Furthermore, there are many critical environmental factors that can increase a person’s likelihood of developing cancer. The importance of understanding how epigenetic factors regulate the reading of the genome cannot be overstated in cancer prevention. Examples of epigenetic contributors to the disease include tobacco, alcohol, diet, and pollution, all of which induce low levels of inflammation and lead to an elevated cancer risk over time (Gonzalez et al., 2018). Epigenetic changes occur in humans during all stages of development, from early embryonic stages through adulthood (Roberti et al., 2019). Exposure to certain environmental factors can cause changes to how DNA is transcribed and later expressed as protein. These modifications typically occur through at least one of the following regulators of gene expression – DNA methylation, histone modification, and non-coding RNAs (Roberti et al., 2019). These modifications are often interdependent, which makes identification of the specific cause of the disease even more challenging. Clearly, tumor heterogeneity makes identifying treatments extremely complex.
Current Treatments

The “slash, burn, poison” model is still the standard of cancer care. As described earlier, this method entails surgery, radiotherapy, and chemotherapy. While surgery can remove a mass, if any cancerous cells remain, the cancer recurs. Radiotherapy is effective, but it kills cells nonspecifically, so a patient’s healthy cells are
killed along with the cancer cells. Therefore, this treatment causes drastic side effects and decreased quality of life for the individual. Additionally, excess radiation itself can cause cancer. Chemotherapy has the same issue of nonspecific cell killing, causing severe side effects in patients as well. Despite the development and implementation of new treatments, cancer mortality rates have remained mostly unchanged over the past several decades. Newer treatments, such as checkpoint inhibitors, cancer vaccines, chimeric antigen receptor (CAR) T-cells, and antibody-drug conjugates (ADCs), are being used more frequently in the clinic, showing variably promising results depending on the patient. Immunotherapy has been hailed as a great stride in the history of cancer treatments and even as a fourth pillar, added onto the “slash, burn, poison” method (Hunter, 2017). One type of immunotherapy is the use of checkpoint inhibitors. One of the most well-known checkpoint inhibitors is anti-programmed death-1 (anti-PD-1). PD-1 is a receptor expressed on the surface of T-cells; when it binds its ligand PD-L1, which many cancer cells display, the interaction allows the cancer to deflect the immune system. Anti-PD-1 blocks this interaction and allows T-cells to better kill cancer cells. While this checkpoint inhibitor has shown immense promise in mouse models and some patients, much is still to be learned about the mechanisms by which the checkpoint functions (Hunter, 2017). Essentially, the efficacy of these checkpoint inhibitors is highly variable, with relatively better results before metastasis.

Figure 2: Genes can be turned on and off based on methylation or acetylation, which can impact whether DNA is open or condensed. Source: Wikimedia Commons
There are other types of promising immunotherapies that are becoming more widely used or are undergoing clinical trials. Personalized anti-cancer vaccines may help improve responses in patients whose tumors fail to respond to checkpoint inhibition. These vaccines are not currently preventative; they work by targeting T cells, dendritic cells, peptides, DNA, or whole cells (Thomas and Prendergast, 2016). CAR T-cell therapy is another immunotherapy receiving much recognition for its potential. These engineered T cells function by binding to a tumor-associated antigen, triggering the T cells to kill the tumor cells (Golubovskaya, 2017). While quite promising in clinical trials, obstacles arise in the tumor microenvironment, where suppressive MDSCs continue to protect the tumor. Strategies such as chemotherapy and checkpoint inhibition can work to reduce this suppressive factor (Baruch et al., 2017). Some serious toxicities, such as anaphylaxis and cytokine release syndrome, can occur, so significant research must be done to evaluate the clinical safety of this therapy (Baruch et al., 2017). Finally, ADCs are also a promising immunotherapy, essentially combining chemotherapy and immunotherapy (Pondé et al., 2019). The main concern that arises with ADCs is off-target toxicity, in which cytotoxic agents released into the bloodstream kill healthy cells (Baruch et al., 2017). As with the other new and upcoming immunotherapies, more research must be done to minimize serious side effects and enhance efficacy in patients. These methods are certainly promising, but their focus is curative rather than preventative.
“Newer treatments, such as checkpoint inhibitors, cancer vaccines, chimeric antigen receptor (CAR) T-cells, and antibody-drug conjugates (ADCs), are being used more frequently in the clinic, showing variably promising results depending on the patient."
Figure 3: This is an example of what a lung-on-a-chip looks like, which would be similar to the cancer-on-a-chip model. Source: Wikimedia Commons
While many new therapies appear promising, over 90% will likely fail during clinical trials (Goossens et al., 2015). Furthermore, a major drawback that cannot be overstated is the exorbitant cost of the treatments that are available. A single year of a new cancer treatment in the United States costs at least $100,000, and costs increased by 13% each year from 2000 to 2015 (Nakashima, 2018). These massive costs must be minimized in order to make treatment accessible to all.
The Human Cancer Biomarker Project “Broadly speaking, a biomarker is defined as any biological substance that is indicative of disease...”
Thus far, the problems in understanding and treating cancer stem from the complex factors that cause a tumor to grow: genetics, the immune system, and epigenetics. Challenges with cancer treatment lie in the variable success rates of treatments, as well as the physical, mental, and financial costs of treatment. Here, the Human Cancer Biomarker Project is proposed as an affordable, preventative potential solution for evading cancer. This project is focused on scaling up efforts to determine cancer biomarkers and precursors, and it also seeks to ensure that all of this information is shared in a large, interoperable database. This was done for the Human Genome Project, a thirteen-year international effort to discover every gene in humans – a human cancer biomarker project could be equally groundbreaking. After discovery, these biomarkers can be sorted in several ways, such as by type of cancer, genetics, risk factors, likelihood of metastasis, and prognostic factors, in order to determine the best treatment plan for a patient. There are thousands of publications that discuss such biomarkers. Broadly speaking, a biomarker is defined as any biological substance that is indicative of disease. According to the WHO, biomarkers fall into the following categories: predictive, prognostic, and diagnostic (Goossens et al., 2015). There are already several cancer biomarker databases, some more specific than others. For example, the National Cancer Institute has a large database that can filter results by organ, then lists aliases, a brief description of the biomarker itself, studies, publications, and resources for each biomarker. There also exist more specialized biomarker databases: some are based on specific types of mutations, while others focus on a specific cancer. For example, BioMuta focuses on cancer-associated single-nucleotide variations, while the LACE-Bio project focuses specifically on prognostic biomarkers in patients with non-small-cell lung
cancer (Dingerdissen et al., 2018; Seymour et al., 2019). Clearly, significant research is being done on cancer biomarkers at many levels, but this research is not yet comprehensive. What makes the Human Cancer Biomarker Project different is that it would be a collaborative, international project with a unified goal, clear direction, and unparalleled focus, resulting in a much more comprehensive database, in both breadth and depth. A major challenge to the use of biomarkers is validation and clinical implementation, considering that approximately 0.1% of biomarkers are translated to the clinic successfully (Goossens et al., 2015). Too often, biomarker discovery is an afterthought of experiments at best, but this challenge can be overcome if the search for biomarkers becomes a worldwide goal. Carefully designed studies with high-quality, adequately sized samples can certainly raise the currently low percentage of biomarkers that reach the clinic. If the world can place greater emphasis on the identification of human cancer biomarkers, then those biomarkers can be translationally implemented as a preventative solution to cancer.
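To make the shape of such a database concrete, here is a minimal sketch of what one interoperable record and query could look like. Everything in it – the field names, the query function, and the example entry – is a hypothetical illustration, not the schema of the NCI database, BioMuta, or LACE-Bio.

```python
from dataclasses import dataclass, field

@dataclass
class BiomarkerRecord:
    """One hypothetical entry in a unified cancer biomarker database."""
    name: str
    category: str              # WHO categories: "predictive", "prognostic", or "diagnostic"
    cancer_types: list[str]    # cancers the marker is associated with
    specimen: str              # e.g., "blood" or "tissue"
    clinically_validated: bool = False
    references: list[str] = field(default_factory=list)

def query(db: list[BiomarkerRecord], cancer_type: str, category: str) -> list[BiomarkerRecord]:
    """Filter the database the way the article envisions: by cancer and WHO category."""
    return [r for r in db if cancer_type in r.cancer_types and r.category == category]

# Illustrative entry and query; values are for demonstration only.
db = [BiomarkerRecord("CA-125", "diagnostic", ["ovarian"], "blood", clinically_validated=True)]
for record in query(db, "ovarian", "diagnostic"):
    print(record.name, record.specimen, record.clinically_validated)
```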
Implementation of Biosensors

A biosensor equipped with a global database of cancer biomarkers and high biological sensitivity to subtle yet cancerous changes could be used to catch a tumor before it can evade therapies or metastasize. The biosensor can be programmed, based on the Human Cancer Biomarker Project’s database, to detect the most frequent and broad indicators of cancer. Furthermore, each person’s biosensor could be personalized to their genome and environment. An example
of genome-based personalization could be for BRCA-1, a key gene in breast cancer susceptibility, as its mutation is found in greater than 80% of inherited breast and ovarian cancers (Yang et al., 2016). A modification to the biosensor would allow it to detect the levels of targets of the BRCA-1 gene. Such modifications can also utilize epigenetic signatures as potent biomarkers for early detection, noninvasive screening, prognosis, and prediction of therapeutic response. Epigenetic therapy, based on a biomarker test, is another way of sensitizing tumors to current treatments, allowing the cancer to be stopped in its earlier stages (Roberti et al., 2019). Both types of personalization, genetic and epigenetic, would lead to unheard-of levels of sensitivity and specificity, optimized to the patient and their needs. This is hypothetically within our technological capabilities, and it is both achievable and affordable. There are numerous types of available biosensors, including fluorescent biosensors and electrochemical DNA biosensors, among many others, all of which are being continuously studied and tested. In particular, fabrication of low-cost, highly sensitive, stable, and specific sensors is critical in the movement toward preventative care (Sokolov et al., 2009). The use of organic materials and technology like 3D printing will allow for the eventual accessibility, affordability, and commercialization of the biosensor. Another approach involves nanomaterial-based biosensors, such as those used to detect ovarian cancer, which are advantageous due to their high levels of sensitivity and selectivity (Sha and Badhulika, 2020). For lung cancer, a cancer-on-a-chip platform has been developed, which can monitor various tumor characteristics in real time and consequently optimize subsequent treatment (Khalid et al., 2020). Thus, each individual could have a set of cancer-on-a-chip devices that mimic that person’s genetics, lifestyle, and other present conditions. This could help physicians screen patients early if they are at risk, and these chips could be used to test the effects of various therapies to determine which would be most beneficial for that individual and their cancer.
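The decision logic of such a personalized device can be sketched in a few lines. To be clear, the marker names, baseline values, and the three-fold alert threshold below are all invented for illustration; a real sensor would derive its thresholds from the proposed biomarker database and the patient’s own genetic and epigenetic profile.

```python
# Hypothetical alert logic for a personalized biosensor: compare each
# incoming reading against the patient's own baseline and flag large
# deviations for both patient and physician. All values are invented.

PATIENT_BASELINE = {"marker_A": 1.0, "marker_B": 0.4}  # patient-specific baselines
ALERT_FOLD_CHANGE = 3.0                                # assumed alert threshold

def needs_alert(marker: str, level: float) -> bool:
    """True if a reading rises far enough above baseline to warrant review."""
    return level >= ALERT_FOLD_CHANGE * PATIENT_BASELINE[marker]

# Simulated stream of sensor readings (marker, measured level).
for marker, level in [("marker_A", 1.1), ("marker_B", 1.5), ("marker_A", 0.9)]:
    if needs_alert(marker, level):
        print(f"ALERT: {marker} at {level} vs. baseline {PATIENT_BASELINE[marker]}")
```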
Conclusion – The Future for Biosensors and Preventative Cancer Care

So, what is required to put these two ideas – a comprehensive biomarker panel and a personalized predictive biosensor – together? Interdisciplinary research involving data science, computer science, basic science, clinical research, engineering, machine learning, and more is needed. With this technology, a patient could monitor themselves, and their physician could also monitor them in real time. This way, the moment conditions within the body go awry, both the patient and their physician can be notified to determine the best course of action. This method would allow physicians to detect cancer far earlier than ever before, giving patients dramatically higher chances of survival. The earlier that cancer is detected, the more efficacious our current treatments will be at eliminating the tumor.
“For lung cancer, a cancer-on-a-chip platform has been developed, which can monitor various tumor characteristics in real time and consequently optimize subsequent treatment."
It is time to do away with the “slash, burn, poison” method. If the world wants to rid itself of cancer, it must place preventative cancer care at the forefront of research and funding, and scientists must work collaboratively. “The toll of human suffering should serve as a tool with which to pry open new ways of critical thinking, a grander global vision, a positive outlook toward our world… the future is in preventing cancer by identifying the earliest markers of the first cancer cell rather than chasing after the last” (Raza, 2019, p. 290).

References

Baruch, Erez Nissim, et al. Adoptive T Cell Therapy: An Overview of Obstacles and Opportunities. Cancer, 123(S11), 2154–62. doi:10.1002/cncr.30491.
Dingerdissen, Hayley M., et al. BioMuta and BioXpress: Mutation and Expression Knowledgebases for Cancer Biomarker Discovery. Nucleic Acids Research, 46(D1), D1128–36. doi:10.1093/nar/gkx907.
Falzone, Luca, et al. Evolution of Cancer Pharmacological Treatments at the Turn of the Third Millennium. Frontiers in Pharmacology, 9(1300), 2018. doi:10.3389/fphar.2018.01300.
Golubovskaya, Vita. CAR-T Cell Therapy: From the Bench to the Bedside. Cancers, 9(11), 150. doi:10.3390/cancers9110150.
Gonzalez, Hugo, et al. Roles of the Immune System in Cancer: From Tumor Initiation to Metastatic Progression. Genes & Development, 32(19–20), 1267–84. doi:10.1101/gad.314617.118.
Goossens, Nicolas, et al. Cancer Biomarker Discovery and Validation. Translational Cancer Research, 4(3), 256–69. doi:10.3978/j.issn.2218-676X.2015.06.04.
Hunter, Philip. The Fourth Pillar. EMBO Reports, 18(11), 1889–92. doi:10.15252/embr.201745172.
Khalid, Muhammad Asad Ullah, et al. A Lung Cancer-on-Chip Platform with Integrated Biosensors for Physiological Monitoring and Toxicity Assessment. Biochemical Engineering Journal, 155, 107469. doi:10.1016/j.bej.2019.107469.
Li, Xiaolei, et al. Harnessing Tumor-Associated Macrophages as Aids for Cancer Immunotherapy. Molecular Cancer, 18(1), 177. doi:10.1186/s12943-019-1102-3.
Nakashima, Lynne. Evolution of Cancer Treatment and Evolving Challenges. Healthcare Management Forum, 31(1), 26–28. doi:10.1177/0840470417722568.
Peng, Yong, and Carlo M. Croce. The Role of MicroRNAs in Human Cancer. Signal Transduction and Targeted Therapy, 1(1), 1–9. doi:10.1038/sigtrans.2015.4.
Pondé, Noam, et al. Antibody-Drug Conjugates in Breast Cancer: A Comprehensive Review. Current Treatment Options in Oncology, 20(5), 37. doi:10.1007/s11864-019-0633-6.
Raza, Azra. The First Cell: And the Human Costs of Pursuing Cancer to the Last. Basic Books, 2019.
Roberti, Annalisa, et al. Epigenetics in Cancer Therapy and Nanomedicine. Clinical Epigenetics, 11(1), 81. doi:10.1186/s13148-019-0675-4.
Seymour, Lesley, et al. LACE-Bio: Validation of Predictive and/or Prognostic Immunohistochemistry/Histochemistry-Based Biomarkers in Resected Non–Small-Cell Lung Cancer. Clinical Lung Cancer, 20(2), 66–73. doi:10.1016/j.cllc.2018.10.001.
Sha, Rinky, and Sushmee Badhulika. Recent Advancements in Fabrication of Nanomaterial Based Biosensors for Diagnosis of Ovarian Cancer: A Comprehensive Review. Microchimica Acta, 187(3), 181. doi:10.1007/s00604-020-4152-8.
Sokolov, Anatoliy N., et al. Fabrication of Low-Cost Electronic Biosensors. Materials Today, 12(9), 12–20. doi:10.1016/S1369-7021(09)70247-0.
Thomas, Sunil, and George C. Prendergast. Cancer Vaccines: A Brief Overview. In: Thomas S. (ed.) Vaccine Design. Methods in Molecular Biology, 1403, 755–61. doi:10.1007/978-1-4939-3387-7_43.
Tidwell, Tia R., et al. Aging, Metabolism, and Cancer Development: From Peto’s Paradox to the Warburg Effect. Aging and Disease, 8(5), 662–76. doi:10.14336/AD.2017.0713.
Tollis, Marc, et al. Peto’s Paradox: How Has Evolution Solved the Problem of Cancer Prevention? BMC Biology, 15. doi:10.1186/s12915-017-0401-7.
Yang, Hui, et al. In Situ Hybridization Chain Reaction Mediated Ultrasensitive Enzyme-Free and Conjugation-Free Electrochemical Genosensor for BRCA-1 Gene in Complex Matrices. Biosensors and Bioelectronics, 80, 450–55. doi:10.1016/j.bios.2016.02.011.
Challenges and Opportunities in Providing Palliative Care to COVID-19 Patients BY EMILY ZHANG '23 Cover Image: A palliative care physician holding the hands of a patient. Source: Flickr
Introduction

Palliative care is a method of treatment used to prevent and relieve suffering for patients with serious life-threatening illnesses ("WHO | WHO Definition of Palliative Care," n.d.). While traditional medical approaches seek to postpone or “fight” death with all possible means, regardless of how painful these treatments might be for patients, palliative care strives to minimize patients’ pain and suffering at the end of their lives. While palliative care does not intend to postpone death, it is also different from physician-assisted suicide in that it does not intend to hasten death either. Instead, palliative care “affirms life and regards dying as a normal process,” without forcefully pushing patients in either direction ("WHO | WHO Definition of Palliative Care," n.d.). As the famous American surgeon and writer Dr. Atul Gawande describes in Being Mortal: Illness, Medicine, and What
Matters in the End, while assisted suicide focuses on “a good death,” palliative care focuses on “a good life to the very end” (2014). Palliative care not only includes medical treatments that focus on patients’ direct physical health but also integrates psychosocial and spiritual support to ensure a mentally satisfying and comfortable end-of-life experience. This support often involves a comprehensive network of physicians, nurses, social workers, psychological and spiritual counselors, and family members (Chidiac et al., 2020). It is often regarded as an essential component of a complete medical system, especially during mass-casualty emergencies like the COVID-19 pandemic, when so many patients are experiencing the end of their lives and are in need of the pain and symptom relief provided by palliative care (Farrell et al., 2020). However, the scarcity of medical resources during this emergency, as well as the required
Figure 1: Four components of palliative care: biological, psychological, social, and spiritual. Image created by author
physical isolation caused by the virus’ highly transmissible nature adds great challenges to providing comprehensive palliative care to COVID-19 patients. Nevertheless, many care providers are highly aware of the importance of palliative care for COVID-19 patients and are integrating innovative technologies to improve the feasibility of palliative care.
Why is Palliative Care Important?

For patients infected by COVID-19, palliative care becomes increasingly crucial due to the highly transmissible nature of the virus and the fatality of COVID-19 infection. According to the World Health Organization (WHO), there are three major components of palliative care: physical, psychosocial, and spiritual ("WHO | WHO Definition of Palliative Care," n.d.). All three categories of support are relevant and important to patients suffering from COVID-19 infection. Firstly, palliative medicine can effectively relieve the physical pain of COVID-19 patients. The high symptom burdens of COVID-19 patients are traditionally managed through mechanical ventilation; however, invasive ventilation that requires intubation is often considered a last resort that is not suitable for every patient because it can be uncomfortable and dangerous. For example, for unstable or end-of-life patients who are unlikely to recover from COVID-19, continuing oxygen therapy or invasive interventions may bring them more burden and discomfort (Fusi-Schmidhauser et al., 2020). In this case, palliative methods would call for pharmaceuticals such as diazepam or opioids, which are more effective at relieving patients’ symptoms and pain in
conjunction with other psychosocial and spiritual support systems from their family or other professionals. Even for stable patients who have a high chance of recovering, breathing difficulties, excessive shivering, and anxiety can be effectively managed by morphine, lorazepam, or other less aggressive, pain-relieving palliations when mechanical ventilation is not suitable or beneficial (Fusi-Schmidhauser et al., 2020). On the psychosocial side, Guo et al. at Shanghai Jiao Tong University School of Medicine in China found that, compared to non-COVID-19 patients, hospitalized COVID-19 patients suffer from significantly higher levels of depression, anxiety, and post-traumatic stress symptoms, and this does not even include critical COVID-19 patients who are not likely to recover (Guo et al., 2020). The social stigma of getting infected and potentially spreading the virus to more people, as well as the uncertainty of their individual disease progression, triggers a great amount of fear, guilt, and helplessness among infected patients (Guo et al., 2020).
"... for unstable or end-of-life patients who are unlikely to recover from COVID-19, continuing oxygen therapy or invasive interventions may bring them more burden and discomfort.”
Similar to patients’ psychosocial needs, the spiritual needs of patients also become increasingly important. During the pandemic, patients infected by COVID-19 often suffer from excessive isolation, loneliness, and vulnerability, since those infected during a pandemic are often hospitalized in isolation rather than having the option of staying at home or in a hospice to receive palliative care (Ferrell et al., 2020). As a result, many forms of social and spiritual interaction become unavailable, which makes spiritual care that can be easily practiced in a hospital setting
Figure 2: A senior man is using virtual reality goggles. Source: Shutterstock
increasingly crucial for patients facing death. Given the pressing need for spiritual assessment and care, Ferrell et al. recommend that all health care providers be trained to provide spiritual care during such emergencies (Ferrell et al., 2020).
Is Palliative Care Feasible At All? “New technologies give us new ways to simulate our palliative care settings in normal times. For example, video conferencing and virtual reality (VR) technologies are gaining popularity in palliative care for COVID-19 patients.”
Providing palliative care to patients during a pandemic poses unique challenges. Since the virus that causes COVID-19 is highly contagious, COVID-19 patients are often treated in physical isolation to limit transmission to others. In this situation, palliative care providers and patients’ families can visit COVID-19 patients only with highly limited frequency, under strict protective measures. Since palliative care strives to support patients to live as fully and actively as possible until death, it normally occurs at the patient’s home, a hospice, or another less institutionalized place than a hospital to ensure the autonomy of the patient’s life (Gawande, 2014). However, since most of society is quarantined and/or social distancing, palliative care for COVID-19 is difficult to implement. Moreover, since personal protective equipment is needed for physicians at all times, patients and physicians are separated by an additional barrier, making it even harder for either side to form a close relationship with the other. Therefore, it may seem impossible to provide high-quality palliative care amidst this
time, since even the best palliative care practices right now cannot fully provide holistic personal care, family support, and an active, autonomous end-of-life experience. Fortunately, palliative care physicians have been seeking innovative measures to counter this additional challenge posed by physical isolation. New technologies give us new ways to simulate palliative care settings in normal times. For example, video conferencing and virtual reality (VR) technologies are gaining popularity in palliative care for COVID-19 patients. Currently, care providers mostly use video conferencing tools to facilitate palliative care for patients suffering from COVID-19 who are under physical isolation (Wang et al., 2020). In this way, patients can both receive palliative care from medical professionals and connect with family and friends to meet their social needs, without having to worry about virus transmission (Chua et al., 2020). While video conferencing provides a tool for verbal and visual communication, it cannot satisfy patients’ need for physical interaction and real-life experiences. Therefore, VR technology has gained attention in the field of palliative care as the next best alternative to the actual physical interactions and support that are needed most by patients in end-of-life stages (Wang et al., 2020). For example, VR technology can help end-of-life patients simulate vacations, outdoor
Figure 3: A doctor wearing personal protective equipment in a hospital in Italy during the COVID-19 pandemic. Source: Wikimedia Commons
settings, memorable places, or social interactions for patients at home, in the hospital, or in physical isolation (Baños et al., 2013; Niki et al., 2019). Wang et al. from the National University Hospital System in Singapore thus recommend using VR technology to help provide COVID-19 patients with both psychosocial and physical palliation (Baños et al., 2013; Niki et al., 2019). Psychosocially, VR technology is able to support patients in palliative care as a source of distraction, entertainment, and relaxation (Baños et al., 2013). In a clinical trial done at Spain’s Universitat de València, patients under palliative care were led through an immersive experience of either an urban park or a rural forest through virtual reality (Baños et al., 2013). Patients in the trial reported significant improvements in satisfaction after the VR intervention, and the usage of the VR devices was “minimally uncomfortable,” suggesting that there was no significant psychological or physical rejection of the VR device among patients receiving this novel form of palliative care (Baños et al., 2013). Physically, the adoption of VR technology can relieve patients’ pain. A study done by Osaka University found that using VR with palliative care patients can relieve their symptom burdens, such as pain, shortness of breath, and drowsiness (Niki et al., 2019). Interestingly, this study found that participants who went to a memorable place in VR tend to
experience more physical benefits than those who travel to a place they have never visited, showing that VR’s benefits for physical pain can be amplified by triggering patients’ positive memories (Niki et al., 2019). Therefore, the positive psychological effects brought about by virtual reality can improve patients’ physical experiences and help relieve their suffering and pain. Thus, both video conferencing and VR are viable methods for providing palliative care in physical isolation during the COVID-19 pandemic.
Luxury or Necessity?

Palliative care is not always prioritized in healthcare delivery during emergencies like the COVID-19 pandemic, and people have different opinions on its priority level. Even though many healthcare professionals argue that palliative care is important for COVID-19 patients and that current challenges to providing it can be solved by new technology, many people still would not prioritize palliative care for COVID-19 patients, especially for patients who are unlikely to recover. Critics claim that such methods are a waste of critically needed medical resources.
“A study done by Osaka University found that using VR for palliative care patients can relieve their symptom burdens, such as pain, shortness of breath, and drowsiness.”
For instance, the World Health Organization issued guidance in March 2020 on maintaining essential health services during the pandemic. While it highlighted the essential maintenance of maternal care, immunization, chronic
diseases, and many other methods of care, there was no mention of palliative care as one of those essential health services (World Health Organization, 2020a). Only in the updated version issued in June did the WHO include a short section on the safe delivery of palliative care during the pandemic (World Health Organization, 2020b).
“... providing palliative care can actually increase the efficiency of our other medical resources because it prevents resource loss from insisting on other more aggressive and resourceconsuming medical treatments on critical patients when those treatments are likely to be futile.”
It generally seems that during mass-casualty events like the COVID-19 pandemic, when medical resources become extremely scarce, the primary goal of healthcare becomes utilitarian: saving the greatest number of people eclipses the need to provide the best individual care for each patient (Matzo et al., 2009). This utilitarian view could lead providers to focus resources on healthier patients rather than use scarce resources on those who are unlikely to survive. Since the total amount of medical resources becomes increasingly limited as the pandemic spreads, critics have suggested that an altered paradigm for medical care should be adopted, in which palliative care is no longer a necessity for all patients but only a luxury of lower priority (Rosoff, 2010). There are three main ways that advocates for palliative care counter these utilitarian concerns. First, many believe that palliative care should be granted a high priority simply out of humanitarian considerations (Nouvet et al., 2018). Though in such a mass-casualty context, limited medical resources require a coordinated response to save as many lives as possible, people in a civil society demand a secondary goal: supporting the quality of life of people whose lives are unexpectedly shortened by the crisis event (Matzo et al., 2009). From this point of view, palliative care ought to be provided as much as possible even during such emergencies, to preserve not only quantity of life but also quality. Second, advocates for palliative care argue that additional human resources can be mobilized during emergencies to sustain palliative care for end-of-life COVID-19 patients. For instance, emergency palliative care can involve personnel beyond traditional palliative care physicians, such as physicians and nurses who did not previously specialize in palliative care, or non-clinician volunteers who can be trained to provide some basic palliative care (Matzo et al., 2009). Moreover, critical COVID-19 patients near the end of their lives can also be directed to alternative care sites, such as hospices or other non-hospital settings, to avoid occupying
hospital resources needed by patients who are more likely to recover, though specific protocols and training are needed before such new staff can provide palliative care (Matzo et al., 2009). Finally, proponents argue that palliative care does not mean taking resources from patients who are more likely to survive and wasting them on people destined to die. Rather, providing palliative care can actually increase the efficiency of other medical resources, because it prevents the losses incurred by insisting on more aggressive and resource-consuming treatments for critical patients when those treatments are likely to be futile (Powell et al., 2017). Therefore, if the option of palliative care is presented to both the physician and the patient, not only can the patient choose a better, less painful end-of-life experience, but the physician can also choose a more cost-effective approach that helps the patient while saving other resources.
Conclusion

While palliative care can provide important physical, psychosocial, and spiritual support for COVID-19 patients at the end of their lives, the challenges posed by the pandemic have limited its beneficial effects. Physical isolation and ethical objections to providing palliative care for end-of-life patients under resource scarcity in particular make it harder for palliative care to be labeled a priority during the COVID-19 pandemic. Nevertheless, VR technology now enables palliative care physicians to address the problem of physical isolation by continuing to provide palliative psychosocial support and physical pain relief. As for the ethical dilemma in resource distribution, even though the conflicting interests of different sectors of the healthcare system may seem insurmountable, healthcare providers can break these macroscopic dilemmas down into small, individual decisions between physicians and patients within the specific context of each care center, and thereby provide more effective and desirable treatments to their patients.

References

Baños, R. M., Espinoza, M., García-Palacios, A., Cervera, J. M., Esquerdo, G., Barrajón, E., & Botella, C. (2013). A positive psychological intervention using virtual reality for patients with advanced cancer in a hospital setting: A pilot study to assess feasibility. Supportive Care in Cancer, 21(1), 263–270. https://doi.org/10.1007/s00520-012-1520-x
Chidiac, C., Feuer, D., Naismith, J., Flatley, M., & Preston, N.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
(2020). Emergency Palliative Care Planning and Support in a COVID-19 Pandemic. Journal of Palliative Medicine, 23(6), 752–753. https://doi.org/10.1089/jpm.2020.0195
Chua, I. S., Jackson, V., & Kamdar, M. (2020). Webside Manner during the COVID-19 Pandemic: Maintaining Human Connection during Virtual Visits. Journal of Palliative Medicine. https://doi.org/10.1089/jpm.2020.0298
Farrell, T. W., Ferrante, L. E., Brown, T., Francis, L., Widera, E., Rhodes, R., Rosen, T., Hwang, U., Witt, L. J., Thothala, N., Liu, S. W., Vitale, C. A., Braun, U. K., Stephens, C., & Saliba, D. (2020). AGS Position Statement: Resource Allocation Strategies and Age-Related Considerations in the COVID-19 Era and Beyond. Journal of the American Geriatrics Society, 68(6), 1136–1142. https://doi.org/10.1111/jgs.16537
Ferrell, B. R., Handzo, G., Picchi, T., Puchalski, C., & Rosa, W. E. (2020). The Urgency of Spiritual Care: COVID-19 and the Critical Need for Whole-Person Palliation. Journal of Pain and Symptom Management. https://doi.org/10.1016/j.jpainsymman.2020.06.034
Fusi-Schmidhauser, T., Preston, N. J., Keller, N., & Gamondi, C. (2020). Conservative Management of COVID-19 Patients – Emergency Palliative Care in Action. Journal of Pain and Symptom Management, 60(1), e27–e30. https://doi.org/10.1016/j.jpainsymman.2020.03.030
Gawande, A. (2014). Being Mortal: Illness, Medicine, and What Matters in the End. Profile Books.
Guo, Q., Zheng, Y., Shi, J., Wang, J., Li, G., Li, C., Fromson, J. A., Xu, Y., Liu, X., Xu, H., Zhang, T., Lu, Y., Chen, X., Hu, H., Tang, Y., Yang, S., Zhou, H., Wang, X., Chen, H., … Yang, Z. (2020). Immediate psychological distress in quarantined patients with COVID-19 and its association with peripheral inflammation: A mixed-method study. Brain, Behavior, and Immunity, 88, 17–27. https://doi.org/10.1016/j.bbi.2020.05.038
Matzo, M., Wilkinson, A., Lynn, J., Gatto, M., & Phillips, S. (2009). Palliative Care Considerations in Mass Casualty Events with Scarce Resources. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, 7(2), 199–210. https://doi.org/10.1089/bsp.2009.0017
Niki, K., Okamoto, Y., Maeda, I., Mori, I., Ishii, R., Matsuda, Y., Takagi, T., & Uejima, E. (2019). A Novel Palliative Care Approach Using Virtual Reality for Improving Various Symptoms of Terminal Cancer Patients: A Preliminary Prospective, Multicenter Study. Journal of Palliative Medicine, 22(6), 702–707. https://doi.org/10.1089/jpm.2018.0527
Nouvet, E., Sivaram, M., Bezanson, K., Krishnaraj, G., Hunt, M., de Laat, S., Sanger, S., Banfield, L., Rodriguez, P. F. E., & Schwartz, L. J. (2018). Palliative care in humanitarian crises: A review of the literature. Journal of International Humanitarian Action, 3(1), 5. https://doi.org/10.1186/s41018-018-0033-8
Powell, R. A., Schwartz, L., Nouvet, E., Sutton, B., Petrova, M., Marston, J., Munday, D., & Radbruch, L. (2017). Palliative care in humanitarian crises: Always something to offer. The Lancet, 389(10078), 1498–1499. https://doi.org/10.1016/S0140-6736(17)30978-9
Richardson, P. (2014). Spirituality, religion and palliative care. Annals of Palliative Medicine, 3(3), 10.
Rosoff, P. M. (2010). Should palliative care be a necessity or a luxury during an overwhelming health catastrophe? The Journal of Clinical Ethics, 21(4), 312–320.
Wang, S. S. Y., Teo, W. Z. W., Teo, W. Z. Y., & Chai, Y. W. (2020). Virtual Reality as a Bridge in Palliative Care during COVID-19. Journal of Palliative Medicine, 23(6), 756–756. https://doi.org/10.1089/jpm.2020.0212
WHO | WHO Definition of Palliative Care. (n.d.). WHO; World Health Organization. Retrieved July 19, 2020, from https://www.who.int/cancer/palliative/definition/en/
World Health Organization. (2020a). COVID-19: Operational guidance for maintaining essential health services during an outbreak: interim guidance, 25 March 2020 (WHO/2019-nCoV/essential_health_services/2020.1). https://apps.who.int/iris/handle/10665/331561
World Health Organization. (2020b). Maintaining essential health services: Operational guidance for the COVID-19 context: interim guidance, 1 June 2020 (WHO/2019-nCoV/essential_health_services/2020.2). https://apps.who.int/iris/handle/10665/332240
The Botanical Mind: How Plant Intelligence ‘Changes Everything’ BY EVA LEGGE '22 Cover Image: Plants have a highly manipulative relationship with their environment and are able to respond to environmental stimuli with a remarkable complexity that some have deemed to be intelligent. However, the emerging field of plant intelligence has become an increasingly controversial subject in recent years. Source: Flickr
Introduction to Plant Intelligence

In November 2017, Fred Adams, a philosophy professor at the University of Delaware, published a paper entitled “Cognition Wars” in a leading Elsevier journal. He wrote, “In case you missed it, there is a war going on over what counts as cognition. Luckily, it is a war among academics, so no one will get hurt, but it is a war nonetheless” (Adams, 2018). The war to which Adams refers is that over plant intelligence, a topic that has become the subject of cutting-edge research by plant scientists over the past few decades. Adams is an outspoken skeptic of plant cognition. Plants, Adams asserts, are hard-wired to display knee-jerk reactions to environmental stimuli and are not capable of complex learning and cognitive processing. In other words, Adams believes plants are not capable of intelligence. This is not an isolated belief; skepticism of plant intelligence is widespread among plant biologists and has been the subject of multiple back-and-forth
journal articles debating the topic. Soon after Adams’ piece, philosopher Segundo-Ortin from the University of Wollongong and plant neurobiologist Paco Calvo from the University of Murcia published “Are plants cognitive? A reply to Adams,” in which they argued that plants display intelligent behavior that is remarkably reminiscent of that of animals. Plants display adaptive behavior and are capable of complex decision-making, learning, memory, and even anticipation of future events. There is nothing “metaphorical” about plant intelligence. It is not a placeholder. Plants, instead, are “genuine cognitive agents.” However, the root of the discrepancy is deeper than a debate over adaptive behavior; it stems from how “intelligence” has been conceptualized and bounded. Even today, the definitions of intelligence
are complex and varied. “Intelligence is a term fraught with difficult definitions,” writes University of Edinburgh plant scientist Anthony Trewavas (2003). Instead of one elegant definition, intelligence is most commonly defined as an assemblage of behaviors that display a complex, sensitive adaptivity to one’s environment. The list of “intelligent behaviors” is extensive. As Trewavas writes in his article “Plant Intelligence,” “Biologists suggest that intelligence encompasses the characteristics of detailed sensory perception, information processing, learning, memory, choice, optimization of resource sequestration with minimal outlay, self-recognition, and foresight by predictive modeling” (Trewavas, 2005). In other words, intelligence is the ability of an organism to problem-solve through predicting, learning, and adapting, among other behaviors. However, these definitions of intelligence come with a set of assumptions and biases. Many scientists, including Adams, believe that displaying intelligent behavior is not enough to deem an organism intelligent. For an organism to be intelligent, it must also have a central nervous system, which in humans consists of the brain and the spinal cord. Intelligent behavior “is the kind of thing that creatures with minds do, that creatures with cognitive processes do” (Adams, 2018, p. 29). Plants, however, do not have a centralized organ for computation. Rather, the cells that make up their ‘nervous’ system are decentralized and distributed over a
large area of the plant (Garzón & Keijzer, 2011). This ‘decentralized’ intelligence, according to many biologists, immediately negates the possibility that plants are intelligent. Intelligence, according to Adams, is the ability of an organism to problem-solve – to predict, learn, and adapt – only so long as the pathway through which these behaviors are displayed is similar to that of humans (Adams, 2018). This perception of intelligence has its own host of social, ethical, and environmental issues, ranging from people’s tendency to anthropomorphize (i.e., to attribute human characteristics to a nonhuman) to the flawed idea of human primacy, which can even be claimed to be one of the many roots of speciesism and racism (Segundo-Ortin & Calvo, 2019; Franks et al., 2020). Essentially, the current definition of intelligence carries the assumption that if a creature does not look like humans, it can’t be as smart.
Figure 1. Boquila trifoliolata in its “normal” phenotypic expression. Source: Wikimedia Commons
In addition, this human-centered perception of intelligence as something inherently centralized in a computational organ (like the brain) is a counterargument that immediately disqualifies any case for plant intelligence. “The notion that learning takes place in the tissues of a collection of individuals, not a single individual, sounds like a very different or metaphorical extension of this conception of learning” (Adams, 2018). Instead, Adams suggests choosing a new term, rather than “cognition” or “intelligence,” with which to describe what plants do. However, choosing a new term would be a hasty response to a complicated issue. Plant neurobiologists are just beginning to understand the ways in which plants can display intelligent behavior, and a widening pool of evidence suggests that even though their cognition is decentralized, plants possess many of the same neurobiological pathways present in humans and other animals (Baluška et al., 2004). Perhaps, then, it is time to change the rules of the game: to consider diverse pathways that can lead to intelligent behavior. The Extended Cognition Hypothesis has been accepted by many as a way to bridge the gap between intelligent plant behavior and our current conception of intelligence. Originating in the 1990s, the Extended Cognition Hypothesis calls for the recognition of decentralized intelligence. The hypothesis claims that intelligence isn’t necessarily bound to one centralized organ but instead can extend beyond the limits of the body, even to objects from the environment (Parise et al., 2020). By
“Originating in the 1990s, the Extended Cognition Hypothesis calls for the recognition of decentralized intelligence... that intelligence isn't necessarily bound to any one organ but instead can occur beyond the limits of the body, including objects from the environment.”
Figure 2. Mimosa pudica, with its leaves closed Source: Wikimedia Commons
including extended cognition, this hypothesis could be the key to legitimizing the concept of plant intelligence.
Phenotypic Plasticity “At the cornerstone of animal intelligence is plasticity, the ability of an organism to change their observable characteristics in response to environmental stimuli."
At the cornerstone of animal intelligence is plasticity, the ability of an organism to change its observable characteristics in response to environmental stimuli (West-Eberhard, 2008). In an analysis of the evolution of intelligence, Stenhouse (1974) defines plasticity as “adaptively variable behavior within the lifetime of the individual.” Further, Stenhouse claims that “the more intelligent the organism, the greater the degree of individual adaptively variable behavior” (Trewavas, 2003). The link between plasticity and animal intelligence isn’t a novel idea; it has been accepted by scientists for half a century. Contrary to Adams’ claim, researchers have reported that plant behavior is not “purely reactive and mechanical” (Segundo-Ortin & Calvo, 2019). Plants have an incredible capacity for adaptable behavior. Plants display directional behavior, such as growing in the direction of a light source, as well as non-directional movement, such as the folding of leaves in response to an external stimulus. In addition, plants are capable of both positive behavior (such as growing towards light) and negative behavior (growing away from a gravity vector) (Segundo-Ortin & Calvo, 2019). In fact, plants are able to process over 20 different environmental stimuli at any one time, including water, gravity, minerals, alien roots, and chemicals (Baluška et al., 2006; Yokawa & Baluška, 2018).
A unique case study with which to examine phenotypic plasticity is Boquila trifoliolata, a ground-rooted vine native to the temperate rainforests of Chile and Argentina (Mancuso, 2017). It possesses mimetic capabilities, allowing it to imitate the phenotype of another species. In 2013, botanist Ernesto Gianoli noticed that Boquila can mimic the phenotype of every plant it grows upon. Boquila plants can change the size, color, and shape of their leaves to match even the most complex leaf. Moreover, a single vine growing upon multiple different plants can change its leaves accordingly. Boquila is also able to adjust its leaves over time to compensate for any change in the host plants’ leaves. The purposes for and pathways leading to this behavior remain unknown, but Boquila plants are widely celebrated as the “veritable Zelig of the plant world” (Mancuso, 2017, p. 44). Plants also display nuanced responses to small shifts in environmental stimuli. In a study of the saline affinity of Arabidopsis thaliana (known as “the model lab plant” due to its simple genomic structure and rapid life cycle), Li and Zhang (2008) of the Chinese Academy of Sciences distributed salt unevenly throughout the soil (National Science Foundation). The plant’s roots could sense the presence of the salt and began growing toward the high-salt areas in order to acquire necessary nutrients. However, after Li and Zhang increased the concentration of the salt in those areas to above a healthy threshold, the roots turned back before they had even reached the salt. The plants and their roots noticed the smallest shift in the salinity gradient and made the “decision,” as Li and
Zhang put it, to turn back. Plants also have the capability of adjusting to temporal shifts in nutrient availability. A study of Pisum sativum showed that pea plants are able to plastically shift their behavior in response to a changing availability of nutrients in the soil, in order to access the ideal nutrient concentration for that species (Dener et al., 2016). Plants have also been shown to modify their behavior in the presence of predators (Segundo-Ortin & Calvo, 2019). Trewavas (2014) even found that a species of wetland grass was capable of making “compromises”: when placed in an environment where, in any one place, only two out of three environmental factors – competition, warmth, or light – were optimal, the plants were able to prioritize warm soil and light by growing primarily in those conditions.

Perhaps one of the better-known experiments on plant adaptability and decision-making is the study of plants using “sound” to locate water. Tree roots respond to a water-saturation gradient to locate a water source in a similar fashion as they use a mineral gradient to locate a mineral source; however, the specific mechanism the plants use to comprehend and respond to the auditory stimulus is unknown (Gagliano et al., 2017). Even without the presence of substrate moisture, the model plant Pisum sativum was able to detect vibrations caused by water moving in a nearby pipe. Somehow, the plant could hear the water and could decide to grow toward it, even without a moisture gradient leading the way.

Figure 3. Pisum sativum has been shown to use ‘sound’ to locate moving water. Source: Wikimedia Commons

The method by which plants perceive these acoustic cues (such as the sound of rushing water) requires further study, as does the method by which the plant compares different environmental stimuli. Nevertheless, it is becoming more and more evident that even without a brain, plants are able to intelligently perceive, choose, and act on a vast array of environmental factors, in some cases in ways that humans cannot.
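One way to see how such behavior can count as a “decision” without a brain is to note that it can be captured by a simple feedback rule. The sketch below is only a toy rendering of the Li and Zhang observation; the salinity values and tolerance threshold are invented, and real root signaling is vastly more complex.

```python
# Toy model of the salt-seeking roots in Li and Zhang (2008): grow toward
# higher salinity while it stays attractive, but turn back once the sensed
# concentration ahead exceeds a healthy threshold. All values are invented.

SALINITY_AHEAD = [0.1, 0.3, 0.6, 1.2, 2.5, 5.0]  # sensed gradient along the path
HEALTHY_MAX = 2.0                                 # assumed tolerance threshold

def grow_root(gradient: list[float], threshold: float) -> list[str]:
    path = []
    for concentration in gradient:
        if concentration > threshold:
            path.append("turn back")   # reverse before entering the harmful zone
            break
        path.append(f"grow toward {concentration}")
    return path

print(grow_root(SALINITY_AHEAD, HEALTHY_MAX))
```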
Learning, Memory, and Prediction
In 1815, French botanist René Desfontaines asked one of his students to take a collection of plants on a tour of Paris. Among the plants collected were a few jars of Mimosa pudica, a tropical plant best known for its sensitive response to touch. When someone runs their fingers along the leaves of Mimosa pudica, or jostles the plant, the leaves close in on themselves (Mancuso, 2019). The reaction is an evolutionary response to predators, but the act of closing up its leaves exhausts a large portion of the plant’s precious energy reserves. Desfontaines wondered whether the plants had the capacity to differentiate between threatening and non-threatening stimuli. In other words, could the plant learn when it should close its leaves? During the tour, Desfontaines asked his student to carefully observe the plants for even the slightest movement as they traveled in their carriage. For most of the ride, the up-and-down motion of the carriage caused the plants to close up their leaves. But as the tour continued, the plants suddenly relaxed. Almost simultaneously, every Mimosa pudica opened its leaves and remained in this position for the duration of the carriage ride (Lamarck, 1815). “The plants are getting used to it,” the student observed in his field notebook (Mancuso, 2019). Two centuries later, Gagliano et al. (2014) replicated the carriage ride experiment. Gagliano built an apparatus that would drop Mimosa pudica at a controlled speed from a height of four inches, simulating the bumpy, repetitive movement of the carriage ride. After seven or eight drops, the plant stopped closing its leaves. To rule out the effects of fatigue, Gagliano then shook that same plant, and it immediately closed its leaves. In other words, the plant learned
Figure 4. Lavatera cretica orients in anticipation of the light before sunrise. Source: needpix.com
to differentiate between a drop, a movement deemed non-threatening, and a shake, a new and threatening movement. After forty days, the plants still exhibited the learned response. More complex forms of learning have been recorded in plants as well. Gagliano et al. (2016) designed an experiment in which garden peas (Pisum sativum) exhibited “Pavlovian” learning, meaning learning by association. Using a Y-maze track, a single tube that branches in two directions, the growing pea tendril had to make a choice: grow right or left. After being exposed to two external factors, a positive light source and the neutral presence of a light wind, the tendril was conditioned to associate wind with the presence of light (Gagliano et al., 2016). These results showed that “associative learning represents a universal adaptive mechanism shared by both animals and plants” (Gagliano et al., 2016).
Plants do not only have the capacity to modify their behavior due to memory; they are also able to predict future events and modify their behavior in accordance with their predictions. Lavatera cretica, a flowering plant in the mallow family, orients its leaves east before the sun rises (Garzón & Keijzer, 2009). Lavatera predicts the position from which the sun will rise each day and optimizes its sunlight intake by reorienting its leaves. In addition, Lavatera is able to retain this anticipatory behavior for multiple days, even in the absence of any light input to guide it (Garzón & Keijzer, 2009). Plants are also able to grow towards areas in which they anticipate future shade, and to modify the behavior of their roots in anticipation of water and minerals (Calvo & Friston, 2017). The ability of plants to anticipate future events, or minimize “surprise over time,” is an adaptation that is vital to plants’ fitness (Calvo & Friston, 2017). However, it is important to note that the research field of plant intelligence is still in its infancy. For these results to have the traction they deserve, they must be successfully replicated, and the experiments must also be transposed into field settings. But even in the absence of sufficient replication, it is hard not to revel in the precise, ingenious observations of two centuries of botanists.
Plant Neurobiology and Intelligent Communication Networks
Figure 5. A large portion of plants’ cognitive processing occurs underground. The root apices have been proposed to act as ‘brain-like’ command centers in the roots, capable of synaptic communication previously thought to be limited to animals and humans (Baluška et al., 2004). Source: Wikimedia Commons
In recent years, studies supporting the intelligent behavior of plants (such as learning, memory, prediction, and phenotypic plasticity) have grown in number. However, the fact that plants do not possess a central nervous system remains the crux of many arguments against the existence of plant intelligence. Scientists have therefore begun to explore plant signaling pathways that, although decentralized, display uncanny resemblances to the nervous systems of animals. According to Bullock and Horridge in Structure and Function in the Nervous Systems of Invertebrates (1965), a nervous system is defined as “an organized constellation
of cells (neurons) specialized for the repeated conduction of an excited state from receptor sites or from other neurons to effectors, or to other neurons.” Although plants do not possess neurons, they may possess a decentralized set of cells that is functionally similar to a central nervous system. The emerging field of plant neurobiology grapples with this very concept: to understand how the “integrated signaling and electrophysiological properties of plant networks of cells” can meet the requirements of a central nervous system that the “intelligence” definition mandates (Garzón & Keijzer, 2011). In plant neurobiology, the roots are often considered the main harbor of plants’ decentralized nervous systems. In this model, root apices, the regions of cells at the tips of the roots responsible for root extension, are likened to “brain-like” units (Britannica; Garzón & Keijzer, 2011). In fact, some scientists have argued that the highly specialized group of cells at the root apex “has almost all the attributes of a brain-like tissue” (Baluška et al., 2004, p. 2). Vascular strands connecting these apices are likened to plant neurons, and their polarly-transported auxins are likened to plant neurotransmitters, capable of extracellular communication and the propagation of electrical signals (Baluška et al., 2004, 2010). By identifying these neuron-adjacent mechanisms in plant roots, plant neurobiologists posit that “the integration and transmission of information at the plant level involves neuron-like processes such as action potentials, long-distance electrical signaling, and vesicle-mediated transport of (neurotransmitter-like) auxin” (Garzón & Keijzer,
2011). There are three major similarities between plant intelligence networks and animal nervous systems: 1) the common presence of long-distance electrical signaling, 2) the similarity between certain plant molecules and animal neuroreceptors/neurotransmitters, and 3) the similarities between auxin in plants and neurotransmitters in humans (Garzón & Keijzer, 2011). Plants, like animals, exhibit action potentials in response to environmental stimuli that allow coordination between different cells and parts of the body. Action potentials were originally thought to be limited to plants that display rapid, observable movements (like insectivores), but it is now widely accepted that they occur in all plants (Baluška et al., 2004). These signals are able to travel long distances within the plant axis (Garzón & Keijzer, 2011). Even though the observed responses to most action potentials may be hard for the human eye to see, the action potentials themselves may be just as rapid as those in animal nerves (Baluška et al., 2004). For example, Barlow (2008) observed a rapid change in CO2 assimilation after a small shift in soil moisture caused an action potential to be sent from the roots to the leaves. Plants also possess many neurotransmitters that are present in animal nervous systems, including but not limited to GABA, glutamate, dopamine, serotonin, and acetylcholine. It remains unknown whether these substances
have the same role in signaling as they do in animals. However, some substances, such as glutamate, have been shown to act as neurotransmitters in intracellular plant communication (Garzón & Keijzer, 2011). Finally, synapses, the microscopic spaces between animal nerve cells through which neurotransmitters are exchanged, may have a parallel in plants. Auxin, which is exchanged extracellularly, is known to cause fast electrical responses in the receiving plant cell (Garzón & Keijzer, 2011). Thus, auxin is thought to resemble a neurotransmitter, facilitating rapid extracellular communication and electrical signaling. These similarities have led scientists to the “root-brain hypothesis,” a concept that both builds on and specifies the cognition question presented in the Extended Cognition Hypothesis. According to this hypothesis, plants have a widespread underground center for cognitive processing. That these ‘brain-like units’ occur at the root apices suggests that although scientists can identify specific parts of the plant that are similar to animal cognitive centers, the way cognition is expressed in plants is fundamentally different. Therefore, the public must shift its understanding of cognition to include the vast world below our feet.
Reciprocal Benefits of Plant Intelligence
When formulating the Extended Cognition Hypothesis, crafting salinity gradients in soil, or comparing neurotransmitters in animals and plants, it may be easy to wonder why such studies are important, especially when their driving goal has been met with so much skepticism. Plant intelligence research has given scientists reason not just to continue researching botany, but to advocate for the aggressive preservation and promotion of plants. When plants are perceived as passive agents in the carbon cycle, it is much more difficult to make a salient argument for their preservation. But when plants are seen in their entirety, as objects of beauty, agents in the carbon cycle, and intelligent organisms capable of sensing, perceiving, thinking, and choosing, the defense of plants becomes much more fortified. Continued research on plant intelligence is crucial to the preservation of many ecosystems, as well as the preservation of the biosphere. As Baluška and Mancuso (2020) write, “Considering plants
as active and intelligent agents has therefore profound consequences not just for future climate scenarios but also for understanding mankind’s role and position within the Earth’s biosphere.” Plant roots actively control the carbon in the soil and may even manipulate the amount of carbon in the air. Current climate models therefore do not accurately reflect the nuanced carbon-manipulation pathways in plants (Baluška & Mancuso, 2020). To form more accurate predictive models, further research into plant intelligence is necessary. A deeper knowledge of plant intelligence may also alleviate plant blindness. Coined by University of Tennessee botanist Elisabeth Schussler and Louisiana State University science educator James Wandersee in 1998, “plant blindness” is the tendency for people to overlook plants and to consider them a backdrop to our lives (Wandersee & Schussler, 1999). Plant blindness feeds into the belief that animals are superior to plants, rendering us unable to recognize the crucial role that plants play in our well-being and survival. This mindset has led to a lack of funding and interest in plant conservation, far below what animal conservation efforts receive (Havens et al., 2014). The issue of plant conservation has become particularly pressing in recent years. According to a comprehensive study by Humphreys et al. (2019), over 600 plant species have gone extinct in the last 250 years: twice the number of amphibian, mammalian, and avian extinctions combined. This extinction rate is 500 times as fast as it would be without the anthropogenic contribution to species decline. These findings are devastating not only for the floral world, but for all species on earth. As primary producers, plants form the foundation of ecosystems across the world and are responsible for producing the air we breathe. Without plants, life on earth would be inexorably altered; for the human race, it could threaten survival and would certainly alter human life for the worse. According to Humphreys et al. (2019), combatting plant blindness is a necessary step in combatting the rapid decline in plant biodiversity. Researching and supplementing arguments for plant intelligence is, in turn, crucial to combatting plant blindness. The more humans perceive plants as conscious, intelligent organisms, the more likely it is that plant biodiversity will be preserved. This is
not just good news for plants, but lifesaving for the whole human race. In political theorist Jane Bennett’s paper “The Force of Things: Steps Towards an Ecology of Matter,” she argues for “a renewed emphasis on our entanglement with things.” This, she believes, will allow us to “tread more lightly upon the earth, both because things are alive and have value and because things have the power to do us harm” (Bennett, 2004). Indeed, this is how we should approach plants. Plants should be preserved both because they have value and because their absence has the power to do us harm. Without considering the fact that plants are intelligent, it is impossible to create a relationship with them. And according to Balding and Williams (2016), empathy is crucial for conservation efforts. “We argue that support for plant conservation may be garnered through strategies that promote identification and empathy with plants,” write Balding and Williams (2016). Therefore, to consider the possibility that plants can hear the sound of rushing water, fire neurotransmitters between synapses, and learn and remember past events is to begin to see ourselves in the plant, and perhaps to begin to see the plant in people. Humans do not just have a lot to learn about plants; by observing how plants cultivate a sensitive, highly adaptive relationship with their environment, humans may also learn from them, adapting their own behavior in ways that work towards a better, greener planet. As Baluška and Mancuso write, “plant intelligence changes everything” (Baluška & Mancuso, 2020, p. 1).
References
Adams, F. (2018). Cognition wars. Studies in History and Philosophy of Science Part A, 68, 20–30. doi:10.1016/j.shpsa.2017.11.007
Anthropomorphize. (n.d.). In Merriam-Webster. www.merriam-webster.com/dictionary/anthropomorphize
Arabidopsis: The model plant. (2017, March 24). Retrieved from https://www.nsf.gov/pubs/2002/bio0202/model.htm
Balding, M., & Williams, K. J. H. (2016). Plant blindness and the implications for plant conservation. Conservation Biology, 30(6), 1192–1199. doi:10.1111/cobi.12738
Baluška, F., Hlavacka, A., Mancuso, S., & Barlow, P. W. (2006). Neurobiological view of plants and their body plan. In F. Baluška, S. Mancuso, & D. Volkmann (Eds.), Communication in plants: Neuronal aspects of plant life (pp. 19–35). New York, NY: Springer.
Baluška, F., & Mancuso, S. (2020). Plants, climate and humans. EMBO Reports, 21(3). doi:10.15252/embr.202050109
Baluška, F., Mancuso, S., Volkmann, D., & Barlow, P. (2004). Root apices as plant command centres: The unique ‘brain-like’ status of the root apex transition zone. Biologia, 13(1), 1–13.
Baluška, F., Mancuso, S., Volkmann, D., & Barlow, P. (2010). Root apex transition zone: A signalling–response nexus in the root. Trends in Plant Science, 15, 402–408.
Bennett, J. (2004). The force of things: Steps towards an ecology of matter. Political Theory, 32(3), 347–372. doi:10.1177/0090591703260853
Book reviews: Stenhouse, David, The Evolution of Intelligence. London: Allyn & Unwin, 376pp, about $12, 1974. (1975). Gifted Child Quarterly, 19(2), 102.
Bullock, T. H., & Horridge, G. A. (1965). Structure & function in the nervous systems of invertebrates (Vol. 1). San Francisco: W.H. Freeman.
Burton, N. (2018, November 28). What is intelligence? Psychology Today. Retrieved from https://www.psychologytoday.com/us/blog/hide-and-seek/201811/what-is-intelligence
Calvo Garzón, P., & Keijzer, F. (2009). Cognition in plants. In F. Baluška (Ed.), Plant-environment interactions (pp. 247–266). Berlin: Springer.
Calvo, P., & Friston, K. (2017). Predicting green: Really radical (plant) predictive processing. Journal of The Royal Society Interface, 14(131), 20170096. https://doi.org/10.1098/rsif.2017.0096
Dener, E., Kacelnik, A., & Shemesh, H. (2016). Pea plants show risk sensitivity. Current Biology, 26(13), 1763–1767. doi:10.1016/j.cub.2016.05.008
Firn, R. (2004). Plant intelligence: An alternative point of view. Annals of Botany, 93(4), 345–351. doi:10.1093/aob/mch058
Franks, B., et al. (2020). Conventional science will not do justice to nonhuman interests: A fresh approach is required. Animal Sentience, 300, 1–5.
Gagliano, M., Renton, M., Depczynski, M., & Mancuso, S. (2014). Experience teaches plants to learn faster and forget slower in environments where it matters. Oecologia, 175(1), 63–72. https://doi.org/10.1007/s00442-013-2873-7
Gagliano, M., Vyazovskiy, V., Borbély, A., et al. (2016). Learning by association in plants. Scientific Reports, 6, 38427. https://doi.org/10.1038/srep38427
Gagliano, M., Grimonprez, M., Depczynski, M., et al. (2017). Tuned in: Plant roots use sound to locate water. Oecologia, 184, 151–160. https://doi.org/10.1007/s00442-017-3862-z
Humphreys, A. M., et al. (2019). Global dataset shows geography and life form predict modern plant extinction and rediscovery. Nature Ecology & Evolution, 3(7), 1043–1047. doi:10.1038/s41559-019-0906-2
Jørgensen, S. E., & Fath, B. (2008). Encyclopedia of ecology. Elsevier.
Lamarck, J. B., & de Candolle, A. P. (1815). French flora, or short summaries of all the plants that naturally grow in France. Paris: Desray.
Li, X., & Zhang, W. (2008). Salt-avoidance tropism in Arabidopsis thaliana. Plant Signaling & Behavior, 3(5), 351–353. https://doi.org/10.4161/psb.3.5.5371
Mancuso, S. (2018). The revolutionary genius of plants: A new understanding of plant intelligence and behavior. Atria Books.
Novoplansky, A. (2016). Future perception in plants. In N. Mihai (Ed.), Anticipation across disciplines (pp. 57–70). New York, NY: Springer. https://doi.org/10.1007/978-3-319-22599-9_5
Parise, A. G., et al. (2020). Extended cognition in plants: Is it possible? Plant Signaling & Behavior, 15(2), 1710661. doi:10.1080/15592324.2019.1710661
Segundo-Ortin, M., & Calvo, P. (2019). Are plants cognitive? A reply to Adams. Studies in History and Philosophy of Science Part A, 73, 64–71. doi:10.1016/j.shpsa.2018.12.001
Seward, A. C. (2011). Plants: What they are and what they do. Cambridge University Press.
Trewavas, A. (2003). Aspects of plant intelligence. Annals of Botany, 92(1), 1–20. https://doi.org/10.1093/aob/mcg101
Trewavas, A. (2005). Plant intelligence. Naturwissenschaften, 92(9), 401–413. doi:10.1007/s00114-005-00149
Trewavas, A. J. (2014). Plant behaviour and intelligence. Oxford, United Kingdom: Oxford University Press.
Wandersee, J. H., & Schussler, E. E. (1999). Preventing plant blindness. The American Biology Teacher, 61(2), 82–86. doi:10.2307/4450624
Yokawa, K., & Baluška, F. (2018). Sense of space: Tactile sense for exploratory behavior of roots. Communicative & Integrative Biology, 11(2), 1–5.
On the Structure of Field Theories I
BY EVAN CRAFT '20
Cover Image: ATLAS Experiment. Source: CERN
Abstract
We rewrite the gravitational action solely in terms of the gamma matrices and spinor connection (as defined in the curved-space Dirac equation). The requirement that the variation vanish gives the condition that the affine connection be Levi-Civita, along with Einstein's equations in vacuum. We then propose an extension of this theory to the case of the gravitational field interacting with another, external field.
Preliminary Remarks
Hilbert used the action principle to formalize the gravitational field equations and extend them further. His original approach, however, was not unique. It was soon realized that different pairs of variables, such as the vierbein and spin connection or the metric and affine
connection, could be used to rewrite Hilbert's action. From these, one could also reproduce Einstein's theory. In the current discourse, we describe another formulation. Dirac's variables seemingly coincide with the vierbein formalism: both the tetrad and the gamma matrices can be used to construct the metric, and the spin and spinor connections are intimately related by formulae. From this, we are led to believe that a symmetry must exist between the theories. This paper is the result of that conviction and serves as a proof of the aforementioned correspondence. In Appendix A, we derive the closed form action in terms of Dirac's variables. Taking the variation, one arrives at the desired result. This was not my original direction, however. Presented is my initial proof.
Gravitation
We will first show that the Palatini Lagrangian may be rewritten so that
where all integrations are over a certain four-dimensional volume. Here, we are referring to the spinor connection and gamma matrices as used in the covariant Dirac equation
where the covariant derivative of spinors is defined as
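The displayed equations were lost in reproduction; as a hedged reconstruction in one standard convention (the symbols below are assumed, not recovered from the original), the covariant Dirac equation and the spinor covariant derivative read:

\[
(i\gamma^\mu D_\mu - m)\,\psi = 0, \qquad D_\mu \psi = \partial_\mu \psi + \Gamma_\mu \psi,
\]

where \(\Gamma_\mu\) denotes the spinor connection.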
A. Re-expressing the Metric
The defining property of the gamma matrices is the anti-commutation relation
Inverting this, we find
B. Re-expressing the Affine Connection
We have that the gamma matrices are covariantly constant,
This assertion is due to equation (8) of [1]. We can solve this relation for the affine connection. Rearranging,
Take the anti-commutator on the L.H.S.
We can then take the anti-commutator on the R.H.S. of (7) so that
where we have taken the trace over the spinor indices. Hence both the metric and the affine connection can be written as functions of the spinor connection and gamma matrices alone.
Equations of Motion
In order for
in (1), we must have
Each term must vanish separately so that
A. Variation with respect to the Gamma Matrices
From above, the first condition must be true for any arbitrary variation of the gamma matrices. We must then require
This can be expanded in terms of the metric and affine connection
after a change of indices in the gamma matrices. We can do this chain-rule expansion because the original Palatini action can be written solely as a function of the metric and affine connection. The metric can then be rewritten in terms of the gamma matrices by formula (5). Furthermore, since the spinor connection is fixed, the affine connection is completely determined by these same matrices using (11). For the second term in (15), we find
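For reference, the relations cross-referenced as (4), (5), and (6) in this article correspond to standard gamma-matrix identities; a hedged reconstruction, with index conventions assumed rather than recovered from the lost displays, is:

\[
\{\gamma^\mu, \gamma^\nu\} = 2\,g^{\mu\nu} I, \qquad (4)
\]
\[
g^{\mu\nu} = \tfrac{1}{4}\,\operatorname{tr}\,\{\gamma^\mu, \gamma^\nu\}, \qquad (5)
\]
\[
\partial_\lambda \gamma^\mu + \Gamma^\mu{}_{\nu\lambda}\,\gamma^\nu + [\Gamma_\lambda, \gamma^\mu] = 0. \qquad (6)
\]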
Hence, raising the index, we get
where we have abbreviated
Since Dirac’s matrices are covariantly constant (6), we may conclude
Taking the trace over the spinor indices we get
From (15), (19), and (23) we conclude
And therefore
Hence the second term in (15) is vanishing. For the first term, we can compute the variation of the metric using (5). This is done in Appendix B. The result is
B. Variation with respect to the Spinor Connection
We may now focus on the spinor connection. From before,
We may then say
where we have used Palatini’s Variation
to evaluate the first element. Now, taking the transpose over the spinor indices,
where we have abbreviated Einstein's tensor to
We may take the anticommutator with
We may obtain a formula for in terms of the other variables. We will need equation (6), and for convenience, we restate it here:
Contracting with
The contraction on the L.H.S. can be evaluated with the help of the tetrad. The tetrad (or vierbein) projects the curved geometry down to flat space and vice versa. In the new coordinate system, the curved space vector becomes
And its scalar product gives
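The displays are missing here; in standard tetrad notation (a reconstruction, with index placement assumed), the projection and its scalar product would read:

\[
V^a = e^a{}_\mu V^\mu, \qquad \eta_{ab}\,V^a V^b = g_{\mu\nu}\,V^\mu V^\nu,
\]

where \(e^a{}_\mu\) is the tetrad and \(\eta_{ab}\) the flat-space Minkowski metric.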
This results in the condition
Using this result in (23),
which is equation (3) of [2]. We may use (33) and (35) to re-express the anti-commutation relation (4)
In the next section, we will show that the connection is symmetric and hence the Einstein tensor is symmetric so that this reduces to
Hence inverting the original relation we get
Contracting over the Latin indices, we find
Since the inner product is a scalar, it will not depend on coordinates, so that
Using this fact in (32),
In Appendix C, we show that the solution to the above equation is given by
where is a four vector. Hence the spinor connection can be written solely as a function of the gamma matrices, the affine connection, and an arbitrary four vector. We can now rewrite the in (30) as
The vector potential is fixed and the gamma matrices are fixed so the spinor connection is solely a function of the affine connection. We can therefore eliminate the chain rule in this variable leaving
Now that we have simplified the variation of the spinor connection, we can insert it into (30):
Since this variation holds the curved space gamma matrices fixed, the action is solely a function of the spinor connection. Removing the chain rule,
variations
hence
after a change of indices in the spinor connection. Since the affine connection transforms as a connection, it can be written solely as a function of the transformation matrix (the vierbein) and its corresponding connection in flat space (the spin connection). We may therefore express its variation as
where we have used to denote the spin connection. We can now rewrite the variation of the spinor connection. Using this in (40) we get
The tetradic version of Palatini’s Theorem allows us to write the action solely as a function of the vierbein and spin connection. The requirement that the variation vanish with respect to the spin connection gives the condition that the connection be symmetric. For equation (48), since the vierbein is fixed (Appendix D), we then have that the connection is symmetric. This along with (35) gives the Levi-Civita connection.
A Comprehensive Action Principle
For the total action, one could propose
The curved space gamma matrices are being held fixed so that the first term vanishes. At this point we may also eliminate the variation in the vector potential. The reason for this is explained in Appendix D. We are left with
where the primed functional corresponds to the external field. In the case of a Dirac particle we get
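The display is missing; the standard curved-space Dirac action, which is presumably what was intended (a hedged reconstruction under assumed conventions), is:

\[
S' = \int d^4x \,\sqrt{-g}\; \bar{\psi}\,(i\gamma^\mu D_\mu - m)\,\psi.
\]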
where we treat the gamma matrices, spinor connection, spinor, and adjoint spinor as independent variables. The variation then gives
In Appendix D, we also show that whenever the curved space gamma matrices are held fixed, the vierbein is also fixed. The above equation then implies
We can compute the variation of the covariant derivative as
Forcing x to vanish, we obtain the equations of motion for the gravitational field
And the equations of motion for the spinor field
We may add extra terms to the external action by using (5) and (11) to rewrite the metric and affine connections in terms of Dirac’s variables.
Appendices
Appendix A: The Closed Form Action
Here we derive the closed form of the Einstein-Hilbert Action in terms of the gamma matrices and spinor connection alone. The scalar density can be written solely as a function of these variables using (4). All that is left is re-expressing the Ricci Scalar.
1. Spinor Curvature and Torsion
The covariant derivative of spinors is of the form
We want to calculate the commutator of covariant derivatives. This is analogous to how the Riemann curvature tensor is obtained. For the first term, we have
Expanding the covariant derivative,
Similarly,
Expanding the covariant derivative,
Subtracting, we obtain the commutator
where
2. Two-Spinors
We can create a two-spinor by contracting onto the gamma matrices. Choose some arbitrary four vector and let
This can be inverted to
We can follow suit with the previous section and take the commutator on this object. Using (6), we have for the first term:
Similarly,
Subtracting these, we arrive at the commutator
3. The Riemann Tensor in terms of the Gamma Matrices and Spinor Connection
We can use the result for two-spinors to help calculate the commutator on the gamma matrices. From (6),
So that the first term in the anti-commutator gives
We can expand out the covariant derivative:
so that the Riemann Tensor,
the Ricci Tensor,
and the Ricci Scalar
can all be written in terms of the gamma matrices and spinor connection alone.
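The lost displays here are presumably the standard contractions; as a hedged reconstruction,

\[
R_{\mu\nu} = R^\rho{}_{\mu\rho\nu}, \qquad R = g^{\mu\nu} R_{\mu\nu},
\]

with the Riemann tensor \(R^\rho{}_{\sigma\mu\nu}\) read off from the commutator of covariant derivatives computed above.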
Appendix B: The Variation of the Metric
The metric is related to the gamma matrices by the anti-commutation relation
This can be inverted to
Taking the commutator and using (B13) we find
We can use this to take the variation of the metric:
The gamma matrices are covariantly constant, so the entire L.H.S. and the middle term on the R.H.S. vanish. We are left with
Rearranging,
On the L.H.S. of the above equation, we may take the anti-commutator with
The anti-commutator on the R.H.S. of (A19) gives
Use the fact that tr(AB) = tr(BA) so that
The trace also commutes with the variation:
Working out the parentheses,
We can use this in (B5) so that
By the definition of the trace,
Giving the coefficient
Appendix C: Solving for the Spinor Connection
Begin with differential equation (38)
Assume there exists a solution
We would then have
From this, we use the ansatz
where the coefficient is to be determined. Inserting equation (C4) in (C1), we are left with the condition
So that
The solution to which is given by
for an arbitrary four vector. So in total,
Rewriting the gamma matrices using the vierbein, one can show that an object of the form (C3) exists and satisfies (C2) so that the above expression is valid.
Appendix D: Some comments on the Variation w.r.t. the Spinor Connection
1. Fundamental Variables
Given the vierbein, the choice of metric for the curved space is completely unambiguous. One simply takes the Minkowski metric and constructs it by projecting up,
Now, given the vierbein, it would seem ambiguous what to choose for the curved space gamma matrices. However, this ambiguity does not exist. In flat space, in order to solve the Dirac equation, one makes a choice of gamma matrices to use. From these, we can then construct the curved space gamma matrices by projecting up:
The tetrad is the only variable.
2. A relation between the Gamma Matrices and Vierbein
We can now use the argument of the preceding section. The claim is that whenever the curved space gamma matrices are fixed, the vierbein must also be fixed. Begin by taking the variation of (D2),
If the curved space gamma matrices are held fixed then the L.H.S vanishes leaving
Now, since the flat space gamma matrices are not variables, we get
Take the anti-commutator with
We may take the trace over the spinor indices to eliminate the identity matrix. Finally, we can contract with the Minkowski metric to obtain
which is the desired result. From (D3) we also see that whenever the vierbein is fixed, the curved space gamma matrices are also fixed. Finally, we note that the vierbein is completely determined by the gamma matrices as
3. The Vector Potential
We may apply a similar argument to the vector potential. Recall that the connection for spinors is given by (C8),
where
Any choice of four vector could be used in (D9) to satisfy equation (C1). It would therefore seem ambiguous how to determine the nature of the four potential. This ambiguity, however, does not exist. Given a particular choice of Dirac spinor, there is a fixed, predetermined four potential with which it is associated. It is precisely the four potential contained in the interaction term of its Lagrangian. From this, we can then construct the covariant derivative. The variation of this object is therefore vanishing; it is not a variable.
References
[1] E. Schrödinger, Sitz. Preuss. Akad. Wiss. Berlin (1932).
[2] A. Einstein, Sitz. Preuss. Akad. Wiss. Berlin 217 (1928).
[3] P.A.M. Dirac, Proc. R. Soc. Lond. A 126 (1928).
On the Structure of Field Theories II
BY EVAN CRAFT '20
Cover Image: ATLAS Experiment. Source: CERN
Abstract
Given any action functional, we may construct a path integral. Many theories, however, have actions that are not unique. In those cases, the functional of the fields may be rewritten in terms of other variables yet yield the exact same equations of motion. This symmetry of the action leads to a corresponding symmetry of the path integral. We discuss these ideas in the context of gravitation.
Introduction
The basis of any field theory is the principle of least action. We begin with some functional S describing our system, and we impose the condition that its variation vanish, rather
If the action is a functional of n independent fields, this gives the equations of motion
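In standard variational notation (a reconstruction; the original display was lost), the vanishing variation yields one Euler-Lagrange condition per field:

\[
\frac{\delta S}{\delta \varphi_i} = 0, \qquad i = 1, \ldots, n.
\]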
for all n. Oftentimes, the functional of the fields can be rewritten in terms of different variables. Take for instance gravitation. It is known that Palatini's treatment of the metric and affine connection as independent yields Einstein's theory. In a recent paper, we've shown that Dirac's variables will also do the trick [1]. This ambiguity of the action allows us to rewrite the path integral. For a field theory, we define the latter functional to be
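The definition that presumably appeared here is the Feynman path integral; a hedged reconstruction in natural units is:

\[
Z = \int \mathcal{D}\varphi_1 \cdots \mathcal{D}\varphi_n \; e^{\,i S[\varphi_1, \ldots, \varphi_n]},
\]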
where we are integrating over all possible field configurations [2]. In the following, we consider this expression with respect to the various formulations of gravity.
Single Variable Theories
A. Hilbert
Hilbert's original formulation treated the action solely as a functional of the metric. From this, we may construct the path integral
where the integration is over all possible metric configurations.
B. Tetrad
The existence of a vierbein would allow us to project curved space vectors down to flat space (and vice versa). This leads to the relationship
between the corresponding metrics. We can use this to reformulate our functional as
since the Minkowski metric is fixed. In this case, we are integrating over all possible vierbein configurations.
C. Spinor
Dirac's theory proposes the existence of gamma matrices such that
These are used to determine the dynamics of spinors in his namesake equation
where the covariant derivative is defined by
The relationship (7) may be inverted to
And hence the path integral can be transformed into
Multivariable Extension
A. Palatini
In the Palatini Formulation, the metric and affine connection are treated independently. From this, we can construct
where we are integrating over field configurations for both the connection and metric.
B. Tetradic Palatini
For a given affine connection of curved space, there is a corresponding connection in flat space termed the spin connection. This object satisfies
where the Greek indices refer to curved space and the Latin correspond to flat. It is known that the action, written in terms of this new connection and the vierbein, produces Einstein's equations in vacuum. From this, we construct the path integral
where we are integrating over field configurations for both the connection and vierbein.
C. Spinor
As we've shown in [1], the Palatini action may be reformulated solely in terms of Dirac's variables. This leads to a new path integral given by
where we are integrating over field configurations for both the gamma matrices and spinor connection.
References
[1] E. Craft, "On the Structure of Field Theories I", Dartmouth Undergrad J. Sci., 20X (2020).
[2] R. P. Feynman, Rev. Mod. Phys. 20, 367 (1948).
The Modernization of Anesthetics
BY GIL ASSI '22
Cover Image: Robert Liston performing an amputation at University College London. Source: Wikimedia Commons
Introduction
In the 19th century, London's University College Hospital became known for its agonizing medical procedures. On many occasions, hundreds of men and women gathered at the operating theater to observe a surgery. The surgeon was typically accompanied by his medical assistants, who were not yet regarded as nurses. Operating theaters were filled with a mixture of medical students and miscellaneous people with no real medical credentials. Despite the crowds these spectacles drew, surgery was used only as a last resort. Out of fear, doctors restricted themselves to external and superficial skin wounds. Internal procedures, although often necessary, were very dangerous and risky (Fitzharris, 2018). This was mostly because there was no concept of anesthesia, so surgery was extremely painful. There was also an extremely high rate of postoperative death due to infection. Robert Liston, commonly known as the "fastest knife in the West End,"
was a Scottish surgeon who built his reputation on his speed and agility when amputating his patients. Speed was a deciding factor for many patients, who preferred surgeons able to perform quick, albeit painful, operations (Wright et al., 2014). One afternoon in 1846, Liston prepared for a mid-thigh operation in the College's renowned hospital. As spectators rushed in to find their seats, Liston readied himself to perform his very first surgery using a primitive form of anesthesia. Boston dentist William T.G. Morton had recently discovered that ether, vaporized into a gaseous compound, left patients unconscious and numb (Chang et al., 2015). Liston was able to amputate his patient in under thirty seconds (Magee, 2000). The ether prevented the patient from fighting and becoming agitated on the surgery table, and as he woke up, he and the crowd were surprised to find
Figure 1: This diagram depicts a presynaptic and a postsynaptic neuron. GABA released into the synaptic cleft will bind to GABAA receptor on the postsynaptic neuron, which results in a reduction of neuronal excitation. Source: Wikimedia Commons
that the surgery had been completed without any pain. Thus, 1846 marked the birth of a new science in the medical field: anesthesiology.
The Different Types of Anesthesia
It has now been nearly two centuries since Robert Liston conducted that famed operation. What does anesthesia look like today? Surgeons are now at liberty to choose among four different types of anesthesia: general anesthesia, regional anesthesia, local anesthesia, and sedation. Depending on the situation, patients are allowed to choose the anesthetic they prefer. The most common anesthetic known to any person is general anesthesia. Through medication, doctors render their patients unconscious and unaware of their surroundings. Some general anesthetics are gases (like the ether used by Liston) that can be administered through a tube or mask. Others are given as a liquid through an IV to induce sleep and treat pain. Regional anesthesia, on the other hand, is precise. It targets a specific area of the body and numbs it to prevent pain. One great example is nerve blocks; femoral nerve blocks numb just the thigh and knee (UCLA Health, Los Angeles, CA, n.d.). In eye surgery cases, doctors sometimes use sedation, which involves a medication that induces drowsiness and relaxation. If the patient needs
to be awake to follow instructions from the surgeon, a moderate sedation may be used, in which the patient may doze off but awakens easily. Medications such as lidocaine that are injected through a needle or applied as a cream to an area are known as local anesthetics. Many doctors use local anesthetics in combination with sedation during minor outpatient surgery (UCLA Health, Los Angeles, CA, n.d.). Although anesthetics are prevalent today and used in every hospital, the general population, and sometimes even doctors, still lack a general understanding of their molecular basis.
Molecular Pathway
In recent years, researchers have used model organisms to find the targets of general anesthetics. These studies have shown that different types of general anesthetics act through distinct mechanisms. For example, one study classified general anesthetics by their relative potencies and effects on EEG, ultimately shedding light on certain molecular targets associated with loss of consciousness.
Group 1
The first class of anesthetics consists of
Figure 2: This diagram shows a GABA receptor with its subunits and where various ligands bind. α and β subunits determine anesthetic sensitivity for group 3—sensitive to volatile anesthetics. Source: Wikimedia Commons
intravenous drugs such as etomidate, propofol, and barbiturates, which tend to induce a state of unconsciousness rather than immobilization. A subset of γ-aminobutyric acid type A (GABAA) receptors mediates the loss of righting reflexes (LORR) and immobility produced by these drugs. GABAA receptors, located both postsynaptically and extrasynaptically, can trigger a reduction in neuronal excitation when activated. These anesthetics enhance GABA-mediated channel activation and prolong inhibitory postsynaptic currents (IPSCs), ultimately suppressing neuronal excitability. This was discovered when researchers performed tests on rats showing that intravenous drugs induced LORR through GABAA receptors containing β3 subunits, whereas sedation utilized GABAA receptors containing β2 subunits (Forman and Chin, 2008).
Group 2
The second group includes the inhaled anesthetics (nitrous oxide, xenon, and cyclopropane) and ketamine, an intravenous drug. This group has the lowest potency of the three in its ability to render patients unconscious and immobilized. At the molecular level, these drugs have little to no effect on GABAA receptors (Raines et al., 2001). However, this class of anesthetics may act primarily by inhibiting N-methyl-D-aspartate (NMDA) receptors, cation channels activated by glutamate (Jevtović-Todorović et al., 1998).
For example, in the presence of xenon, NMDA receptor-mediated excitatory postsynaptic currents are inhibited, and reduced excitatory signaling in neuronal circuits has been shown to cause unconsciousness (Forman and Chin, 2008).
Group 3
The volatile halogenated anesthetics—halothane, enflurane, isoflurane, sevoflurane, and desflurane—are drugs used to induce amnesia, unconsciousness, and immobility in a more predictable way. Compared to the other groups, the volatile halogenated anesthetics lack significant selectivity with regard to their target molecules (Campagna et al., 2003). They have been shown to enhance the function of inhibitory GABAA receptors, which suggests that they produce unconsciousness via different GABAA receptor subunits—α and β subunits determine anesthetic sensitivity for this group (Forman and Chin, 2008). These subunits are different from those targeted by group 1 drugs. TREK-1 is an anesthetic-sensitive K+ channel that plays a role in setting the resting membrane potential of neurons (Franks & Honoré, 2004). It can be activated by volatile anesthetics (Patel et al., 1999). Recent studies showed that TREK-1 knockout mice required an increased dosage of volatile anesthetic to trigger LORR and immobility. This suggests that a wild-type
TREK-1 channel is sensitive to these drugs (Linden et al., 2007). Furthermore, other ion channels have been discovered to be sensitive to volatile anesthetics, including serotonin type 3 receptors (Stevens et al., 2005), Na+ channels (Roch et al., 2006), mitochondrial ATP-sensitive K+ channels (Turner et al., 2005), neuronal nicotinic acetylcholine receptors (Flood & Role, 1998), and cyclic nucleotide-gated HCN channels (Chen et al., 2005). A relationship between these channels and unconsciousness has yet to be established.
A Proposed Biological Pathway of General Anesthetics
At St. John's University, Professor Mahmud Arif Pavel's lab found that the chemical properties of anesthetics target lipid rafts found in the cell plasma membrane. The lab discovered that anesthetics such as chloroform and diethyl ether target GM1 rafts and activate ion channels in a two-step mechanism. As Professor Pavel described, "for 100 years, anesthetics were speculated to target cellular membranes, yet no plausible mechanism emerged to explain a membrane effect on ion channels" (Pavel et al., 2020). Knowing that inhaled anesthetics are hydrophobic molecules that can activate TREK-1 channels and ultimately induce loss of consciousness, his lab demonstrated that the inhaled anesthetics chloroform and isoflurane activate TREK-1 channels by disrupting phospholipase D2 (PLD2) localization to lipid rafts, which subsequently produces signaling phosphatidic acid (PA) (Pavel et al., 2020). The lab elucidated a mechanism in which PLD2 activates TREK-1 by binding to a disordered C terminus, producing a level of PA high enough to activate the channel. The researchers therefore tested anesthetic sensitivity by blocking PLD2's activity, using an inactive mutant of PLD2 (xPLD2). Since chloroform failed to activate TREK-1 in the presence of xPLD2, they concluded that PLD2 is a necessary enzyme in the biological pathway for the activation of TREK-1 by general anesthetics (Pavel et al., 2020). However, more research will be necessary to uncover the other factors involved in this specific mechanism.
Side Effects of Anesthesia
While modern anesthesia is safe, it can cause side effects during and after the medical
procedure. Although most of these side effects, such as a sore throat, rash, or hypothermia, are minor and temporary, some serious effects should be highlighted. In some surgeries where general anesthesia is used, an individual can exhibit confusion, long-term memory loss, and subsequent learning problems. Known as postoperative cognitive dysfunction, this is more common in older people who have preconditions like heart disease, Alzheimer's, or Parkinson's. In other instances, patients can suffer from pneumothorax as a result of sedation; when anesthesia is injected through a needle near the lungs, the needle may accidentally puncture a lung, causing it to collapse. A chest tube will be required to re-inflate the lung. Of all the different types of anesthetics, local anesthesia is the least likely to cause side effects ("When Seconds Count," n.d.). However, it too has minor side effects such as itchiness or soreness.
The Future of Anesthesia
Sonya Pease, an anesthesiologist and the chief medical officer of TeamHealth Anesthesia, has discussed the intricacies of current-day anesthesiology and the potential of the field in the future. One of the major issues she raised was the possibility of achieving zero defects, that is, zero anesthesia-related harm. According to Dr. Pease, a future with "zero harm" anesthesia will require anesthesiologists to consider redesigning healthcare delivery methodologies as well as improving patient-specific anesthesiology (Pease, 2020).
Fortunately, there have been many predictions as to how the future may improve patient care. One of the most popular is the application of artificial intelligence. For instance, it has been hypothesized that patients will have 24/7 access to virtual nurse avatars who will assist them with home medication and guide them toward healthier lifestyle choices after surgery. This will allow doctors to continue tracking patients' care and health after a medical procedure (Pease, 2020). To address the potential loss of memory and other severe side effects, preoperative testing will be developed in virtual clinics in combination with a "cognitive assistant" that will repeatedly run algorithms to determine the clearest and safest procedural pathway prior to the actual procedure. Additionally, there will be ingestible sensors paired with drug-devices that will track and monitor the effectiveness of anesthetics as they are
injected in patients. This will allow doctors to prevent specific complications during the procedures and track the patients’ health in the days that follow (Pease, 2020).
Anesthetic Agents in Development
Novel drug development is costly and risky. Only one tenth of drugs in the early stages of development will later be approved by the Food and Drug Administration (Hay et al., 2014). There have been instances when approved drugs were withdrawn from the market because of unanticipated limitations and side effects. In recent years, drug innovation has focused on modifying the chemical structures of existing drugs to improve their pharmacodynamics, pharmacokinetics, and side-effect profiles (Mahmoud and Mason, 2018).
Remimazolam
Remimazolam is a new ester-based anesthetic agent that combines the properties of midazolam (a sedative) and remifentanil, two already established anesthetic drugs. Remimazolam acts on GABA receptors and exhibits pharmacokinetic properties similar to remifentanil. In animal studies, remimazolam induced a quicker onset and faster recovery than midazolam (Upton et al., 2010). Initially, remimazolam was developed for procedural sedation, but more studies have begun to focus on utilizing the agent for the induction and maintenance of general anesthesia. A recent study examined the properties of inhaled remimazolam alone and as an adjunct to remifentanil in rodents. The results showed that remimazolam significantly potentiated the analgesic effect of remifentanil without lung irritation, bronchospasm, or other pulmonary complications (Bevans et al., 2017).
ADV6209
A novel generation of oral midazolam has been formulated by combining sucralose, orange aroma, and γ-cyclodextrin with a citric acid solution of midazolam (Marçon et al., 2009). Initial research has indicated that this drug can offer anxiolysis and sedation with improved patient acceptance and tolerance (Mahmoud and Mason, 2018). At present, substantial evidence shows that the formulation improves the shelf-life of oral midazolam (Mathiron et al., 2013).
Conclusion
Liston's surgery at University College London
in 1846 marked the birth of anesthesiology. Long before that era, clinicians had desired immobile patients to improve the rate of procedural success. However, unwanted consequences and side effects have emerged, prompting the anesthesiology community to invest in further research into the molecular pathways and mechanisms of general anesthesia. Novel drugs such as remimazolam, ADV6209, and others are currently in the research and animal-testing phase. As novel anesthetics are studied and uncovered, a future with zero defects in anesthesiology-related procedures may be within reach.
References
Bevans, T., Deering-Rice, C., Stockmann, C., Rower, J., Sakata, D., & Reilly, C. (2017). Inhaled remimazolam potentiates inhaled remifentanil in rodents. Anesthesia and Analgesia, 124(5), 1484–1490. https://doi.org/10.1213/ANE.0000000000002022
Campagna, J. A., Miller, K. W., & Forman, S. A. (2003). Mechanisms of actions of inhaled anesthetics. The New England Journal of Medicine, 348(21), 2110–2124. https://doi.org/10.1056/NEJMra021261
Chang, C. Y., Goldstein, E., Agarwal, N., & Swan, K. G. (2015). Ether in the developing world: Rethinking an abandoned agent. BMC Anesthesiology, 15. https://doi.org/10.1186/s12871-015-0128-3
Chen, X., Sirois, J. E., Lei, Q., Talley, E. M., Lynch, C., & Bayliss, D. A. (2005). HCN subunit-specific and cAMP-modulated effects of anesthetics on neuronal pacemaker currents. The Journal of Neuroscience, 25(24), 5803–5814. https://doi.org/10.1523/JNEUROSCI.1153-05.2005
Effects of anesthesia on brain & body—When seconds count. (n.d.). When Seconds Count | Anesthesia, Pain Management & Surgery. Retrieved August 16, 2020, from https://www.asahq.org/whensecondscount/anesthesia-101/effects-of-anesthesia/
Fitzharris, L. (2018). Prologue: The age of agony. In The Butchering Art: Joseph Lister's Quest to Transform the Grisly World of Victorian Medicine. Penguin Books.
Flood, P., & Role, L. W. (1998). Neuronal nicotinic acetylcholine receptor modulation by general anesthetics. Toxicology Letters, 100–101, 149–153. https://doi.org/10.1016/s0378-4274(98)00179-9
Forman, S. A., & Chin, V. A. (2008). General anesthetics and molecular mechanisms of unconsciousness. International Anesthesiology Clinics, 46(3), 43–53. https://doi.org/10.1097/AIA.0b013e3181755da5
Franks, N. P., & Honoré, E. (2004). The TREK K2P channels and their role in general anaesthesia and neuroprotection. Trends in Pharmacological Sciences, 25(11), 601–608. https://doi.org/10.1016/j.tips.2004.09.003
General anesthesia—Sedation—UCLA Anesthesiology & Perioperative Medicine—UCLA Health, Los Angeles, CA. (n.d.). Retrieved August 16, 2020, from https://www.uclahealth.org/anes/types-of-anesthesia
Hay, M., Thomas, D. W., Craighead, J. L., Economides, C., & Rosenthal, J. (2014). Clinical development success rates for investigational drugs. Nature Biotechnology, 32(1), 40–51. https://doi.org/10.1038/nbt.2786
Jevtović-Todorović, V., Todorović, S. M., Mennerick, S., Powell, S., Dikranian, K., Benshoff, N., Zorumski, C. F., & Olney, J. W. (1998). Nitrous oxide (laughing gas) is an NMDA antagonist, neuroprotectant and neurotoxin. Nature Medicine, 4(4), 460–463. https://doi.org/10.1038/nm0498-460
Linden, A.-M., Sandu, C., Aller, M. I., Vekovischeva, O. Y., Rosenberg, P. H., Wisden, W., & Korpi, E. R. (2007). TASK-3 knockout mice exhibit exaggerated nocturnal activity, impairments in cognitive functions, and reduced sensitivity to inhalation anesthetics. The Journal of Pharmacology and Experimental Therapeutics, 323(3), 924–934. https://doi.org/10.1124/jpet.107.129544
Magee, R. (2000). Surgery in the pre-anaesthetic era: The life and work of Robert Liston. Health and History, 2(1), 121–133. https://doi.org/10.2307/40111377
Mahmoud, M., & Mason, K. P. (2018). Recent advances in intravenous anesthesia and anesthetics. F1000Research, 7. https://doi.org/10.12688/f1000research.13357.1
Marçon, F., Mathiron, D., Pilard, S., Lemaire-Hurtel, A.-S., Dubaele, J.-M., & Djedaini-Pilard, F. (2009). Development and formulation of a 0.2% oral solution of midazolam containing gamma-cyclodextrin. International Journal of Pharmaceutics, 379(2), 244–250. https://doi.org/10.1016/j.ijpharm.2009.05.029
Mathiron, D., Marçon, F., Dubaele, J.-M., Cailleu, D., Pilard, S., & Djedaïni-Pilard, F. (2013). Benefits of methylated cyclodextrins in the development of midazolam pharmaceutical formulations. Journal of Pharmaceutical Sciences, 102(7), 2102–2111. https://doi.org/10.1002/jps.23558
Patel, A. J., Honoré, E., Lesage, F., Fink, M., Romey, G., & Lazdunski, M. (1999). Inhalational anesthetics activate two-pore-domain background K+ channels. Nature Neuroscience, 2(5), 422–426. https://doi.org/10.1038/8084
Pavel, M. A., Petersen, E. N., Wang, H., Lerner, R. A., & Hansen, S. B. (2020). Studies on the mechanism of general anesthesia. Proceedings of the National Academy of Sciences, 117(24), 13757–13766. https://doi.org/10.1073/pnas.2004259117
Pease, S. (2020). Future of anesthesiology: Anesthesia industry predictions for 2028. Retrieved August 16, 2020, from https://www.teamhealth.com/blog/anesthesiology-2028-bd/?r=1
Raines, D. E., Claycomb, R. J., Scheller, M., & Forman, S. A. (2001). Nonhalogenated alkane anesthetics fail to potentiate agonist actions on two ligand-gated ion channels. Anesthesiology, 95(2), 470–477. https://doi.org/10.1097/00000542-200108000-00032
Roch, A., Shlyonsky, V., Goolaerts, A., Mies, F., & Sariban-Sohraby, S. (2006). Halothane directly modifies Na+ and K+ channel activities in cultured human alveolar epithelial cells. Molecular Pharmacology, 69(5), 1755–1762. https://doi.org/10.1124/mol.105.021485
Son, Y. (2010). Molecular mechanisms of general anesthesia. Korean Journal of Anesthesiology, 59(1), 3–8. https://doi.org/10.4097/kjae.2010.59.1.3
Stevens, R. J. N., Rüsch, D., Davies, P. A., & Raines, D. E. (2005). Molecular properties important for inhaled anesthetic action on human 5-HT3A receptors. Anesthesia and Analgesia, 100(6), 1696–1703. https://doi.org/10.1213/01.ANE.0000151720.36988.09
Turner, L. A., Fujimoto, K., Suzuki, A., Stadnicka, A., Bosnjak, Z. J., & Kwok, W.-M. (2005). The interaction of isoflurane and protein kinase C-activators on sarcolemmal KATP channels. Anesthesia and Analgesia, 100(6), 1680–1686. https://doi.org/10.1213/01.ANE.0000152187.17759.F6
Upton, R. N., Somogyi, A. A., Martinez, A. M., Colvill, J., & Grant, C. (2010). Pharmacokinetics and pharmacodynamics of the short-acting sedative CNS 7056 in sheep. British Journal of Anaesthesia, 105(6), 798–809. https://doi.org/10.1093/bja/aeq260
Weir, C. J. (2006). The molecular mechanisms of general anaesthesia: Dissecting the GABAA receptor. Continuing Education in Anaesthesia Critical Care & Pain, 6(2), 49–53. https://doi.org/10.1093/bjaceaccp/mki068
Wright, A. S., & Maxwell, P. J. (2014). Robert Liston, M.D. (October 28, 1794–December 7, 1847): The fastest knife in the West End. The American Surgeon, 80(1), 1–2.
SUMMER 2020
143
Tripping on a Psychedelic Revolution: A Historical and Scientific Overview, with Dr. Rick Strassman and Ken Babbs
BY JULIA ROBITAILLE '23
Cover Image: Psilocybin mushrooms have been used as a psychedelic for centuries and grow naturally in many parts of the world. Source: Wikimedia Commons [Credit: Thomas Angus, Imperial College London]
Overview
Lysergic acid diethylamide, known more commonly as "LSD," was first synthesized by Swiss chemist Albert Hofmann in 1943. It was initially used as an investigational drug for clinical research, but by the 1960s it was widely used among Americans as a recreational drug. LSD's mainstream use and connection to various drug-fueled social movements led to its eventual criminalization in 1968. With this, the field of psychedelic research was brought to a sudden halt. The U.S. Drug Enforcement Administration (DEA) claimed that the ban on LSD was prompted by medical reasons, noting the dangers of psychedelic drugs. But the research to support these claims was, and still is, scarce and poorly supported. Social and political motives played a major role in the decision to criminalize psychedelics. Through the expertise of two psychedelic pioneers, this paper explores the effects of LSD on the brain and its validity within a clinical setting.
I first interviewed Dr. Rick Strassman, a leading researcher in the field of psychedelics and their use in psychotherapy and integrative medicine. Dr. Strassman has worked as a psychiatrist for many years and has received numerous research grants. He has published over forty-one peer-reviewed articles and written four books. I also interviewed Ken Babbs, a central figure in the psychedelic revolution that began in the late 1950s. He is a certified Merry Prankster and novelist - he was on the original famed cross-country bus trip with Ken Kesey (author of One Flew Over the Cuckoo's Nest, 1962) that became a hallmark of the psychedelic era. Both Babbs and Strassman were kind enough to talk to me and provide their expert insight on the risks and benefits of psychedelics.
Figure 1: A storefront window displaying a poster that reads, “Hippies use backdoor.” Source: Wikimedia Commons [Runran, Public Domain]
History of Psychedelics in America
Following its discovery, the clinical effects of LSD were thoroughly researched. LSD was utilized in psychiatry as a psychotomimetic - a drug that mimics disordered psychiatric states by inducing psychotic symptoms. Researchers administered LSD to healthy participants to study their delusions and hallucinations, both of which were present in patients with psychosis and schizophrenia. LSD was also used as an investigational drug in clinical settings, as it was piloted for the treatment of depression (Novak, 1997). Within ten years of LSD's synthesis, more than thirty scientific publications had appeared on its clinical effects (Nichols, 2013). In 1956, Dr. Sidney Cohen - who had personally taken LSD - began to implement psychedelics in psychotherapy to treat depression and alcohol dependence (DiPaolo, 2018). LSD was even informally endorsed by the founders of Alcoholics Anonymous for its potential in combating addiction. During this time, the rhetoric surrounding psychedelics shifted significantly: LSD went from being an experimental psychotomimetic to something with tangible therapeutic potential (Novak, 1997). This inevitably led to wider recreational use of the drug. By 1960, the roots of counterculture movements were taking hold in America, and the young adults and liberal intellectuals who would eventually become known as "hippies" began to use psychedelics in creative endeavors.
It was around this time that Ken Babbs met Ken Kesey - and was first introduced to LSD. Both Babbs and Kesey were students in the Stanford Graduate Writing Program and met at a cocktail party held by a professor before the start of the term. "We hit it off right away. We had the same kind of temperaments. We were both imaginative, and both outgoing, and both athletic. We just really meshed," Babbs says. In 1961, Ken Kesey was working in the Menlo Park VA Hospital, where psychedelic drug studies were being conducted. Kesey willingly volunteered in the experiments, in which researchers would administer different drugs to him and observe his symptoms. "Sometimes it would be a placebo, and then other times it would be different. Then, there was one drug that was just really a knockout," Ken Babbs says. That drug was LSD. There were no laws against psychedelic drugs at the time, so Kesey sought to obtain some LSD of his own. "He went into the office where that guy was running the show at night, when nobody was there. And he opened a drawer, and found a bottle from Sandoz lab, from Switzerland - pure LSD tabs," explains Ken Babbs. Kesey took the LSD and brought it back to his house on Perry Lane in Palo Alto, where Ken Babbs had his first dose of the drug. "That's where people started drinking wine and playing banjos and getting high on LSD."
Figure 2: Dr. Robin Carhart-Harris presents at the Centre for Psychedelic Research. Dr. Carhart-Harris' Entropic Brain Model explains how psychedelics may allow one to experience multiple combinations of cognitive, emotive, and perceptive functions. Source: Wikimedia Commons [Thomas Angus, Imperial College London]
" ' When LSD hit the West Coast, it was like a big tsunami came roaring in and everybody was high'..."
146
Meanwhile, on the East Coast, two psychology professors at Harvard were using LSD for psychological studies on graduate students. The professors coordinating these unconventional experiments (the famed Timothy Leary and Richard Alpert) were fired once the administration caught wind of the studies. But Leary and Alpert soon became leaders of the new psychedelic movement, just as LSD was gaining popularity among hippies, Bhikkhus, and gypsies. Psychedelics were flowing through these groups in subcurrents and tributaries, waiting to permeate the river of mainstream American consciousness. "When LSD hit the West Coast, it was like a big tsunami came roaring in and everybody was high," says Ken Babbs. Recreational use of LSD grew as psychedelics percolated through the wider American population. Famous artists began using the drug to boost creativity. Hippies took the drug for an intimate spiritual experience often described as more meaningful than earthly existence. LSD soon became a major player in American counterculture movements. The Sixties Psychedelic Revolution, led primarily by free thinkers, liberals, and "leftists," both cascaded into and coincided with the Civil Rights, Peace, and Hippie Movements that followed. However, the increased use of LSD also brought about its increased misuse. The dangers of black-market distribution and unsupervised use led to negative experiences with the drug, which threatened to overshadow LSD's therapeutic and positive uses. Abusers of the drug and others with bad experiences helped perpetuate
false myths about LSD - such as its ability to "fry" one's brain or make someone permanently insane. In addition, Dr. Sidney Cohen, the same scientist who had presented the world with a beneficial view of psychedelics, turned his back on LSD in 1962. He publicly stated his wariness of LSD's potential for abuse. He worried about the impurities that arose in street drugs and proposed that the unsupervised use of LSD was too dangerous. This claim led to regulations passed in order to "crack down" on the use of psychedelics in America (Novak, 1997).
The Criminalization of LSD
In 1968, as part of Nixon's 'War on Drugs,' the U.S. Drug Enforcement Administration banned LSD in the United States. It is not clear, however, whether the drug's criminalization was prompted by genuine medical concerns or by socioeconomic and political motives (Novak, 1997); it is often speculated that these laws were enacted not for medical safety but for political and social reasons. "It was thrown in with the rest of those drugs - like heroin and methedrine, so it was kind of demonized for a while," Ken Babbs says. It is commonly believed that the criminalization of LSD and "The War on Drugs" were part of an effort to marginalize and discriminate against the Hippies in order to quell the anti-war movement. "The War on Drugs was lost before it began,
because it was all based on a lie," says Mumia Abu-Jamal in Turning the Tide. John Ehrlichman, a Nixon White House aide, opened up about the 'War On Drugs' in a 1994 interview with Harper's Magazine, admitting that the so-called 'War' was actually an effort to "attack" the enemies of the Nixon campaign (Abu-Jamal, 2016). The media portrayed the Hippies as dirty, classless, barefooted drug addicts on the fringes of society, posing a threat to the morals of an upstanding country. Discrimination was evident in billboards which displayed "Keep America Clean: Take a Bath," "Hippies not served here," and "Get a Haircut." In 1968, police sweeps enacted violence and brutality against Hippies in San Francisco streets (DiPaolo, 2018). "It wasn't about drugs. It never was. It was about Politics," writes Abu-Jamal.
Psychedelics and the Brain
The field of psychedelic research was significantly stunted in the 1960s when psychedelics were criminalized. Researchers have since developed multiple theories on how these drugs act on the brain as a physical system, but there is more work to be done. In 1953, Aldous Huxley proposed that psychedelics act on the brain's "cerebral reducing valve," a function which he defined as the brain's method of filtering sensory information from everyday experiences. According to Huxley, psychedelics reduce the efficacy of this valve, allowing the brain to experience multiple processes - such as emotion, cognition, and perception - all at once. Huxley believed that young children, whose "cerebral reducing valves" were not yet fully solidified, could experience this phenomenon to an extent (Swanson, 2018). This is consistent with users' accounts, which report the return to a child-like sense of wonder and delight. Recent theories on psychedelic effects focus on the brain as a dynamic and physical system. Dr. Robin Carhart-Harris postulates that psychedelics produce effects on the brain at three different levels. The first is the brain receptor level. Most psychedelics work as serotonin 2A receptor agonists, meaning they bind to and activate a response in these receptors (Carhart-Harris, 2019). Usually this binding causes the depolarization of a neuron, increasing its likelihood of firing. The excitability of a single neuron, however, does not correlate
with the general excitability of the whole brain. In fact, functional magnetic resonance imaging studies have found that the frequency and amplitude of brain waves under the influence of psychedelic drugs are lower than they would be during resting states. This phenomenon can be explained by the possible excitement of inhibitory neurons. Serotonin 2A receptors are not the only receptors involved in the effects of psychedelics. Some drugs produce downstream effects that eventually trigger other neurotransmitters, such as glutamate and dopamine, which cause differing symptoms and experiences (Swanson, 2018). The second level at which Dr. Carhart-Harris believes psychedelics work is the functional level. Psychedelics increase the brain's plasticity, or its ability to change and adapt in accordance with experience (Carhart-Harris, 2019). A 2017 paper published by Nichols et al. states that psychedelics increase the expression of genes that affect the synaptic plasticity of the brain. This triggers a cascade of actions, resulting in the activation of serotonin 2A receptors in the brain and the onset of what is referred to as the psychedelic experience (Nichols et al., 2017). Psychedelics also work on the dynamic level, which resembles Huxley's Cerebral Reducing Valve theory. This theory suggests that psychedelics produce effects when normal brain entropy is disturbed. In the Entropic Brain Theory (EBT), developed by Carhart-Harris, the brain normally functions in a state of suppressed entropy (or disorder), in which modes of perception and cognition are constrained (Carhart-Harris, 2019). This normal state of suppressed entropy allows for optimal focus and survival - making it evolutionarily favorable. According to EBT, psychedelics interfere with these information-suppressing systems, increasing the brain's entropy. This allows for temporarily expanded combinations of cognitive, perceptive, and emotive functions which would not normally occur (Swanson, 2018). These combinations result in the hallucinations and intense sensory experiences that make up a psychedelic "trip."
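EBT's notion of entropy can be made concrete with Shannon's formula, H = -Σ p·log2(p), applied to a probability distribution over brain states. The sketch below is purely illustrative - the two distributions are invented, not measured - but it shows how spreading probability more evenly across states yields higher entropy.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p * log2 p), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # zero-probability states contribute nothing
    return -np.sum(p * np.log2(p))

# Invented state distributions: a constrained waking brain concentrates
# probability on a few habitual states; a "relaxed," higher-entropy brain
# spreads probability more evenly across its repertoire.
constrained = [0.70, 0.15, 0.10, 0.04, 0.01]
relaxed = [0.30, 0.25, 0.20, 0.15, 0.10]

print(f"constrained: {shannon_entropy(constrained):.2f} bits")  # ~1.36
print(f"relaxed:     {shannon_entropy(relaxed):.2f} bits")      # ~2.23
```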
Ultimately, psychedelics are thought to set off a cascade of downstream effects that eventually lead to an experience often described as a heightened sense of relaxation (Carhart-Harris, 2019). Although some liken this to a mystical experience, Dr. Rick Strassman envisions psychedelics as “super placebos.”
Figure 3: The striking similarities in the molecular structures of LSD (A) and serotonin (B) suggested that the drug may act on these receptors in the brain. Source: (A) Wikimedia Commons [D0ktorz, 2006]; (B) Wikimedia Commons [U3117276]
"They reinforce more or less conscious preexisting beliefs, aspirations, goals, and so on," he says. "Look at Charles Manson. He used LSD in shaping his followers from being halfhearted psychopaths to fully committed ones." Alongside this, Strassman warns the psychedelic community against "overreaching" in their interpretations of these experiences. "Psychologists are not theologians, and psychiatrists are not ministers," Strassman cautions. "When people start making claims about areas in which they are not qualified to do so, there's backlash. So, talking about 'God' as a psychologist will be criticized by those with authority in the field."
Potential Benefits & Risks
Psychedelics can also provide a vast array of potential benefits. For Ken Babbs, psychedelics changed the course of his life. "It opens all kinds of doors," he says. "Before I had LSD, I was kind of a frat rat, know-it-all. But when I took LSD, it wised me up." "When you're high on LSD, you're roaming through the cosmos, you're back in time having fights with dragons with a sword, and you're having huge parties with people. I mean, you're going everywhere... Sometimes, if you're really lucky, you can leave your body and roam around," Babbs says. "Now, they're exploring [LSD] along with psilocybin mushrooms as beneficial for people
in certain ways," Babbs says. "I think it's a good idea because it could be used in therapy. But you know, there's always the problem - with all those drugs - that if people do too much of it, it's not good for them." Psychedelics haven't been widely used in psychiatry since they were criminalized in the 1960s. But in the past decade, a "psychedelic renaissance" has taken place, as researchers reevaluate the value of these drugs in psychiatry. Rick Strassman confirms the potential applications of psychedelic-assisted psychotherapy for disorders including obsessive-compulsive disorder, alcohol or tobacco dependence, depression, and end-of-life issues - he even notes that "the list continues to grow." There seem to be two ways in which psychedelic therapies can be implemented. One method is described by Dr. Rick Strassman as "the production of a very intense psychedelic or peak experience which sets into motion a series of downstream effects, resulting in the desired outcome." The other constitutes a low dosage of the drug in conjunction with other treatments. This method "enhances the beneficial effects of more traditional therapies, while being used in the context of those traditional therapies - talk therapy, for example."
Carhart-Harris similarly reiterates that "psychedelics initiate a cascade of neurobiological changes that manifest at multiple scales and ultimately culminate in the relaxation of high-level beliefs" (Carhart-Harris, 2019). The ability of psychedelics to produce a relaxed and flexible neuroplastic state may allow for groundbreaking intervention in individuals with disordered pathology (Nichols et al., 2017). Integrating psychedelics into practical psychotherapy also has potential risks that must be considered. According to Dr. Strassman, "The risks are primarily psychological and must be addressed by careful screening, supervision of drug sessions, and post-session integration. Some people have flashbacks, some develop psychosis, anxiety, depression." In addition, patients may experience drawbacks when psychedelic therapy does not work as promised or is improperly administered. "There was a suicide at Hopkins in the terminally ill patient study because the patient got a low dose of drug and didn't attain the experience everyone told her would be healing and curative. While this may not have been a drug side effect, it can be interpreted as a side effect of the model. Another reason why we must remain open-minded regarding how psychedelics cure," says Dr. Strassman. Along with treatment, researchers must work to manage patient expectations with regard to psychedelic-assisted psychotherapy. An argument commonly made against the use of psychedelics is that they lack medical or societal use. Existing research, however, demonstrates their versatility and utility in psychiatry. Psychedelic research has even proven useful in other ways: LSD was integral to the discovery of the relationship between brain chemistry and behavior. In 1954, researchers recognized that the tryptamine moiety in the structure of LSD is also found in serotonin. The similarities between the molecules led researchers Woolley and Shaw to hypothesize that LSD works by interacting with serotonin receptors in the brain. This important discovery, enabled by research on psychedelics, was integral to the consideration of chemical makeup in brain functions and disorders (Nichols, 2013).
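That shared tryptamine scaffold can be checked computationally. The sketch below uses RDKit, an open-source cheminformatics library, with textbook SMILES strings for the tryptamine core and serotonin and a commonly published SMILES for LSD; treat the strings as assumptions to verify rather than authoritative structures.

```python
from rdkit import Chem

# SMILES strings (assumed from standard references; verify before reuse)
tryptamine_core = Chem.MolFromSmiles("NCCc1c[nH]c2ccccc12")
serotonin = Chem.MolFromSmiles("NCCc1c[nH]c2ccc(O)cc12")  # 5-hydroxytryptamine
lsd = Chem.MolFromSmiles("CCN(CC)C(=O)C1CN(C)C2Cc3c[nH]c4cccc(c34)C2=C1")

# Substructure search: does each molecule contain the tryptamine scaffold?
for name, mol in [("serotonin", serotonin), ("LSD", lsd)]:
    print(name, "contains tryptamine moiety:", mol.HasSubstructMatch(tryptamine_core))
```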
A Cautious Future
The potential risks associated with using psychedelics must be critically analyzed before they are implemented in a clinical setting. Ken Babbs cautions that the reactions elicited by these drugs can vary from person to person. "People have been drinking [alcohol] since the beginning of time. It can be fun, but also too much of it can be bad. I think the same sort of thing is true of all drugs," Babbs says. He also emphasizes the necessity of preparation. "We had to be as set, mentally, physically, and morally, as the astronauts did when they went up into space - to be able to come back and be able to resume our regular lives - and not be left out there in some weird place," Babbs says. "We had good launching pads. When we left, we had good places to come back to. Now I'm not sure that everybody can do that." Many claim that LSD is a dangerous drug. But it is difficult to encounter unbiased evaluations because these claims are often confounded with stigmas and subliminal biases, partially as a result of the 'War On Drugs.' Many stigmas surrounding psychedelic drugs still exist today. US law currently classifies LSD, peyote, and other psychedelics, along with heroin, as Schedule I substances, due to their "high potential for abuse" and the potential to create "severe psychological and/or physical dependence" ("Drug Scheduling," n.d.). But according to researcher David E. Nichols, psychedelics "are generally considered physiologically safe and do not lead to dependence or addiction." Psychedelic drugs have even shown potential to aid in combating substance addiction disorders (Nichols, 2016). Dr. Strassman agrees. "Physiologically, psychedelics are safe. The only exception might be the parenterally active tryptamines which raise blood pressure and heart rate, and which should be avoided by people with cerebrovascular, cardiovascular disease, or epilepsy - common sense kinds of contraindications," Dr. Strassman says. In a 1967 publication, Abram Hoffer and Humphrey Osmond object quite frankly to the claimed dangers of LSD: "Is LSD a dangerous drug? Of course it is. So is salt, sugar, water, and even air. There is no chemical which is wholly safe nor any human activity which is completely free of risk. The degree of toxicity or danger associated with any activity depends on its use. Just as a scalpel may be used to cure, it may also kill. Yet we hear
no strong condemnatory statements against scalpels” (Hoffer & Osmond, 1967).
Conclusion
There are still several barriers to break and stigmas to overcome in psychedelic research. There will always be dissenting opinions regarding their use, and according to Ken Babbs, this is simply the way of everything in life. "A certain class of people doesn't want other people doing certain things. It's always been like that. It's one of the nice things about society. There are so many different attitudes... so you just got to work your way through all that and not let it hang you up," he says. While some psychedelics have been cleared for research, society is a long way from legalizing psychedelics for wider public use. Dr. Strassman says, "The move to decriminalize psychedelics will also create a backlash. This move to decriminalize is heavily dependent on the promising results coming out of laboratory research on beneficial effects. As more people take psychedelics, more adverse effects will be reported - suicides, crazy behavior, psychosis, flashbacks, and the like. We need to be ready for this." "Downplaying the adverse effects is a recipe for problems; that is, the media and larger public will say, 'Why didn't you tell us about this?' If we can get ahead of the adverse effects before it's even a problem, we will be in a much better position to deal with these types of reports," says Dr. Strassman. History has shown that some of the most influential breakthrough discoveries were achieved by thinking outside the box and breaking from mainstream processes. As Hoffer and Osmond wrote in 1967, "To the extent to which discovery changes our interpretive framework, it is logically impossible to arrive at it by the continued application of our previous interpretative framework. In other words, discovery is creative also in the sense that it is not to be achieved by the diligent application of any previously known and specifiable procedure" (Hoffer & Osmond, 1967). Moving forward with psychedelics, but with a healthy amount of skepticism and caution, seems to be the consensus of the experts. According to Dr. Strassman, "One of the reasons this field was shut down in the late '60s and early '70s was because of the zealotry of its advocates.
They thought they had found the silver bullet and stopped looking at mechanisms. Instead, they just applied it to everything that they could see, in kind of a religious fervor." As for Ken Babbs, he believes that the use of psychedelics is just one of the many ways to stay happy in life. "Oh yeah, well, life is a groove, and you have to find the groove."

References

Abu-Jamal, M. (2016). Ehrlichman: 'War on Drugs' really 'bout Blacks & Hippies. Turning the Tide. Retrieved from https://search-proquest-com.dartmouth.idm.oclc.org/docview/1787805206?accountid=10422

Baum, D. (2016, March 31). Legalize it all. Harper's Magazine. Retrieved June 6, 2020, from https://harpers.org/archive/2016/04/legalize-it-all/

Carhart-Harris, R. L. (2019). How do psychedelics work? Current Opinion in Psychiatry, 32(1), 16–21. https://doi.org/10.1097/YCO.0000000000000467

Carhart-Harris, R. L., & Friston, K. (2010). The default-mode, ego-functions and free-energy: A neurobiological account of Freudian ideas. Brain, 133, 1265–1283. https://doi.org/10.1093/brain/awq010

DiPaolo, M. (2018). LSD and the Hippies: A focused analysis of criminalization and persecution in the Sixties. PIT Journal. Retrieved June 4, 2020, from http://pitjournal.unc.edu/content/lsd-and-hippies-focused-analysis-criminalization-and-persecution-sixties

Drug Scheduling. (n.d.). Retrieved June 6, 2020, from http://www.dea.gov/drug-scheduling

Friedenberg, E. Z. (1971). The Anti-American Generation. Chicago: Aldine. Retrieved from https://www.taylorfrancis.com/books/9781315082240

Hoffer, A., & Osmond, H. (1967). Criticisms of LSD therapy and rebuttal. The Hallucinogens. Retrieved June 4, 2020, from http://www.psychedelic-library.org/lsd1.htm

Nichols, C. D., Gainetdinov, R. R., Nichols, D. E., & Kalueff, A. V. (2017). Psychedelic drugs in biomedicine. Trends in Pharmacological Sciences, 38(11). https://doi.org/10.1016/j.tips.2017.08.003

Nichols, D. E. (2013). Serotonin, and the past and future of LSD. MAPS Bulletin Special Edition. Retrieved from https://maps.org/news-letters/v23n1/v23n1_p20-23.pdf

Nichols, D. E. (2016). Psychedelics. Pharmacological Reviews, 68(2), 264–355. https://doi.org/10.1124/pr.115.011478

Novak, S. J. (1997). LSD before Leary: Sidney Cohen's critique of 1950s psychedelic drug research. Isis, 88, 87–110. https://doi.org/10.1086/383628

Swanson, L. R. (2018). Unifying theories of psychedelic drug effects. Frontiers in Pharmacology, 9, 172. https://doi.org/10.3389/fphar.2018.00172
The Functions and Relevance of Music in the Medical Setting
BY KAMREN KHAN '23 AND YVON BRYAN
Cover Image: The application of music within the clinical setting. Source: Shutterstock
Introduction
This article explores the role of music in clinical contexts with respect to the psychologically demanding nature of the medical field. The discussion begins with a consideration of the levels of anxiety in both patients and medical professionals and the effects of this anxiety in clinical contexts. The focus then turns to several studies that explore the influence of musical intervention on patients' experiences and physiological states, and finally to a collection of studies assessing the influence of music exposure on clinicians' surgical performance.
Music in the Medical Setting
Almost universally, the thought of impending surgery induces a gripping sensation of anxiety characterized by negative emotional valence, feelings of tension, and increased autonomic activation (Kazdin, 2000). At Yirgalem
Zonal Hospital in Ethiopia, researchers found the incidence of preoperative anxiety among patients undergoing elective surgery to be 47% (Bedaso & Ayalew, 2019). Unsurprisingly, preoperative anxiety can contribute to a traumatic and stressful experience for patients. More concretely, the severity of a patient's anxiety is predictive of the amounts of intravenous propofol and sevoflurane gas that a patient will require under general anesthesia, where greater levels of anxiety correspond to greater doses needed to achieve sedation (Kil et al., 2012). Furthermore, preoperative symptoms of anxiety correspond to lower levels of patient satisfaction (Kavalnienė et al., 2018). Preoperative anxiety therefore directly affects not only the patient experience but also the outcome of the medical procedures themselves. However, the effects of anxiety are not limited to patients alone. There is also a high prevalence of anxiety
among healthcare personnel. A cross-sectional survey of Chinese nurses in public city hospitals found the incidence of anxiety symptoms to be 43% (Gao et al., 2012). Similarly, physicians suffer from the emotional demands of their job, including feelings of obligation to the health of the patient, feelings of responsibility or powerlessness in response to the declining health of patients, grief, and concerns about contracting illnesses, all of which may ultimately degrade both the physician's well-being and the quality of the care they provide (Meier, Back & Morrison, 2001). These emotions are often shared by family members of patients and accentuated by the uncertain and foreign nature of surgery for those outside the medical field. Essentially, surgery acts as a source of anxiety that affects a broad range of people, including patients, families of patients, and healthcare personnel themselves, which in turn may affect the quality of care given to the patient and the well-being of all those involved. The prevalence and effect of anxiety related to medical care necessitates intervention in order to improve the experience and general quality of healthcare. While the sources of this anxiety (the uncertainty, invasiveness, and implicit risk of surgery) are themselves fundamental to medical care, the anxiety may be treated symptomatically. Of all available methods of intervention, music meets the demands of universal and undemanding applicability with low potential for adverse outcomes. In fact, a recent study found that the primary reason people listen to music is to manage and regulate mood (Lonsdale & North, 2011). Additionally, music therapy has been shown to decrease levels of anxiety and depression (Jasemi, Aazami & Zabihi, 2016). How, then, might the efficacy of music in regulating mood and managing anxiety translate into a clinical setting?
The Effects of Musical Intervention on Patients
Perioperative (around the time of surgery) music therapy decreases patient anxiety and elicits a wide variety of related benefits. The benefits of this therapy unsurprisingly vary with respect to the features of the chosen music. Studies have shown that exposure to music of the patient's preference can positively affect patients undergoing surgery. In a prospective randomized double-blind study evaluating patients undergoing abdominal surgery under general anesthesia, researchers
exposed an intervention group to music of their preference immediately following induction with anesthesia. As referenced in Figure 4, the intervention group had more stable systolic arterial blood pressure, a calmer recovery, higher satisfaction, and lower pain as reported on the Visual Analog Scale - a subjective yet reliable method of reporting pain wherein patients indicate a location along a continuum from no pain to extreme pain corresponding to their pain levels (Kahloul et al., 2017). Patients most frequently selected Tunisian music, perhaps indicating a bias towards familiarity (as the study occurred in Tunisia) when seeking anxiety-reducing effects. Ultimately, the study demonstrated the efficacy of perioperative exposure to patient-selected music in objectively improving the experience of patients undergoing surgery. However, allowing the patients to select whatever music they want fails to address the broadness of that decision. Some patients may choose music because it calms them, others may choose music because of its association with happy memories, while others may choose music they find to be particularly engaging. This variation in decision-making criteria allows for variance in the role of music and likely corresponds to variation in affective response to music. Alternatively, another study exposed patients to self-selected music followed by a mandatory playlist curated by a music therapist to promote serenity and relaxation. The plurality of music selections (41%) consisted of indigenous or religious music, affirming the preference for familiarity demonstrated in the previous study. Ultimately, the researchers found that musical intervention decreased scores on the Hospital Anxiety and Depression Scale (HADS), a fourteen-item questionnaire that gauges patient symptoms of both anxiety and depression (Tan et al., 2020). One limitation of the study was the notion that the mandatory playlist promoted relaxation and serenity. This presumed an overly simplistic level of universality in music perception, as perception of music varies culturally and generationally (Thompson, 2010). As music therapy operates through the regulation of mood, one might therefore expect the efficacy of music therapy to vary by cultural, and perhaps even personal, background.
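For readers unfamiliar with the HADS, its scoring is mechanical enough to express in a few lines. The sketch below assumes the standard published format - fourteen items scored 0-3, alternating between the anxiety and depression subscales - and the example responses are invented:

```python
from typing import List, Tuple

def score_hads(responses: List[int]) -> Tuple[int, int]:
    """Score a HADS questionnaire (standard format assumed: 14 items,
    each rated 0-3, alternating anxiety/depression; subscales range 0-21)."""
    if len(responses) != 14 or any(not 0 <= r <= 3 for r in responses):
        raise ValueError("HADS expects 14 items, each scored 0-3")
    anxiety = sum(responses[0::2])     # items 1, 3, 5, ... -> HADS-A
    depression = sum(responses[1::2])  # items 2, 4, 6, ... -> HADS-D
    return anxiety, depression

# Invented example responses, not real patient data
a, d = score_hads([2, 1, 3, 0, 2, 1, 1, 2, 2, 0, 1, 1, 2, 0])
print(f"HADS-A = {a}, HADS-D = {d}")  # 0-7 normal, 8-10 borderline, 11+ abnormal
```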
Figure 2. An example of the Visual Analog Scale (VAS) used to measure pain. Source: Created by Author
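Scoring a VAS is similarly simple: by common convention, the distance of the patient's mark along a 100 mm line is read off as a 0-100 pain score. A minimal sketch (the 100 mm line length is the usual convention, assumed here):

```python
def vas_score(mark_mm: float, line_mm: float = 100.0) -> float:
    """Convert a patient's mark on a Visual Analog Scale line to a 0-100 score.

    mark_mm: distance of the mark from the "no pain" end, in millimeters.
    """
    if not 0.0 <= mark_mm <= line_mm:
        raise ValueError("the mark must lie on the line")
    return 100.0 * mark_mm / line_mm

print(vas_score(37.0))  # a mark 37 mm along a 100 mm line -> score 37.0
```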
In some cases, musical intervention was supplemented by the use of a music therapist. In a three-group randomized controlled trial, patients were divided into two experimental groups, live music and recorded music, and a control group. Patients in the two experimental groups took part in five-minute music therapy sessions in which they listened to and discussed a preferred song with the music therapist. The patients in these two groups then listened to music from a playlist curated by the music therapist characterized by smooth melodic lines, stable rhythms, and consistent dynamics. Though the two experimental groups did not differ significantly from the control group in terms of the propofol needed to achieve moderate sedation or in satisfaction scores, the experimental groups had larger reductions in preoperative anxiety. Additionally, recovery times of patients in the live music group were shorter than those of the patients in the recorded music group, suggesting live music was a more effective form of intervention (Palmer et al., 2015). The relative inefficacy of musical intervention in this study perhaps implicates duration of musical intervention as a factor, as the other, more successful studies were characterized by longer exposure to music.
Palmer's study was perhaps limited by the narrowing of instrumentation that resulted from the use of live music: the live performances included only piano and guitar, which may alter the effect of the music. For example, a song with guitar and drums might elicit significantly different neurophysiological activation than the same song without drums, given the potential for emotional communication through drumming (Rojiani, Zhang, Noah, & Hirsch, 2018). Lastly, the choice to subject the live music and recorded music treatment groups to the same intraoperative music seems to contradict the previous reliance on patient preference. This raises the question of how much intraoperative music influences patients' psychological states and neurophysiological activation under general anesthesia (referring to the medically induced loss of consciousness).
In contrast to the aforementioned brief preoperative intervention, other studies have assessed more prolonged intervention spanning the entire perioperative period. In a randomized controlled trial, patients undergoing mastectomies under general anesthesia in the experimental group listened to music of their preference (from four genres: classical, easy listening, inspirational, and new age) through earphones throughout the preoperative, intraoperative, and postoperative periods. Patients in the experimental group had greater reductions in mean arterial pressure, greater reductions in anxiety, and less pain spanning from the preoperative period to the time of discharge from the recovery room (Binns-Turner et al., 2008).

General Trends in Musical Intervention and Patient Experience
The large degree of variation in experimental design and operational contexts prevents the simplification of perioperative music therapy into a universal procedure. However, the varying success of different studies reflects the existence of general trends that predict positive or negative outcomes. Firstly, it appears that music of patient preference outperforms music deemed to be ubiquitously "calming." Next, the benefit of music therapy may vary with the duration of intervention, with longer duration corresponding to greater reductions in anxiety and pain. However, this apparent trend may arise from the resultant variation in exposure to anxiety-inducing aspects of medical care; subjects who spend less time exposed to music consequently spend more time fully immersed in their stressful environment, which presumably acts as the source of their anxiety. Lastly, it appears music therapy can benefit patients throughout the entire perioperative period, and exposure should therefore not be limited simply to the preoperative period.
Figure 3. A depiction of surgeons during an operation. Source: Shutterstock
The Effects of Musical Exposure on Surgical Performance
The relevance of music in clinical contexts is not limited to patients. In fact, a survey conducted by physicians at the University Hospital of Wales found that music is played 62-72% of the time in the operating theater. Specifically, instrumental and classical music appear to be among the most popular genres in the OR (Ullmann et al., 2008; George, Ahmed, Mammen & John, 2011). Naturally, one might wonder why music is so prevalent in operating rooms. In a questionnaire-based cross-sectional prospective study, researchers found that 63% of respondents agreed that music improves their concentration and 59% agreed that it helps reduce autonomic reactivity in stressful surgeries (George, Ahmed, Mammen & John, 2011). Ultimately, it appears that a majority of healthcare providers believe in some form of benefit from music in the operating room. These beliefs about the functions of music in the operating theater are to some degree reinforced by experimental findings. For example, researchers conducted a crossover study involving 12 plastic surgery residents to determine the effect of music on the quality and duration of a surgical task. In the study, the residents conducted layered closures of standardized incisions on pigs' feet, both while listening to music of their choice and then
in the absence of music. Listening to music led to a 10% faster completion of the closure and increased repair quality according to the judgment of blinded faculty (Lies & Zhang, 2015). Though rather narrow in scope, the study affirms the notion of music as beneficial to surgical performance - though it is important to note that the study refers specifically to the surgical performance of somewhat inexperienced surgeons within the field of plastic surgery - as expressed in the aforementioned surveys. Listening to music has even been shown to lead to enhanced learning of surgical procedures. Researchers conducted a crossover study in which a total of 31 surgeons performed tasks related to the manual skills required to perform surgery, but not requiring surgical knowledge. These tasks were performed under four conditions: silence, dichotic music, mental loading, and relaxing music, with performance measured in terms of speed and accuracy. The tasks, executed using a computerized laparoscopy simulator (which may not translate to the OR), were repeated after a ten-minute break. During the break, subjects were engaged in a manual number-alignment quiz in order to divert their focus from the previously accomplished task so as to best assess memory consolidation and recall of motor performance. The results suggest that classical music (in this case, slow movements from Mozart's piano sonatas) leads to significantly
improved memory consolidation (Conrad et al., 2012). Additionally, some studies suggest that certain types of music are better suited to altering surgical performance. In a crossover study investigating the effect of music on robot-assisted laparoscopic surgical performance, subjects completed both a suture-tying and a mesh-aligning task using the da Vinci robotic surgical system. Subjects completed the tasks in the absence of music and under four musical conditions: jazz, classical, Jamaican, and hip-hop music. The subjects then repeated the tasks in the absence of music to ensure that performance varied by musical intervention rather than serial position. Researchers found that accuracy (as measured by total travel distance of the instrument tips) and time of task completion both improved in the presence of music. Music also led to reduced muscle activations and increased median muscle frequency (as given by electromyography), implying decreased muscle fatigue (Siu et al., 2010). Notably, in this study, the performance-enhancing effects of music were greatest in the presence of Jamaican or hip-hop music. This trend can perhaps be explained by the fact that 7 out of 10 participants rated hip-hop as one of their two favorite types of music and that the subjects were all young medical students (Siu et al., 2010). Therefore, the finding that performance was most significantly improved by hip-hop and Jamaican music likely indicates the importance of familiarity and surgeon preference rather than some element of musical form. Additionally, this instance potentially demonstrates the generational character of music within clinical contexts; given the generational homogeneity of the subjects in this study, hip-hop and Jamaican music emerged as the most beneficial. However, one might expect a similar study with a broader age range to yield different results, as reflected by the fact that classical and instrumental music, not hip-hop or Jamaican music, appear to be the most common genres in the OR. Ultimately, music exposure can increase surgical performance in terms of speed, accuracy, and even memory consolidation and motor recall.
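The electromyographic fatigue measure mentioned above - median muscle frequency - is the frequency that splits the EMG power spectrum into two halves of equal power; fatigue typically shifts it downward. A minimal sketch of the computation on a synthetic signal (the signal is invented, not data from the study):

```python
import numpy as np

def median_frequency(emg: np.ndarray, fs: float) -> float:
    """Return the frequency (Hz) below which half of the signal's spectral
    power lies; a falling value over time is a common muscle-fatigue proxy."""
    power = np.abs(np.fft.rfft(emg)) ** 2
    freqs = np.fft.rfftfreq(len(emg), d=1.0 / fs)
    cumulative = np.cumsum(power)
    idx = np.searchsorted(cumulative, 0.5 * cumulative[-1])
    return freqs[idx]

# Synthetic demo signal: 60 Hz and 120 Hz components plus a little noise
fs = 1000.0                              # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
emg = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
emg += 0.1 * np.random.randn(t.size)
print(f"median frequency ~ {median_frequency(emg, fs):.1f} Hz")  # ~60 Hz
```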
As we have seen, there exists a definite role for music in the OR for everyone involved. For patients, preoperative, intraoperative, and postoperative music exposure can significantly decrease self-reported anxiety and depression and increase patient satisfaction. More concretely, music exposure can lower patient blood pressure and heart rate while reducing the quantity of anesthetics necessary for the induction of anesthesia. Additionally, music exposure improves clinician speed and accuracy when performing surgical tasks, as well as motor learning. While these findings call for greater application of music in the medical setting, several questions are left unanswered. One might wonder how the effect of music varies with culture, generation, musicality, and many other factors. Additionally, one may ask how the altered neurological state arising from anesthetic induction might influence a patient's neural response to music. Ultimately, while the subtler details remain obscure, the general applicability and relevance of music within the medical setting is clear.
References

Bedaso, A., & Ayalew, M. (2019). Preoperative anxiety among adult patients undergoing elective surgery: A prospective survey at a general hospital in Ethiopia. Patient Safety in Surgery, 13, 18. https://doi.org/10.1186/s13037-019-0198-0

Binns-Turner, P. G., Wilson, L. L., Pryor, E. R., Boyd, G. L., & Prickett, C. A. (2008). Perioperative music and its effects on anxiety, hemodynamics, and pain in women undergoing mastectomy. AANA Journal, 79(4 Suppl), S21–S27.

Conrad, C., Konuk, Y., Werner, P. D., Cao, C. G., Warshaw, A. L., Rattner, D. W., Stangenberg, L., Ott, H. C., Jones, D. B., Miller, D. L., & Gee, D. W. (2012). A quality improvement study on avoidable stressors and countermeasures affecting surgical motor performance and learning. Annals of Surgery, 255(6), 1190–1194. https://doi.org/10.1097/SLA.0b013e318250b332

Cunningham, L. L., & Tucci, D. L. (2017). Hearing loss in adults. The New England Journal of Medicine, 377(25), 2465–2473. https://doi.org/10.1056/NEJMra1616601

Gao, Y., Pan, B., Sun, W., et al. (2012). Anxiety symptoms among Chinese nurses and the associated factors: A cross sectional study. BMC Psychiatry, 12, 141. https://doi.org/10.1186/1471-244X-12-141

George, S., Ahmed, S., Mammen, K. J., & John, G. M. (2011). Influence of music on operation theatre staff. Journal of Anaesthesiology, Clinical Pharmacology, 27(3), 354–357. https://doi.org/10.4103/0970-9185.83681

Groarke, J. M., & Hogan, M. J. (2019). Listening to self-chosen music regulates induced negative affect for both younger and older adults. PLoS ONE, 14(6), e0218017. https://doi.org/10.1371/journal.pone.0218017

Jasemi, M., Aazami, S., & Zabihi, R. E. (2016). The effects of music therapy on anxiety and depression of cancer patients. Indian Journal of Palliative Care, 22(4), 455–458. https://doi.org/10.4103/0973-1075.191823

Kahloul, M., Mhamdi, S., Nakhli, M. S., Sfeyhi, A. N., Azzaza, M., Chaouch, A., & Naija, W. (2017). Effects of music therapy under general anesthesia in patients undergoing abdominal surgery. The Libyan Journal of Medicine, 12(1), 1260886. https://doi.org/10.1080/19932820.2017.1260886

Kavalnienė, R., Deksnyte, A., Kasiulevičius, V., Šapoka, V., Aranauskas, R., & Aranauskas, L. (2018). Patient satisfaction with primary healthcare services: Are there any links with patients' symptoms of anxiety and depression? BMC Family Practice, 19(1), 90. https://doi.org/10.1186/s12875-018-0780-z

Kazdin, A. E. (2000). Encyclopedia of Psychology. Washington, D.C.: American Psychological Association.

Kil, H. K., Kim, W. O., Chung, W. Y., Kim, G. H., Seo, H., & Hong, J. Y. (2012). Preoperative anxiety and pain sensitivity are independent predictors of propofol and sevoflurane requirements in general anaesthesia. BJA: British Journal of Anaesthesia, 108(1), 119–125. https://doi.org/10.1093/bja/aer305

Lies, S. R., & Zhang, A. Y. (2015). Prospective randomized study of the effect of music on the efficiency of surgical closures. Aesthetic Surgery Journal, 35(7), 858–863. https://doi.org/10.1093/asj/sju161

Lonsdale, A. J., & North, A. C. (2011). Why do we listen to music? A uses and gratifications analysis. British Journal of Psychology, 102, 108–134. https://doi.org/10.1348/000712610X506831

Meier, D. E., Back, A. L., & Morrison, R. S. (2001). The inner life of physicians and care of the seriously ill. JAMA, 286(23), 3007–3014. https://doi.org/10.1001/jama.286.23.3007

Palmer, J. B., Lane, D., Mayo, D., Schluchter, M., & Leeming, R. (2015). Effects of music therapy on anesthesia requirements and anxiety in women undergoing ambulatory breast surgery for cancer diagnosis and treatment: A randomized controlled trial. Journal of Clinical Oncology, 33(28), 3162–3168. https://doi.org/10.1200/JCO.2014.59.6049

Rojiani, R., Zhang, X., Noah, A., & Hirsch, J. (2018). Communication of emotion via drumming: Dual-brain imaging with functional near-infrared spectroscopy. Social Cognitive and Affective Neuroscience, 13(10), 1047–1057. https://doi.org/10.1093/scan/nsy076

Siu, K. C., Suh, I. H., Mukherjee, M., Oleynikov, D., & Stergiou, N. (2010). The effect of music on robot-assisted laparoscopic surgical performance. Surgical Innovation, 17(4), 306–311. https://doi.org/10.1177/1553350610381087

Tan, D. J. A., Polascik, B. A., Kee, H. M., Lee, A. C. H., Sultana, R., Kwan, M., Raghunathan, K., Belden, C. M., & Sng, B. L. (2020). The effect of perioperative music listening on patient satisfaction, anxiety, and depression: A quasi-experimental study. Anesthesiology Research and Practice, 2020, 3761398. https://doi.org/10.1155/2020/3761398

Thompson, W. (2010). Cross-cultural similarities and differences. In Music and Emotion. https://doi.org/10.1093/acprof:oso/9780199230143.001.0001

Trehub, S. E., Becker, J., & Morley, I. (2015). Cross-cultural perspectives on music and musicality. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 370(1664), 20140096. https://doi.org/10.1098/rstb.2014.0096

Ullmann, Y., Fodor, L., Schwarzberg, I., Carmi, N., Ullmann, A., & Ramon, Y. (2008). The sounds of music in the operating room. Injury, 39(5), 592–597. https://doi.org/10.1016/j.injury.2006.06.021
Meta-analysis Regarding the Use of External-Beam Radiation Therapy as a Treatment for Thyroid Cancer
BY MANYA KODALI, HAMPTON HIGH SCHOOL, AND DR. VIVEK VERMA, MD
Cover Image: Pictured above is an overview of the hormones released by the thyroid and their relation to multiple bodily functions. The graph depicts the relationship between basal metabolic rate and hormone production rate. Source: Wikimedia Commons
Background - The Thyroid Gland
The thyroid is vital to the function and metabolic health of the human body. This organ is part of the endocrine system and lies below the voice box, towards the front of the neck. It has a bilobular structure with an appearance like that of a butterfly. Weighing between 20 and 60 grams on average, it is surrounded by two fibrous capsules. Thyroid tissue itself consists of individual lobules enclosed in layers of connective tissue; these lobules contain vesicles which store droplets of thyroid hormones (Information, 2018). As with other glands in the endocrine system, the thyroid gland is controlled through feedback systems involving the pituitary gland and hypothalamus. Although both positive and negative feedback systems are exhibited, the negative feedback system dominates and is responsible for maintaining constant levels of circulating hormones.
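The dominant negative feedback loop can be illustrated with a toy simulation: the pituitary raises TSH when circulating T4 falls below a set point, and rising T4 in turn suppresses TSH, pulling hormone levels back toward that set point. The rate constants below are invented for illustration and are not physiological values.

```python
# Toy model of pituitary-thyroid negative feedback (illustrative only;
# the rate constants are invented, not physiological measurements).
def simulate(steps=200, dt=0.1, setpoint=1.0):
    tsh, t4 = 1.0, 0.2
    for _ in range(steps):
        tsh += dt * 2.0 * (setpoint - t4)   # pituitary raises TSH when T4 is low
        tsh = max(tsh, 0.0)                 # hormone levels cannot go negative
        t4 += dt * (0.8 * tsh - 0.5 * t4)   # thyroid secretes T4; T4 also decays
    return tsh, t4

tsh, t4 = simulate()
print(f"steady state: TSH ~ {tsh:.2f}, T4 ~ {t4:.2f}")  # T4 settles at the set point
```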
Two hormones are produced by the thyroid gland (in addition to calcitonin, a calcium-regulating hormone secreted by the parafollicular C-cells): tetraiodothyronine (T4, or thyroxine) and triiodothyronine (T3). T3 and T4 are not produced in equal amounts; the thyroid produces all of the T4 in the body, but only around 20% of the body's total T3 is produced in the thyroid. The other 80% of T3 is produced through extrathyroidal deiodination of T4, typically in the kidney or liver. These hormones circulate through the body bound to one of three plasma proteins - thyroxine-binding prealbumin (TBPA), thyroxine-binding globulin (TBG), or albumin. TBG binds most of the circulating T3 and T4, while albumin has the lowest binding affinity (Fritsma, 2013). Hormones produced by the thyroid gland have a wide variety of functions affecting metabolism, growth, and maturation. The hormones are calorigenic, meaning they
result in the generation of body heat and consumption of oxygen. They increase lipid metabolism, glucose utilization, heart rate, protein catabolism, myocardial contractility, and cardiac output. They also stimulate the production of cytokines, proteins that play a key role in cell signaling. Thyroid hormones additionally promote gluconeogenesis, cell differentiation, and increased motility of the gastrointestinal system (Fritsma, 2013).
Background - Thyroid Cancer
Thyroid nodules, caused by the growth of cells in the thyroid gland, are relatively common in the general population. When nodules are discovered, either through an exam or as an incidental finding, patients are tested for thyroid-stimulating hormone (TSH) levels in the blood; measuring TSH levels allows doctors to distinguish between functional and nonfunctional nodules. Nodules with low TSH levels undergo radioiodine imaging to determine whether they are autonomously functioning or hypofunctioning. Autonomously functioning nodules are typically benign and thus don't usually require further treatment; however, further diagnostic tests are used if deemed necessary. Hypofunctioning nodules and ones with elevated TSH undergo either ultrasound-guided fine needle aspiration, based on features described by the American Thyroid Association, or monitoring by the patient's doctors; hypofunctioning nodules are often malignant and require surgery (Haddad et al., 2020). Diagnostic neck ultrasounds are performed on suspicious nodules to check for unusual morphologic features; a nodule that is large, spiculated, or ragged is more likely to be cancerous than one that is small and rounded (Lee et al., 2011). Repeat biopsies are performed if the initial test is indeterminate; nodules are also monitored through ultrasound or surgery if results are unclear or suspicious in any way (Mayo). Thyroid cancer is divided into two main categories: 1) well-differentiated, which includes papillary and follicular cancers, and 2) poorly differentiated, which includes anaplastic and medullary cancers. If found to be malignant, thyroid cancer is further subdivided by differentiation and by the cell type of origin. Papillary carcinomas (PTC) account for around 80% of thyroid cancer cases (Nguyen et al., 2015); they consist of differentiated, slow-growing cells and can develop in one or both lobes of the thyroid gland. These can spread to the lymph nodes, but generally are
treatable and have a good prognosis, with a 5-year survival rate close to 100% (American Cancer Society). Follicular thyroid cancer makes up approximately 14% of cases (Nguyen et al., 2015). This cancer is more aggressive than papillary cancer and is more likely to spread to other organs, specifically to the bones and lungs. It is often associated with iodine deficiency. Hürthle-cell carcinomas are a subtype of follicular thyroid cancer and are treated similarly to other follicular carcinomas. The 5-year survival rate for distant follicular cancer is 63%, but 98% for all stages combined (American Cancer Society). Medullary thyroid cancer begins in C-cells and accounts for roughly 3% of thyroid cancers (Nguyen et al., 2015). It is typically associated with multiple endocrine neoplasia, a genetic syndrome that predisposes people to developing various types of endocrine cancers at multiple places in the body (Multiple Endocrine Neoplasia); this form of cancer produces an excess of calcitonin. Distant medullary thyroid cancers have a 5-year survival rate of 40%, significantly lower than the 90% survival rate of localized medullary cancer (American Cancer Society). Anaplastic thyroid cancer (ATC) is the most aggressive form of thyroid cancer. It is found in less than 2% of patients and typically occurs in people over 60 years of age (Nguyen et al., 2015). This is the most undifferentiated form of thyroid cancer and spreads rapidly to other parts of the neck and body. The prognosis for ATC is grim, with a 5-year survival rate of 7% and virtually no effective therapy (American Cancer Society).
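The workup just described is, at its core, a branching decision procedure. As a purely illustrative sketch (not clinical guidance), the triage logic might be restated in Python as follows; the function name, the Boolean inputs, and the collapsed branches are simplifications invented here rather than anything specified by the cited guidelines.

# Toy restatement of the nodule triage logic described above.
# Illustrative only: inputs and branches are simplified assumptions,
# not clinical criteria.

def triage_nodule(tsh_low, autonomously_functioning, suspicious_on_ultrasound):
    """Return a coarse next step for a newly discovered thyroid nodule."""
    if tsh_low:
        # Low TSH: radioiodine imaging distinguishes the two cases.
        if autonomously_functioning:
            return "likely benign; further tests only if deemed necessary"
        return "hypofunctioning; fine needle aspiration (often malignant)"
    # Normal or elevated TSH: biopsy or monitor, guided by ultrasound features.
    if suspicious_on_ultrasound:
        return "ultrasound-guided fine needle aspiration"
    return "monitor with follow-up ultrasound"

print(triage_nodule(tsh_low=True, autonomously_functioning=False,
                    suspicious_on_ultrasound=True))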
Treatments - Overview

Thyroid cancer is typically treated with a combination of treatments depending on the stage and cell type of the cancer, patient preference, the general health of the patient, and possible side effects. Non-radiation treatments include surgery, hormone treatment, targeted therapies, and chemotherapy; surgery is the most common therapy. Radiotherapies include external-beam radiation and radioactive iodine (RAI). Radiotherapy is typically used in patients with residual cancer activity due to incomplete surgery. Radiation therapy can be used for stage III papillary and follicular thyroid cancer, stage IV papillary and follicular thyroid cancer, localized medullary thyroid cancer, and anaplastic thyroid cancer (alone or in conjunction with chemotherapy).
Figure 1: A histopathological image of a papillary thyroid carcinoma, obtained through total thyroidectomy, under a hematoxylin and eosin stain. Source: Wikimedia Commons
Introduction: External-Beam Radiation Therapy
Reports on the use of external-beam radiation therapy (EBRT) are largely retrospective, with nonuniform criteria for the selection of patients, thereby leading to contradictory conclusions. Many studies have shown no effect or detrimental effects, while others have shown positive effects. EBRT is typically only considered for patients with significant risk of relapse and/or when surgery and RAI are less effective (Kiess et al., 2016). Because radiation therapy affects both normal and cancerous cells, it often involves a multitude of side effects. Common side effects of EBRT for thyroid cancer include dry mouth, cough, appetite loss, nausea, fatigue, trouble swallowing, and dry skin. Other symptoms include skin erythema, mucositis, hyperpigmentation of the skin, and esophageal and tracheal stenosis (Wexler, 2011).
External-Beam Radiation for Residual Cancer

Tubiana et al. reported on 97 patients who had pathologic tissue macroscopically remaining after surgery; the study found a 57% 15-year survival rate and a 40% 15-year relapse-free
survival rate (Tubiana et al., 1985). A more recent study looked at patients with differentiated thyroid carcinoma from the Royal Marsden Hospital. The patients received EBRT [a dose of 60 Grays (Gy) in 30 fractions] over a period of six weeks. Complete regression was seen in 37% of patients and partial regression in another 35%; the 5-year survival rate was 27% (O'Connell et al., 1994). Additionally, a review of 33 patients with residual disease at the Princess Margaret Hospital was performed. Of the total number, 20 patients were treated solely with EBRT, and the other 13 were given RAI along with EBRT. The 5-year local relapse-free rate was found to be 62%, and the cause-specific survival rate was found to be 65% (Tsang et al., 1998). Brierley and Tsang suggest the administration of RAI followed by EBRT for patients with residual disease following thyroidectomies. For young patients with limited residual disease, EBRT may be unnecessary if the patient shows appropriate iodine uptake (Brierley and Tsang, 1999).
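For readers unfamiliar with fractionation shorthand, the Royal Marsden regimen above can be unpacked with simple arithmetic; the snippet below does only that unpacking and is not a dosimetry tool.

# Unpacking "a dose of 60 Grays (Gy) in 30 fractions" over six weeks.
total_dose_gy = 60
num_fractions = 30
weeks = 6

print(total_dose_gy / num_fractions, "Gy per fraction")  # 2.0 Gy per session
print(num_fractions / weeks, "fractions per week")       # 5.0, i.e., one per weekday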
External-Beam Radiation as Adjuvant Therapy

The American Head and Neck Society suggests that EBRT should not be used routinely as
Figure 2: A thyroidectomy, the most common treatment for patients with thyroid cancers. Source: Wikimedia Commons
an adjuvant therapy after patients have undergone complete resection, but it should be considered for patients older than 45 years who have a low likelihood of responding to RAI treatment and a high likelihood of microscopic residual disease (Kiess et al., 2016). Several studies support this suggestion for the use of EBRT in select patients. Patients with resected stage 4 papillary thyroid cancers were shown to have a 10-year local failure-free survival of 88% after EBRT along with RAI treatment, far better than the same rate for RAI therapy alone (72%) (Chow et al., 2006). Another study performed on patients with papillary and follicular thyroid cancer similarly found that adjuvant EBRT improved recurrence-free survival (Farahati et al., 1996). A study reporting on 114 patients post-surgery with no macroscopic disease left behind found significantly improved local relapse-free survival (Ésik et al., 1994). In Essen, Germany, EBRT was shown to significantly increase the 10-year survival rate for patients older than 40 years who had stage 3 or 4 tumors; those given EBRT had a survival rate of 58% while those
without had a rate of 48% (Benker et al., 1990). EBRT as an adjuvant therapy has also been found to vastly improve control rates. Post-surgery, 23 patients received EBRT (with and without RAI therapy), and another 68 were treated with RAI therapy alone. Survival rates at 7 years were not statistically different between the two groups, but the 5-year locoregional control rate was 95.2% for the EBRT group and 67.5% without EBRT (Kim et al., 2003). A Korean study showed that EBRT significantly decreased locoregional recurrence, from 51% to 8%, in 68 patients who underwent excision of thyroid tumors off the trachea (Keum et al., 2006). Tubiana et al. reported on 66 patients who received adjuvant radiation for regional lymph node involvement; they found that the rate of local recurrence was 14% when EBRT was involved, compared with 21% recurrence for patients who did not receive EBRT (Tubiana et al., 1985). All the studies discussed above suggest benefits for patients given adjuvant EBRT. However, these studies are retrospective and
do not have clear criteria for patient selection or standardization of therapy, and thus leave room for future improvement.
External-Beam Radiation for Recurrent Cancer

Patients with relapse in the neck are typically given RAI therapy and TSH suppression; in patients with nodal recurrence, neck dissections are often performed (Brierley and Tsang, 1999). If the recurrence occurs in the thyroid bed, or if there is extracapsular lymph node involvement and the soft tissues of the neck are infiltrated, EBRT is also given (Brierley and Tsang, 1999).

In one study, five patients with locally recurring papillary thyroid cancer all had no second relapses following EBRT (Sheline et al., 1966). A study on patients with well-differentiated papillary thyroid carcinoma identified patients who relapsed in the thyroid bed and treated fourteen with EBRT; seven of these patients did not relapse regionally a second time (Vassilopoulou-Sellin et al., 1996). EBRT of greater than 50 Gy has been shown to be useful for long-term control of sites with recurrent lesions of differentiated thyroid cancer (Makita et al.). Finally, in patients with extensive extrathyroidal extension, EBRT should be considered to aid in patient outcome (Brierley and Tsang, 1999). Other studies suggest reserving EBRT and instead using salvage surgery or RAI therapy for recurrence, to avoid the morbidity and side effects associated with EBRT (Shaha, 2004).

External-Beam Radiation for Bone Metastases

Metastatic thyroid cancer is typically treated with RAI therapy; however, its effectiveness has been shown to vary greatly with the site of the metastasis (Casara et al., 2018; Brown et al., 1984). In these situations, surgical resection is recommended. For unresectable bone metastases, EBRT is warranted (Brierley and Tsang, 1999). Few studies have been done on the efficacy of EBRT as management for bone metastases. However, studies on the general principles of EBRT have shown that approximately 70% of patients receive some pain relief through palliative EBRT. Patients reported improved symptoms in 2-3 days, but in some cases relief was delayed for one month following radiotherapy (Frassica, 2003). Other studies found complete or partial pain relief in 50% of patients who received EBRT (Simpson et al., 1988; Simpson, 1990). More research is required for definitive evidence regarding the use of EBRT to treat metastases due to thyroid cancer.

Conclusions

EBRT, when used as a treatment for residual disease, achieves increased rates of relapse-free survival and both partial and complete regressions. Local relapse-free rates, regional control, and overall survival rates in patients with poor uptake of RAI all improve with the use of EBRT, either alone or in conjunction with RAI. Some research has shown EBRT should only be used as a last resort for recurrent cancers, but other studies have shown that in certain patient groups, the therapy helped to avoid second relapses. Finally, EBRT can be used to help relieve pain in patients with skeletal metastases. After reviewing the existing literature, it becomes clear that EBRT often provides added benefit to the management of thyroid cancer, especially when used in addition to thyroidectomy and radioiodine therapy. Thus, doctors continue to utilize EBRT as a treatment for thyroid cancer due to its efficacy.

In short, when thyroidectomy, the most common treatment for thyroid cancer, is not sufficient on its own, doctors often recommend radiotherapy. While not the most common method, external radiation has been shown to benefit patient outcome in a variety of scenarios.

References
American Cancer Society. (2020, September). Survival rates for thyroid cancer. AmericanCancer.org. https://www.cancer.org/cancer/thyroid-cancer/detection-diagnosis-staging/survival-rates.html

Benker, G., Olbricht, T., Reinwein, D., Reiners, C. R., Sauerwein, W., Krause, U., ... Hirche, H. (1990). Survival rates in patients with differentiated thyroid carcinoma: Influence of postoperative external radiotherapy. Cancer, 65(7), 1517–1520. doi:10.1002/1097-0142(19900401)65:73.0.co;2-k

Brierley, J. D., & Tsang, R. W. (n.d.). External-beam radiation therapy in the treatment of differentiated thyroid cancer. 8.

Brown, A. P., Greening, W. P., McCready, V. R., Shaw, H. J., & Harmer, C. L. (1984). Radioiodine treatment of metastatic thyroid carcinoma: The Royal Marsden Hospital experience. The British Journal of Radiology, 57(676), 323–327. https://doi.org/10.1259/0007-1285-57-676-323

Casara, D. D., Rubello, D., Saladini, G., Gallo, V., Masarotto, G., & Busnardo, B. (2018). Distant metastases in differentiated thyroid cancer: Long-term results of radioiodine treatment and statistical analysis of prognostic factors in 214 patients. Tumori Journal. https://doi.org/10.1177/030089169107700512

Chow, S.-M., Yau, S., Kwan, C.-K., Poon, P. C. M., & Law, S. C. K. (2006). Local and regional control in patients with papillary thyroid carcinoma: Specific indications of external radiotherapy and radioactive iodine according to T and N categories in AJCC 6th edition. Endocrine-Related Cancer, 13(4), 1159–1172. https://doi.org/10.1677/erc.1.01320

Ésik, O., Németh, G., & Eller, J. (1994). Prophylactic external irradiation in differentiated thyroid cancer: A retrospective study over a 30-year observation period. Oncology, 51(4), 372–379. https://doi.org/10.1159/000227368

Farahati, J., Reiners, C., Stuschke, M., Müller, S. P., Stüben, G., Sauerwein, W., & Sack, H. (1996). Differentiated thyroid cancer: Impact of adjuvant external radiotherapy in patients with perithyroidal tumor infiltration (stage pT4). Cancer, 77(1), 172–180. https://doi.org/10.1002/(SICI)1097-0142(19960101)77:1<172::AID-CNCR28>3.0.CO;2-1

Frassica, D. A. (2003). General principles of external beam radiation therapy for skeletal metastases. Clinical Orthopaedics and Related Research, 415, S158. https://doi.org/10.1097/01.blo.0000093057.96273.fb

Fritsma, G. A. (n.d.). Dialog and discussion. 68.

Haddad, R. I. (2020). Thyroid carcinoma (Vol. 2). National Comprehensive Cancer Network.

Keum, K. C., Suh, Y. G., Koom, W. S., Cho, J. H., Shim, S. J., Lee, C. G., Park, C. S., Chung, W. Y., & Kim, G. E. (2006). The role of postoperative external-beam radiotherapy in the management of patients with papillary thyroid cancer invading the trachea. International Journal of Radiation Oncology*Biology*Physics, 65(2), 474–480. https://doi.org/10.1016/j.ijrobp.2005.12.010

Kiess, A. P., Agrawal, N., Brierley, J. D., Duvvuri, U., Ferris, R. L., Genden, E., Wong, R. J., Tuttle, R. M., Lee, N. Y., & Randolph, G. W. (2016). External-beam radiotherapy for differentiated thyroid cancer locoregional control: A statement of the American Head and Neck Society. Head & Neck, 38(4), 493–498. https://doi.org/10.1002/hed.24357

Kim, T.-H., Yang, D.-S., Jung, K.-Y., Kim, C.-Y., & Choi, M.-S. (2003). Value of external irradiation for locally advanced papillary thyroid cancer. International Journal of Radiation Oncology*Biology*Physics, 55(4), 1006–1012. https://doi.org/10.1016/S0360-3016(02)04203-7

Lee, Y. H., Kim, D. W., In, H. S., Park, J. S., Kim, S. H., Eom, J. W., Kim, B., Lee, E. J., & Rho, M. H. (2011). Differentiation between benign and malignant solid thyroid nodules using an US classification system. Korean Journal of Radiology, 12(5), 559–567. https://doi.org/10.3348/kjr.2011.12.5.559

Multiple endocrine neoplasia. (n.d.). Genetics Home Reference. Retrieved July 28, 2020, from https://ghr.nlm.nih.gov/condition/multiple-endocrine-neoplasia

National Center for Biotechnology Information. (2018). How does the thyroid gland work? In InformedHealth.org [Internet]. Institute for Quality and Efficiency in Health Care (IQWiG). https://www.ncbi.nlm.nih.gov/books/NBK279388/

Nguyen, Q. T., Lee, E. J., Huang, M. G., Park, Y. I., Khullar, A., & Plodkowski, R. A. (2015). Diagnosis and treatment of patients with thyroid cancer. American Health & Drug Benefits, 8(1), 30–40.

O'Connell, M. E. A., A'Hern, R. P., & Harmer, C. L. (1994). Results of external beam radiotherapy in differentiated thyroid carcinoma: A retrospective study from the Royal Marsden Hospital. European Journal of Cancer, 30(6), 733–739. https://doi.org/10.1016/0959-8049(94)90284-4

Shaha, A. R. (2004). Implications of prognostic factors and risk groups in the management of differentiated thyroid cancer. Laryngoscope, 114, 393–402.

Sheline, G. E., Galante, M., & Lindsay, S. (1966). Radiation therapy in the control of persistent thyroid cancer. American Journal of Roentgenology, 97(4), 923–930. https://doi.org/10.2214/ajr.97.4.923

Simpson, W. J. (1990). Radioiodine and radiotherapy in the management of thyroid cancers. Otolaryngologic Clinics of North America, 23(3), 509–521.

Simpson, W. J., Panzarella, T., Carruthers, J. S., Gospodarowicz, M. K., & Sutcliffe, S. B. (1988). Papillary and follicular thyroid cancer: Impact of treatment in 1578 patients. International Journal of Radiation Oncology, Biology, Physics, 14(6), 1063–1075. https://doi.org/10.1016/0360-3016(88)90381-1

Thyroid cancer - Symptoms and causes. (n.d.). Mayo Clinic. Retrieved July 23, 2020, from https://www.mayoclinic.org/diseases-conditions/thyroid-cancer/symptoms-causes/syc-20354161

Tsang, R. W., Brierley, J. D., Simpson, W. J., Panzarella, T., Gospodarowicz, M. K., & Sutcliffe, S. B. (1998). The effects of surgery, radioiodine, and external radiation therapy on the clinical outcome of patients with differentiated thyroid carcinoma. Cancer, 82(2), 375–388.

Tubiana, M., Haddad, E., Schlumberger, M., Hill, C., Rougier, P., & Sarrazin, D. (1985). External radiotherapy in thyroid cancers. Cancer, 55(S9), 2062–2071. https://doi.org/10.1002/1097-0142(19850501)55:9+<2062::AID-CNCR2820551406>3.0.CO;2-O

Vassilopoulou-Sellin, R., Schultz, P. N., & Haynie, T. P. (1996). Clinical outcome of patients with papillary thyroid carcinoma who have recurrence after initial radioactive iodine therapy. Cancer, 78(3), 493–501. https://doi.org/10.1002/(SICI)1097-0142(19960801)78:3<493::AID-CNCR17>3.0.CO;2-U

Wexler, J. A. (2011). Approach to the thyroid cancer patient with bone metastases. The Journal of Clinical Endocrinology & Metabolism, 96(8), 2296–2307. https://doi.org/10.1210/jc.2010-1996
The Role of Epigenetics in Tumorigenesis

BY MICHELE ZHENG, SAGE HILL SCHOOL SENIOR

Cover Image: A structural representation of the DNA molecule with methylated cytosines. DNA methylation is integral to the regulation of gene expression and a critical contributor to cancer progression. Source: Wikimedia Commons [Christoph Bock of Max Planck Institute for Informatics, CCPL]
Introduction

Often, DNA is regarded as a fixed, rigid blueprint that directs the lives of every living organism. In humans, it determines our health and, ultimately, our identities. Though we are born with a fixed set of genes, the study of epigenetics has illuminated other factors, including our environment and lifestyle choices, that have just as much influence on the way our genome manifests. The epigenome shapes aspects of behavior and appearance as well as susceptibility to various kinds of diseases and cancers. Moreover, epigenetic patterns are inherited by offspring, and consequently, the lives and health of future generations remain dependent upon the environmental contexts and lifestyles that affected earlier generations. The advancing field of epigenetics is crucial to the understanding of cancer etiology and progression. DNA methylation and other epigenetic mechanisms reveal a dynamic yet delicate epigenetic landscape that regulates
the expression of the human genome and that, when disrupted, can initiate the progression of cancer. Thus, a better understanding of the responsible mechanisms and the processes that take place will allow for the creation and implementation of effective risk reduction strategies that may improve quality of life and healthcare for future generations.
The Rise of Epigenetics

From Mendelian Genetics to Epigenetics

The study of genetics is a complex, ever-changing subject. Its foundations were established in the revolutionary work of Gregor Mendel, whose famous experiments on plant hybridization set the stage for genetics to become “the science of heredity” (Gayon, 2016). With increasing research in molecular biology, the theories of Mendelian genetics were expanded and revised, while the study of epigenetics grew to become a field of its own. While Mendelian inheritance describes the stable replication of the genome, epigenetics illuminates the ways in which inheritance involves more than just a predetermined set of genes. First introduced in 1942 by embryologist Conrad Waddington, epigenetics today is defined as “the study of changes in gene function that are mitotically/meiotically heritable and that do not entail a change in DNA sequence” (Dupont et al., 2009).

Figure 1: Gregor Mendel, known as the “father of modern genetics,” discovered the fundamental principles of inheritance through his plant hybridization experiments. Source: Wikimedia Commons

Key Experiment: The Agouti Mice Model

The ground-breaking agouti mice experiment is an example of the powerful influence of epigenetic gene regulation upon phenotype and transgenerational inheritance. Conducted by Randy Jirtle of Duke University, the experiment explored the link between prenatal diet and susceptibility to certain diseases such as diabetes and cancer (Murphy, 2003). By manipulating the nutritional intake of pregnant agouti yellow mice, the researchers were able to identify a link between genotype and phenotype. These mice contained a gene named agouti that, when expressed, produces a yellow, obese phenotype and increases susceptibility to various cancers and diabetes. When turned “off,” the gene remains unexpressed and produces a brown, thin phenotype. In his experiment, Jirtle and his team took two genetically identical strains of agouti mice whose mothers were given different diets during pregnancy. While one pregnant mother was given regular mouse chow, the other was given a nutritious diet supplemented with vitamin B12, folic acid, choline, and betaine, all foods rich in methyl groups (Murphy, 2003). The results were drastic. The mother given the nutrient-poor diet produced yellow, obese newborns who had an increased risk of developing cardiovascular disease, diabetes, and cancer (Murphy, 2003). On the other hand, the mother that received the nutrient-rich diet produced thin, brown, healthy mice with a lower incidence of disease (Murphy, 2003).

What explains these results? In utero, the brown, lean offspring received methyl groups through their mother's nutritious diet that acted upon the agouti gene to effectively turn it “off” (Murphy, 2003). Thus, this study revealed that environmental cues, such as prenatal diet, are able to affect disease risk and alter phenotype not by changing the fundamental DNA code, but by modulating the epigenome through DNA methylation.

The Role of Twin Studies in Epigenetics Research

Twin studies are also a unique tool for illuminating the effects of environment and lifestyle upon the epigenome. Though monozygotic twins share almost all their genetic information, phenotypic differences, such as birth weight, are not uncommon. In one survey, monozygotic twins showed lifetime disparities in type 1 diabetes (61%), type 2 diabetes (41%), autism (58-60%), schizophrenia (58%), and various cancers (0-16%) (Castillo-Fernandez et al., 2014). Though genetic inheritance remains a factor in disease risk, it is clear that differing environmental exposures acquired throughout life, not genotype alone, determine phenotype (Castillo-Fernandez et al., 2014). In one of the largest twin studies on epigenetics, epigenetic patterns of 80 sets of monozygotic twins from ages 3 to 74 years were analyzed (Choi, 2005). Researchers also recorded dietary habits, physical exercise, drug consumption, alcohol intake, height, and weight (Choi, 2005). Results indicated that older twins are more epigenetically dissimilar to each other than younger twins. For the two sets of twins where the individual siblings were most epigenetically opposed, there were “four times as many differentially expressed genes in the older pair than in the younger pair” (Choi, 2005). This
Figure 2: Mouse with the agouti gene activated (left) and a mouse with an inactive agouti gene (right). Source: Wikimedia Commons [Randy Jirtle, CCPL]
further suggests that the accumulated differences in environmental exposures can influence the magnitude of expression of certain genes (Choi, 2005).
Epigenetic Mechanisms
By altering chromatin structure, and therefore DNA accessibility, epigenetic modification determines how the genome manifests through cellular identity and development as well as the onset of various disease states. The expression of genes within the mammalian genome is regulated by three primary epigenetic components: DNA methylation, post-translational histone modifications, and non-coding RNAs.

DNA Methylation

DNA methylation is, by far, the most extensively researched epigenetic mechanism due to its expansive influence upon the expression of the mammalian genome as well as its critical role in maintaining homeostasis and genetic continuity. DNA methylation is involved in many biological processes including cellular differentiation, genomic imprinting, X-chromosome inactivation, and aging. Its primary mechanism of action, however, involves silencing the expression of certain genes by mobilizing regulatory proteins (Roberti et al., 2019). Methylation occurs via the transfer of a methyl group to a cytosine residue in CpG dinucleotides within the genome. These segments of DNA,
in which a cytosine is followed by a guanine, are concentrated in CpG-rich regions of DNA that remain unmethylated in differentiated tissues (Sharma et al., 2010). In some instances, however, these CpG promoter sites may become methylated, both in embryonic and adult somatic cells, resulting in long-term gene silencing (e.g., X-chromosome inactivation) (Roberti et al., 2019; Sharma et al., 2010). Other CpG sites, not located in islands, are heavily methylated, ensuring chromosomal stability (Sharma et al., 2010). DNA methylation is mediated by three main enzymes, known as DNA methyltransferases (DNMTs), that serve to both develop and maintain epigenetic patterns. Following fertilization, DNMT3A and DNMT3B enable the differentiation of embryonic stem cells, silencing or activating certain genes by establishing methylation patterns during early development. As these differentiated cells replicate, DNMT1 maintains these epigenetic patterns through the methylation of CpG sites on daughter DNA strands during DNA replication (Roberti et al., 2019). In addition, ten-eleven translocation enzymes (TETs) carry out DNA demethylation, in which methyl groups are removed through the oxidation of methylcytosine to hydroxymethylcytosine, followed by reversion to unmethylated cytosine (Roberti et al., 2019). Together, methylation, regulated by DNMTs, and demethylation, regulated by TETs, strike a delicate balance that enables proper cell functioning.
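Because methylation happens at CpG dinucleotides, CpG-rich stretches can be located computationally. The sketch below scans a DNA string for CpG sites and computes the observed/expected CpG ratio that, together with GC content and a length cutoff, is commonly used to flag candidate CpG islands; the toy sequence and the 0.5/0.6 thresholds are common conventions assumed here, not values drawn from the studies cited in this article.

# Minimal sketch: locate CpG sites and score a window's CpG enrichment.
# The thresholds and the toy sequence are assumptions for illustration.

def cpg_sites(seq):
    """Positions where a cytosine is immediately followed by a guanine."""
    s = seq.upper()
    return [i for i in range(len(s) - 1) if s[i:i + 2] == "CG"]

def cpg_obs_exp(seq):
    """Observed CpG count divided by the count expected from C and G frequencies."""
    s = seq.upper()
    c, g = s.count("C"), s.count("G")
    if c == 0 or g == 0:
        return 0.0
    expected = c * g / len(s)
    return len(cpg_sites(s)) / expected

window = "CGCGGGCTTCGCGACGCTAGCGCG"
gc_content = (window.count("C") + window.count("G")) / len(window)
# A window is often called CpG-island-like when GC content exceeds 0.5
# and the observed/expected CpG ratio exceeds 0.6 (plus a length cutoff).
print(cpg_sites(window), round(gc_content, 2), round(cpg_obs_exp(window), 2))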
Figure 3: The epigenetic landscape is shaped by a variety of environmental and lifestyle factors that affect gene expression. Source: Wikimedia Commons [National Institutes of Health]
Histone Post-Translational Modifications

Histone post-translational modifications, otherwise known as HPMs, are a group of epigenetic mechanisms that influence gene expression through the alteration of chromatin structure. This class of epigenetic modification includes histone acetylation, methylation, and phosphorylation. Chromatin consists of repeating units called nucleosomes, each comprised of about 146 base pairs of DNA wrapped around an octamer, or eight-piece complex, of four core histone proteins. Each octamer consists of two subunits each of the H2A, H2B, H3, and H4 proteins, with NH2-terminal tails extending outward from the nucleosome structure (Kanwal et al., 2012). Through the covalent modification of these histone tails, the chromatin can either be condensed and transcriptionally inactivated, known as heterochromatin, or decondensed in a way that facilitates transcription, known as euchromatin (Kanwal et al., 2012). By altering the compaction state of the chromatin, histone modifications can either block or enable the recruitment of transcriptional proteins such as RNA polymerase to nearby genes (Chuang et al., 2007). The principal enzymes that enable this process include histone acetyltransferases (HATs), histone deacetylases (HDACs), histone methyltransferases (HMTs), and histone demethylases, among others, that act
in combination to activate or repress gene expression at specific gene bodies (Kanwal et al., 2012). The actions of these opposing enzymes allow histone modification to be a highly reversible process.

Non-coding RNAs

Non-coding RNAs (ncRNAs) are divided into two main categories: short-chain non-coding RNAs and long non-coding RNAs (lncRNAs) (Roberti et al., 2019). Operating on both the transcriptional and post-transcriptional levels, ncRNAs regulate gene expression and, in turn, the way phenotype manifests. Short-chain ncRNAs include miRNAs, siRNAs, and piRNAs. The most researched are endogenous miRNAs, which can control gene expression by binding to mRNAs. In doing so, they block the mRNA from being translated into protein. miRNAs, extending ~22 nucleotides long, play a major role in regulating the cell cycle, mainly cell proliferation, differentiation, and apoptosis (Sharma et al., 2010). Thus, they are often targets of interest in the study of tumorigenesis. miRNAs, as well as siRNAs, also play a part in establishing DNA methylation and histone modification patterns. By regulating the expression of DNMT1, DNMT3A, and DNMT3B, these ncRNAs are predicted to affect the activity of these enzymes and facilitate the expression of epigenetic mechanisms (Chuang et al., 2007). However, the activity of miRNAs can, in
Figure 4: A computer representation of the molecular structure of DNA methyltransferase DNMT3B. Source: Wikimedia Commons [Jawahar Swaminathan and MSD staff of the European Bioinformatics Institute]
turn, be regulated by histone modifications and DNA methylation as well (Chuang et al., 2007). Long non-coding RNAs (lncRNAs) are also an integral part of global gene regulation. These include small nucleolar RNAs and enhancer RNAs, among other types (Roberti et al., 2019). In both embryonic and adult stem cells, they help to maintain pluripotency and facilitate differentiation (Gayon, 2016). lncRNAs also have well-established roles in transcriptional interference and other biological processes, including the repair of damaged DNA and DNA replication (Roberti et al., 2019). In post-transcriptional processes, lncRNAs regulate splicing and protein translation as well as maintain both protein and mRNA stability (Roberti et al., 2019).
Disruptions of Epigenetic Landscape in Tumorigenesis
In tumorigenesis, the delicate epigenetic landscape is significantly dysregulated and distorted, enabling cancer progression and the development of other diseases. Ultimately, cancer is the result of combined epigenetic events that influence each other in a way that changes and alters the normal cell cycle.

Aberrations in DNA Methylation

In general, cancer cells exhibit both global hypomethylation and promoter hypermethylation, ultimately leading to the destabilization of the genome and inducing abnormal cell functioning. Hypermethylation of promoter regions is associated with the permanent silencing of tumor suppressor genes that help to maintain correct cell division and regulate cell death. These genes, normally unmethylated in healthy cells, are inappropriately methylated in tumor tissues, which enables irregular cellular growth (Cheng et al., 2019). For example, the p16 gene, essential in cell cycle inhibition, undergoes heavy methylation in a multitude of tumor types, including lung and breast carcinomas (Esteller et al., 2001). Hypermethylation is also prevalent in gastrointestinal tumors, in which mutated p14 and APC genes lead to uninhibited cell growth (Esteller et al., 2001). Ultimately, these mutations can be passed down from generation to generation, such as those of the BRCA1 gene, which increases the risk of developing breast and ovarian carcinomas in families (Esteller et al., 2001). Global hypomethylation at repetitive elements, transposons, and various gene bodies is also a main contributor to cancer initiation, leading to the deregulation of the genome (Sharma et al., 2010). This process is prevalent not only in cancer but also in many other disease states, such as systemic lupus erythematosus and ICF syndrome, in which DNMT mutations may lead to precancerous conditions (Kelly et al., 2010).
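Promoter methylation is often quantified with per-CpG “beta” values, the fraction of methylated copies at each site (0 to 1). Under invented numbers and an invented cutoff, a hypermethylation call of the kind described above might be sketched like this; nothing below comes from the cited studies.

# Hypothetical illustration of calling promoter hypermethylation from
# beta values (fraction methylated per CpG). Data and cutoff are invented.

def mean(values):
    return sum(values) / len(values)

normal_promoter = [0.05, 0.08, 0.04, 0.06]  # tumor suppressor promoter, normal tissue
tumor_promoter = [0.62, 0.71, 0.58, 0.66]   # same promoter, tumor tissue

delta_beta = mean(tumor_promoter) - mean(normal_promoter)
if delta_beta > 0.3:  # hypothetical threshold
    print(f"hypermethylated (delta beta = {delta_beta:.2f}); consistent with silencing")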
Aberrations in Histone Modification Patterns and Non-Coding RNAs

In addition to aberrations in DNA methylation, abnormalities in histone modifications as well as decreased miRNA expression are prevalent in cancerous tissues. The hypoacetylation and hypermethylation of histones are hallmarks of cancer progression that contribute to the permanent repression of tumor suppressor genes. For example, global loss of H4K16 acetylation and the overexpression of HDAC proteins have been identified in numerous cancers and contribute to the silencing of tumor-suppressor genes (Sharma et al., 2010). The dysregulation of histone methylation is also a cause for concern, as H4K20 tri-methylation loss can lead to the silencing of suppressor genes in addition to the overexpression of regulatory proteins and DNA hypermethylation (Sharma et al., 2010). Various HMTs, such as those that regulate histone H3K27 and H3K9, are overexpressed in breast, prostate, and liver cancer in a way that aberrantly alters chromatin structure and levels of transcription (Sharma et al., 2010). Disruptions in miRNA expression, which plays a central role in regulating cell growth, transcription, and cell death, can have detrimental consequences that serve to promote tumorigenesis and further cancer
progression. Many miRNAs that target pro-apoptotic genes like Bim are often overexpressed, while those targeting anti-apoptotic genes, including BCL2, are significantly downregulated, ultimately accelerating the cell cycle and inhibiting cell death (Sharma et al., 2010). For example, the overexpression of miR-146, which represses the BRCA1 gene, has been observed in multiple cancers as well as several autoimmune disorders (Kasinski et al., 2011). Furthermore, alterations of non-coding RNA processes have a significant impact on the functioning of the DNMTs that regulate DNA methylation patterns. Decreased activity of miR-143 and miR-29 both lead to increased DNMT expression, specifically of those that regulate de novo methylation, including DNMT3A and DNMT3B (Kelly et al., 2010).

Molecular Markers for Risk Reduction

Armed with this understanding of epigenetic processes, the potential for new and improved cancer treatment is tremendous. By targeting specific epigenetic mechanisms and identifying epigenetic markers, it is possible to develop effective risk reduction strategies that can transform the way we perceive and treat cancer. Over the course of life, humans accumulate a multitude of different epigenetic markers as a result of environmental exposures, lifestyle, and ancestral inheritance. In analyzing salivary and peripheral blood DNA, scientists are able to draw upon the relationship between DNA methylation patterns and risk association (Park, 2020). Methylation markers in DNA can act as an “epigenetic memory” that details past exposures including drugs, air pollution, radiation, and pesticides (Park, 2020). Thus, DNA methylation is a unique tool that reveals the impact of environmental and lifestyle factors upon physiological traits such as body weight, physical activity, depression, and even alcohol consumption. For example, children whose mothers smoked while pregnant are found to be at increased risk for asthma (Li et al., 2005). Parental exposure to pesticides is also found to be associated with an increased risk for hematologic cancers such as leukemia in children (Park, 2020). Thus, a better understanding of one's DNA methylation profile and cancer risk is essential to the development of personalized prevention strategies.
Risk reduction strategies include changing or quitting lifestyle habits, proactively seeking medical assistance, and acquiring medication that can potentially reverse risk markers altogether (Park, 2020). In reversing these epigenetic markers, one may significantly decrease the risk for a disease (Park, 2020). According to a joint study by Columbia University and the University of Melbourne, breast cancer risk, linked to obesity and a lack of physical activity, can be reduced by ~20% for women all across the risk spectrum, including women at higher risk due to family history (Kehm et al., 2020). Further studies comparing DNA methylation patterns of former smokers revealed that those whose methylation patterns resembled those of non-smokers were at lower risk of developing lung cancer than those whose patterns matched those of current smokers (Zhang et al., 2016). These findings suggest the possibility of reversing cancer-inducing epigenetic markers imposed by one's lifestyle (Park, 2020). Thus, in understanding the ways in which people can assert control over personal cancer risk, we can move the needle forward in making conscious lifestyle choices to better our health as well as develop new and improved epigenetic therapies to counteract genetic and environmentally imposed risk factors.
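To make the ~20% figure concrete: a relative risk reduction scales a baseline risk multiplicatively. The baseline value below is invented solely for the arithmetic; it is not a statistic from the cited study.

# Illustrative arithmetic only; the baseline risk is a made-up number.
baseline_risk = 0.12            # hypothetical lifetime breast cancer risk
relative_reduction = 0.20       # the ~20% reduction reported above
reduced_risk = baseline_risk * (1 - relative_reduction)
print(f"{baseline_risk:.0%} -> {reduced_risk:.1%}")   # 12% -> 9.6%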
Figure 5: A representation of the different stages of the cell cycle. In tumorigenesis, checkpoints that serve to halt the cell cycle at various stages are bypassed, inducing unregulated cell growth. Source: Wikimedia Commons [Richard Wheeler, COMGFDL]
Conclusion

Epigenetics is still a relatively new field, but one with tremendous potential in the realm of cancer treatment. Epigenetic mechanisms, such as DNA methylation, histone modification, and the action of non-coding RNAs, are highly dynamic and essential to any comprehensive view of the interactions between the genome and environment in the context of cancer progression and etiology. In understanding how this epigenetic landscape becomes
disrupted in tumorigenesis, we can both devise better treatment strategies and improve quality of health for at-risk individuals living today and for generations to come.
References

Castillo-Fernandez, J. E., Spector, T. D., & Bell, J. T. (2014). Epigenetics of discordant monozygotic twins: Implications for disease. Genome Medicine, 6, 60. https://doi.org/10.1186/s13073-014-0060-z

Cheng, Y., He, C., Wang, M., et al. (2019). Targeting epigenetic regulators for cancer therapy: Mechanisms and advances in clinical trials. Signal Transduction and Targeted Therapy, 4, 62. https://doi.org/10.1038/s41392-019-0095-0

Choi, C. Q. (2005). How epigenetics affects twins. Genome Biology, 5, spotlight-20050708-02. https://doi.org/10.1186/gb-spotlight-20050708-02

Chuang, J., & Jones, P. (2007). Epigenetics and microRNAs. Pediatric Research, 61, 24–29. https://doi.org/10.1203/pdr.0b013e3180457684

Dupont, C., Armant, D. R., & Brenner, C. A. (2009). Epigenetics: Definition, mechanisms and clinical perspective. Seminars in Reproductive Medicine, 27(5), 351–357. https://doi.org/10.1055/s-0029-1237423

Esteller, M., Corn, P. G., Baylin, S. B., & Herman, J. G. (2001). A gene hypermethylation profile of human cancer. Cancer Research, 61(8), 3225–3229.

Gayon, J. (2016). From Mendel to epigenetics: History of genetics. Comptes Rendus Biologies, 339(7-8), 225–230. https://doi.org/10.1016/j.crvi.2016.05.009

Kanwal, R., & Gupta, S. (2012). Epigenetic modifications in cancer. Clinical Genetics, 81(4), 303–311. https://doi.org/10.1111/j.1399-0004.2011.01809.x

Kasinski, A. L., & Slack, F. J. (2011). Epigenetics and genetics. MicroRNAs en route to the clinic: Progress in validating and targeting microRNAs for cancer therapy. Nature Reviews Cancer, 11(12), 849–864. https://doi.org/10.1038/nrc3166

Kehm, R. D., Genkinger, J. M., MacInnis, R. J., et al. (2020). Recreational physical activity is associated with reduced breast cancer risk in adult women at high risk for breast cancer: A cohort study of women selected for familial and genetic risk. Cancer Research, 80(1), 116–125. https://doi.org/10.1158/0008-5472.CAN-19-1847

Kelly, T. K., De Carvalho, D. D., & Jones, P. A. (2010). Epigenetic modifications as therapeutic targets. Nature Biotechnology, 28(10), 1069–1078. https://doi.org/10.1038/nbt.1678

Li, Y. F., Langholz, B., Salam, M. T., & Gilliland, F. D. (2005). Maternal and grandmaternal smoking patterns are associated with early childhood asthma. Chest, 127(4), 1232–1241. https://doi.org/10.1378/chest.127.4.1232

Murphy, G. (2003). Mother's diet changes pups' colour. Nature. https://doi.org/10.1038/news030728-12

Park, H. L. (2020). Epigenetic biomarkers for environmental exposures and personalized breast cancer prevention. International Journal of Environmental Research and Public Health, 17(4), 1181.

Roberti, A., Valdes, A. F., Torrecillas, R., et al. (2019). Epigenetics in cancer therapy and nanomedicine. Clinical Epigenetics, 11, 81. https://doi.org/10.1186/s13148-019-0675-4

Schneider, A. P., 2nd, Zainer, C. M., Kubat, C. K., Mullen, N. K., & Windisch, A. K. (2014). The breast cancer epidemic: 10 facts. The Linacre Quarterly, 81(3), 244–277. https://doi.org/10.1179/2050854914Y.0000000027

Sharma, S., Kelly, T. K., & Jones, P. A. (2010). Epigenetics in cancer. Carcinogenesis, 31(1), 27–36. https://doi.org/10.1093/carcin/bgp220

Zhang, Y., Elgizouli, M., Schöttker, B., et al. (2016). Smoking-associated DNA methylation markers predict lung cancer incidence. Clinical Epigenetics, 8, 127. https://doi.org/10.1186/s13148-016-0292-4
Selective Autophagy and Its Potential to Treat Neurodegenerative Diseases

BY SAM HEDLEY '23

Cover Image: Image of mouse cortical neurons after 15 days in culture. Mouse models are often used to study protein aggregation in neurons, a leading cause of neurodegenerative diseases. Source: Wikimedia Commons
What is Autophagy?

Autophagy, meaning “self-eating,” is a degradative process contributing to the maintenance of cellular homeostasis. By using the autophagic pathway to deliver cargo to the lysosome, the cell is able to eliminate cytoplasmic material, like proteins or organelles, that would otherwise inflict harm. Autophagy is highly regulated within the cell and is evolutionarily conserved, with significant overlap between the Atg (autophagy) proteins in yeast and in mammals (Glick et al., 2010). Given a renewed attention to autophagy within the scientific community in recent years, we now know much more about the inner workings of this elusive process. Autophagy was once thought to only induce bulk degradation, but three subdivisions of autophagy have since been discovered: macroautophagy, microautophagy, and chaperone-mediated autophagy (CMA) (Parzych & Klionsky, 2014).
The general autophagy pathway is conserved throughout the three subtypes, but each division confers its own targeting processes and mechanisms of specificity (Figure 1). Macroautophagy, or selective autophagy, is the most studied of these three autophagy processes and involves the synthesis of a double-membrane vesicle called the autophagosome. This vesicle will ultimately carry the cargo to the lysosome to be degraded. During microautophagy and chaperone-mediated autophagy (CMA), cargo is absorbed into the lysosomes directly. In microautophagy, invaginations in the membrane of the lysosome entrap cargo and transport it into the organelle (Parzych & Klionsky, 2014). This process confers the lowest level of target specificity because it does not involve the targeting of individual proteins. CMA involves the recognition of an amino acid sequence in the cargo protein by a highly specific chaperone protein complex, which then binds to and carries the cargo to
Figure 1: Each subtype of autophagy has a different mechanism of cargo targeting and delivery to the lysosome. Macroautophagy utilizes vesicle fusion to the lysosome. Chaperone-mediated autophagy transports cargo via the lysosome-associated membrane protein 2 (LAMP-2A) and in microautophagy, the cargo is absorbed directly into the lysosome. Source: Original figure created using BioRender; see (Parzych & Klionsky, 2014) for figure inspiration.
the lysosome. This cargo is then transported into the lysosome through the membrane (Glick et al., 2010). Selective autophagy is activated in response to stress and has been shown to endanger cells when unregulated (Parzych & Klionsky, 2014). Dysregulation of selective autophagy is linked to a range of health issues, prominently neurodegenerative diseases (Pyo et al., 2012). Discovering more about the selective autophagy pathway and the structures of the associated Atg proteins provides the potential to alter autophagy functions in patients through drug therapies. By restoring dysregulated autophagic processes, normal cellular processes are promoted, lessening the impact of neurodegeneration.
The Process of Selective Autophagy

Selective autophagy, hereafter referred to as autophagy, involves the de novo formation of an autophagosome around targeted cargo. The autophagosome travels to the lysosome and fuses with its membrane to form an autolysosome, where the cargo is subsequently degraded (Figure 2). The type of cargo being transported for degradation depends on the autophagy subtype. For instance, mitophagy refers to mitochondrial degradation, while aggrephagy involves the targeting of protein aggregates. The same autophagic pathway, however, occurs during each subtype, independent of the nature of the cargo. Recent
research has identified over 30 Atg proteins involved in selective autophagy. These Atg proteins have been grouped into complexes based on their role in the autophagic pathway, which will be outlined in the following sections.

Autophagy Initiation via the ULK1 Complex
Autophagy can be initiated in response to the deprivation of insulin and of nutrients such as glucose in mammalian cells (Moruno et al., 2012). In such starvation conditions, the GAAC (general amino acid control) pathway upregulates amino acid synthesis while the autophagy pathway degrades unnecessary proteins, recycling amino acids to be used in new processes (Chen et al., 2014). The regulator kinase of autophagy is the mTOR complex, which is deactivated by starvation or the presence of the drug rapamycin, an mTOR inhibitor. Under normal conditions, mTORC1 (a protein of the mTOR complex) phosphorylates and inhibits the autophagy proteins ULK1 and Atg13. ULK1 and Atg13 are part of the mammalian autophagy initiation complex, so phosphorylation of these proteins by mTORC1 inactivates the autophagy pathway. This ULK1 initiation complex comprises ULK1 and Atg13, as well as the proteins FIP200 and Atg101 (Zachari & Ganley, 2017). When mTOR is inactivated, ULK1 and Atg13 are dephosphorylated and the autophagy pathway is activated.
Figure 2: Autophagy is initiated through the inhibition of mTOR (1). The first groups of autophagy proteins are recruited to the PAS (2) where lipids are processed and used to form the isolation membrane (3). Expansion of the vesicle continues to form the phagophore (4), where cargo is then targeted and enclosed by the completed autophagosome (5). The autophagosome fuses with the lysosome (6) to degrade the cargo and release it back into the cell (7). Source: Original figure created using BioRender; see (Nixon, 2013) for figure inspiration.
FIP200 has recently been identified as the largest protein in the initiation complex and is the source of interaction for the other complex components. Atg13:Atg101 is a dimer that forms and interacts with the N-terminal domain of FIP200. Once Atg13:Atg101 localizes to FIP200, ULK1 is recruited to and interacts with FIP200 as well (Shi et al., 2020). This complex localizes to the endoplasmic reticulum, where the autophagosome is formed. ULK1 is the only protein in the complex with kinase activity and phosphorylates Beclin1, Vps34, and Atg14 to initiate the nucleation of the autophagosome membrane via the class III PI3K complex (Zachari & Ganley, 2017).
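The initiation logic above amounts to a switch: nutrient signaling keeps mTORC1 active, active mTORC1 keeps ULK1 and Atg13 phosphorylated, and phosphorylated ULK1 cannot trigger nucleation. A deliberately simplified Boolean model of that chain (a sketch of the logic, not of real kinetics) might read:

# Deliberately simplified Boolean model of autophagy initiation.
# This sketches the regulatory chain described above, not real kinetics.

def autophagy_initiated(nutrients_present, rapamycin):
    # Starvation or rapamycin inactivates mTORC1.
    mtorc1_active = nutrients_present and not rapamycin
    # Active mTORC1 phosphorylates and thereby inhibits ULK1/Atg13.
    ulk1_inhibited = mtorc1_active
    # Dephosphorylated ULK1 phosphorylates Beclin1, Vps34, and Atg14,
    # initiating membrane nucleation via the class III PI3K complex.
    return not ulk1_inhibited

assert autophagy_initiated(nutrients_present=False, rapamycin=False)  # starvation
assert autophagy_initiated(nutrients_present=True, rapamycin=True)    # drug-induced
assert not autophagy_initiated(nutrients_present=True, rapamycin=False)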
Autophagy Nucleation via the PI3K Complex

The class III PI3K complex enlists three primary proteins: Beclin1, Vps34, and p150, which are activated by phosphorylation. This PI3K complex plays two different roles in the autophagy pathway; there is a fourth protein that interacts with the Beclin1/Vps34/p150 complex, and it changes based on the stage of autophagy. When PI3K interacts with Atg14, the complex facilitates membrane nucleation, an early phase of autophagy. If the complex interacts with the protein UVRAG, however, it facilitates the fusion of the autophagosome to the lysosome, a later stage in the autophagic cycle (Chun & Kim, 2018). The PI3K-Atg14 complex helps localize the ULK1 complex to the phagophore initiation sites close to the endoplasmic reticulum membrane (Figure 4). Additionally, Beclin1 and Vps34 interact to phosphorylate the lipid phosphatidylinositol (PI), generating phosphatidylinositol 3-phosphate (PI3P). The lipid PI3P then elongates the phagophore and recruits other Atg proteins to the complex (Pyo et al., 2012). This process is regulated by BCL2, which binds to Beclin1 and inhibits interaction with Vps34, preventing the formation of the autophagosome (Parzych & Klionsky, 2014).

Autophagosome Expansion via Atg12-Atg5-Atg16L1 and LC3B-II

Following the initial synthesis of the phagophore through the PI3K complex, the Atg12-Atg5-Atg16L1 complex expands the phagophore to create the double-membraned vesicle known as the autophagosome. Through an ATP-dependent process involving other Atg proteins, Atg12 and Atg5 are attached covalently, initiating the noncovalent interaction between Atg5 and Atg16L1 to form the Atg12-Atg5-Atg16L1 complex (Glick et al., 2010). These proteins associate with the phagophore and induce both membrane expansion and curvature through the recruitment of processed LC3B-II, then dissociate after autophagosome completion (Figure 4). The production and processing of the protein LC3B increase during autophagy. Through the interaction of various Atg proteins, LC3B is cleaved to form LC3B-I and is then conjugated with the phospholipid phosphatidylethanolamine (PE) to form LC3B-II (Figure 4). Drawn to the phagophore by Atg12-Atg5-Atg16L1, LC3B-II recruits the cargo to be enclosed by the autophagosome (Glick et al., 2010). The main molecule interacting with LC3B-II is the receptor protein p62. Proteins in the cell that are targeted for degradation via the lysosome are tagged by the protein ubiquitin. p62 recognizes and binds to these ubiquitinated proteins, delivering them to the autophagosome via interaction with LC3B-II (Bjørkøy et al., 2005). By the end of autophagosome expansion, the membrane has enclosed the ubiquitin-targeted proteins, and the vesicle proceeds to the lysosome.

Autophagosome Completion and Fusion
The last step in the selective autophagy pathway is the least studied, but it involves the trafficking of the autophagosome to the lysosome via microtubule transport and the subsequent formation of the autolysosome through fusion (Figure 1). The endosomal sorting complex required for transport, ESCRT III, is said to play a role in the closure of the autophagosome as well as the fusion of the autophagosome and the lysosome (Pyo et al., 2012). Deletion of the complex causes autophagosome accumulation, indicating that ESCRT III is necessary to form the autolysosome. Additional components include the aforementioned PI3K-UVRAG complex, which activates the G-protein Rab7, therefore promoting microtubule transport (Parzych & Klionsky, 2014). Furthermore, SNARE proteins facilitate the fusion process by tethering the autophagosome membrane to the lysosome membrane and drawing the two components closer together (Nixon, 2013).
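As a compact recap of the pathway walked through above, the stages and their principal players can be laid out in order; the groupings summarize this article's sections and are not an exhaustive protein inventory.

# Ordered recap of the selective autophagy pathway described above
# (a summary of this article's sections, not an exhaustive inventory).
MACROAUTOPHAGY_STAGES = [
    ("initiation", ["ULK1", "Atg13", "Atg101", "FIP200"]),
    ("nucleation", ["Beclin1", "Vps34", "p150", "Atg14"]),
    ("expansion", ["Atg12-Atg5-Atg16L1", "LC3B-I/II", "p62"]),
    ("fusion", ["ESCRT III", "PI3K-UVRAG", "Rab7", "SNARE proteins"]),
]

for stage, players in MACROAUTOPHAGY_STAGES:
    print(f"{stage:>10}: {', '.join(players)}")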
Figure 3: The orientation of the ULK1 initiation complex is centered around the N-terminal domain of the protein FIP200 (FIP200NTD). FIP200NTD has a c-shaped conformation and mediates direct interactions with both the Atg13:Atg101 dimer and the kinase ULK1. Source: Original figure created using BioRender; see (Shi et al., 2020) for figure inspiration.
Figure 4: Following autophagy initiation (1), the ULK1 and PI3K complexes induce membrane nucleation through the processing of the lipid PI3P (2). The Atg12-Atg5Atg16L1 complex is localized to the phagophore, where it recruits processed LC3B-II. LC3B-II interacts with the cargo receptor p62, which carries the ubiquitinated proteins to the phagophore (3). Source: Original figure created using BioRender; see (Quan & Lee, 2013) for figure inspiration.
Figure 5: Neurodegenerative diseases impair the autophagy at various points in the pathway, exacerbating the aggregation of toxic proteins in cells. Source: Original figure created using BioRender; see (Nah et al., 2015) for figure inspiration.
Causes of Neurodegenerative Diseases

A significant cause of neurodegenerative diseases is the accumulation of mutant proteins in affected cells. Given that selective autophagy removes such protein aggregates in normally functioning cells, mutations in the autophagy pathway have been studied as a potential source of these diseases. The dysfunction of autophagy in these cases can be attributed to insufficient autophagy initiation, reduced degradation function, or increased levels of autophagic stress due to protein aggregates (Nah et al., 2015). The next section explores the function of autophagy in three of the most prominent neurodegenerative diseases: Alzheimer's, Parkinson's, and Huntington's (Figure 5).

Alzheimer's Disease (AD)
Alzheimer's Disease is the leading cause of dementia and develops from cell death in the hippocampus and cerebral cortex, both of which play roles in memory (Irvine et al., 2008). Two toxic protein structures are implicated in the onset of Alzheimer's Disease: plaques of beta-amyloid (Aβ) and tangles of tau. When an amyloid precursor protein (APP) is targeted to autophagosomes for degradation, it undergoes a cleavage and produces Aβ peptides. In diseased cells, the functions of autophagosome transport and fusion are compromised, leading to an accumulation of Aβ-containing vesicles (Figure 5). These Aβ peptides are released back into the extracellular space and aggregate to form insoluble, toxic Aβ plaques (Nah et al.,
2015). This Aβ accumulation in turn initiates abnormal downstream tau activity. Under normal conditions, the protein tau interacts with tubulin and stabilizes microtubules, promoting vesicle transport. The Aβ peptides, however, induce the hyper-phosphorylation of tau and lead to the formation of aggregate tau structures called neurofibrillary tangles. These tangles are not specific to Alzheimer's, but they exacerbate the neuronal toxicity caused by the Aβ peptides and are good indicators of disease severity (Irvine et al., 2008). Further autophagy impairments include the downregulation of the PI3K complex protein Beclin1, which plays a role in autophagy initiation. This phenotype is observed in early AD patients and leads to increased Aβ peptide accumulation (Nah et al., 2015). Additionally, mutations in the protein presenilin-1 (PS1) impair lysosomal degradation (Figure 5), leading to early-onset Alzheimer's (Nixon, 2013). The compromised autophagy pathway degrades the tau and Aβ protein aggregates at a much lower rate than normal, ultimately leading to further disease progression.

Parkinson's Disease (PD)

Parkinson's Disease is characterized by a loss of motor skills and coordination as well as cognitive decline, functions controlled by a region of the brain called the substantia nigra. The neurons in this region are involved in a pathway that controls voluntary movement and use dopamine as a neurotransmitter (Irvine et al.,
2008). Motor function is impaired by the death of these dopamine neurons, which results from an accumulation of "Lewy bodies" in these cells. These "Lewy bodies" are aggregates of mutated ⍺-synuclein protein, which is involved in the transport of neurotransmitters like dopamine. Mutations in several genes, primarily SNCA and LRRK2, inhibit autophagy in PD and lead to the toxic accumulation of ⍺-synuclein (Figure 5). SNCA encodes ⍺-synuclein, and mutations in it lead to the generation of "Lewy bodies" (Albanese et al., 2019). Cells that overexpress ⍺-synuclein also demonstrate a downregulation of the protein Rab1A, which is necessary for proper autophagosome synthesis (Rahman & Rhim, 2017). Six mutations in LRRK2 have been linked to PD, and mutated LRRK2 has been identified as the most common genetic cause of the disease. However, the mechanism by which LRRK2 impairs autophagy remains unclear due to a lack of models effectively isolating LRRK2 function in the pathway (Albanese et al., 2019). Additionally, Parkinson's cells have compromised mitophagy functions, which inhibits their ability to degrade mitochondrial waste. Mitophagy is impeded by mutations in PINK1 and parkin, which comprise the pathway by which p62 targets cargo to the autophagosome. Affected cells have also been shown to have both compromised mitophagy and CMA via LRRK2 mutations (Nixon, 2013).
Huntington's Disease (HD)
Huntington's Disease is accompanied by abnormal motor movements and personality changes. These symptoms are most directly influenced by the cerebellum, spinal cord, and striatum, the latter being the region of the brain with control over rewards and decision making (Arrasate & Finkbeiner, 2012). Degeneration of such functions is caused by the accumulation of toxic mutant huntingtin (HTT) protein in neural cells, leading to cell death and cognitive decline (Rahman & Rhim, 2017). The HTT mutation is a trinucleotide repeat created by aberrant DNA replication and exhibits an autosomal dominant inheritance pattern. The resulting protein is more prone to misfolding and aggregation. The process by which mutant HTT accumulates in neurons is still largely unknown, but disruptions in the autophagy pathway have been identified. In the presence of mutant HTT, autophagosomes form correctly and the HTT is successfully ubiquitinated, but the HTT cargo is not recruited to the autophagosome (Figure 5). This is possibly due to an inability of the cargo receptor p62 to recognize the mutant
protein (Rahman & Rhim, 2017). Additionally, these mutant HTT aggregates can interact with Beclin1 and thereby impede Beclin1's function in autophagosome nucleation (Nah et al., 2015). However, in an effort toward self-preservation, HTT aggregates have been shown to form structures called "inclusion bodies" that reduce the load of protein aggregates targeted by the autophagic pathway (Arrasate & Finkbeiner, 2012). Little is known about this process, but it is thought to function as a way to increase protein degradation, providing support for autophagy-related neurodegenerative therapies.
Drug-Induced Manipulation of Autophagy
The most prominent mechanism being studied to treat neurodegenerative diseases is the upregulation of the autophagy pathway. By inducing greater levels of autophagy in affected cells, toxic protein aggregates can be degraded at a higher rate, alleviating the effects of the disease. Selective autophagy can be upregulated through two different pathways: mTORC1-dependent and mTORC1-independent.
“By inducing greater levels of autophagy in affected cells, toxic protein aggregates can be degraded at a higher rate, alleviating the effects of the disease.”
Rapamycin directly inhibits mTORC1, inducing autophagy. Treatment with rapamycin has been shown to alleviate neurodegeneration in mice via autophagy upregulation (Nixon, 2013). In Drosophila (fly) and mouse models, rapamycin treatment enhanced clearance of both mutant HTT and mutant ⍺-synuclein (Metcalf et al., 2012). In transgenic mice with PS1/APP/Tau proteins, rapamycin was shown to diminish Aβ plaque formation (Rahman & Rhim, 2017). One study compared a mouse strain of normal neurological function to a senescence-prone (SAMP) mouse strain that exhibits age-related neurodegenerative decline. The SAMP mice demonstrated hyperphosphorylated tau proteins and increased autophagy-inhibiting mTOR activity. Treatment of the SAMP mice with rapamycin greatly reduced the levels of phosphorylated tau (Wang et al., 2017). There are three main approaches to mTORC1-independent upregulation. The first is direct activation of the ULK1 complex via the kinase AMPK. This treatment is the least studied because activation of such a central signaling pathway will affect multiple cellular processes and create unwanted side effects (Nixon, 2013). The second approach involves intracerebral delivery of Beclin1. Deletion of the Beclin1 gene in a mouse model of Alzheimer's Disease was shown to increase Aβ peptide accumulation,
and patients with Alzheimer's have demonstrated reduced Beclin1 production (Pickford et al., 2008). In mouse models of both Alzheimer's and Parkinson's, the intracerebral delivery of Beclin1 reduced aggregation of both Aβ peptides and ⍺-synuclein (Metcalf et al., 2012). Additionally, the introduction of Beclin1 into HeLa cells via viral packaging was shown to be sufficient to induce autophagy (Nah et al., 2015). The third pathway of increased autophagy is the inhibition of the phosphoinositol (IP3) cycle in cells. The production of IP3 is necessary to generate PI3P for autophagosome formation, so the reduction of free IP3 in the cell through phosphoinositol cycle inhibition upregulates autophagy (Metcalf et al., 2012). The main drug used to accomplish this is lithium, which is more commonly used as a mood stabilizer in the treatment of bipolar disorder. In neurodegenerative disorders, lithium enhances the clearance of mutant HTT and ⍺-synuclein aggregates (Nixon, 2013). Additionally, inhibition of the IP3 cycle inhibits the protein GSK-3B, which interrupts tau phosphorylation in transgenic mice (Rahman & Rhim, 2017). Some FDA-approved drugs have the same function as lithium in acting on the IP3 cycle, including Ca2+ blockers like loperamide. Dual treatment with mTORC1-dependent drugs like rapamycin and mTORC1-independent drugs like lithium leads to further upregulation of autophagy in flies in vivo (Metcalf et al., 2012).
Conclusion (Looking to the Future)
“The dual treatment with mTORC1-dependent drugs like rapamycin and mTORC1-independent drugs like lithium leads to further upregulation of autophagy in flies in vivo.”
Autophagy as a treatment for neurodegenerative diseases holds a great deal of promise but has yet to see the same amount of success in human trials as in mouse models. Lithium has been administered to small trial groups with adverse results, attributed to the small population size and the variability in the degree of neurodegeneration (Nixon, 2013). Selective autophagy, while becoming increasingly understood, still involves proteins that have undefined functions. The mechanisms of certain complexes are understood, but the roles other proteins play in this process remain unknown. Thus, many of these preclinical findings have not been tested in humans to the same degree as in animals. Transgenic mouse models simulate these neurodegenerative diseases by overexpressing causative proteins, but the individual proteins leading to neurodegeneration are not always obvious (Sweeney et al., 2017). Yet increases in accessible patient data in recent years, as well as strides in drug discovery technology, shape a promising future for neurodegenerative disease treatment through autophagic mechanisms.
References
Albanese, F., Novello, S., & Morari, M. (2019). Autophagy and LRRK2 in the aging brain. Frontiers in Neuroscience, 13, 1352. https://doi.org/10.3389/fnins.2019.01352
Arrasate, M., & Finkbeiner, S. (2012). Protein aggregates in Huntington's disease. Experimental Neurology, 238(1), 1–11. https://doi.org/10.1016/j.expneurol.2011.12.013
Bjørkøy, G., Lamark, T., Brech, A., Outzen, H., Perander, M., Overvatn, A., Stenmark, H., & Johansen, T. (2005). p62/SQSTM1 forms protein aggregates degraded by autophagy and has a protective effect on huntingtin-induced cell death. The Journal of Cell Biology, 171(4), 603–614. https://doi.org/10.1083/jcb.200507002
Chen, R., Zou, Y., Mao, D., Sun, D., Gao, G., Shi, J., Liu, X., Zhu, C., Yang, M., Ye, W., Hao, Q., Li, R., & Yu, L. (2014). The general amino acid control pathway regulates mTOR and autophagy during serum/glutamine starvation. The Journal of Cell Biology, 206(2), 173–182. https://doi.org/10.1083/jcb.201403009
Chun, Y., & Kim, J. (2018). Autophagy: An essential degradation program for cellular homeostasis and life. Cells, 7(12). https://doi.org/10.3390/cells7120278
Glick, D., Barth, S., & Macleod, K. F. (2010). Autophagy: Cellular and molecular mechanisms. The Journal of Pathology, 221(1), 3–12. https://doi.org/10.1002/path.2697
Irvine, G. B., El-Agnaf, O. M., Shankar, G. M., & Walsh, D. M. (2008). Protein aggregation in the brain: The molecular basis for Alzheimer's and Parkinson's diseases. Molecular Medicine, 14(7–8), 451–464. https://doi.org/10.2119/2007-00100.Irvine
Metcalf, D. J., García-Arencibia, M., Hochfeld, W. E., & Rubinsztein, D. C. (2012). Autophagy and misfolded proteins in neurodegeneration. Experimental Neurology, 238(1), 22–28. https://doi.org/10.1016/j.expneurol.2010.11.003
Moruno, F., Pérez-Jiménez, E., & Knecht, E. (2012). Regulation of autophagy by glucose in mammalian cells. Cells, 1(3), 372–395. https://doi.org/10.3390/cells1030372
Nah, J., Yuan, J., & Jung, Y.-K. (2015). Autophagy in neurodegenerative diseases: From mechanism to therapeutic approach. Molecules and Cells, 38(5), 381–389. https://doi.org/10.14348/molcells.2015.0034
Nixon, R. A. (2013). The role of autophagy in neurodegenerative disease. Nature Medicine, 19(8), 983–997. https://doi.org/10.1038/nm.3232
Parzych, K. R., & Klionsky, D. J. (2014). An overview of autophagy: Morphology, mechanism, and regulation. Antioxidants & Redox Signaling, 20(3), 460–473. https://doi.org/10.1089/ars.2013.5371
Pickford, F., Masliah, E., Britschgi, M., Lucin, K., Narasimhan, R., Jaeger, P. A., Small, S., Spencer, B., Rockenstein, E., Levine, B., & Wyss-Coray, T. (2008). The autophagy-related protein beclin 1 shows reduced expression in early Alzheimer disease and regulates amyloid beta accumulation in mice. The Journal of Clinical Investigation, 118(6), 2190–2199. https://doi.org/10.1172/JCI33585
Pyo, J. O., Nah, J., & Jung, Y. K. (2012). Molecules and their functions in autophagy. Experimental & Molecular Medicine, 44(2), 73–80. https://doi.org/10.3858/emm.2012.44.2.029
Quan, W., & Lee, M.-S. (2013). Role of autophagy in the control of body metabolism. Endocrinology and Metabolism, 28(1), 6. https://doi.org/10.3803/EnM.2013.28.1.6
Rahman, M. A., & Rhim, H. (2017). Therapeutic implication of autophagy in neurodegenerative diseases. BMB Reports, 50(7), 345–354. https://doi.org/10.5483/bmbrep.2017.50.7.069
Shi, X., Yokom, A. L., Wang, C., Young, L. N., Youle, R. J., & Hurley, J. H. (2020). ULK complex organization in autophagy by a C-shaped FIP200 N-terminal domain dimer. Journal of Cell Biology, 219(7), e201911047. https://doi.org/10.1083/jcb.201911047
Sweeney, P., Park, H., Baumann, M., Dunlop, J., Frydman, J., Kopito, R., McCampbell, A., Leblanc, G., Venkateswaran, A., Nurmi, A., & Hodgson, R. (2017). Protein misfolding in neurodegenerative diseases: Implications and strategies. Translational Neurodegeneration, 6(1), 6. https://doi.org/10.1186/s40035-017-0077-5
Wang, Y., Ma, Q., Ma, X., Zhang, Z., Liu, N., & Wang, M. (2017). Role of mammalian target of rapamycin signaling in autophagy and the neurodegenerative process using a senescence accelerated mouse-prone 8 model. Experimental and Therapeutic Medicine, 14(2), 1051–1057. https://doi.org/10.3892/etm.2017.4618
Zachari, M., & Ganley, I. G. (2017). The mammalian ULK1 complex and autophagy initiation. Essays in Biochemistry, 61(6), 585–596. https://doi.org/10.1042/EBC20170021
The Role of Autophagy and Its Effect on Oncogenesis
BY ZOE CHEN '23
Illustration depicting autophagy with a western spin! Directed by signaling complexes, autophagy wrangles cellular components in the wild cytosolic landscape. Created by the author.
Introduction
Just two decades ago, autophagy had little foothold in the world of research. This quickly changed as the mechanism erupted into relevance on the scientific stage, revolutionizing current knowledge of neurodegeneration, longevity, and immune diseases. Autophagy is also a critical area of interest in the development of cancer therapies. This mechanism is suspected to either aid or combat chemotherapy resistance in tumor cells; whether it is friend or foe remains contentious. Defining autophagy's involvement may thus be key in the fight against cancer. Autophagy, or self (auto) eating (phagy), is a homeostatic cellular process that degrades debris present within the cell. It serves as a well-conserved survival mechanism in eukaryotes (Feng et al., 2014) and is also present among different tumor types (Tan et al., 2016). Essentially, autophagy can be thought of as a
lassoing cowboy, corralling unnecessary or harmful components in the cell's cytosol for roundup. The autophagosome—an organelle that, in this simile, is akin to a lasso—then drives these components into the lysosome, where they are degraded and broken into basic monomers. The double-membraned autophagosome is a mediating organelle that forms upon appropriate signaling by the cell. It exists temporarily to envelop relevant cytosolic matter and transport it to the lysosome, with which it fuses membranes. Enzymes within the lysosome break down the autophagosome's cargo for disposal or reuse by the cell (Mizushima, 2007). A visual translation of the mechanism can be seen in Figure 1. Autophagy's operation is regulated by several signaling pathways (Mizushima, 2005) that determine how necessary it is for the process to take place. This most general role of autophagy in homeostasis gives rise to its versatile range of function.
A Glimpse into the Rodeo
To better understand autophagy's unique abilities, it is helpful to break down the process stage by stage. Autophagy occurs at a baseline level, but it can be triggered more extensively by a state of nutrient starvation—most notably nitrogen, carbon, and amino acid starvation—though this varies by the type of cellular organism (Takeshige et al., 1992). For multicellular organisms, the endocrine system is believed to be the major regulator of autophagy, because nutrient sensing occurs at the level of the whole organism (Mortimore & Pösö, 1987). The autophagic pathway involves several signaling factors. An overall decrease in glucose transport (denoting starvation) inhibits the molecule mTOR, relieving its suppression of the ULK1 complex, which is responsible for vesicle nucleation. The protein beclin 1, phosphorylated by ULK1, transports autophagic proteins to the forming phagophore, a double membrane that is the precursor to the autophagosome. When the activating molecule in beclin 1-regulated autophagy (AMBRA1) of the PI3K complex binds to beclin 1, the phagophore becomes stabilized. It is now prepared to collect the cytosolic matter. These signaling molecules appropriately call autophagy to action and are also important as targets, frequently used by scientists to arrest autophagy. Research has targeted autophagy pharmacologically upstream with ULK1, PI3K, and beclin 1 inhibition. Drugs have also been used on downstream targets to prevent autophagy. Chloroquine, hydroxychloroquine, and bafilomycin block the autophagosome from fusing with the lysosome (Levy et al., 2017), thereby preventing the breakdown of the debris. These drugs are commonly used in clinical practice to observe the effects of autophagy and what happens when it is absent. The next step is the emergence of the autophagosome, a semi-randomly occurring process. Upon formation, the phagophore sequesters components of the cytosol; the autophagosome is thus formed when the sequestration is complete. The noose of the lasso closes around the unsuspecting herds of cytosolic matter. Subsequently, the autophagosome merges membranes with the lysosome, releasing its cargo to be degraded by hydrolases present inside. The resulting structure is called the autolysosome or the autophagolysosome. When the contents of the autophagolysosome have been broken down into individual units, they are returned to the cytosol for reuse. Recycling proteins, for
instance, yields useful monomers for the cell (Newsholme et al., 1985). These amino acids can then be digested as energy or used as the raw ingredients to synthesize proteins (Onodera & Ohsumi, 2005). These abilities render autophagy a vital tool in cell survival during times of limited resources; reusing or repurposing its own contents increases the cell's endurance.
Purpose of Autophagy
Autophagy is chiefly responsible for the disposal of cytosolic contents and acts as a sort of protein and organelle quality control. Researchers find that cells missing autophagy tend to accumulate long-lived or misfolded proteins and abnormal organelles in regular homeostatic conditions (Komatsu et al., 2005). Other studies have demonstrated autophagy's ability to clear away excess organelles, such as damaged mitochondria (Kim et al., 2007) and surplus peroxisomes (Iwata et al., 2006). Autophagy's main functionality guards the homeostasis of the cell, in turn promoting cell survival and endurance.
“Another hat autophagy wears is as a promoter of longevity. Recent findings show autophagy plays a key role in preventing cellular senescence, or the aging of the cell that effectively ends its reproductive and growth potential."
Because of the autophagosome's ability to round up and direct cytosolic elements, autophagy can also serve as a standalone transportation pathway. For instance, vacuole-specific enzymes formed in the cytosol are delivered by an autophagic pathway to the vacuole (instead of the lysosome) as part of a biosynthetic pathway (Klionsky, 2005). Furthermore, making use of this transport ability, autophagy is employed by dendritic cells to aid the innate immune response. The autophagosome patrols for and rounds up viral single-stranded RNA (ssRNA) in the cytosol. Once the foreign ssRNA is captured, toll-like receptors are able to identify the ssRNA as it is being transported to the lysosome. Dendritic cells then secrete interferons, which stimulate the immune response, attacking the foreign matter (Lee et al., 2007). Autophagy also assists with adaptive immune responses, informing various steps including neutrophil extracellular trap formation, antigen processing, and type I interferon production and regulation (Zhang et al., 2012). Another hat autophagy wears is as a promoter of longevity. Recent findings show autophagy plays a key role in preventing cellular senescence, or the aging of the cell that effectively ends its reproductive and growth
Figure 1: The stages of autophagy, from initiation to degradation. The schematic condenses the steps of macroautophagy, starting with initiation. Upstream, starvation conditions relieve mTOR's inhibition of the ULK1 complex (in charge of nucleation), which is mediated by a PI3K complex. This sequence appropriately allocates the necessary proteins to the site of the phagophore. The double-membraned phagophore extends, evolving into an autophagosome as it collects cytosolic debris. In the stage of vesicle fusion, the lysosome fuses with the autophagosome to create the autophagolysosome. The acidic lysosomal interior and its hydrolytic enzymes break down the cargo of the autophagosome, which results in the return of the degraded content to the cytosol. Created by the author.
“Although autophagy is outfitted as a defense mechanism, certain pathogens and viruses exploit the autophagic pathway.”
potential. Senescence begins when damaged DNA is shed from the nucleus and leaked into the cytosol. Environmental stressors such as chemotherapy, UV rays, and radiation can instigate genomic damage and breaks in the DNA strands. This self-DNA can exit the nucleus through pores or from budding of the membrane. Autophagosomes are responsible for trafficking this extranuclear DNA to the lysosome, where the nuclease Dnase2a can degrade it. Autophagy's essential role in self-DNA removal was highlighted in several experiments: when autophagy was arrested in different cell lines, DNA accumulated in the cytosol. Another sensing pathway, STING, triggers an inflammatory response upon detection of DNA in the cytosol (Lan et al., 2014). Typically, cytosolic DNA is considered suspect by the cell's immune system—it could have originated from bacterial or viral sources, thereby indicating infection. In this case, the inflammatory response is pro-survival and defends the cell against foreign threats (Gaidt et al., 2017). This inflammatory immune response, although beneficial in the short term, contributes to cellular senescence in the long run. In the case of self-DNA, when cytosolic nucleic acid is sourced from the cell itself and not a foreign attacker, this inflammatory
response is inappropriate. As a direct result, premature aging occurs in cells that build up self-DNA without the help of autophagy (Santoro et al., 2018). Senescence is accelerated when autophagy cannot collect and break down cytosolic DNA fast enough before STING detection. Through this perspective, autophagy can be thought of as a senescence suppressant, prolonging life and staving off early aging.
Both Sheriff and Outlaw
Although autophagy is outfitted as a defense mechanism, certain pathogens and viruses exploit the autophagic pathway. Pathogens have been known to fuse with the autophagosome to survive inside the cell and block its policing function (Swanson & Fernandez-Moreira, 2002). Forms of hepatitis, poliovirus, and picornavirus replicate within vesicles of autophagosomal origin, increasing proliferation in a protected setting (Jackson et al., 2005). Problems arise not only when autophagy's function is corrupted by such invaders, but also when autophagy cannot be carried out to completion. Studies find that when autophagy is disrupted,
Figure 2: Stained images of lung cancer cells treated with the antitumor drug etoposide show autophagy. Here, autophagy serves as a line of defense against chemotherapy, helping the cancer mount resistance. By removing waste, autophagy restores the cell, enabling its survival. The lung cells, outlined in green, contain several nuclei (shown as bright green bodies). In the surrounding cytosol are several autophagosomes, shown in orange. Source: Gewirtz and Saleh, 2016.
the resulting accumulation of organelles causes neurodegenerative diseases like Alzheimer's (Okamoto et al., 1991) and Parkinson's (Anglade et al., 1997). Researchers find that if autophagosome formation and maturation into autolysosomes are not fast enough, the incompletely degraded matter can lead to toxic peptides and Alzheimer's precursors. As a result, the up-regulation of autophagy is a beneficial therapy for neurodegeneration (Ravikumar et al., 2004). Another interesting example is the autoimmune disease lupus, specifically systemic lupus erythematosus (SLE). In elucidating how the disease develops, scientists have linked variants of autophagic genes to SLE susceptibility. Defects in autophagy, especially in adaptive immune cells, are suspected to further the development of SLE (Qi et al., 2019). Although autophagy's mechanism is purported to maintain and protect the cell, it can turn harmful when abused by foreign agents or executed incorrectly.
Effects of Autophagy on Tumorigenesis
The role of autophagy is important in non-cancer-related topics, and it is well established
as a multifaceted tool. However, it is equally important to uncover its complex entanglement with both cancer growth and prevention.
Anti-Cancer
On the one hand, existing research in the field demonstrates that autophagy shields cells against cancer development and growth. Autophagy often precedes apoptosis and/or acts in conjunction with this alternative form of cell death. Evidence is mounting that autophagy enhances apoptosis when available; when apoptosis is unavailable, autophagy mediates autophagic cell death as an alternative. Type II autophagic cell death is also important in killing tumor cells that have no resistance to anticancer drugs. Recent studies demonstrate the potency of cannabinoids and temozolomide (TMZ) as anticancer actors via autophagic cell death in some cancers. Other signaling pathways like AMPK/AKT1/mTOR also regulate autophagic cell death. In cases of multidrug resistance (MDR)—defined as the resistance of cancer cells to varied avenues of chemotherapy—where apoptosis is absent, autophagy arrests the growth of tumor cells. In essence, the normal autophagic pathway becomes excessive, digesting and ultimately killing the cell itself. Evidence has even shown that increasing autophagy could
“Evidence is mounting that autophagy enhances apoptosis when available. When apoptosis is unavailable, autophagy mediates autophagic cell death as an alternative.”
facilitate MDR reversal by triggering oxidative stress (Li et al., 2017). Autophagy and apoptosis also make up the first line of defense against hypoxia, a deficiency of oxygen (Bohensky et al., 2007; Degenhardt et al., 2006). When both of these mechanisms are suppressed, cells die from hypoxia. Necrotic cell death then triggers the inflammatory immune response, which in turn promotes tumorigenesis. Thus, researchers find that autophagy is important in fending off tumorigenesis by stopping necrosis.
“Further reports contend that autophagy may act directly as a tumor suppressor. Mutation of some autophagic genes results in tumorigenesis and a dysregulation of cell proliferation in certain lines of mice.”
Further reports contend that autophagy may act directly as a tumor suppressor. Mutation of some autophagic genes results in tumorigenesis and a dysregulation of cell proliferation in certain lines of mice (Karantza-Wadsworth et al., 2007). Another theory for its antitumor character is that autophagy's prevention of genome damage armors the cell against tumor progression. Without autophagy, a cell under metabolic stress exhibits more genome damage, which increases the likelihood of tumor growth, since tumors may arise out of genome instability and mutation. By clearing out abnormal material (such as defunct proteins or organelles) that threatens genome stability, autophagy improves the functionality of the metabolic elements. Other theories argue that autophagy aids in the T cell immune response to cancer (Townsend et al., 2012). Autophagy shapes cell death in tumor cells by making it possible for the immune system to recognize the tumor's properties and instigate an immune response. In enhancing autophagy, a study found a boost in antitumor responsiveness to immune counters (Pietrocola et al., 2016). These findings suggest bundling chemotherapy with enhanced autophagy may prove effective. In all, autophagy's various properties, including its role as an alternative cell death route, its genetic link to tumor growth, its ability to clear harmful cytosolic matter, and its connection to the immune response, optimize it as a defender against cancer.
Pro-Cancer
On the other hand, some of the same properties that render autophagy an antitumor actor can simultaneously promote tumor development in the cell. Ironically, autophagy is suggested to help cells acquire chemotherapy resistance, a rather stark contrast to previously cited research. Because of autophagy's capacity to promote survival under tough conditions by breaking down and reusing the cell's own components, its activity in tumor cells defends them against chemotherapy. Without autophagy there to support tumor cell survival, the cells become
all the more susceptible. In studies, when autophagy was suppressed, the tumor cells became more sensitized to the treatment, which in turn increased its cytotoxic potential. In breast cancer, evidence suggests that autophagy actually prevents apoptosis, thereby denying the cell either form of self-programmed death and helping tumor cells cling to life. Using drugs that prevent autophagy in conjunction with chemotherapeutic drugs has thus been shown to enhance the cytotoxicity of the treatment (Maycotte et al., 2012). Prescribing chloroquine or hydroxychloroquine with chemotherapy has proven to magnify antitumor activity (Levy et al., 2017). When chemotherapy is combined with autophagy suppression, similar effects are observed in a variety of human cancers, including esophageal cancer, glioblastoma, hepatocellular carcinoma, leukemia, lung cancer, pancreatic adenocarcinoma, prostate cancer, renal cell carcinoma, and ovarian cancer (Sui et al., 2013). Another factor influencing autophagy is epidermal growth factor signaling, responsible for three subsequent signaling pathways (Ras/MAPK, PI3K/Akt, and JAK/STATs) that are linked to cancer initiation (Henson & Gibson, 2006). Research indicates that epidermal growth factor signaling can trigger autophagy, which then protects the nascent tumor as it develops in the cell. An extension of autophagy's resistance-acquisition ability is multidrug resistance (MDR), which arises after long-term chemotherapy. In clinical studies, higher levels of autophagy frequently appeared in patients with poor prognoses, indicating that autophagy could trigger MDR development. Further examination of genes involved in the promotion of autophagy shows that they also mediate MDR. This interaction culminates in the protection of MDR cells from appropriate cell death. Silencing autophagy-related genes has sensitized cells with MDR to chemotherapy treatments, making them more effective (Li et al., 2017). Autophagy's genetic and signaling ties to cancer, along with its alliance with tumor cells as a pro-survival mechanism, reveal its dangers. These results pave the way for exciting possibilities in oncology, specifically pairing chemotherapy agents with autophagy suppressants to improve the lethality of anticancer treatments. These results also show how contradictory the current knowledge in the field is. Just looking at MDR tumor cells
gives readers a conflict: with evidence proving that autophagy protects MDR tumor cells against chemotherapies and evidence proving that autophagy terminates MDR tumor cells, how can a vote of confidence be made about autophagy as an instrument of life or death?
Conclusion
Based on current research in the field, autophagy's positive or negative effect on cancer remains ambiguous. Autophagy inducers or inhibitors could be fundamental to a prospective immunotherapeutic strategy against cancer; the matter is one of discerning the circumstances in which autophagy is hurtful or helpful. Although progress is being made in this branch of research, ranging from genetic to clinical studies, more is still needed. As with many elements of the immune system, autophagy's seemingly simple job is in reality a complex and unnavigated web filled with contradictions. With certainty, autophagy's role in cancer growth and development can currently only be defined as context-dependent and varied. A clearer understanding of how autophagy controls chemotherapy sensitivity is necessary to elucidate its effect on tumor cells. Currently, autophagy's functionality hinges on its circumstance; the type of anticancer treatment, the type of cancer, and more can dictate cancer outcomes. While the fundamental mechanisms and applications of autophagy are well explored, defining autophagy's role in cancer remains under debate. Grasping the connections between cancer and autophagy, the immune system, signaling pathways, and associated genes is necessary moving forward. To promote or suppress autophagy? The decision could be potentially lifesaving or death-affirming.
References
Anglade, P., Vyas, S., Javoy-Agid, F., Herrero, M. T., Michel, P. P., Marquez, J., Mouatt-Prigent, A., Ruberg, M., Hirsch, E. C., & Agid, Y. (1997). Apoptosis and autophagy in nigral neurons of patients with Parkinson's disease. Histology and Histopathology, 12, 25–31.
Bohensky, J., Shapiro, I. M., Leshinsky, S., Terkhorn, S. P., Adams, C. S., & Srinivas, V. (2007). HIF-1 regulation of chondrocyte apoptosis: Induction of the autophagic pathway. Autophagy, 3, 207–214.
Degenhardt, K., Mathew, R., Beaudoin, B., Bray, K., Anderson, D., Chen, G., Mukherjee, C., Shi, Y., Gelinas, C., Fan, Y., et al. (2006). Autophagy promotes tumor cell survival and restricts necrosis, inflammation, and tumorigenesis. Cancer Cell, 10, 51–64.
Feng, Y., He, D., Yao, Z., & Klionsky, D. J. (2014). The machinery of macroautophagy. Cell Research, 24, 24–41.
Gaidt, M. M., et al. (2017). The DNA inflammasome in human myeloid cells is initiated by a STING-cell death program upstream of NLRP3. Cell.
Gewirtz, D., & Saleh, T. (2016). Lung cancer autophagy. National Cancer Institute Up Close 2016.
Hacohen, N., & Lan, Y. Y. (2019). Damaged DNA marching out of aging nucleus. Aging, 11(19), 8039–8040.
Hara, T., Nakamura, K., Matsui, M., Yamamoto, A., Nakahara, Y., Suzuki-Migishima, R., Yokoyama, M., Mishima, K., Saito, I., Okano, H., et al. (2006). Suppression of basal autophagy in neural cells causes neurodegenerative disease in mice. Nature, 441, 885–889.
Henson, E. S., & Gibson, S. B. (2006). Surviving cell death through epidermal growth factor (EGF) signal transduction pathway: Implications for cancer therapy. Cellular Signalling, 18, 2089–2097.
Iwata, J., Ezaki, J., Komatsu, M., Yokota, S., Ueno, T., Tanida, I., Chiba, T., Tanaka, K., & Kominami, E. (2006). Excess peroxisomes are degraded by autophagic machinery in mammals. The Journal of Biological Chemistry, 281, 4035–4041.
Jackson, W. T., Giddings, T. H., Taylor, M. P., Mulinyawe, S., Rabinovitch, M., Kopito, R. R., & Kirkegaard, K. (2005). Subversion of cellular autophagosomal machinery by RNA viruses. PLoS Biology, 3, e156. https://doi.org/10.1371/journal.pbio.0030156
Karantza-Wadsworth, V., Patel, S., Kravchuk, O., Chen, G., Mathew, R., Jin, S., & White, E. (2007). Autophagy mitigates metabolic stress and genome damage in mammary tumorigenesis. Genes & Development, 21, 1621–1635.
Kim, I., Rodriguez-Enriquez, S., & Lemasters, J. J. (2007). Selective degradation of mitochondria by mitophagy. Archives of Biochemistry and Biophysics, 462, 245–253.
Klionsky, D. J. (2005). The molecular machinery of autophagy: Unanswered questions. Journal of Cell Science, 118, 7–18.
Komatsu, M., Waguri, S., Ueno, T., Iwata, J., Murata, S., Tanida, I., Ezaki, J., Mizushima, N., Ohsumi, Y., Uchiyama, Y., et al. (2005). Impairment of starvation-induced and constitutive autophagy in Atg7-deficient mice. The Journal of Cell Biology, 169, 425–434.
Lan, Y. Y., Londono, D., Bouley, R., Rooney, M. S., & Hacohen, N. (2014). Dnase2a deficiency uncovers lysosomal clearance of damaged nuclear DNA via autophagy. Cell Reports, 9(1), 180–192.
Lee, H. K., Lund, J. M., Ramanathan, B., Mizushima, N., & Iwasaki, A. (2007). Autophagy-dependent viral recognition by plasmacytoid dendritic cells. Science, 315, 1398–1401.
Levy, J., Towers, C., & Thorburn, A. (2017). Targeting autophagy in cancer. Nature Reviews Cancer, 17, 528–542.
Li, Y., Lei, Y., Yao, N., et al. (2017). Autophagy and multidrug resistance in cancer. Chinese Journal of Cancer, 36, 52.
Maycotte, P., Aryal, S., Cummings, C. T., Thorburn, J., Morgan, M. J., & Thorburn, A. (2012). Chloroquine sensitizes breast cancer cells to chemotherapy independent of autophagy. Autophagy, 8, 200–212.
Mizushima, N. (2005). The pleiotropic role of autophagy: From protein metabolism to bactericide. Cell Death & Differentiation, 12, 1535–1541.
Mizushima, N. (2007). Autophagy: Process and function. Genes & Development, 21, 2861–2873.
Mortimore, G. E., & Pösö, A. R. (1987). Intracellular protein catabolism and its control during nutrient deprivation and supply. Annual Review of Nutrition, 7, 539–564.
Nakai, A., Yamaguchi, O., Takeda, T., Higuchi, Y., Hikoso, S., Taniike, M., Omiya, S., Mizote, I., Matsumura, Y., Asahi, M., et al. (2007). The role of autophagy in cardiomyocytes in the basal state and in response to hemodynamic stress. Nature Medicine, 13, 619–624.
Newsholme, E. A., Crabtree, B., & Ardawi, M. S. (1985). Glutamine metabolism in lymphocytes: Its biochemical, physiological and clinical importance. Quarterly Journal of Experimental Physiology, 70, 473–489.
Okamoto, K., Hirai, S., Iizuka, T., Yanagisawa, T., & Watanabe, M. (1991). Reexamination of granulovacuolar degeneration. Acta Neuropathologica, 82, 340–345.
Onodera, J., & Ohsumi, Y. (2005). Autophagy is required for maintenance of amino acid levels and protein synthesis under nitrogen starvation. The Journal of Biological Chemistry, 280, 31582–31586.
Pietrocola, F., et al. (2016). Caloric restriction mimetics enhance anticancer immunosurveillance. Cancer Cell, 30, 147–160.
Qi, Y.-y., Zhou, X.-j., & Zhang, H. (2019). Autophagy and immunological aberrations in systemic lupus erythematosus. European Journal of Immunology, 49, 523–533.
Ravikumar, B., Vacher, C., Berger, Z., Davies, J. E., Luo, S., Oroz, L. G., Scaravilli, F., Easton, D. F., Duden, R., O'Kane, C. J., et al. (2004). Inhibition of mTOR induces autophagy and reduces toxicity of polyglutamine expansions in fly and mouse models of Huntington disease. Nature Genetics, 36, 585–595.
Santoro, A., Spinelli, C. C., Martucciello, S., et al. (2018). Innate immunity and cellular senescence: The good and the bad in the developmental and aged brain. Journal of Leukocyte Biology, 103, 509–524.
Swanson, M. S., & Fernandez-Moreira, E. (2002). A microbial strategy to multiply in macrophages: The pregnant pause. Traffic, 3, 170–177.
Takeshige, K., Baba, M., Tsuboi, S., Noda, T., & Ohsumi, Y. (1992). Autophagy in yeast demonstrated with proteinase-deficient mutants and conditions for its induction. The Journal of Cell Biology, 119(2), 301–311.
Tan, Q., et al. (2016). Role of autophagy as a survival mechanism for hypoxic cells in tumors. Neoplasia, 18, 347–355.
Townsend, K. N., et al. (2012). Autophagy inhibition in cancer therapy: Metabolic considerations for antitumor immunity. Immunological Reviews, 249, 176–194.
Zhang, Y., Morgan, M. J., Chen, K., Choksi, S., & Liu, Z. G. (2012). Induction of autophagy is essential for monocyte-macrophage differentiation. Blood, 119(12), 2895–2905.
COVID-19 Response in Vietnam
BY KAMILA ZAKOWICZ '22 AND KAY VUONG '22
DR. JOSEPH ROSEN (PROFESSOR OF SURGERY, ADJUNCT PROFESSOR OF ENGINEERING AT THE GEISEL SCHOOL OF MEDICINE)
PETER KATONA, MD (CLINICAL PROFESSOR OF MEDICINE AT THE DAVID GEFFEN SCHOOL OF MEDICINE AT UCLA)
Fig. 1: Map of COVID-19 infections as of September 7, 2020. By Raphaël Dunant, Gajmar (maintainer) - Own work, data from Wikipedia English (e.g., COVID-19 data and Population), maps from File:BlankMap-World.svg and File:Blank Map World Secondary Political Divisions.svg, CC BY 4.0, Wikimedia Commons.
Introduction
The end of December 2019 marked the beginning of the coronavirus outbreak, now a global pandemic. As medical professionals in China's Hubei province noticed a rise in peculiar pneumonia cases that exhibited SARS-like symptoms, they attempted to warn their government and the world about its potentially deadly consequences. It took only weeks, from December 31, 2019, when the coronavirus was identified, until January 13, 2020, when the first case of COVID-19 outside China was confirmed, for COVID-19 cases to start multiplying on a global scale and turn into a pandemic (World Health Organization, 2020). It was declared as such by the WHO on March 11th. During the first meeting of the International Health Regulations Emergency Committee regarding the outbreak of the novel coronavirus on January 22 and 23, the representatives from China's Ministry of Health confirmed
that there were 557 cases, located mainly in the Hubei province, along with some clusters outside of this region. According to the World Health Organization's (WHO) 205th Situation Report, there have been 20,162,474 confirmed cases and 737,417 fatalities globally as of August 12th, 2020, with North America being the current epicenter (WHO, 2020). Since the coronavirus outbreak is the first pandemic of this severity since the catastrophic Spanish flu emerged in 1918, most people who are currently alive have never experienced the severity of a pandemic's impact. A pandemic such as the Spanish flu, or the COVID-19 pandemic the world is experiencing now, affects almost every essential sector of the economy, including healthcare, education, and trade. Among those affected are world leaders who bear the burden of protecting all of these aspects and making the right sacrifices. A large number of governments, particularly those of the Western
world, have been heavily criticized for their untimely or ill-fitting pandemic-containment policies; yet, in the face of it all, Vietnam and other Asian countries seem to have been those with the firmest grasp on how to contain the 21st century's worst public health disaster so far. The official COVID-19 statistics in Vietnam as of August 2020 are 1,029 confirmed cases with 27 deaths. From April 17 to May 2020, however, there were 324 confirmed cases (all contracted overseas) and 0 deaths (Ministry of Health Vietnam, 2020). Due to the steady decrease in cases, the Vietnamese government began to ease lockdown restrictions on April 23, 2020, and entered a "new normal" – a step that few affected countries had been able to take at that point (Ministry of Health Vietnam, 2020). It is therefore valuable to assess what Vietnam accomplished to achieve this outcome at such an early date, especially since it has more than 97,000,000 citizens and shares a border with China that spans roughly 1,400 kilometers, or 870 miles (World Population Review, 2020; Thao, 2009).
Addressing the credibility of Vietnam's Coronavirus statistics
Vietnam's status as a one-party Communist state inevitably provokes an instinctive reaction in Western, non-Communist societies to deny the credibility of its COVID-19 statistics. It is true that there is a lack of transparency when it comes to official details of what the medical process looks like for Vietnamese COVID-19 patients and its infected case numbers, as the Ministry of Health has yet to release any data to those who are not directly involved in the medical sector. Thus, the credibility of the number of infected cases remains uncertain. The lack of public access to data at this point during the pandemic, however, neither supports nor denies the successful results that the Vietnamese government has achieved in containing an outbreak. As of right now, there have been confirmations from independent international sources that argue in favor of Vietnam's credibility. The director of the Centers for Disease Control and Prevention (CDC) in Thailand, Dr. John MacArthur, stated that he had faith in Vietnam's case numbers in a telephonic briefing with Dr. Barbara Marston, CDC COVID-19 International Task Force Lead: "[Our] team that's up in Hanoi working very, very closely with ministry of health counterparts on many, many aspects of
this outbreak response, providing technical assistance in the areas of surveillance, data analysis, laboratory testing, the actual going into the field and doing investigations, contact tracing with the Vietnamese counterparts, and really, sort of trying to support them and their approach. [...] And so those relationships are strong, and it kind of allows us to get a sense of whether the numbers are real, the numbers are not real, only because of deficiencies in testing and some such, or whether there’s some reason to hide those numbers. And from the communications I’ve had with my Vietnam team, is they, at this point in time, don’t have any indication that those numbers are false.” In addition, Reuters, the world’s largest multimedia news provider, lauds Vietnam for having the highest ratio of tests to each confirmed case in the world, which stands at roughly 791:1. Although these numbers may contain bias from the Vietnamese Ministry of Health, international news outlets such as Reuters mostly acknowledge the effectiveness of Vietnam’s approach regardless of questions of numerical transparency.
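A back-of-the-envelope check helps make sense of a tests-per-case figure: it is simply cumulative tests divided by cumulative confirmed cases. The short Python sketch below illustrates the arithmetic only; the input counts are hypothetical placeholders chosen to land near the ratio Reuters reported, not official Ministry of Health or Reuters data.

# Illustrative tests-per-confirmed-case calculation.
# The counts below are hypothetical placeholders, not official figures.
tests_performed = 213_743   # assumed cumulative tests
confirmed_cases = 270       # assumed cumulative confirmed cases

ratio = tests_performed / confirmed_cases
print(f"{ratio:.0f} tests per confirmed case")  # prints "792 tests per confirmed case"

Because the denominator is so small, the ratio is extremely sensitive to even a handful of additional confirmed cases, which is worth keeping in mind when comparing countries by this metric.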
Historical Pandemic Response: Severe Acute Respiratory Syndrome (SARS), 2003
“Reuters, the world's largest multimedia news provider, lauds Vietnam for having the highest ratio of tests to each confirmed case in the world, which stands at roughly 791:1."
The COVID-19 global outbreak is not the first pandemic to sweep through Vietnam in the 21st century. In February 2003, the SARS coronavirus, which initially appeared in southern China's Guangdong province around mid-November 2002, spread to 26 countries and proceeded to infect roughly 8,437 people in the following three to four months ("Cumulative Number," 2020; CDC, 2020). The number of worldwide fatalities reached about 800 ("Cumulative Number," 2020). Among the affected countries outside of China, Vietnam was one of the first four to report alarmingly atypical cases of pneumonia, following the hospitalization in Hanoi of a 47-year-old businessman who had previously traveled to mainland China and Hong Kong (Kamps & Hoffman, 2006). These cases were later confirmed to be SARS and were all treated in Hanoi's Vietnam-French Hospital. Out of 63 SARS cases in Hanoi, 36 were healthcare workers in direct contact with the first patient, making the percentage of infected medical personnel 57%, the highest of all affected countries (Hörmansdorfer, Campe, & Sing, 2008). The 5 recorded deaths in Vietnam consisted entirely of doctors and nurses at the
Figure 2: Graph depicting daily COVID-19 tests in South Korea and Vietnam from January 28th to April 27th. Daily testing rose significantly in South Korea starting in February, while in Vietnam daily testing did not start until March. Retrieved from Our World in Data.
French Hospital (Phuong, 2018). Two important factors contributed to Vietnam's success in becoming SARS-free after just two months (02/26/2003 - 04/28/2003): a timely response from the hospital responsible for SARS patients, and the government's quick and transparent efforts to enforce policies that ensured effective containment of the disease.
“Only a few days after the initial report to the WHO made by Dr. Carlo Urbani, the French Hospital issued a hospital-wide quarantine order on March 5, 2003.”
The amount of commitment and resourcefulness that went into controlling SARS infections was especially impressive because, at the time, medical resources in Vietnam were still relatively scarce (Phuong, 2018). Only a few days after the initial report to the WHO made by Dr. Carlo Urbani, a WHO representative who traveled to Hanoi for a situational assessment, the French Hospital issued a hospital-wide quarantine order on March 5, 2003 (National Hospital for Tropical Diseases Vietnam, 2013). When interviewed about the situation that led to the hospital's lockdown, nurse Xuân, one of the first responders, told a VNExpress journalist (translated from Vietnamese): "I vaguely sensed that the outbreak was becoming more and more dangerous. That day [March 5, 2003], I went to the market and bought a lot of groceries, and instructed my husband to cook and take care of the kids. Sure enough, that night the hospital issued the order to close, and all staff stayed at the hospital."
All working hospital staff and residing patients were required to remain at the hospital for more than 30 days. This swift move to isolate SARS cases largely prevented the spread of the disease within the community and minimized the possible damage of the outbreak. According to Doctor Võ Văn Bản, the vice director of the French Hospital during the SARS outbreak, most of the hospital's medical personnel were unaware of the impending SARS pandemic when they came into contact with patient zero on February 26, 2003. As a result, they were not able to take drastic preventive measures, such as isolating the patient and wearing enough personal protective equipment, and a disproportionate number of the hospital's staff became infected and died (Phuong, 2018). This devastating unintended consequence prompted the hospital to respond to protect its staff and control infection. First, after identifying the SARS threat, the hospital allowed high-risk personnel, such as those who
Figure 3: Chart depicting Vietnamese tracing levels F0-F5. Created by authors.
were pregnant or had young dependents, to take time off work (Anh, 2020). Second, the hospital began providing additional PPE for its workers, specifically N95 masks (Nishiura et al., 2005). The hospital addressed its urgent need for essential life-saving equipment by cooperating with Bach Mai Hospital and its French associates to "borrow" five more ventilators (Phuong, 2018). After the first few frantic days of coordinating additional medical supplies and working intensively to minimize the further spread of the outbreak, the Vietnamese Ministry of Health directed the French Hospital to move its patients to the National Hospital for Tropical Diseases, whose medical personnel and resources were better equipped for this type of emergency (National Hospital for Tropical Diseases Vietnam, 2013). The fast and efficient cooperation among this network of hospitals enabled Vietnam to mitigate the consequences of medical institutions being caught unprepared by an unprecedented and deadly medical emergency. By March 15, the CDC issued warnings and guidelines for global health departments on the SARS pandemic (CDC SARS Response Timeline, 2013). From this point onwards, the Vietnamese government strengthened
“By March 15, the CDC issued warnings and guidelines for global health departments on the SARS pandemic. From this point onwards, the Vietnamese government strengthened border control and enforced extensive medical screenings at ports of entry as well as airports across the country.”
border control and enforced extensive medical screenings at ports of entry as well as airports across the country. The testing procedures were conducted by medical personnel from the National Institute of Hygiene and Epidemiology in cooperation with the National Hospital for Tropical Diseases (Phuong, 2018). Despite being a struggling developing country at the time, Vietnam was committed to funding efforts to fight the transmittable disease. The Finance Ministry spent about $2 million on medical equipment and activities related to SARS prevention and targeted a little more than $1 million for Vietnam's border provinces tasked with preventing SARS from leaving or entering Vietnam (Congressional Research Service, 2003). In addition, the Ministry of Health directed the formation of the National Steering Committee
for the Prevention of SARS with the assistance of other governmental sectors with authority over national security and governmental media. This Committee consisted of divisions responsible for community tracing as well as regional disease-monitoring task forces. According to the General Department of Preventive Medicine in Vietnam, the task forces included the following sub-committees (Executive Hazardous Infectious Diseases, 2017):
Figure. 4: Map depicting tracing levels of different countries. Retrieved from Our World in Data.
1. The sub-committee for disease monitoring 2. The sub-committee for disease treatment 3. The sub-committee for public health education and media 4. The sub-committee for logistics
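The community tracing mentioned above was later formalized for COVID-19 in the F0-F5 scheme shown in Figure 3, which classifies people by their contact distance from a confirmed case: F0 is the confirmed patient, F1 a direct contact of F0, F2 a contact of F1, and so on out to F5. Conceptually, this is a breadth-first traversal of the contact network. The Python sketch below is an illustrative model of that classification, not an official algorithm; the function name and contact graph are hypothetical.

from collections import deque

def assign_tracing_levels(contacts, confirmed_cases, max_level=5):
    # Breadth-first search outward from confirmed cases (F0),
    # labeling each newly reached contact one level higher (F1, F2, ...).
    levels = {person: 0 for person in confirmed_cases}  # F0: confirmed positive
    queue = deque(confirmed_cases)
    while queue:
        person = queue.popleft()
        if levels[person] == max_level:                 # do not trace past F5
            continue
        for contact in contacts.get(person, ()):
            if contact not in levels:                   # keep the closest exposure level
                levels[contact] = levels[person] + 1
                queue.append(contact)
    return levels

# Hypothetical contact graph: A is a confirmed case; B and C met A; D met B; E met D.
contacts = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
print(assign_tracing_levels(contacts, ["A"]))
# {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 3} -> F0, F1, F1, F2, F3

Reportedly, the tiers were matched to measures of decreasing strictness (hospital isolation for F0, centralized quarantine for F1, supervised home isolation for lower tiers), though the exact rules varied over the course of the outbreak.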
“Among the countries most affected by the SARS outbreak, Vietnam was the poorest.”
Among the countries most affected by the SARS outbreak, Vietnam was the poorest (Congressional Research Service, 2003). The country acknowledged that it was lacking the necessary medical and monetary resources at the time, making the prospect of fighting the SARS pandemic alone unfeasible. In order to overcome this vulnerability, Vietnam actively sought international aid. As soon as the first severe SARS cases appeared, the French Hospital was financially assisted by France and received more than $100,000 for the sterilization and disinfection of hospital equipment. Vietnam also called for Japan to dispatch medical experts and was sent two ventilators along with other medical supplies by the Japanese government. In addition to requesting assistance from other nations, Vietnam cooperated extensively with large public health organizations. It was supplied with a large amount of PPE, such as masks and gowns, from the WHO and CDC, and received direct consultation from a team of medical professionals sent by Doctors Without Borders (Congressional Research Service, 2003). On April 28, 2003, a mere two months after
the appearance of its first SARS case, Vietnam became the first country in the world to contain the outbreak (Update 95 - SARS: Chronology of a serial killer, WHO, 2003). Vietnam's holistic approach to battling this public health emergency enabled it to minimize the spread of the deadly virus, and its success in 2003 would later become the blueprint for how the country dealt with the COVID-19 pandemic in 2020.
Strategies for COVID-19: Systemic Level
Where is Vietnam now, and what strategies did the government employ to get there? From the first case of COVID-19 reported on January 23rd of this year, Vietnam had gone almost 100 days without a locally transmitted case until the recent July 2020 outbreak in Da Nang. The number of positive cases reported by the Vietnamese Ministry of Health was 1,009 in August, up from the 324 cases reported from May to July, due in large part to Vietnamese tourists from all over the country visiting a popular resort in Da Nang, as reported by Vietnam Briefing (Vietnam Briefing News, 2020). Meanwhile, the CDC reported roughly 50,000 new cases per day in the United States in July and August 2020 (CDC COVID Data Tracker, 2020). The question remains: how did Vietnam do so well in the
face of this pandemic? A report in the Journal of Travel Medicine claimed Vietnam's response was similar to those of other countries, but that Vietnam's strategies were particularly effective because they were implemented much earlier (Dinh et al., 2020). This paper will investigate several key systemic strategies employed by the Vietnamese government to prevent the spread of COVID-19. The following strategies will be discussed: early and enforced quarantine, government tracing, distribution of supplies and testing, utilization of technological tools, and leadership organization.
Early and Enforced Isolation
i. Budget-friendly Approach: A brief comparison to South Korea
Much of Vietnam's success can be attributed to its proactive and effective lockdown measures. Vietnam took a preventative approach because, with a GDP per capita of $2,715.30 reported in 2019, it could not afford to mitigate an outbreak with comprehensive testing and generous spending. To demonstrate the economic constraints on Vietnam's pandemic response, South Korea provides an illustrative counterexample. Both Vietnam and South Korea have demonstrated relatively effective COVID-19 mitigation outcomes, but they have adopted different approaches. With a significantly higher GDP per capita ($31,362), South Korea could afford a pandemic response strategy of comprehensive testing and generous spending (World Bank, 2020). In its current state, South Korea has reported just under 18,000 cases as of August 25th, with over 1.8 million tests performed (Ministry of Health and Welfare South Korea, 2020). Back in January, officials from the South Korean health ministry convened representatives from more than 20 medical companies to produce COVID-19 testing kits. Just a week after the January 27th meeting, a company was approved, with others following soon after (Terhune et al., 2020). So far, Asia Times has reported that South Korea's national health insurance system has spent $310 million US dollars on COVID-19 treatment. This pales in comparison to the trillions of dollars the United States has spent thus far in its COVID-19 response, but Dr. Kim Sun-min, president of the Health Insurance Review and Assessment Service (HIRA), nonetheless called COVID a "relatively cheap disease" because it does
not require MRIs, surgery, or other expensive equipment for treatment, aside from ventilators in extreme cases (Salmon, 2020). By contrast, Vietnam spent only around $3 million in total on the 2003 SARS outbreak and could not afford this "low cost, high tech" strategy. Vietnamese test kits were launched in March, and only after the WHO awarded the Vietnamese government several grants to fund their development. As a result, daily testing was implemented more slowly in Vietnam than in South Korea. The Vietnamese government instead opted to launch a prevention plan suited to its resources. Travel restrictions, quarantine centers, school closures, and lockdowns were executed early in order to prevent, rather than mitigate, the spread of the virus.
ii. Travel
Immediately after its first case on January 23rd, Vietnam cancelled flights to and from Wuhan, China and set up health screenings at airports (Tuoi Tre News, 2020). A 14-day government-sanctioned quarantine was instituted for travelers on February 15th, and by March 22nd all foreign travelers were banned from entering Vietnam (Ministry of Foreign Affairs, 2020). Health declaration forms are mandatory upon entry into the country, and foreigners who arrived before March 1st were considered for an automatic extension of stay until August 31st. The country went into a limited national lockdown on April 1st, shutting down its borders, public transportation, and any gatherings of more than 10 people (Gardaworld, 2020; Shira et al., 2020; Sullivan, 2020; U.S. Embassy & Consulate in Vietnam, 2020). These measures were taken as a preventative strategy and likely contributed to Vietnam's remarkably low number of positive cases and deaths. After March, domestic restrictions loosened and airlines resumed domestic travel until the recent evacuation and resuspension due to the outbreak in Da Nang. The foreign travel ban, however, was still in effect in August, and citizens returning from abroad must abide by the mandatory government quarantine.
iii. Isolation Centers
The Ministry of Health has listed several types of sites as candidates for centralized isolation facilities. These include army and police barracks, school dormitories, factories and enterprises, new and unused
apartment buildings, hotels and resorts, schools, and communal health facilities (Ministry of Health, 2020).
Figure 5: Application logos for the NCOVI (left) and Bluezone (right) applications. Retrieved from https://apps.apple.com/us/app/ncovi/id1501934178 and https://play.google.com/store/apps/details?id=com.mic.bluezone
Returning citizens and travelers are kept in centralized isolation facilities for 14 days, where they are given government-subsidized food and accommodations. The Ministry of Health lists the requirements for isolation facilities: essential living conditions (electricity, water, bathrooms), ventilation, security and safety, fire prevention, and a location away from residential areas yet convenient for transportation and waste removal. If possible, rooms are equipped with television or internet. In the facilities, people are isolated as much as possible but may receive packages from friends and loved ones, provided the packages do not contain money or alcohol. The conditions are reported to be minimal but comfortable, with volunteer translators for foreigners and fences between bunk beds. The People's Committees of each district or province are responsible for implementing these facilities, which are then staffed and guarded by health workers, the military, and law enforcement (Nguyen, K., 2020; Chau & Nguyen, Xan Q., 2020; Nguyen, S., 2020; Pearson, J. & Nguyen, P., 2020).
iv. School Closings
Hanoi was the first province to cancel in-person schooling in late January, after Vietnam's first COVID-19 case on January 23rd. Other provinces and cities followed suit, and by February 16th all 63 provinces had opted to extend school closures. After either a three-month school break or three months of online learning, schoolchildren began to return to school with physical distancing
measures in place starting in early May, with gradual reopening in phases (Insider, V., 2020; UNESCO, 2020; Saigoneer, 2020; Tuan, V. & Tung, M., 2020; Vu, K. & Nguyen, M., 2020).
Government Tracing
The government has resourced a comprehensive tracing program for positive cases. The Ministry of Health has outlined the system and protocols for carrying out tracing, most of which rely on collaboration between individual citizens and local health department officials. The system classifies cases on a scale from F0 to F5. F0 cases are infected patients, F1 cases are people who had direct contact with an F0 case, F2 cases have had contact with an F1 case, and so on down to F5 cases, which have had contact with an F4 case (Times, V., 2020). Contacts of F0 cases are traced for 14 to 28 days, F1-F4 cases are isolated and quarantined, and F5 cases are notified of possible exposure and asked to self-monitor. Each level of tracing requires a different level of action, and the Ministry of Health has published the following response protocols. F0 individuals must be hospitalized, treated, and isolated to prevent further infection. F1 individuals must wear a mask immediately, notify the local health department, isolate at a hospital, and notify their F2 contacts. F2 individuals must wear a mask immediately, notify the local health department, follow isolation instructions from department staff, and notify their F3 contacts. F3 individuals must do the same and notify their F4 contacts. F4 individuals must isolate at home and notify the local health department. These protocols are documented on the Ministry of Health website, on the Vietnamese news site Vietnam+, and in work by Dinh et al. (2020).
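The tiered classification maps cleanly onto a simple lookup structure. Below is a minimal, illustrative Python sketch of the F-tier protocol as described above; the function and dictionary names are hypothetical and are not part of any official Vietnamese system.

```python
# Illustrative sketch of Vietnam's F0-F5 contact-tracing tiers.
# Tier actions follow the Ministry of Health protocols summarized
# above; all identifiers here are hypothetical.

TIER_ACTIONS = {
    "F0": ["hospitalize", "treat", "isolate"],
    "F1": ["mask immediately", "notify local health department",
           "isolate at hospital", "notify F2 contacts"],
    "F2": ["mask immediately", "notify local health department",
           "follow staff isolation instructions", "notify F3 contacts"],
    "F3": ["mask immediately", "notify local health department",
           "follow staff isolation instructions", "notify F4 contacts"],
    "F4": ["isolate at home", "notify local health department"],
    "F5": ["self-monitor"],
}

def classify_contact(tier_of_contact: str) -> str:
    """Return the tier assigned to a direct contact of someone in
    `tier_of_contact` (an F1 is a contact of an F0, and so on)."""
    level = int(tier_of_contact[1:])
    return f"F{min(level + 1, 5)}"  # tracing stops at F5

# Example: a direct contact of a confirmed case (F0) becomes F1.
tier = classify_contact("F0")
print(tier, "->", TIER_ACTIONS[tier])
```

The appeal of such a scheme is that each newly identified contact immediately knows both their own obligations and whom they must notify next, so the tracing chain propagates without central coordination.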
Vietnam has one of the most comprehensive tracing policies in the world. While parts of Europe and some other countries are beginning to trace as thoroughly as Vietnam, the United States still employs limited tracing measures despite being the current global epicenter.
Supplies, PPE, Testing
i. Developing, Distributing, and Exporting Test Kits
Reuters and The Japan Times reported that collaboration between the Vietnamese government, the medical supply company Viet A Corps, and the research facilities at the state-run Military Medical University (MMU) was key to Vietnam's comprehensive testing and supply strategy. By late February, MMU and Viet A Corps had designed a mass-producible test kit for the coronavirus. The government issued a license, and by the end of March 250,000 kits had been distributed in Vietnam. The number of labs that could test for COVID-19 also increased, from 3 in January to 112 in April (Japan Times, 2020; Vu, K., Nguyen, P., & Pearson, 2020). By the end of April, the World Bank reported that Vietnam was conducting 967 tests for every positive case it found (Minh Le, S., 2020). The same week, Vietnam received the WHO's seal of approval to start exporting tests. The WHO had sent Vietnam lab supplies two months earlier to start developing the kit, and since then Vietnam has developed two internationally used tests to avoid false negatives: COVID-19 antibody tests and Polymerase Chain Reaction (PCR) tests to detect the presence of the virus. These kits were researched and developed by multiple organizations and political bodies in Vietnam working together. The Vietnam Academy of Science and Technology (VAST), the consulting business IMM, and the Vietnamese University of Technology (UoT) were consulted by the government and worked together to develop publicly funded test kits. Vietnam used a collaborative, top-down approach: IMM's kit was developed with the military and mass produced by Viet A Corps. Vietnam's
kits have been tested and found to be on par with those distributed by the WHO and the CDC, and producers can make 10,000 kits per day (Klingler-Vidra, R., Tran, Ba L., & Uusikyla, I., 2020). By the beginning of May, the American news outlet Voice of America reported that Vietnam had received 20 orders for test kits from nations around the world (Voice of America, 2020).
ii. Manufacturing of Related Goods
Production of other pandemic-related supplies has also boosted Vietnam's exports and economy. Amid the global economic downturn caused by the virus, Vietnam found a solution in producing face masks. The government converted numerous textile and clothing factories into face mask producers. Producing the masks is quick and efficient and does not require complicated imports of raw materials. Factories can produce thousands of masks per day for domestic use and export. By mid-April, the Ministry of Industry and Trade reported that 50 producers had the combined capacity to manufacture 8 million masks per day, or 200 million masks per month (Dougn, D., 2020).
Technology and Media
i. Ministry of Health Website
Along with information from the CDC and WHO, Vietnamese citizens can access information and announcements from their own government through the Ministry of Health website. The site is updated daily with announcements of new cases and links to news regarding COVID-19. The homepage features a live database, which is crucial for citizens staying up to date on infections and the status of different regions. There are additional tabs for articles on the latest COVID-19 research, recommendations from the Ministry of Health on a variety of topics, and updates on industry support. There is also a section where people can submit questions, find health support locations, look up instructions, and take quizzes on various health protocols. The Ministry of Health has also created digital posters on subjects such as hygiene, social distancing measures, tracing protocol, ways to manage stress, testing protocols, and advice on maintaining good health during a pandemic. Information is centralized, organized, and easy to access, and the live webpage connects citizens directly to their government through the Ministry of Health.
ii. Applications: NCOVI and Bluezone
The Ministry of Health and the Ministry of Information and Communications launched the NCOVI application, a platform where users can find recommendations on COVID-19 protocols and submit voluntary health declarations. The application asks users to fill out contact information and family details and to take a health and disease survey. In March, the app reached the #1 spot in the Vietnamese Apple App Store, and the Google Play store shows that by August 13th, 41,293 people had downloaded it. Many users have left comments describing the application as useful, informative, and accessible. On April 21st, the technology firm Bkav and the Ministry of Information and Communications launched an additional application called Bluezone. Bluezone uses Bluetooth to link smartphones within two meters of each other and notifies users if they have had contact with an infected individual in the last 14 days. 42,849 users had downloaded the app from Google Play as of August 13th (Nguyen, D., 2020). The benefit of the app lies in its efficiency: the government does not have to systematically collect information from people and dispatch valuable resources for tracing.
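The core matching logic of such a proximity-notification app can be sketched in a few lines. The Python fragment below is a simplified illustration of the Bluetooth contact-matching idea described above (log nearby encounters, then flag any contact with a later-confirmed case within the 14-day window); it is not Bluezone's actual implementation, and all names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Encounter:
    peer_id: str        # rotating anonymous ID broadcast over Bluetooth
    time: datetime
    distance_m: float   # estimated from Bluetooth signal strength

def exposure_alert(encounters, infected_ids, now,
                   max_distance_m=2.0, window_days=14):
    """Return True if any logged encounter was within `max_distance_m`
    of a later-confirmed case during the last `window_days` days."""
    cutoff = now - timedelta(days=window_days)
    return any(
        e.peer_id in infected_ids
        and e.distance_m <= max_distance_m
        and e.time >= cutoff
        for e in encounters
    )

# Example: one close encounter 3 days ago with an ID later marked infected.
log = [Encounter("abc123", datetime(2020, 8, 10), 1.4)]
print(exposure_alert(log, {"abc123"}, datetime(2020, 8, 13)))  # True
```

Because each phone stores its own encounter log and only checks it against published infected IDs, the matching happens on the device, which is exactly why the government does not need to collect movement data centrally.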
Leadership and Culture
A strong, centralized leadership was vital to Vietnam's swift and effective response to COVID-19. The key players in Vietnam's coordinated pandemic response were the Deputy Prime Minister, the military, and the Ministry of Health. These leaders covered a lot of ground in little time, which was exactly what was needed when COVID-19 surfaced in Vietnam in January.
i. Deputy Prime Minister
Deputy Prime Minister Vu Duc Dam, also the Vice Chair of the Communist Party, represents the first prong of government leadership in Vietnam. Mr. Vu Duc Dam was delegated to lead the Ministry of Health (MOH) after the previous MOH leader stepped down in 2019, and his tenure lasted through the first wave of the pandemic (Vietnam Investment Review, 2019). Despite not having a background in health leadership, Mr. Vu Duc Dam has been praised for his role in the early and swift campaign against COVID-19. The government and the country, he said in Hanoi Times, were prepared for the pandemic "before even the first infection," presumably referring to the country's battles with SARS in 2003 and MERS in 2008 (Pham, n.d.). Working with
scientists, leaders in the MOH, and other parts of the government, he was able to secure the MOH's position spearheading campaigns for tracing, quarantine, and health resources for those who needed them. As of July 7th, his post in the MOH has been delegated to Mr. Nguyen Thanh Long, a former MOH Vice Minister (Linen, 2020).
ii. Military Leadership
Military resources have been extensively utilized in the wake of the pandemic, supplemented by resources from the police as well as preventative medicine officials from the MOH. Local and federal troops have maintained isolation centers since their creation, the military research center MMU was central to developing the country's globally used testing kit, and recent reports state that military student volunteers were mobilized to carry out contact tracing protocols and collect samples in Da Nang (Vu, K., Nguyen, P., 2020). In a speech translated by the Vietnam Law and Legal Forum in March 2020, Commander-in-Chief and Secretary of the Central Military Commission Nguyễn Phú Trọng expressed that "each citizen must be a soldier in the battlefield against the disease," advocating for the cooperation of all levels of leadership and community in the fight against COVID-19. The diversity of the military's involvement showcases the extent to which its resources can be utilized across sectors, from technological research to facility management.
iii. The Ministry of Health
Simply put, the Ministry of Health had been preparing for a pandemic long before COVID-19. In 1961, the Direction of Healthcare Activities (DOHA) was established by Vietnamese leader Ho Chi Minh to facilitate central communication and guidance between higher-tier administrative healthcare and lower-tier hospital care. A report published in the Journal of Environmental Health and Preventive Medicine describes the DOHA's two missions: "1. To build a sound collaboration network and support system among health facilities, particularly those at higher and lower levels, to help ensure equity of health and deliver quality healthcare services to all Vietnamese people. 2. To address the burden of too many patients in higher level centers. This means supporting improvements in the quality
of healthcare services provided at lower levels, particularly training and technical skills transfer activities to improve trust and respond to social demands" (Takashima et al., 2017).
Dr. Lily Hue, a doctor at the central Bach Mai hospital, described how this initiative established "a strong preventative medicine sector from [a] central level...down to [a] provincial, district, and communal level." Hue believes it was thanks to this effective communication between central hospitals like Bach Mai and smaller hospitals that Vietnam's healthcare network was able to act quickly and effectively under the leadership of the MOH in responding to COVID-19. Within the MOH, leadership prior to Mr. Vu Duc Dam and Mr. Nguyen Thanh Long had also prepared the country for an effective pandemic response. Dr. Nguyen Thi Kim Tien served as Minister of Health for eight years before stepping down in 2019, and during her tenure she raised the country's capacity for disease prevention by investing in the healthcare sector. Although the price increases she put in place were controversial and eventually contributed to her dismissal, she brought innovative technology into hospitals, raised healthcare worker salaries, and provided patients with amenities such as expanded hospital rooms and furnished waiting rooms (Tuổi Trẻ Online, 2019). The country's response would not have been possible without these measures implemented by Dr. Tien before leaving her post. Although no official successor had been appointed, the Ministry of Health, led temporarily by Deputy Prime Minister Vu Duc Dam, was able to implement the health screenings, tracing, and patient treatment during the first wave of COVID-19 described earlier in this report.
Vietnam: A Model for Pandemic Response?
The United States, along with other countries around the world and the global scientific community, has much to learn from Vietnam and its response to COVID-19. First, Vietnam had built a thorough pandemic-preparedness framework during previous outbreaks of infectious disease such as SARS in 2003. Vietnam also employed a preventative approach to COVID-19 that required effective coordination among the government, science and technology sectors, military resources, and the Ministry of Health. Vietnam serves as a model for the U.S. and others to build a pandemic response
plan that serves each country's needs and best utilizes its resources. The key takeaways from the Vietnamese response are a review and improvement of previous pandemic response tactics, early and enforced quarantine, effective testing and technology, and an organized leadership team. Of course, the citizens of Vietnam also played a major role in the success of these strategies. In the end, as Dr. Thanh Quang of the Vietnamese National Children's Hospital stated, "everyone, from a 5 [year] old to a 90 [year] old, [it didn't] matter what social-economical background, understood what COVID-19 [is] and how dangerous it is." Without the compliance and understanding of its citizens, none of what Vietnam did would have been possible. An informed public, organized leadership, and effective strategies are vital to an effective pandemic response, and Vietnam had them all.
References
After aggressive mass testing, Vietnam says it contains coronavirus outbreak. (2020, April 30). Reuters. https://www.reuters.com/article/us-health-coronavirus-vietnam-fight-insiidUSKBN22B34H
A line runs through it: Vietnam and China complete boundary marking process. (n.d.). Retrieved August 25, 2020, from https://vietnamlawmagazine.vn/a-line-runs-through-it-vietnam-and-china-complete-boundary-marking-process-3227.html
Archived: WHO Timeline - COVID-19. (n.d.). Retrieved August 25, 2020, from https://www.who.int/news-room/detail/27-04-2020-who-timeline---covid-19
BaiViet—Foreigners residing in provinces and cities ... (n.d.). Retrieved May 18, 2020, from https://lanhsuvietnam.gov.vn/Lists/BaiViet/B%C3%A0i%20vi%E1%BA%BFt/DispForm.aspx?List=dc7c7d75%2D6a32%2D4215%2Dafeb%2D47d4bee70eee&ID=1007
Bộ trưởng Bộ Y tế Nguyễn Thị Kim Tiến: 'Tôi cảm ơn những lời chỉ trích'—Tuổi Trẻ Online. (n.d.). Retrieved August 13, 2020, from https://tuoitre.vn/bo-truong-bo-y-te-nguyen-thi-kim-tien-toi-cam-on-nhung-loi-chi-trich-20191121083316875.htm
CDC. (2020, August 13). Coronavirus Disease 2019 (COVID-19) in the U.S. Centers for Disease Control and Prevention. https://www.cdc.gov/coronavirus/2019-ncov/cases-updates/cases-in-us.html
CDC SARS Response Timeline | About | CDC. (2018, July 18). https://www.cdc.gov/about/history/sars/timeline.htm
Chau, Mai N., & Nguyen, Xan Q. (2020, March 19). Vietnam Military Increasing Isolation Housing to 60,000 Beds.
Bloomberg.com. https://www.bloomberg.com/news/articles/2020-03-19/vietnam-is-increasing-quarantine-capacity-to-house-60-000-people
Minh Le, S. (2020). Containing the coronavirus (COVID-19): Lessons from Vietnam. World Bank Blogs. Retrieved May 20, 2020, from https://blogs.worldbank.org/health/containing-coronavirus-covid-19-lessons-vietnam
Coronavirus Disease (COVID-19) Situation Reports. (n.d.). Retrieved August 25, 2020, from https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports
COVID-19 Information. (n.d.). U.S. Embassy & Consulate in Vietnam. Retrieved May 18, 2020, from https://vn.usembassy.gov/u-s-citizen-services/covid-19-information/
Dinh, L., Dinh, P., Nguyen, P. D. M., Nguyen, D. H. N., & Hoang, T. (2020). Vietnam's response to COVID-19: Prompt and proactive actions. Journal of Travel Medicine, 27(3). https://doi.org/10.1093/jtm/taaa047
Ministry of Health and Welfare (South Korea). (n.d.). Coronavirus disease 19 (COVID-19). Retrieved August 25, 2020, from http://ncov.mohw.go.kr/en/
Dougn, D. (2020, April 15). Coronavirus offers opportunity for face mask production business. Vietnam Insider. https://vietnaminsider.vn/coronavirus-offers-opportunity-for-face-mask-production-business/
EXECUTIVE HAZARDOUS INFECTIOUS DISEASE. (n.d.). Retrieved August 25, 2020, from http://vncdc.gov.vn/vi/danh-muc-benh-truyen-nhiem/1082/benh-viem-duong-ho-hap-cap-nang-do-vi-rut
GDP per capita (current US$)—Singapore, Taiwan, China, Hong Kong SAR, China, Korea, Rep. | Data. (n.d.). Retrieved May 19, 2020, from https://data.worldbank.org/indicator/ny.gdp.pcap.cd?locations=sg-tw-hk-kr&name_desc=false
Hahn, R. (2020, April 11). COVID-19, From F0 to F5, Government Centralized Quarantine and Cohort Quarantine Guide A to Z in…. Medium. https://medium.com/@rachelhahn/covid-19-from-f0-to-f5-government-centralized-quarantine-and-cohort-quarantine-guide-a-to-z-in-2fc08d0a3f7a
Hồi ức 45 ngày kinh hoàng chống dịch SARS. (n.d.). Retrieved August 25, 2020, from http://benhnhietdoi.vn/tin-tuc/chi-tiet/hoi-uc-45-ngay-kinh-hoang-chong-dich-sars/220
Hörmansdorfer, S., Campe, H., & Sing, A. (2008). SARS – Pandemie und Emerging Disease. Journal für Verbraucherschutz und Lebensmittelsicherheit, 3(4), 417–420. https://doi.org/10.1007/s00003-008-0374-0
UNESCO. (2020, March 4). COVID-19 Educational Disruption and Response. https://en.unesco.org/covid19/educationresponse
Insider, V. (2020, February 14). Hanoi extends school closure for another week over coronavirus concerns. Vietnam Insider. https://vietnaminsider.vn/hanoi-extends-school-closure-for-another-week-over-coronavirus-concerns/
In the 15 years of the SARS pandemic, the horror has not yet faded. (n.d.). Retrieved August 25, 2020, from https://vnexpress.net/15-nam-dai-dich-sars-noi-kinh-hoang-chua-phai-3723214.html
Klingler-Vidra, R., Tran, Ba L., & Uusikyla, I. (2020, April 9). Testing Capacity: State Capacity and COVID-19 Testing. Global Policy Journal. Retrieved May 20, 2020, from https://www.globalpolicyjournal.com/blog/09/04/2020/testing-capacity-state-capacity-and-covid-19-testing
Linen, T. T. (2020, July 7). Ông Nguyễn Thanh Long làm quyền Bộ trưởng Bộ Y tế. Tuoi Tre Online. https://tuoitre.vn/news-20200706161424781.htm
NCOVI - Ứng dụng trên Google Play. (n.d.). Retrieved May 27, 2020, from https://play.google.com/store/apps/details?id=com.vnptit.innovation.ncovi&hl=vi
Newsletter translated: COVID-19 in the last 24 hours: Stop social isolation, people still have to wear masks when going out and keep their distance. (n.d.). Ministry of Health. Retrieved August 25, 2020, from https://ncov.moh.gov.vn/-/ban-tin-dich-covid-19-trong-24h-qua-ngung-cach-ly-xa-hoi-nguoi-dan-van-phai-eo-khau-trang-khi-ra-ngoai-va-giu-khoang-cach
Nguyen, D. (2020, April 21). Vietnam launches Covid-19 contact tracing app. Vietnam Insider. https://vietnaminsider.vn/vietnam-launches-covid-19-contact-tracing-app/
Nguyen, K. (2020, April 6). Quarantined In Vietnam: Scenes From Inside A Center For Returning Citizens. NPR.org. Retrieved May 19, 2020, from https://www.npr.org/sections/pictureshow/2020/04/06/823963731/quarantined-in-vietnam-scenes-from-inside-a-center-for-returning-citizens
Nguyen, S. (2020, March 24). Coronavirus: Life inside Vietnam's army-run quarantine camps. South China Morning Post. https://www.scmp.com/week-asia/health-environment/article/3076734/coronavirus-life-inside-vietnams-army-run-quarantine
Outbreak of Severe Acute Respiratory Syndrome—Worldwide, 2003. (n.d.). Retrieved August 25, 2020, from https://www.cdc.gov/mmwr/preview/mmwrhtml/mm5211a5.htm
Pearson, J., & Nguyen, P. (2020, March 26). Vietnam quarantines tens of thousands in camps amid vigorous attack on coronavirus. Reuters. https://www.reuters.com/article/us-health-coronavirus-vietnam-quarantine-idUSKBN21D0ZU
Pham, L. (n.d.). Why does Vietnam gain international praise for fight against Covid-19? Hanoitimes.vn. Retrieved August 25, 2020, from http://hanoitimes.vn/why-does-vietnam-gain-international-praise-for-fight-against-covid-19-311680.html
SARS Reference | SARS Timeline. (n.d.). Retrieved August 25, 2020, from http://sarsreference.com/sarsref/timeline.htm
Severe Acute Respiratory Syndrome (SARS): The International Response. (n.d.). Retrieved August 25, 2020, from https://www.everycrsreport.com/reports/RL32072.html
Shira et al. (2020, April 15). COVID-19 in Vietnam: Travel Updates and Restrictions. Vietnam Briefing News. https://www.vietnam-briefing.com/news/covid-19-vietnam-travel-updates-restrictions.html/
Vu, K., Nguyen, P., & Pearson, J. (2020, May 1). After mass testing, Vietnam says coronavirus outbreak contained. The Japan Times. https://www.japantimes.co.jp/news/2020/05/01/asia-pacific/vietnam-coronavirus-outbreak-contained/
Sullivan, M. (2020, April 16). In Vietnam, There Have Been Fewer Than 300 COVID-19 Cases And No Deaths. Here's Why. NPR.org. Retrieved May 20, 2020, from https://www.npr.org/sections/coronavirus-live-updates/2020/04/16/835748673/in-vietnam-there-have-been-fewer-than-300-covid-19-cases-and-no-deaths-heres-why
Takashima, K., Wada, K., Tra, T. T., & Smith, D. R. (2017). A review of Vietnam's healthcare reform through the Direction of Healthcare Activities (DOHA). Environmental Health and Preventive Medicine, 22(1), 74. https://doi.org/10.1186/s12199-017-0682-z
Telephonic Briefing with Dr. Barbara Marston, CDC COVID-19 International Task Force Lead; and Dr. John MacArthur, CDC Thailand Country Director. (n.d.). United States Department of State. Retrieved August 25, 2020, from https://www.state.gov/telephonic-briefing-with-dr-barbara-marston-cdc-covid-19-international-task-force-lead-and-dr-john-macarthur-cdc-thailand-country-director/
Terhune, C., Levine, D., Jin, H., & Lee, J. L. (2020, March 18). Special Report: How Korea trounced U.S. in race to test people for coronavirus. Reuters. https://www.reuters.com/article/us-health-coronavirus-testing-specialrep-idUSKBN2153BW
The Ministry of Health issues guidelines on medical isolation at concentrated COVID-19 disease isolation facilities. (2020, March 14). Ministry of Health. Retrieved May 20, 2020, from https://ncov.moh.gov.vn/web/guest/-/bo-y-te-ban-hanh-huong-dan-cach-ly-y-te-tai-co-so-cach-ly-tap-trung-phong-chong-dich-covid-19
Times, V. (2020, March 16). Prevention of Covid 19: Vietnamese Customs continues giving directives about exporting face masks. Vietnam Times. https://vietnamtimes.org.vn/prevention-of-covid-19-vietnamese-customs-continues-giving-directives-about-exporting-face-masks-18450.html
Top leader calls for solidarity against COVID-19. (n.d.). Retrieved August 13, 2020, from http://vietnamlawmagazine.vn/top-leader-calls-for-solidarity-against-covid-19-27105.html
TRANG TIN VỀ DỊCH BỆNH VIÊM ĐƯỜNG HÔ HẤP CẤP COVID-19—Bộ Y tế. (n.d.). Retrieved August 25, 2020, from https://ncov.moh.gov.vn/
Tuan, V., & Tung, M. (2020, April 11). Vietnam schools set to reopen in June after four-month break. VnExpress International. Retrieved May 18, 2020, from https://e.vnexpress.net/news/news/vietnam-schools-set-to-reopen-in-june-after-four-month-break-4083091.html
Vietnam: Government issues new coronavirus-related
travel restrictions February 15 /update 8. (n.d.). GardaWorld. Retrieved May 18, 2020, from https://www.garda.com/crisis24/news-alerts/314431/vietnam-government-issues-new-coronavirus-related-travel-restrictions-february-15-update-8
Vietnam aviation authority ceases all flights to and from coronavirus-stricken Wuhan. (n.d.). Tuoi Tre News. Retrieved May 18, 2020, from http://tuoitrenews.vn/news/business/20200124/vietnam-aviation-authority-ceases-all-flights-to-and-from-coronavirusstricken-wuhan/52707.html
Vietnam Business Operations and the Coronavirus: Updates. (2020, August 13). Vietnam Briefing News. https://www.vietnam-briefing.com/news/vietnam-business-operations-and-the-coronavirus-updates.html/
Vietnam Continues Nationwide School Shutdown Due to Covid-19. (2020, February 16). Saigoneer. Retrieved May 18, 2020, from https://saigoneer.com/saigon-health/18327-vietnam-continues-nationwide-school-shutdown-due-to-covid-19
VietnamPlus. (2020, March 9). [Infographics] Phân loại cách ly người nhiễm, nghi nhiễm COVID-19. VietnamPlus. https://www.vietnamplus.vn/infographics-phan-loai-cach-ly-nguoi-nhiem-nghi-nhiem-covid19/627447.vnp
Vietnam Poised to Export COVID-19 Test Kits. (2020, April 30). Voice of America. Retrieved May 20, 2020, from https://www.voanews.com/covid-19-pandemic/vietnam-poised-export-covid-19-test-kits
Vietnam Population 2020 (Demographics, Maps, Graphs). (n.d.). Retrieved August 25, 2020, from https://worldpopulationreview.com/countries/vietnam-population
VIR. (2019, October 15). DPM Vu Duc Dam appointed as Secretary of MoH Party Affairs Committee. Vietnam Investment Review. Retrieved August 25, 2020, from https://www.vir.com.vn/dpm-vu-duc-dam-appointed-as-secretary-of-moh-party-affairs-committee-71144.html
Vu, K., & Nguyen, P. (2020, August 2). Vietnam says origin of Danang outbreak hard to track as virus cases rise. Reuters. https://www.reuters.com/article/us-health-coronavirus-vietnam-idUSKBN24Y0CL
Vu, K., & Nguyen, M. (2020, May 11). Vietnam reopens schools after easing coronavirus curbs. Reuters. https://www.reuters.com/article/us-health-coronavirus-vietnam-schools-idUSKBN22N0QB
Vượt qua "tử thần" SARS - Kỳ 2: Trong "tâm bão" SARS - Tuổi Trẻ Online. (n.d.). Retrieved August 25, 2020, from https://tuoitre.vn/vuot-qua-tu-than-sars-ky-2-trong-tam-bao-sars-20200131102513948.htm
WHO. (n.d.). Cumulative Number of Reported Probable Cases of SARS. World Health Organization. Retrieved August 25, 2020, from https://www.who.int/csr/sars/country/2003_07_11/en/
WHO. (n.d.). Viet Nam SARS-Free. World Health Organization. Retrieved August 25, 2020, from https://www.who.int/mediacentre/news/releases/2003/pr_sars/en/
The Role of Ocean Currents and Local Wind Patterns in Determining Onshore Trash Accumulation on Little Cayman Island
STAFF WRITERS: BEN SCHELLING '21, MAXWELL BOND '20, SARAH JENNEWEIN '21, SHANNON SARTAIN '21
TAs: MELISSA DESIERVO, CLARE DOHERTY | FACULTY EDITOR: CELIA CHEN
Cover Image: A rainbow assortment of plastic accumulated on the beach. Source: Needpix.com
Abstract
Each year, humans deposit billions of pounds of plastic and other trash into the ocean. These ocean plastics are distributed worldwide and pose a significant threat to marine ecosystems. The Caribbean islands are collectively the largest plastic polluter per capita, and the interconnected nature of the Caribbean Sea promotes trash transport among the islands. We quantified the onshore accumulation of plastics and other trash over four days at four sandy beaches on Little Cayman Island (two on the north side of the island and two on the south) to determine whether the deposition of trash on the island is driven by ocean currents or by local wind patterns. Though previous research suggests that major ocean currents play a considerable role in the accumulation of trash on beaches, other studies have found that powerful local winds can overcome the influence of ocean currents. On Little Cayman Island, where ocean
currents arrive from the southeast, we might expect higher trash accumulation rates at sites on the southern side of the island. Alternatively, if local wind patterns have a greater influence than ocean currents on where trash is deposited, we would expect trash accumulation on each side of the island to vary with local wind patterns. We found that more southerly wind increases the trash accumulation rate on the south side of the island and more northerly wind increases the trash accumulation rate on the north side. Based on these findings, we recommend targeting beach cleanup efforts on Little Cayman according to recent wind patterns.
Key Words: ocean, currents, wind, plastic, trash, Little Cayman Island, onshore accumulation
Figure 1: Sites chosen for trash surveys on Little Cayman Island. Image from Google Earth.
Introduction
Plastic has become pervasive in every aspect of our lives. Each year, humans deposit over five million tonnes of plastic and other trash into the ocean, which later washes up on beaches or converges in oceanic gyres (Jambeck et al., 2015). Plastic waste has even been found in the guts of amphipods in the deepest parts of the sea (Jamieson et al., 2019). The widespread distribution of plastic and other trash is not without consequences. Oceanic plastic pollution poses a huge threat to marine ecosystems. Seabirds, turtles, seals, and other wildlife are dying at alarming rates from ingesting plastic or getting tangled in plastic products (Laist, 1997). A large portion of these plastics and other trash has accumulated in the five major oceanic gyres. The largest of these is the Great Pacific Garbage Patch, located between the coast of California and Hawaii, which contains an estimated 79 thousand tonnes of plastic brought there by ocean currents (Lebreton et al., 2018). In addition to accumulating in oceanic gyres, plastic also washes up on shorelines around the world. In the Caribbean islands, which produce the most plastic pollution per capita (Ritchie & Roser, 2020), plastic waste is easily transported between islands due to the connected nature of the Caribbean Sea. The once-pristine beaches of the Caribbean islands now hold substantial amounts of plastic and other trash, alarming residents and visitors, harming wildlife, and potentially negatively
affecting the vital tourism industry. One area that receives large amounts of plastic and trash from the other Caribbean islands is Little Cayman Island. Though environmental activists on Little Cayman are advocating for a single-use plastics ban on the island (Young, 2020), the issue of onshore trash accumulation persists. Researchers from the Central Caribbean Marine Institute on Little Cayman believe that most of the plastic and other trash that washes onto the island is brought primarily to its southeastern faces from Haiti, Jamaica, and the Dominican Republic by major ocean currents (personal communication, L. Forbes). In response, there have been consistent grassroots efforts to remove the tonnes of plastic and other trash that wash up onto the beaches of Little Cayman. We aimed to develop baselines for trash accumulation to determine where and when to focus local cleaning efforts. We examined the role of ocean currents and local wind patterns in determining where and how much trash accumulates by quantifying the trash accumulation rate on various parts of the island. The Caribbean Current, similar to the currents that created the Great Pacific Garbage Patch, is relatively constant throughout the year and likely plays a major role in the accumulation of trash in the ocean. However, the local prevailing winds on the island change seasonally and, in the winter especially, vary in magnitude and direction on short time scales (Burton, 1994). Studies have shown that strong, persistent local winds can overcome the influence of prevailing climate patterns on trash accumulation,
affecting or even reversing them (Swanson & Zimmer, 1990). If ocean currents are the primary determinant of trash deposition, we would expect more onshore trash accumulation on the southern side of the island regardless of wind direction, due to the southeasterly current hitting Little Cayman. However, if daily wind changes drive trash deposition, we would not expect a difference in trash accumulation among sites overall; instead, accumulation at each site would vary by day depending on wind direction.
Figure 2: Composition of total trash items collected, separated into plastic, Styrofoam, glass, and other. Created in Excel by the authors.
Methods
Observational Design
"We studied trash abundance at four beaches on Little Cayman Island, two on the south side of the island: the Department of Environment and Nighthawk; and two on the north side of
We studied trash abundance at four beaches on Little Cayman Island, two on the south side of the island: the Department of Environment and Nighthawk; and two on the north side of the island: Bloody Bay and Cumber’s (Figure 1). The beach substrate can have a large influence on the quantity and composition of trash accumulation (Pace, 1996). To normalize for these variations, we conducted all studies on sandy beaches. On the first day of our study (considered “day zero”), we cleared all trash present along three ten-meter transects at each of the four sites. Transects ran parallel to the waterline, two meters above the waterline, and were four meters wide. Specifically, we removed trash visible to the naked eye. Then, for four subsequent days, we counted and collected new trash pieces accumulated at each transect (N=48). In the lab, we categorized the trash we picked
up each day as plastic, glass, Styrofoam, or "other." We also counted the number of bottles, bottle caps, and plastic bags. Because pieces may have broken apart during transport to the lab, the number of trash pieces counted at each site in the lab was sometimes greater than our field counts; we therefore used our field values for analyses. To estimate the number of microplastics present at each site, we took 10 cm sand cores (N=32) and used two- and five-millimeter sieves to separate and count microplastics between these sizes. Microplastics smaller than two millimeters were the same size as sand grains or smaller and therefore could not be sieved or identified easily. At each site we took three cores, each in the middle of a transect, two meters up from the waterline. We also took five cores per site, four meters up from the waterline, to avoid the area where microplastics would be washed away by waves. We used wind data taken every five minutes from Tropical Runaway Station ICAYMANB3, located on the west end of Cayman Brac (Tropical Runaway, 2020). Ocean current direction was determined from historical hydrographic surveys (Roemmich, 1981).
Statistical Analyses
To analyze the differences in daily trash accumulation between the sides of the island and between days, we performed an ANOVA on log(1+x)-transformed data, nesting site within side of the island to test the significance of individual sites. For each period between trash collections, we calculated the average north-south and east-west components of the wind, and for each site we related the number of new pieces of trash, averaged across transects, to the average wind values since the previous collection. To test how the daily trash accumulation rate was affected by side of the island and each wind component, we built two general linear models, one per wind component, again nesting site within side of the island.
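As an illustration of this analysis pipeline, the sketch below decomposes wind observations into components and fits one of the two models. It is a minimal reconstruction, not the authors' actual JMP 14.0 workflow; the file name, column names, and the meteorological from-direction convention are all assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per site per day, with columns (names assumed):
#   new_pieces   daily count of new trash items at the site
#   side         'north' or 'south'
#   site         site name, nested within side
#   wind_speed   mph; wind_dir in degrees (direction wind blows FROM)
df = pd.read_csv("trash_surveys.csv")

# Meteorological convention: a wind FROM direction theta blows TOWARD
# theta + 180 degrees, hence the minus signs on the components.
theta = np.radians(df["wind_dir"])
df["wind_ew"] = -df["wind_speed"] * np.sin(theta)  # east-west component
df["wind_ns"] = -df["wind_speed"] * np.cos(theta)  # north-south component

# log(1 + x) transform applied to the accumulation counts.
df["log_new"] = np.log1p(df["new_pieces"])

# General linear model for the north-south component: accumulation as a
# function of side, the side-by-wind interaction, and site nested in side.
model_ns = smf.ols(
    "log_new ~ C(side) * wind_ns + C(side):C(site)", data=df
).fit()
print(model_ns.summary())
```

The `C(side) * wind_ns` term supplies the side-by-wind interaction that the authors report as significant, while `C(side):C(site)` encodes site nested within side; swapping `wind_ns` for `wind_ew` gives the second model.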
Results
Figure 3: New trash items per day for north- and south-side sites. Values are per 10 meters. The six transects from the north-side sites (three each from Bloody Bay and Cumber's) are shown in blue, and the six transects from the south-side sites (three each from the Department of Environment and Nighthawk) are shown in red. Wind vectors represent the average wind, found by averaging the wind components over the day leading up to the time of the survey. They are shown in miles per hour, with the length and angle of each arrow corresponding to the speed and direction of the wind, respectively. Mean ± SE. Image created in JMP 14.0 by the authors.
We found 1,773 pieces of trash in total. Of these, 1,424 (80 percent) were plastic, 275 (16 percent) were Styrofoam, 61 (3 percent) were glass, and 15 (1 percent) were "other" (Figure 2). Of the plastic pieces, 523 (36 percent) were below 1 centimeter in size. In addition, we found 78 bottle caps, 21 straws, and 144 pieces of plastic packaging, including items such as plastic bags. The average accumulation rate per meter per day was 2.5 pieces at the Department of Environment, 1.6 pieces at Bloody Bay, 3.4 pieces at Nighthawk, and 3.6 pieces at Cumber's. On average across our sites, 2.7 pieces of trash were deposited per meter of shoreline per day. In total, we collected 2.01 kg of trash, for an average of 1.95 g per meter of shoreline per day. The total pieces of new trash per day did not differ between days (ANOVA: F3,11=2.68, p=0.06; Figure 3) nor between sides of the island (ANOVA: F1,11=3.77, p=0.06). The interaction between day and side of the island was significant (ANOVA: F3,11=21.80, p<0.0001). Site had no influence on trash accumulation within sides of the island (ANOVA: F2,11=0.11, p=0.89). The east-west component of wind had no effect on onshore trash accumulation on either side
of the island (general linear model: F1,11=0.92, p=0.36). Trash accumulation on the north side of the island increased with northerly winds, and trash accumulation on the south side increased with southerly winds (general linear model: F1,11=17.40, p=0.0019; Figure 4). Site nested within side of the island had no significant effect (general linear model: F2,11=0.88, p=0.44). The average wind speed during our study was 6.1 mph, and the average wind direction was ESE. Wind speeds ranged from 0 to 16 mph, with directions ranging from northwest to south. This is slightly weaker than the average wind for Little Cayman: 13 mph out of the east. No microplastics were found in any of the core samples.
Figure 4: Linear relationships between the north-south wind component and trash accumulation rate, by side of the island. Points represent average trash accumulation at one site for one day, on either the north or the south side of the island. Negative wind component values correspond to northerly wind, and positive values correspond to southerly wind. Image created in JMP 14.0 by the authors.
Supplemental Figure 1: Trash accumulation over the course of our study. Values are per 10 meters. Blue lines represent sites on the north side of the island and red lines represent sites on the south side. Wind vectors are the same as those in Figure 3. Mean ± SE. Image created in JMP 14.0 by the authors.
"On the time scale of our study, trash accumulation on Little Cayman Island was primarily driven by wind strength and direction."
Discussion
On the time scale of our study, trash accumulation on Little Cayman Island was primarily driven by wind strength and direction. Because trash accumulation did not differ between sides of the island, we can conclude that ocean currents had little influence on trash amounts relative to wind during the course of our study. Additionally, because the amount of trash accumulated did not differ across days with varying wind, we can conclude that wind did not influence overall trash amounts. However, the amount of trash accumulated on a given side of the island depended on the day, and therefore on the direction and strength of the wind. Although we did not find any microplastics in the sand, we cannot conclude that there are no microplastics on Little Cayman Island; this was likely due to a sampling error. The influence of wind direction and magnitude on trash accumulation was also evident in trash accumulation over the course of the study (Supplemental Figure 1). On March 6th (Day 3), the leveling off in cumulative trash amounts coincided with the small wind vector on that day. If our calculated rates of 2.7 pieces and 1.95 g of trash per meter per day are representative of all of Little Cayman Island, we could expect almost 100,000 pieces and 74.4 kg of trash to be deposited each day over the island's roughly 37 kilometers of shoreline. This accumulation rate is much higher than the results of a similar study, which found an average of 0.0034 pieces per meter per day of anthropogenic debris stranding on Atlantic Ocean shores (Barnes & Milner, 2005).
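As a back-of-envelope check of this island-wide extrapolation, the measured per-meter rates scale as follows (assuming the rounded 37 km shoreline figure quoted above):

```python
# Extrapolate the measured per-meter rates to the whole island.
shoreline_m = 37_000            # ~37 km of shoreline (rounded figure)
pieces_per_m_day = 2.7          # measured average, pieces/m/day
grams_per_m_day = 1.95          # measured average, g/m/day

pieces_per_day = pieces_per_m_day * shoreline_m    # 99,900, "almost 100,000"
kg_per_day = grams_per_m_day * shoreline_m / 1000  # ~72 kg with 37 km; the
# 74.4 kg figure above implies a slightly longer shoreline (~38.2 km)

print(f"{pieces_per_day:,.0f} pieces/day, {kg_per_day:.1f} kg/day")
```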
It is possible that these rates are not representative of all shoreline on Little Cayman.
Because we normalized for substrate by specifically choosing sandy beaches, we did not capture the diverse shorelines of the island, which include rocky substrates, mangrove forests, and other vegetated areas. Additionally, our sites were not evenly spaced around the island; Bloody Bay, Cumber's, and the Department of Environment are all located on the west side of the island, while Nighthawk is on the east side. A future study could choose sites more evenly distributed around the perimeter of the island to learn more about how wind influences the entire area. Our sample period also may not represent how trash accumulates year-round on Little Cayman. In winter, the winds are more variable than in summer. Given our results, these seasonal wind patterns could drive seasonal trash accumulation patterns. Future studies should survey during different seasons and over longer periods of time, which would help reveal the effects of ocean currents on trash accumulation over longer timescales. Nevertheless, given the magnitude of trash accumulation on Little Cayman, a strong contingent of conservationists gathers weekly to clean up local beaches. Though these efforts are already very successful at cleaning affected areas, studies such as ours can help make these practices as efficient as possible by predicting where trash will accumulate on short time scales. In the future, trash cleanups on Little Cayman should focus on areas that have recently received stronger winds. Onshore accumulation studies are also important for understanding the concentration and movement of trash around the world's oceans. Mid-ocean trash concentrations steadily increased from 1960 to 1990; however, from 1990 to 2010 there was no trend despite increases in human trash production (Law et al., 2010). This suggests that the sinks and sources of marine plastic are poorly understood. By studying onshore trash accumulation, we can contribute to our understanding of how marine trash concentrations, and our world's oceans, are changing.
Acknowledgements
Biggest of all shout-outs to the Central Caribbean Marine Institute staff, namely Lowell, Niki, and Miriam, who were absolutely essential in all parts of this study. Also, great thanks to our professor and TAs, especially Clare, who spent several hours waiting in a boiling-hot van
for us to finish picking up plastic, and drove us to the store several times to get snacks when we were tired, hot, and hungry.
Author Contributions
All authors contributed equally to this study. Benjamin Schelling was especially useful in finding tiny, tiny, tiny pieces of plastic as well as accurately counting everything, and Shannon Sartain was our sand-core and sand-sieving guru.
References
Barnes, D. K. A., & Milner, P. (2005). Drifting plastic and its consequences for sessile organism dispersal in the Atlantic Ocean. Marine Biology, 146, 815–825.
Burton, F. J. (1994). Climate and tides of the Cayman Islands. In The Cayman Islands (pp. 51–60). Springer, Dordrecht.
Eriksson, C., Burton, H., Fitch, S., Schulz, M., & van den Hoff, J. (2013). Daily accumulation rates of marine debris on sub-Antarctic island beaches. Marine Pollution Bulletin, 66(1-2), 199–208.
Jambeck, J. R., Geyer, R., Wilcox, C., Siegler, T. R., Perryman, M., Andrady, A., Narayan, R., & Law, K. L. (2015). Plastic waste inputs from land into the ocean. Science, 347(6223), 768–771.
Jamieson, A. J., Brooks, L. S. R., Reid, W. D. K., Piertney, S. B., Narayanaswamy, B. E., & Linley, T. D. (2019). Microplastics and synthetic particles ingested by deep-sea amphipods in six of the deepest marine ecosystems on Earth. Royal Society Open Science, 6(2), 180667.
Lebreton, L., Slat, B., Ferrari, F., et al. (2018). Evidence that the Great Pacific Garbage Patch is rapidly accumulating plastic. Scientific Reports, 8, 4666. https://doi.org/10.1038/s41598-018-22939-w
Pace, L. (1996). Factors that influence changes in temporal and spatial accumulation of debris on an estuarine shoreline, Cliftwood beach, New Jersey, USA. Theses, 1076. https://digitalcommons.njit.edu/theses/1076
Ritchie, H., & Roser, M. (2020). Plastic pollution. Our World in Data. https://ourworldindata.org/plastic-pollution#citation
Roemmich, D. (1981). Circulation of the Caribbean Sea: A well-resolved inverse problem. Journal of Geophysical Research: Oceans, 86(C9), 7993–8005.
Swanson, R. L., & Zimmer, R. L. (1990). Meteorological conditions leading to the 1987 and 1988 washups of floatable wastes on New York and New Jersey beaches and comparison of these conditions with the historical record. Estuarine, Coastal and Shelf Science, 30(1), 59–78.
Tropical Runaway - ICAYMANB3. (2020). Weather Underground. https://www.wunderground.com/dashboard/pws/ICAYMANB3?cm_ven=localwx_pwsdash
Young, K. (2020). The plastics problem: Cayman contends with a regional menace. Cayman Compass. https://www.caymancompass.com/2020/01/16/the-plastics-problem-cayman-contends-with-a-regional-menace/
Astrobiology: The Origins of Life in the Universe
STAFF WRITERS: SUDHARSAN BALASUBRAMANI '22, ANDREW SASSER '23, SAI RAYASAM (WAUKEE HIGH SCHOOL JUNIOR), TIMMY DAVENPORT (UNIVERSITY OF WISCONSIN JUNIOR), AVISHI AGASTWAR (MONTA VISTA HIGH SCHOOL SENIOR)
BOARD WRITER: LIAM LOCKE '21
Cover Image: Astrobiology is the study of life in the cosmos. Understanding how life arose on Earth gives insights into where and what to look for when searching for life in outer space. Source: Flickr
Introduction
The universe began 13.7 billion years ago with a tremendous and vibrant explosion known as the Big Bang and went through immense transformations within fractions of a second. Within 10⁻³² seconds, the universe grew from the size of a fingertip to the size of a galaxy. By 10⁻⁶ seconds after the Big Bang, the expansion of the universe had lowered the temperature just enough to allow quarks to form and remain stable, and soon thereafter the universe cooled further, allowing these quarks to form protons and neutrons. At about one second, the universe's temperature was about 1,000 times that of the sun. A few minutes later, protons and neutrons started fusing to make light elements and isotopes such as helium and deuterium. Fifteen to twenty minutes later, temperatures had cooled enough to stop further fusion, resulting in a universe consisting of about 92% hydrogen and 8% helium, along with minute amounts of lithium and other radioactive isotopes ("A Short History of the Universe," 2020).
For the next several hundred thousand years, nothing eventful happened apart from further cooling and expansion. But when the universe was around 380,000 years old, electrons and atomic nuclei started to combine to form neutral atoms. With these newly formed neutral atoms, the universe became transparent to a broad range of radiation wavelengths, and with no stars yet shining, a period of darkness followed: the Dark Ages. As the universe expanded further, the neutral atoms spread out relatively evenly. By chance, however, there were some irregularities in the distribution of matter, which left some clusters of matter abnormally close together. At around 380 million years after the Big Bang, gravity caused these atoms to clump together, giving birth to the first stars and the first emission of starlight, ending the Dark Ages ("A Short History of the Universe," 2020). To supply this light and keep gravity from collapsing it inward, a star fuses lighter elements into heavier elements to create energy. When a massive star dies, it produces an enormous explosion, called a supernova, that releases these heavier elements into the universe. At around 600 million years after the Big Bang, the Milky Way started accumulating matter, and by around 5 billion years it was fully formed. At around 9.3 billion years after the Big Bang, the Earth and the rest of the solar system formed from gas, dust, and the elements given off during supernovae. The first few hundred million years on Earth were rather rough; huge asteroids and comets regularly crashed into the planet, and the surface was completely molten. But after around 500-600 million years, things calmed down, and Earth had a climate that offered steady temperatures and water. It was not much later, perhaps a billion years, that the first life forms developed. And the rest is history ("A Short History of the Universe," 2020). When scientists uncovered this sequence of events, many new questions arose. Was Earth unique? Or were there other objects in the universe that developed life? Enter 'astrobiology.' Astrobiology is an interdisciplinary scientific field concerned with the origins, distribution, early evolution, and future of life in the universe. This field of study strives to answer basic unanswered questions about the long-term adaptation of living organisms to other environments. Astrobiology not only gazes up into space, but also dives into the deepest parts of the Earth, hoping to uncover the truth of how life came
to be. Given the timeless fascination with the origins and prevalence of life, astrobiology will endure long into the future (Hubbard, 2017). In 1944, the physicist Erwin Schrödinger published an essay titled What is Life? The Physical Aspect of the Living Cell, in which he stated that the "obvious inability of present-day physics and chemistry to account for [biological] events is no reason at all for doubting that they can be accounted for by these sciences" (Schrödinger, 1944). Since Schrödinger's initial publication, researchers have made resounding progress in understanding the physical and chemical underpinnings of life. In 1952, Hershey and Chase showed for the first time that deoxyribonucleic acid (DNA) was the molecule that carries the hereditary information necessary for reproduction (Hershey & Chase, 1952). Just one year later, X-ray diffraction patterns produced by Rosalind Franklin and ratios between nucleotides published by Erwin Chargaff and his colleagues were used by Watson and Crick to determine the double-helical structure of the DNA molecule (Zamenhof et al., 1952; Watson & Crick, 1953). The 'central dogma' of molecular biology – DNA makes RNA makes proteins – has fueled an incredible amount of research on the structure and function of these biomolecules. Today, humankind possesses complete genome sequences for about 3,500 species and has determined the structures of over 150,000 proteins (Burley et al., 2019; Lewin et al., 2018).
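To make the central dogma concrete, here is a small, illustrative Python sketch of the two information-transfer steps it describes, transcription (DNA to RNA) and translation (RNA to protein), using a deliberately truncated codon table; it is a toy model, not a bioinformatics tool.

```python
# Toy model of the central dogma: DNA -> RNA -> protein.
# The codon table below is truncated to the few codons used in the demo;
# a real table maps all 64 RNA codons to amino acids or stop signals.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_coding_strand: str) -> str:
    """Transcription: the RNA copy of the coding strand swaps T for U."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list:
    """Translation: read codons (3 bases each) until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

gene = "ATGTTTGGCAAATAA"            # encodes Met-Phe-Gly-Lys, then stop
print(translate(transcribe(gene)))  # ['Met', 'Phe', 'Gly', 'Lys']
```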
Figure 1: Structure of DNA, the self-replicating molecule of modern biochemistry. Source: Wikimedia Commons
At its most basic level, life is simply the existence of a self-replicating molecule capable of evolving and adapting to its environment. However, spontaneous replication of a biopolymer (for instance, DNA or protein) from a soup of free monomeric units (nucleotides or amino acids) is largely prohibited by high energy barriers and unfavorable reaction conditions (Ram Prasad & Warshel, 2011). On Earth, additional molecules have been introduced to make self-replication more successful. Proteins speed up DNA replication, provide the cell with energy, facilitate efficient signaling, and serve as life's 'molecular machines.' Instructions on how to make these proteins are passed to the next generation by specific DNA sequences. Additionally, replication of DNA takes place at an optimal pH, salt concentration, and temperature, so a barrier has evolved to maintain a constant internal environment separated from the harsh external environment. The semi-permeable outer membrane of a cell, known as the plasma membrane, is made of a bilayer of phospholipids: molecules with hydrophilic (water-loving) phosphate heads on the exterior and hydrophobic (water-repelling) fatty acid chains on the interior.
"At its most basic level, life is simply the existence of a selfreplicating molecule capable of evolving and adapting to its environment."
Figure 2: Most common elements in biological systems. Source: Wikimedia Commons
"The molecules mentioned above – nucleic acids, proteins, phospholipids, and water – represent the basis of life on Earth, but our search for life in the universe must not be constrained
A cell is defined by this plasma membrane and is considered the smallest unit of life (Ruiz-Mirazo et al., 2014). It is worth noting that water is required for nearly all reactions in biochemistry, whether as a reactant, a product, or the polar solvent facilitating the making and breaking of chemical bonds. Water also contributes to the folding of proteins, the stability of DNA, and the formation of lipid bilayers through hydrogen bonding (Nelson et al., 2017). The molecules mentioned above (nucleic acids, proteins, phospholipids, and water) represent the basis of life on Earth, but our search for life in the universe must not be constrained by these criteria; astrobiology requires a more inclusive definition of what it means to be alive. Scientists at NASA have defined life as a "self-sustaining chemical system capable of Darwinian evolution" (Benner & Hutter, 2002). However, some researchers disagree with the effort to define life at all, because a definition narrows the search and neglects the possibility of discovering new and interesting chemical systems in outer space that might be viewed as a different kind of life (Cleland & Chyba, 2002). Nonetheless, much of our search for life in the universe has focused on water and small organic compounds, as these are the molecules known to work for life on Earth. This review will provide an overview of current research efforts in the field of astrobiology. First, theories surrounding the origins of life on Earth will be presented. This will include further description of modern biochemical processes, the formation of early biopolymers, and chemical mutation and evolution, as well as a discussion of organisms that use alternative biochemistries or live in extreme environments (possibly resembling those encountered in space).
Next, civilization's search for extraterrestrial life will be addressed. This will include a discussion of the possible requirements and criteria that have guided the search effort, how understanding life on Earth may inform this search, where and what scientists are looking for, the efforts and techniques used to characterize extraterrestrial environments in our solar system (mostly Mars and some gas-giant moons), and the efforts and techniques used to examine celestial objects outside our solar system. Finally, a discussion of the probability of, and search for, intelligent life will be presented.
Life on Earth

The origin of life on Earth is still an unresolved and hotly debated topic. This discussion is not meant to be a definitive explanation for how life arose on Earth, but rather a review of prevailing theories presented by experts in the field. A qualitative chemical approach will be employed to understand modern biochemistry, Earth's prebiotic environment, and possible mechanisms for the creation of early biopolymers. Speculating about the origins of life on Earth is largely an exercise in examining the current state of biological systems and turning back the clock, so a brief introduction to modern biochemistry will provide an endpoint for understanding what had to form at the beginning. All known forms of life on Earth are largely composed of six standard elements: carbon, hydrogen, nitrogen, and oxygen, with lesser quantities of phosphorus and sulfur. Among these six elements, carbon is the most abundant; estimates suggest that the total biomass on Earth contains up to 550 gigatons of carbon (Bar-On et al., 2018).
Figure 3: RNA polymerase (blue) is a protein responsible for the synthesis of mRNA (green) from a DNA template (orange). This figure shows an example of a protein-DNA complex and demonstrates the role of proteins in catalyzing biochemical reactions. Source: Wikimedia Commons
This abundance of carbon and the stability of its bonds are important features of biomolecules; carbon-carbon and carbon-hydrogen bonds do not generally react at standard temperatures (National Research Council, 2007). However, carbon itself is not sufficient to drive the reactions required for life; to promote reactivity, many biomolecules make use of oxygen and nitrogen, as these atoms have higher electronegativity values than carbon and induce the partial electric dipoles needed to drive the creation of macromolecules (National Research Council, 2007). Phosphorus and sulfur also play significant roles in biochemical reactions; for example, sulfur is used in the amino acids cysteine and methionine (Brosnan & Brosnan, 2006). Phosphate groups are prevalent in phospholipids and nucleic acid backbones and are also required for the synthesis of adenosine triphosphate (ATP), a key source of energy for many biochemical reactions (Schirber, 2012). Another important feature of carbon, nitrogen, oxygen, and phosphorus is their ability to form branched structures that can polymerize into long chains (Kitadai & Maruyama, 2018). DNA and RNA are composed of a long, negatively charged backbone of phosphates and ribose sugars linked to one another by phosphodiester bonds. Considering that the human genome is about 3 billion base pairs spread over 46 chromosomes, the average length of a DNA molecule in a human cell is around 65 million base pairs (and since some chromosomes are much smaller than average, some of these molecules are certainly longer).
The polymerization of nucleotides into DNA allows for the storage of an incredible amount of information in a single molecule. Proteins, composed of strings of amino acids, are not nearly as long as DNA (the average protein is about 300 amino acids), but the great variability among the 20 natural amino acid side chains endows these polymers with some extraordinary functions (Alberts, 2002). Although lipids are much smaller compounds and do not polymerize in the way nucleotides and amino acids do, they form interesting macromolecular structures like membranes and micelles, which can compartmentalize cellular reactions. Possibly one of the most important features of these biopolymers is their ability to recognize one another in a sequence-specific manner. The hydrogen bond donors and acceptors of the nucleotide bases (adenine, guanine, cytosine, thymine, and uracil) ensure that A pairs with T (or U in RNA) and G pairs with C (Figure 1). During replication, each strand of a DNA molecule is used as a template to synthesize a new DNA molecule, and high-fidelity proteins known as DNA polymerases incorporate the nucleotide with the correct hydrogen-bonding characteristics. Researchers have shown how the physics of hydrogen bonding, base stacking, and steric hindrance give DNA polymerases a high degree of accuracy during this reaction, demonstrating how the underlying physics of our microscopic universe can govern the chemical reactions that create life.
"Possibly one of the most important features of these biopolymers is their ability to recognize one another in a sequence-specific manner."
Figure 4: The Miller-Urey experiment demonstrated that biological molecules could be formed from abiotic starting materials. Source: Wikimedia Commons
"One approach to simulating the appearance of the first self-replicating molecule has been to recreate conditions of the primordial Earth and observe the results in the lab."
The creation of proteins also relies on nucleotide hydrogen bonding: between DNA and mRNA in the process of transcription, and between mRNA and tRNA during translation. Finally, proteins can have an enormous amount of specificity based on their amino acid sequence. The specific substrate that fits into an enzyme, the protein that is phosphorylated by a kinase, and the DNA sequence recognized by a transcription factor are all examples of proteins recognizing their sequence-specific targets. Interestingly, the major groove of the DNA double helix is just large enough to fit a protein alpha-helix, suggesting a possible coevolution of these biomolecules (Alberts, 2002). The complexity of modern cells (as well as the organisms they comprise) is the product of billions of years of chemical evolution and natural selection. One of NASA's specifications for extraterrestrial life is that the system be capable of Darwinian evolution (Benner & Hutter, 2002). This refers to the process of natural selection published by Charles Darwin and simultaneously theorized by Alfred Wallace in the late 1850s (Darwin, 1859; Costa, 2014). The theory states that variation arises through sexual reproduction, and that geographic seclusion of a population and isolated mating can give rise to a new species better suited to its environment. 'Survival of the fittest' refers to the process by which only the organisms most equipped to thrive in their environments survive and reproduce. Although originally applied to animals (and humans), this concept has been widely accepted as an explanation for chemical evolution as well. Consider a polymer made of two kinds of monomers, X and Y. The polymer reproduces by copying X across from X and Y across from Y, but the incorporation of X is 100 times faster due to its chemical interactions. Now suppose that an error is made during a round of replication, and an X is erroneously incorporated across from a Y. The resulting X-enriched polymer would replicate faster and presumably consume more resources, outcompeting the original molecule. Understanding how molecules recognize one another, reproduce, mutate, and compete for survival gives insight into how a simple self-replicating molecule could eventually give rise to the extraordinary biodiversity observed today, but it does not explain the appearance of the first biomolecule.
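To make the X/Y thought experiment concrete, the short simulation below (a minimal sketch written for this review; the polymer length, population size, and error rate are all arbitrary illustrative values) evolves a population of polymers in which X is incorporated 100 times faster than Y and copying occasionally makes mistakes. Faster replicators leave more descendants, so the population drifts toward X-rich sequences:

```python
import random

LENGTH = 20          # monomers per polymer (illustrative)
POP_SIZE = 200       # fixed population size (illustrative)
ERROR_RATE = 0.005   # chance any monomer is miscopied
GENERATIONS = 60

# Time to incorporate each monomer: X is 100x faster than Y,
# as in the thought experiment above.
INCORPORATION_TIME = {"X": 1.0, "Y": 100.0}

def replication_rate(polymer):
    """A polymer's fitness: the inverse of its total copying time."""
    return 1.0 / sum(INCORPORATION_TIME[m] for m in polymer)

def copy(polymer):
    """Copy a polymer, occasionally miscopying a monomer."""
    return "".join(
        ("Y" if m == "X" else "X") if random.random() < ERROR_RATE else m
        for m in polymer
    )

# Start with polymers that are mostly Y, so selection has room to act.
population = ["".join(random.choice("XYYY") for _ in range(LENGTH))
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    rates = [replication_rate(p) for p in population]
    # Faster replicators are proportionally more likely to reproduce.
    parents = random.choices(population, weights=rates, k=POP_SIZE)
    population = [copy(p) for p in parents]
    if gen % 10 == 0:
        x_frac = sum(p.count("X") for p in population) / (POP_SIZE * LENGTH)
        print(f"generation {gen:3d}: fraction X = {x_frac:.2f}")
```

Over a few dozen generations the X fraction climbs toward one, illustrating how selection on replication speed alone can reshape a population of molecules.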
One approach to simulating the appearance of the first self-replicating molecule has been to recreate the conditions of the primordial Earth and observe the results in the lab. The famous Miller-Urey experiments combined water (H2O), hydrogen gas (H2), methane (CH4), and ammonia (NH3) in a sealed flask, boiled the solution, and applied electrodes across the flask to simulate lightning (Miller, 1953). The researchers found that several natural amino acids were formed, and recent analysis of the preserved samples has shown that all 20 natural amino acids, as well as some unnatural amino acids, were produced in this experiment (McCollom, 2013). The Miller-Urey experiments show how biomolecules could have formed at the surface of Earth's oceans; however, it is thought that the first life forms actually arose in deep-sea volcanic vents (though this is highly debated). The first cell, known as the last universal common ancestor (LUCA), is thought to have appeared in deep-sea vents around 3.6 billion years ago, when the Earth was less than a billion years old (Di Giulio, 2003). The prevailing theory is that the biochemistry of the first cells was completely dependent on RNA (Totani, 2020). RNA is single-stranded and folds into three-dimensional structures with interesting functionality; notably, RNA has been shown to catalyze chemical reactions, even in modern cells. It is thought that the first cells were composed of a membrane containing only RNA, which catalyzed the synthesis of new RNA. The ability of RNA to both store information and catalyze chemical reactions makes it an efficient system by which early life could take hold. This theory of the origin of life is known as the 'RNA world' and has garnered a significant amount of attention from the scientific community (Orgel, 2004; Joyce & Szostak, 2018; Totani, 2020).
It is important to note that proteins also have the ability to store information and catalyze reactions, and a newer computational model has shown that protein, not RNA, could potentially have been the original biopolymer (Guseva et al., 2017).
When considering life in outer space, it is important to recognize examples on Earth where organisms in different chemical environments may employ different biochemistries. While biochemical reactions are typically limited to carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur, there is a possibility that other, chemically similar elements could be incorporated into biomolecules. One such element, silicon, is a chemical analogue of carbon: it possesses the same number of valence electrons and thus exhibits similar reactivity. Silicon is expected to form silane "chains" analogous to the aliphatic carbon chains found in lipids and other biomolecules (Rampelotto, 2010). However, silicon's reactivity necessitates a chemical environment different from Earth-like conditions. For instance, silicon burns spontaneously in oxygen and strips the oxygen from water, forming a silica "shell" (LeGrand, 1998). Given this increased reactivity, silicon-based life is only likely to be found in low-oxygen environments, where the primary solvent may be liquid methane or ethane. Additionally, as polysilanes are generally not stable under standard temperature and pressure, it has been suggested that silicon-based life would be more likely to be found in low-temperature, high-pressure environments (Rampelotto, 2010). Carbon, however, still possesses some inherent advantages over silicon: for instance, carbon is more capable of forming double and triple bonds and bonds more readily to heteroatoms (Pace, 2001).

Similarly, arsenic has been observed to serve as an analogue for phosphorus. Although toxic to most living things, arsenic appears to be used by the bacterium GFAJ-1 in Mono Lake, California. Researchers found that the bacterium, which lives in water with high arsenic concentrations, incorporated arsenate ions into the synthesis of nucleic acids at a rate comparable to that of normal phosphate ions (Wolfe-Simon et al., 2011). Given that polyarsenates are more reactive than phosphates due to weaker covalent bonds within the molecule, it has been suggested that arsenic-based biomolecules would require less sophisticated enzymes. However, it is believed that although life could thrive in arsenic-based environments, the greater abundance of phosphorus on Earth contributed to the natural selection of phosphorus-based compounds (Wolfe-Simon et al., 2009).

Other living organisms obtain the energy needed for biochemical reactions via the oxidation of inorganic compounds, a process called chemolithoautotrophy (Amils, 2011). Although normally conducted using molecules like ammonia and hydrogen sulfide, some microorganisms are able to take advantage of more readily available transition metals. For example, a bacterial species called Candidatus Manganitrophus noduliformans was found to oxidize Mn2+ to Mn4+ species when exposed to aerobic conditions; the growth rate of the bacteria was also found to scale linearly with the amount of Mn2+ present in solution (Yu & Leadbetter, 2020). Similarly, Acidithiobacillus ferrooxidans has been found to derive its energy from the oxidation of Fe2+ to Fe3+ under extremely acidic conditions. It is currently believed that this pathway depends on the presence of a so-called rus operon, which encodes the two cytochromes and the rusticyanin required for this pathway (Quatrini et al., 2009).
Another possibility for alternative biochemistries is the synthesis of biomolecules with non-standard stereochemistry. Almost all life on Earth relies on L-amino acids and D-carbohydrates as standard biomolecules, but some species on Earth have been able to take advantage of the respective enantiomers. For example, D-amino acids are incorporated into the peptidoglycan of bacterial cell walls, as well as into peptide antibiotics synthesized by some bacteria and fungi. However, the stereochemical configuration of D-amino acids hinders their wider use in life; for example, the rate of enzyme-catalyzed hydrolysis of peptide bonds involving D-amino acids is significantly slower than for the L configuration (Friedman, 1999). While L-amino acids and D-sugars have no known inherent chemical advantage over their enantiomers, it is believed that the preferential evolution toward L-amino acids and D-sugars was seeded by meteor bombardments: evidence found in the Murray and Murchison meteorites suggests that there may have been a small enantiomeric excess at these stereocenters (Bailey, 2002).
Figure 5: The polar ice caps and former bodies of water on Mars are of great astrobiological interest and are targets of future exploration by international space agencies. Source: Wikimedia Commons
"One misconception that can constrain our collective understanding of life is that it can only be found on other planets. Yet growing evidence suggests that habitable zones can possibly be found on exoplanets, moons, asteroids, or other celestial objects besides planets."
However, the high enantiomeric excess of L-amino acids and D-sugars in nature is believed to have been amplified by the mechanisms of certain catabolic and anabolic reactions. This follows from the theory of "mutual antagonism," which suggests that some molecules may catalyze their own production while suppressing the synthesis of their enantiomers (Frank, 1953). There is experimental evidence that such autocatalytic mechanisms could drive the development of homochirality; for example, the alcohol product of the alkylation of pyrimidyl aldehydes has been found to autocatalyze synthesis of either the R or the S enantiomer in high excess, even when the starting enantiomeric excess was low (Soai et al., 1995).
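The "mutual antagonism" idea can be written down as a pair of rate equations in which each enantiomer catalyzes its own production and suppresses its mirror image. The sketch below numerically integrates a simplified version of the Frank (1953) scheme; the rate constants, concentrations, and time units are arbitrary illustrative choices, not values from the cited experiments:

```python
# Simplified Frank (1953) model of homochirality:
#   dL/dt = L * (k - mu * D)
#   dD/dt = D * (k - mu * L)
# Each enantiomer autocatalyzes its own synthesis (k) and
# antagonizes the other (mu). All units are arbitrary.

k, mu = 1.0, 0.5      # autocatalysis and antagonism rates (illustrative)
L, D = 1.001, 1.000   # a 0.05% initial enantiomeric excess
dt, steps = 0.001, 15000

for step in range(steps + 1):
    if step % 3000 == 0:
        ee = (L - D) / (L + D)  # enantiomeric excess
        print(f"t = {step * dt:5.1f}  ee = {ee:+.4f}")
    # Forward-Euler integration step
    dL = L * (k - mu * D) * dt
    dD = D * (k - mu * L) * dt
    L, D = L + dL, D + dD
```

Starting from a mere 0.05% excess of L, the enantiomeric excess is driven toward 1: a tiny initial bias, such as one delivered by meteorites, can be amplified all the way to homochirality.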
One of the most interesting aspects of life as we know it is the wide array of environments in which it has been found. While most life thrives on land or near the surface of the oceans and freshwater lakes, some life persists in conditions that may otherwise seem completely hostile. Though mainly no more complex than simple bacteria and fungi, these "extremophiles" have been found to persist in otherwise intolerable conditions. Some of the most well-known examples of extremophiles thrive in areas of extreme heat or cold and may live in environments similar to those of primordial life on Earth. In particular, thermophiles, organisms that thrive in high-heat environments, have been found in hot springs and deep-sea hydrothermal vents. For example, Pyrolobus fumarii, found in a hydrothermal "black smoker" vent on the Mid-Atlantic Ridge, can survive at temperatures up to 113°C (Blöchl et al., 1997). These deep-sea hydrothermal vents may have been the location of the origin of the first protobacteria in an anaerobic environment, due to the abundance of organic matter and hydrogen sulfide; the vents also supply the energy required for the reduction of CO2 to various hydrocarbons (Colín-García et al., 2016). Other extremophiles thrive in extremely cold environments: the lichen Xanthoria elegans, for example, has been shown to photosynthesize at temperatures as low as -24°C (Barták et al., 2007). Still other extremophiles can survive in hypersaline environments and in environments with extreme pH. Although hypertonic solutions normally cause cells to shrivel as water diffuses out of the cell, some organisms have adapted to high salt concentrations and remain isotonic relative to their surroundings. For example, Halothermothrix orenii, a "halophile," thrives at salt concentrations of 4-20% thanks to its ability to synthesize high concentrations of "compatible solutes" (such as amino acids) that relieve osmotic pressure (Cayol et al., 1994; Santos & Da Costa, 2002). Other organisms survive conditions of extremely high acidity or basicity. Although most proteins denature under these conditions, some bacteria maintain near-neutral pH in the cytoplasm using proton pumps: the acidophile Bacillus acidocaldarius pumps protons out of the cell, while the alkaliphile gram-negative Pseudomonas alcaliphila pumps protons in (Michels & Bakker, 1985; Matsuno et al., 2018).
Life in Outer Space

Classifying life is an inherently subjective exercise, especially once we expand our knowledge of life to other celestial objects. One misconception that can constrain our collective understanding of life is that it can only be found on planets. Growing evidence suggests that habitable environments may exist not only on exoplanets but also on moons, asteroids, and other celestial objects (Ramirez, 2018). With this possibility, biologists will have to consider how the diversity of life is contingent on the particular environment of each celestial body where life is found. It would be rather peculiar if we found pandas on Mars. On Earth, extremophiles are one branch of organisms that may provide insight into how life can endure the "extreme" environmental conditions of other celestial objects.
Figure 6: Diagram of a star-planet system, portraying the Doppler redshift and blueshift of the star's light waves as the star "wobbles." This Doppler shift is what the radial velocity method detects.
Some microbial extremophiles have been found to thrive in polar environments or to withstand high levels of UV radiation (Hoover and Pikuta, 2010). Since water is fundamental to life on Earth, it is key for astrobiologists to examine polar environments as analogs to celestial bodies covered in solid or liquid water, such as the moons of Jupiter or the ice caps of Mars (Hoover and Pikuta, 2010). Understanding extremophiles, and the ecological dynamics they participate in within their environments, can be the gateway to predicting the kind of life we may encounter on future exploration missions. However, if life is found on other celestial objects, the efficacy of the tools used to characterize and classify organisms on Earth comes into question. To clarify expectations: astrobiologists assume that the likelihood of discovering extraterrestrial microbial life is greater than that of discovering the more complex forms of life observed on Earth (Hoover and Pikuta, 2010). Yet the definitions by which we characterize life on Earth exclude the environmental dynamics found on other celestial bodies that may also contain life. Looking again to extremophiles: if a celestial object's entire biosphere is already extreme by comparison to the environmental conditions of Earth, then any microbes found on that object would by default be characterized as extremophiles. It is therefore essential that astrobiologists consider normalizing to each celestial environment prior to characterizing its life.
"Where did we come from and where shall we go?" is a key mantra across the spectrum of astrobiological studies. Searching for life-bearing celestial bodies where prospective contact could be made is most pragmatic if those bodies are relatively close to Earth. Some of the present candidates of highest astrobiological relevance within our solar system include Mars, Europa, and Titan. The latter two moons are classified by NASA as "icy worlds" where subsurface water, whether in liquid or solid form, is abundant (Hays et al., 2015). Researchers have proposed the presence of hydrothermal vents at the bottom of Europa's ice-covered liquid-water ocean (Hays et al., 2015). On Earth, hydrothermal vents foster a trove of microbial and multicellular life, which suggests that the presence of such vents on Europa could establish the means to sustain chemosynthetic microbial life (Hays et al., 2015). Meanwhile, analyses of the geography of Titan have suggested the presence of a salty subsurface sea covered by several kilometers of ice (Hays et al., 2015). Titan's atmosphere is also notably characterized by a haze deemed analogous to the prebiotic environmental conditions of early Earth, with evidence of photochemical activity in Titan's lower atmosphere (Hays et al., 2015). Mars, a focal point of current surface-rover missions, contains atmospheric methane; because atmospheric methane has a relatively short chemical lifespan, its presence has suggested to some astrobiologists that life processes occurred relatively recently in Martian history (Hays et al., 2015).
"On Earth, hydrothermal vents have been observed to foster a trove of microbial and multicellular life, which indicates that the presence of these vents on Europa could establish the means to sustain chemosynthetic microbial life."
Current missions to explore life on Mars are underway through a collaboration between multiple international space agencies (Figure 5). In late July of 2020, the Perseverance rover was launched by NASA for the purpose of collecting surface samples on Mars, which researchers hope will contain biochemical evidence of microbial life (Williford et al., 2018). A follow-up mission headed by the European Space Agency, expected to launch in 2026, will help bring the collected samples back to Earth (Williford et al., 2018).
"Scientists estimate there are about 100 thousand million exoplanets in the Milky Way alone."
One matter of discussion is which geological features hold the highest probability of containing microbial life. Research on the development of early life on Earth stems from studying stromatolites, sedimentary structures that excel at trapping microbial matter (Williford et al., 2018). Analogous sedimentary features, known as microbialites, have been identified in extinct Martian lakes and may be reservoirs of, or evidence for, Martian microbes (Rizzo, 2020). Other NASA missions to investigate celestial objects include the Europa Clipper, which will take detailed images of Jupiter's icy moon Europa (Howell and Pappalardo, 2020). This mission will enhance insights into whether Europa contains the environmental conditions to support life (Howell and Pappalardo, 2020). Scientists estimate that there are about 100 billion exoplanets in the Milky Way alone; so far, a little over 4,000 of them have been confirmed. Although that may sound feeble, finding that many exoplanets is impressive, and two main techniques (among others) are used to detect them (Haynes, 2020). The first is the radial velocity method. As an exoplanet revolves around its host star, the planet's gravitational pull causes the star to wobble, as if the star were tracing a miniature orbit of its own. During this "orbit," the star periodically moves toward and away from Earth. Scientists can determine whether the star is wobbling by exploiting the Doppler effect, the compression or stretching of waves emitted by a moving source. As a light-emitting object moves toward an observer, the light it emits is compressed, shifting it toward the blue end of the spectrum; when the object moves away, its light waves are stretched toward the red end of the spectrum. The Doppler effect holds for sound waves as well: it is noticeable when a police car approaches with its siren on, sounding higher pitched as it gets closer and lower pitched as it moves away.
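To get a feel for the size of the signal, note that for a slowly moving source the fractional wavelength shift equals the line-of-sight velocity divided by the speed of light, Δλ/λ = v/c. The sketch below uses illustrative numbers only: the 50 m/s wobble is typical of a star tugged by a close-in giant planet, and the hydrogen H-alpha line stands in for any spectral line a spectrograph might track:

```python
C = 299_792_458.0     # speed of light, m/s
H_ALPHA = 656.281e-9  # rest wavelength of hydrogen's H-alpha line, m

def doppler_shift(wavelength, velocity):
    """Observed wavelength for a source receding at `velocity` (m/s).
    Negative velocity = approaching = blueshift. Nonrelativistic."""
    return wavelength * (1.0 + velocity / C)

v_star = 50.0  # m/s wobble, typical of a close-in giant planet (illustrative)

red = doppler_shift(H_ALPHA, +v_star)   # star moving away from us
blue = doppler_shift(H_ALPHA, -v_star)  # star moving toward us

# The periodic swing a spectrograph must resolve:
print(f"wavelength swing: {(red - blue) * 1e12:.3f} pm")  # ~0.219 pm
print(f"fractional shift: {v_star / C:.2e}")              # ~1.7e-7
```

A swing of a fraction of a picometer, a few parts in ten million, is why this method relies on extremely precise spectrographs rather than direct imaging.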
By watching for a star that periodically red-shifts and blue-shifts, scientists can tell that the star is wobbling, which likely means it hosts an exoplanet (NASA, 2020). The second main way scientists find exoplanets is the transit method. Whenever an exoplanet's orbit carries it between its host star and Earth, the star's light is slightly dimmed. If a camera is pointed at a star for long enough and its brightness is tracked over time, a periodic dip in brightness indicates an exoplanet. The simplicity of this process makes it the most effective and common way of finding exoplanets (NASA, 2020); a worked example of the size of the transit signal appears at the end of this section. The ingenuity of this method not only allows us to spot exoplanets but also to find out which elements or molecules are in their atmospheres. Among the main features scientists look for on exoplanets are the basic biochemical elements, such as carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur. The main way to search for these elements is to look at the atmosphere of the object. Certain elements absorb certain wavelengths of light, so the transit method can reveal which elements are present: when the exoplanet passes in front of its host star, the star's light penetrates the exoplanet's atmosphere, and by analyzing the light that comes off the exoplanet with a technique called spectral analysis, scientists can determine which wavelengths were absorbed and therefore whether the elements needed to support life are present (Piskunov et al., n.d.). The other major thing scientists look for on other planets is water. While water on Earth is freely visible, searching for water on other planets is a challenge. Regular optical telescopes can sometimes offer a glimpse of water beyond Earth: a bright region of a planet or moon could indicate reflections from frozen water. But carbon dioxide and other similar gases, if cold enough, also form reflective solids, so optical telescopes alone cannot confirm the presence of water. To better verify the presence of water, a camera needs to be placed outside Earth's atmosphere to create high-resolution images. There are two main ways scientists do this. The first is orbiting spacecraft: observatories such as the Hubble Space Telescope image from above Earth's distorting atmosphere, while planetary orbiters can get much closer to the celestial object. Normally these spacecraft are giant telescopes with immense apertures that can capture images of very distant objects with great resolution.
Figure 7: Parabolic radio telescope used in SETI research. Source: Google Images
The second way is to use landers and rovers, which are delivered to the surface by spacecraft. Rovers can move around on an object and transmit video back to humans on Earth. These landers and rovers can collect samples from the surface, which are placed in an onboard analysis chamber that determines each sample's chemical composition, identifying, for example, clay minerals or other materials that likely formed in a liquid-water environment. Spacecraft can also collect such samples and return them to Earth for much more detailed analysis (Greenspon, 2019). The one limitation of orbiting spacecraft and rovers is that scientists have only been able to use them on objects within our solar system. Utilizing them for exoplanets will take more innovation, which means that spectral analysis is currently the best option for determining the elements and molecules present on exoplanets.
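As the worked example promised above: during a transit, the fraction of starlight blocked is roughly the ratio of the planet's disk area to the star's, depth ≈ (Rp/Rs)². The sketch below applies this purely geometric approximation (ignoring limb darkening and grazing geometries) using the known radii of the Sun, Earth, and Jupiter:

```python
# Transit depth: the fraction of starlight blocked by the planet's disk.
# depth = (R_planet / R_star)^2 -- a purely geometric approximation.

R_SUN = 696_340.0     # km
R_EARTH = 6_371.0     # km
R_JUPITER = 69_911.0  # km

def transit_depth(r_planet_km, r_star_km=R_SUN):
    return (r_planet_km / r_star_km) ** 2

print(f"Earth-like planet, Sun-like star:   {transit_depth(R_EARTH):.2e}")    # ~8.4e-05
print(f"Jupiter-like planet, Sun-like star: {transit_depth(R_JUPITER):.2e}")  # ~1.0e-02
```

A Jupiter-sized planet dims a Sun-like star by about 1%, while an Earth-sized planet dims it by less than 0.01%, which is why detecting small planets requires very precise, long-duration photometry.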
Intelligent Life in Outer Space

Although scientists may not know for certain that there are intelligent civilizations out there, they believe it is worth attempting to find them. In 1984, scientists founded the SETI Institute, a non-profit corporation devoted to research and educational projects relating to life in the universe (SETI stands for the Search for Extraterrestrial Intelligence). The institute is headquartered in Mountain View, CA, but has many locations throughout the world. The SETI Institute focuses on three things: astrobiology (efforts to understand the prevalence of life in general), education and outreach projects (efforts to inform the public about research and motivate young students to pursue science), and SETI itself (experiments designed to detect radio waves and other light signals that could be the product of other-worldly sophisticated beings).
SETI is best known for the third, and that is what attracts so many scientists to the organization (SETI, 2020). SETI recognizes that looking for "life" itself may prove a tedious process, as the universe is always expanding. Rather, the SETI Institute focuses on the idea of using technology as a proxy for intelligence (Taylor Redd, 2016). Astrobiologists at the SETI Institute believe that for an extraterrestrial civilization to be deemed intelligent, technology must have surfaced at one point or another. That technology will most likely emit signals or other measurable properties or effects that provide scientists on Earth the evidence needed to confirm past or present technology. These signatures are known as "technosignatures" and are analogous to the biosignatures that signal the presence of life, whether intelligent or not (SETI Institute, 2020). Though these signatures may not be deliberate, they expose technologically advanced activity occurring somewhere in our universe, which is enough to pique astrobiologists' interest. Specifically, SETI designs experiments to detect electromagnetic radiation (EMR). These experiments span all the different wavelengths of EMR; using varying types of telescopes, SETI can search across the entire spectrum for indicators of advanced technology. However, SETI's primary focus lies in radio waves, for they are the prime indicator of purposeful technology (SETI, 2020). Almost all SETI experiments thus far have looked for what are called "narrow-band signals": radio emissions that extend over only a small part of the radio wavelength spectrum (SETI, 2020).
"Astrobiologists at the SETI institute believe that for an extraterrestrial civilization to be deemed an intelligent civilization, technology must have surfaced at one point or another. That technology will most likely emit signals or other measurable properties or effects that provide scientists on Earth the evidence needed to confirm past or present technology."
Figure 8: What the Arecibo message would look like if decoded. Source: Wikimedia Commons
"High-power tv and radio telescopes are capable of detecting transmitters at a few light-years away while planetary radar systems on earth can detect and be detectable across the entire galaxy."
This same feature is employed in the radio industry to allow an everyday handheld radio to pick up over 200 channels (FCC, 2020). Though celestial bodies such as pulsars, quasars, and interstellar nebulae make radio signals, their static spreads across the entire radio dial (SETI, 2020). SETI calls these narrow-band signals "carriers" because they pack immense energy into a small amount of spectral space, making them the easiest technosignature to look for in our galaxy. Current technology at SETI includes telescopes that detect signals over a wide range: today's high-power TV and radio transmitters could be detected from only a few light-years away, while planetary radar systems on Earth are powerful enough to be detectable across the entire galaxy (Berkeley SETI, 2020). The SETI research center at the University of California, Berkeley recently started Project Breakthrough Listen, known to SETI as the "Apollo of SETI" (Berkeley SETI, 2020). This will be the largest SETI project ever, spanning over ten years of scanning for artificial signals across one million nearby stars and 100 galaxies (Berkeley SETI, 2020). This EMR detection project collects over 2 petabytes of data each day, and SETI will eventually make the data public, ushering in a new era of analysis. Although SETI is an immensely innovative initiative, until recently it was mainly a passive effort, designed only to detect signals, not to send them. Humankind has been inadvertently transmitting signals into space for more than 50 years, primarily television, radio, and high-frequency radio (SETI, 2020). However, in 1974, the first deliberate broadcast was beamed into space from the Arecibo Radio Telescope in Puerto Rico. The broadcast was made to celebrate a major upgrade to the Arecibo Telescope. It was sent toward the globular star cluster M13, around 25,000 light-years away (meaning the signal will take 25,000 years to arrive), and consisted of a simple pictorial message. The broadcast was immensely powerful, as it utilized Arecibo's megawatt transmitter attached to a 305-meter antenna, which concentrates the energy of the broadcast onto a very small patch of sky in M13. The emission was equivalent to a 20 trillion-watt omnidirectional broadcast, meaning it would be detectable anywhere in the galaxy by a receiving antenna similar in size to Arecibo's (SETI, 2020).
What is interesting about the broadcast, however, is what the pictorial message contains. The message consists of 1,679 bits of information, transmitted by frequency shifting at a rate of 10 bits per second. Arranged into a rectangular grid, the bits form a graphic that depicts, among other things, a human figure, DNA, and the solar system (SETI, 2020). Although it is extremely unlikely that this message will prompt a reply, it was useful in getting us to think about how we might reach out to other civilizations. The experiment also inspired many other messages to space, including the 2008 beaming of The Beatles' song "Across the Universe" toward the star Polaris and the 2016 radio transmission, also to Polaris, called "A Simple Response to an Elemental Message" (Dunbar, 2017; Quast, 2020).
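The number 1,679 was chosen deliberately: its only factorization into two integers greater than 1 is 23 × 73, so a recipient experimenting with rectangular layouts is funneled toward a single grid. The sketch below illustrates the idea with a placeholder bit string; the real message's bits are not reproduced here:

```python
# The Arecibo message's length, 1679, factors only as 23 x 73,
# so its bits can be laid out on essentially one rectangular grid.

N_BITS = 1679

def factor_pairs(n):
    """All (rows, cols) pairs with rows * cols == n and 1 < rows <= cols."""
    return [(r, n // r) for r in range(2, int(n ** 0.5) + 1) if n % r == 0]

print(factor_pairs(N_BITS))  # [(23, 73)] -- a unique semiprime layout

# Lay out a (dummy) bit string on the 73-row x 23-column grid,
# as a recipient of the real transmission would.
bits = "01" * (N_BITS // 2) + "0"  # placeholder, not the real message
rows = [bits[i:i + 23] for i in range(0, N_BITS, 23)]
print(len(rows), "rows of", len(rows[0]), "bits")  # 73 rows of 23 bits
```

The same trick, encoding the decoding instructions in pure arithmetic, requires no shared language: only a recipient who can factor a small number.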
Sending radio signals into space is not the only way scientists have reached out to extraterrestrial civilizations. Launched in 1977, the spacecraft Voyager 1 and 2 explored all the Jovian planets and 48 of their moons (Nelson, 2020). Aboard each of these spacecraft is a 12-inch gold-plated copper phonograph record famously called "The Golden Record," encased in an aluminum jacket along with a cartridge and needle. Additionally, instructions in symbolic language explain the origin of the spacecraft and detail how the record is to be played. The contents of The Golden Record were chosen by a NASA committee led by Carl Sagan, the world-famous astronomer and science communicator. Sagan and his associates put together 115 images and a variety of sounds, including wind and thunder, birds, whales, and other animals. Along with this, they added spoken greetings in fifty-five different languages (Nelson, 2020). The Voyager 1 spacecraft is moving away from Earth at around 3.5 AU per year, and in about 38,200 years it will come within 1.7 light-years of a star in the constellation Ursa Minor called AC+79 3888. Similarly, Voyager 2 is moving away from Earth at around 3.1 AU per year, and in about 40,000 years it will come within 1.7 light-years of a small star called Ross 248 in the constellation Andromeda, giving any potential planet orbiting that star 40,000 years to develop intelligent life (Nelson, 2020). What we should do to communicate with extraterrestrial intelligence has always been a contested topic. The SETI Institute prides itself on listening for radio signals, not sending them (SETI, 2020). The institute works on improving receivers and incorporating larger radio dials to broadly search the universe for patterns; any broadcast sent by SETI is usually serendipitous (SETI, 2020). Individuals in the SETI community say the main reason for not transmitting signals is that humanity is not technologically committed to a long-term plan (David, 2014). That is, current technology may be able to send signals for a short 5- to 10-year period; however, in order to contact extraterrestrial intelligence, scientists would need the determination to do so for a very long time, over 10,000 years. The SETI community says that Earth is currently not the technologically stable civilization it needs to be to execute such a long-term plan. Despite this, members of the SETI community still plan on making themselves ready for contact. Many scientists suggest that mathematical language sent via radio messages is still our best hope of contacting intelligence (David, 2014). Additionally, culture continues to persist as a driving force in any civilization; many suggest that Earth should send information about economics, or symbols that could be thought of as universal across cultures (David, 2020). Current technologies, specifically Elon Musk's Neuralink, suggest that we may one day be able to read electrical signals as thoughts, and it might be interesting to communicate in that sense (Walter, 2019).
The discovery of extraterrestrial life will pose fundamentally new questions in environmental ethics. Most contributors to the field of astrobiology agree that it would do us well, both ethically and scientifically, to support alien life as a commitment to enhancing the richness and diversity of life in the universe (McKay, 2011). If the true goal of astrobiology is to enhance the richness and diversity of life throughout the universe, then exploration and human actions with respect to life in the universe should take into account ethical, economic, and broad societal considerations (McKay, 2011). While human actions can enhance life, those same actions can be damaging if left unchecked. With the current and potential exploration of Mars, a crucial question is how we can explore Mars without contaminating it in the process. Currently, we take extreme measures to ensure that astronauts returning to Earth after being in space are quarantined and decontaminated, so that extraterrestrial matter does not infect Earth (David, 2014). The same idea applies in reverse: Earth could contaminate other planets. For places like Mars, we are uncertain of the planet's biological state. Astrobiologists consider at least three possible biological states: (1) there is life on Mars and it is vastly different from life on Earth; (2) there is life on Mars and it is genetically related to life on Earth; and (3) there is no life on Mars (McKay, 2011). Each situation calls for a different way of exploring places like Mars; since we do not know the planet's biological state, we must explore now in a way that is biologically reversible and keeps all options open. Other scientists, such as Stephen Hawking, have warned that alien life forms may not be friendly cosmic neighbors, and that we should be careful about what signals we send and how we explore different celestial bodies (Moskowitz, 2010).
"If the true goal of astrobiology is to enhance the richness and diversity of life throughout the universe, then exploration and human actions with respect to life in the universe should take into account ethical, economic, and broad societal considerations."
Along with societal considerations, the discovery of life would come with inevitable scientific implications, primarily regarding the theories of panspermia and second genesis. Astrobiologists argue that if we were to find extraterrestrial life, we would want to ask one essential question: is this life related to us (deGrasse Tyson, 2020)? If it is, then the theory of panspermia gains support. Panspermia describes the concept of life traveling between planets as seeds (Kawaguchi, 2019). If this life were related to us, it would imply that microbial spores may have escaped from the high altitudes of Earth and been sent off into space, eventually colonizing another solar system.
The more interesting conclusions for astrobiologists arise with the topic of a second genesis. The moment we find a second genesis, regardless of whether that life has DNA or not, we know that life is universal: if life can arise in a place other than Earth, then it can arise anywhere that is habitable (Tarter, 2009). Within the second genesis theory, two possibilities are heavily discussed: DNA-based life and non-DNA-based life (deGrasse Tyson, 2020). If a second genesis of life is DNA-based, it implies that DNA is an inevitable consequence of complex organic chemistry. However, if the life is not DNA-based, then we would have to redefine our entire biological definition of life (deGrasse Tyson, 2020). Astrobiologists argue that this is possible: even if alternative life is based on carbon and water, it may have a completely different biochemical system, as the number of macromolecules that can be constructed from carbon is enormous (McKay, 2011). It is therefore very possible that extraterrestrial life uses carbon-based molecules other than DNA and RNA for storing genetic information and for structural functions. The theory of second genesis is interesting to astrobiologists because it would allow the comparing and contrasting of two different biochemical systems, both capable of sustaining life, and would imply that life need not share a common origin, broadening the criteria under which life can exist.
Conclusion
With the ever-growing human population on Earth, astrobiologists have considered the possibility of expanding onto other planets in order to sustain life. However, as previously stated, this must be done in a manner that is biologically reversible, so as not to affect possible undiscovered life. In particular, the idea of colonizing and terraforming Mars has been a technological fantasy of many scientists (Steigerwald, 2018). Terraforming is the process of transforming a planet to resemble Earth such that it can support and sustain human life. The proposed mechanism for terraforming Mars is to release carbon dioxide into the atmosphere to thicken it, so that it acts as a blanket and warms the planet (Steigerwald, 2018). The current issue, however, is that Mars does not retain enough carbon dioxide in its polar caps, minerals, and soil to warm the planet or to produce anything near Earth's atmospheric pressure (Steigerwald, 2018). In addition, the current layer of carbon dioxide in the atmosphere is too thin to support liquid water.
Along with technological dilemmas, terraforming Mars comes with its own ethical issues, for there could exist forms of life on Mars that we are not currently aware of. And although living on a different planet seems exciting, space is truly a dangerous and unfriendly place. Space missions affect the human mind and body in tremendous ways, and recently NASA has been actively pursuing research on how a mission to Mars would affect a person's body. NASA has studied several risks relating to a three-year Mars mission, and three main risks (among others) come with being on a new planet (Abadie et al., 2020). The first risk is gravity fields. A Mars mission would involve three gravity fields: weightlessness during the six-month trek between Earth and Mars, roughly one-third of Earth's gravity on the Martian surface, and re-adaptation to Earth's gravity upon return. Transitioning between gravity fields affects many aspects of the human body, including bone density, spatial orientation, and hand-eye coordination. NASA's primary countermeasure is to consistently monitor the traveler's body during the trip and to prescribe nutrients or exercise as needed (Abadie et al., 2020). The second risk is hostile environments. NASA has learned that the ecosystems inside a spacecraft and on a Mars base play a large role in a traveler's life. In space, microbes can take on different characteristics, and microorganisms that naturally live on the human body transfer more easily from person to person. This can lead to elevated stress hormones and an altered immune system, which can raise susceptibility to disease. To address this, NASA focuses mainly on rigorous testing: air quality, urine, blood, and the immune system are monitored consistently, and the living quarters are carefully planned to balance comfort and efficiency (Abadie et al., 2020). The third risk is radiation. Space radiation is by far the most dangerous aspect of traveling to Mars; astronauts on the space station already receive ten times the radiation they would on Earth. Such exposure raises the risk of cancer and increases damage to the body's key biological systems. Additionally, Mars has nowhere near the magnetic field Earth does, leaving its surface unprotected from solar radiation. NASA's research on this risk is still in its infancy, but the optimization of shielding seems most promising. Shielding is already implemented on the ISS, although it only partially blocks radiation (Abadie et al., 2020).
In order to survive on a spacecraft, humans require some of the same basic needs (food, water, and oxygen) required on Earth; however, the process of meeting these requirements is far more complex ("Human Needs in Space," n.d.; "Human Needs: Sustaining Life During Exploration," 2007). Because space is largely a vacuum with no breathable air, oxygen must be produced aboard the spacecraft through electrolysis: electricity obtained from the spacecraft's solar panels is used to split water into hydrogen and oxygen (2 H₂O → 2 H₂ + O₂), and the oxygen gas is supplied to the cabin from the spacecraft's storage tank. For food, meals must be pre-packaged, densely packed, and nutritious. Water is provided through fuel cells, which produce electricity by combining hydrogen and oxygen; water is a product of this reaction and can be used for drinking. These methods are constantly being updated with new and improved technology to sustain human survival on spacecraft ("Human Needs in Space," n.d.; "Human Needs: Sustaining Life During Exploration," 2007). There have also been several advances in terraformation research over the past few years, which have helped scientists better understand the composition of exoplanets. In order to terraform extra-solar celestial objects, an understanding of the variable atmospheric, geologic, and climatic features of the planetary body is required (Pazar, 2018). Generally speaking, the ability to terraform a celestial body depends on certain qualities, including the sea level, temperature, amount of oxygen, and atmospheric pressure of the body (Pazar, 2018). Pazar organizes the qualities governing planetary habitability into three groups (atmospheric, geologic, and astrophysical), which contain factors such as biosphere cycles, orbital eccentricity, and geomorphic processes, respectively (Pazar, 2018). To better understand the potential for terraformation and the specific environmental conditions required, scientists have used theoretical models, including mathematical relationships for calculating growth rates and biomass capacity based on factors like planetary temperature and the presence of ecological resources (Pazar, 2018).
Other factors incorporated include topography, water elevation, and surface distribution. To date, 4,301 exoplanets have been discovered among 3,176 planetary systems; these numbers are continuously updated as discoveries are made (Exosolar Planets Encyclopedia). There are various findings of potential habitability (such as liquid water and rocky composition) on "Earth-sized" bodies. Some notable bodies include the seven Earth-sized planets that transit TRAPPIST-1, 12 parsecs away (Pazar, 2018; Dittmann et al., 2017). Of these planets, the Huanca exoplanet, with its "rocky" terrain, seems to be the most similar to Earth (Pazar, 2018; Bourrier et al., 2017). LHS 1140b, a planet orbiting the star LHS 1140, is also 12 parsecs away and sits in a habitable zone for liquid water (Pazar, 2018; Dittmann et al., 2017). Current research on terraforming exoplanets indicates that there is much potential for terraformation beyond our solar system.
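To illustrate the kind of theoretical model described above, the sketch below couples a logistic biomass model to a temperature-dependent growth rate. This is a generic toy model written for this review, not Pazar's actual equations; the Gaussian temperature response and every constant in it are invented for illustration:

```python
import math

def growth_rate(temp_c, r_max=0.5, t_opt=25.0, width=15.0):
    """Hypothetical growth rate (1/yr) peaking at an optimal temperature.
    Gaussian temperature response -- an illustrative choice, not a fitted model."""
    return r_max * math.exp(-((temp_c - t_opt) / width) ** 2)

def simulate_biomass(temp_c, capacity=1.0, b0=1e-6, years=200):
    """Logistic biomass growth toward the planet's carrying capacity."""
    b = b0
    r = growth_rate(temp_c)
    for _ in range(years):
        b += r * b * (1.0 - b / capacity)  # one-year Euler step
    return b

for temp in (-40, 0, 25, 60):
    print(f"{temp:4d} C -> biomass fraction after 200 yr: {simulate_biomass(temp):.2e}")
```

Richer models in this spirit fold in pressure, resources, and orbital parameters, letting researchers rank candidate bodies for habitability or terraformation before any mission is planned.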
"In order to be able to terraform extra-solar celestial objects, an understanding of the variable atmospheric, geologic, and climatic features of the planetary body is required."
Although the prospects of life in outer space, terraforming celestial objects, and traveling the cosmos garner much excitement from the general public, no location has yet been discovered in the universe where life has developed to the extent it has on Earth. The unique conditions on our pale blue dot have allowed life to prosper, and the notion that these conditions are easily reproducible elsewhere is misguided. Successfully inhabiting other planets would require immense labor and resource production. The reality is that if humans can find a way to live in space, then they can also fix the problems facing our planet today. The future of space exploration is exciting, but in looking to the stars, we may miss the beauty right before us.

References

5 Ways to Find a Planet | Explore. (n.d.). Exoplanet Exploration: Planets Beyond Our Solar System. Retrieved August 1, 2020, from https://exoplanets.nasa.gov/alien-worlds/ways-to-find-a-planet

A Short History of the Universe. (n.d.). Retrieved August 1, 2020, from http://www.sun.org/encyclopedia/a-short-history-of-the-universe

A Simple Response to an Elemental Message. (n.d.). ASCUS. Retrieved August 1, 2020, from https://www.ascus.org.uk/a-simple-response/

Administrator, N. C. (2017, June 1). NASA Beams Beatles' "Across the Universe" Into Space. NASA; Brian Dunbar. http://www.nasa.gov/topics/universe/features/across_universe.html

Alberts, B. (Ed.). (2002). Molecular biology of the cell (4th ed.). Garland Science.
Arecibo Message | SETI Institute. (n.d.). Retrieved August 1, 2020, from https://www.seti.org/seti-institute/project/details/arecibo-message

Beck, M. L., Freihaut, B., Henry, R., Pierce, S., & Bayer, W. L. (1975). A serum haemagglutinating property dependent upon polycarboxyl groups. British Journal of Haematology, 29(1), 149–156. https://doi.org/10.1111/j.1365-2141.1975.tb01808.x

Benner, S. A., & Hutter, D. (2002). Phosphates, DNA, and the Search for Nonterrean Life: A Second Generation Model for Genetic Molecules. Bioorganic Chemistry, 30(1), 62–80. https://doi.org/10.1006/bioo.2001.1232

Broadcasting a Message | SETI Institute. (n.d.). Retrieved August 1, 2020, from https://www.seti.org/seti-institute/project/details/broadcasting-message

Burley, S. K., Berman, H. M., Bhikadiya, C., Bi, C., Chen, L., Costanzo, L. D., Christie, C., Duarte, J. M., Dutta, S., Feng, Z., Ghosh, S., Goodsell, D. S., Green, R. K., Guranovic, V., Guzenko, D., Hudson, B. P., Liang, Y., Lowe, R., … Ioannidis, Y. E. (2019). Protein Data Bank: The single global archive for 3D macromolecular structure data. Nucleic Acids Research, 47(D1), D520–D528. https://doi.org/10.1093/nar/gky949

Cleland, C. E., & Chyba, C. F. (2002). Origins of Life and Evolution of the Biosphere, 32(4), 387–393.

Costa, J. T. (2014). Wallace, Darwin, and the origin of species. Harvard University Press.

Darwin, C. (2008). The origin of species by means of natural selection, or, The preservation of favored races in the struggle for life. Bantam Classic.

Di Giulio, M. (2003). The Universal Ancestor and the Ancestor of Bacteria Were Hyperthermophiles. Journal of Molecular Evolution, 57(6), 721–730. https://doi.org/10.1007/s00239-003-2522-6

Dunbar, B. (2015, April 7). What is Astrobiology? NASA. http://www.nasa.gov/feature/what-is-astrobiology

Eschenmoser, A. (2007). The search for the chemistry of life's origin. Tetrahedron, 63(52), 12821–12844. https://doi.org/10.1016/j.tet.2007.10.012

Exoplanet atmospheres—Department of Physics and Astronomy—Uppsala University, Sweden. (n.d.). Retrieved August 1, 2020, from https://www.physics.uu.se/research/astronomy-and-space-physics/research/planets/exoplanet-atmospheres/

FAQ | SETI Institute. (n.d.). Retrieved August 1, 2020, from https://www.seti.org/faq

Guseva, E., Zuckermann, R. N., & Dill, K. A. (2017). Foldamer hypothesis for the growth and sequence differentiation of prebiotic polymers. Proceedings of the National Academy of Sciences, 114(36), E7460–E7468. https://doi.org/10.1073/pnas.1620179114

Hershey, A. D., & Chase, M. (1952). Independent functions of viral protein and nucleic acid in growth of bacteriophage. The Journal of General Physiology, 36(1), 39–56. https://doi.org/10.1085/jgp.36.1.39

How Many Exoplanets Have Been Discovered, and How Many Are Waiting to Be Found? (n.d.). Discover Magazine. Retrieved August 1, 2020, from https://www.discovermagazine.com/the-sciences/how-many-exoplanets-have-been-discovered-and-how-many-are-waiting-to-be
220
August 1, 2020, from https://www.discovermagazine.com/ the-sciences/how-many-exoplanets-have-been-discoveredand-how-many-are-waiting-to-be Joyce, G. F., & Szostak, J. W. (2018). Protocells and RNA SelfReplication. Cold Spring Harbor Perspectives in Biology, 10(9), a034801. https://doi.org/10.1101/cshperspect.a034801 Kitadai, N., & Maruyama, S. (2018). Origins of building blocks of life: A review. Geoscience Frontiers, 9(4), 1117–1153. https://doi.org/10.1016/j.gsf.2017.07.007 Kool, E. T. (2001). Hydrogen Bonding, Base Stacking, and Steric Effects in DNA Replication. Annual Review of Biophysics and Biomolecular Structure, 30(1), 1–22. https://doi.org/10.1146/ annurev.biophys.30.1.1 Leslie E., O. (2004). Prebiotic Chemistry and the Origin of the RNA World. Critical Reviews in Biochemistry and Molecular Biology, 39(2), 99–123. https://doi. org/10.1080/10409230490460765 Lewin, H. A., Robinson, G. E., Kress, W. J., Baker, W. J., Coddington, J., Crandall, K. A., Durbin, R., Edwards, S. V., Forest, F., Gilbert, M. T. P., Goldstein, M. M., Grigoriev, I. V., Hackett, K. J., Haussler, D., Jarvis, E. D., Johnson, W. E., Patrinos, A., Richards, S., Castilla-Rubio, J. C., … Zhang, G. (2018). Earth BioGenome Project: Sequencing life for the future of life. Proceedings of the National Academy of Sciences, 115(17), 4325–4333. https://doi.org/10.1073/pnas.1720115115 McCollom, T. M. (2013). Miller-Urey and Beyond: What Have We Learned About Prebiotic Organic Synthesis Reactions in the Past 60 Years? Annual Review of Earth and Planetary Sciences, 41(1), 207–229. https://doi.org/10.1146/annurevearth-040610-133457 Miller, S. L. (1953). A Production of Amino Acids Under Possible Primitive Earth Conditions. Science, 117(3046), 528–529. https://doi.org/10.1126/science.117.3046.528 Nelson, D. L., Cox, M. M., & Lehninger, A. L. (2017). Lehninger principles of biochemistry (Seventh edition). W.H. Freeman and Company ; Macmillan Higher Education. Perez, J. (2016, March 30). The Human Body in Space [Text]. NASA. http://www.nasa.gov/hrp/bodyinspace Ram Prasad, B., & Warshel, A. (2011). Prechemistry versus preorganization in DNA replication fidelity. Proteins: Structure, Function, and Bioinformatics, 79(10), 2900–2919. https://doi.org/10.1002/prot.23128 Ruiz-Mirazo, K., Briones, C., & de la Escosura, A. (2014). Prebiotic Systems Chemistry: New Perspectives for the Origins of Life. Chemical Reviews, 114(1), 285–366. https://doi. org/10.1021/cr2004844 Schrödinger, E. (1992). What is life? The physical aspect of the living cell ; with, Mind and matter ; & Autobiographical sketches. Cambridge University Press. Totani, T. (2020). Emergence of life in an inflationary universe. Scientific Reports, 10(1), 1671. https://doi.org/10.1038/ s41598-020-58060-0 Voyager—Fast Facts. (n.d.). Retrieved August 1, 2020, from https://voyager.jpl.nasa.gov/frequently-asked-questions/ fast-facts/ Voyager—Frequently Asked Questions. (n.d.). Retrieved August 1, 2020, from https://voyager.jpl.nasa.gov/frequently-
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
asked-questions/ Voyager—What’s on the Golden Record. (n.d.). Retrieved August 1, 2020, from https://voyager.jpl.nasa.gov/goldenrecord/whats-on-the-record/ Water Beyond Earth: The search for the life-sustaining liquid. (2019, September 26). Science in the News. http://sitn.hms. harvard.edu/flash/2019/water-beyond-earth-the-search-forthe-life-sustaining-liquid/ Watson, J. D., & Crick, F. H. C. (1953). Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid. Nature, 171(4356), 737–738. https://doi. org/10.1038/171737a0 Zamenhof, S., Brawerman, G., & Chargaff, E. (1952). On the desoxypentose nucleic acids from several microorganisms. Biochimica et Biophysica Acta, 9, 402–405. https://doi. org/10.1016/0006-3002(52)90184-4
SUMMER 2020
221
Capitalism and Conservation: A Critical Analysis of Eco-Capitalist Strategies
STAFF WRITERS: EVA LEGGE '22, JESS CHEN '21, TIMMY DAVENPORT (UNIVERSITY OF WISCONSIN JUNIOR), JAMES BELL '21, LEANDRO GIGLIO '23
BOARD WRITER: ANNA BRINKS '21
Cover image: Climate change and wide-scale environmental degradation are defining problems of the modern era. Eco-capitalism seeks to address these issues through market strategies that promote conservation and sustainability. Source: Pixabay
Introduction
1.1 Agrarian Origins of Capitalism
Capitalism has its roots in agriculture. During the 16th century, social conditions in England were ripe for the birth of agrarian capitalism. In most pre-capitalist societies, peasants lived off access to common lands and used the crops they grew to feed their families (Wood, 1998). In England, however, common lands were minimal, and the majority of land was owned privately. To access farmland, peasants were required to pay rent to a landlord. This social structure was markedly different: landlords controlled access to land, while tenants were property-less and could access it only by selling their labor (Comninel, 2000). To squeeze more rent from their tenants, landlords encouraged increased productivity. The more the tenants produced, the more they
could afford to pay in rent, which benefited the landlords. This created competition between tenants, who had to find innovative ways to increase farming productivity to afford rent or be evicted and replaced by someone who could produce more. Through this system, the forces of capitalism emerged: competition, accumulation, and profit maximization (Wood, 1998). From that point, land was increasingly privatized through enclosure; the more exclusive the land and the more of it that was accumulated, the more profitable it could be for those who owned it. Eventually, these forces of capitalism would drive colonialism by increasing the desire to gain more land, spreading this economic structure to the land that is now the United States. The defining characteristics of capitalism that emerged during this era have permeated American history: private property, a market driven by competition and profit maximization,
and a two-class system consisting of the capitalist class, who own the means of production (land, property), and the working class, who must sell their labor to access the means of production.
1.2 The Industrial Revolution and the Current Extinction Crisis
Although early modern England was the starting point of capitalism, what boosted its significance worldwide was the so-called First Industrial Revolution. This period spanned from 1750 to 1840 and involved major innovations in the manufacturing process that transformed Europe and the United States. Improved technology and new inventions resulted in capital accumulation, increased agricultural productivity, and income growth. In addition to the major transformation of economic and social landscapes introduced during the First Industrial Revolution, this period also planted the seed for many of the environmental issues facing the world today. Fossil fuels, primarily coal, were used to generate electricity and steam, and greenhouse gas emissions from these energy sources caused gradual changes to the Earth's climate. Because of this, many scientists argue that since the Industrial Revolution, the Earth has entered a geological epoch called the "Anthropocene," reflecting that anthropogenic (human-derived) actions are the major driver of global warming (Zalasiewicz, 2008; Clark and York, 2005). Global warming has threatened the existence of species across the globe. "It is generally agreed among biologists that another mass extinction is underway," wrote science reporter Elizabeth Kolbert for The New Yorker in 2009. "Though it's difficult to put a precise figure on the losses, it is estimated that, if current trends continue, by the end of the century as many as half of Earth's species will be gone" (Kolbert, 2009). In Earth's long history, there have been only five mass extinctions. Each entailed the loss of at least three-quarters of all species on Earth within at most 2.8 million years, a short span on a geological timescale (Saltré & Bradshaw, 2019). Each of the "Big Five" mass extinctions, as they are called, was so devastating that it took millions of years for Earth's biosphere to recover. And when ecosystems were restored, they were completely restructured, looking entirely different from the ecosystems present before the extinction event (Kolbert, 2019).
A growing body of evidence suggests that we have entered the sixth extinction (Barnosky et al., 2011). Extinction rates are (conservatively) 100 to 1,000 times higher than before the Anthropocene (Pimm et al., 2014). In the past half-century, average vertebrate population sizes have declined by roughly sixty percent (Living Planet Index, 2018). If these trends continue, the loss of species interactions at every level of the food web will cause ecosystems to collapse and greatly threaten human survival (Gray, 2019). Unlike most other extinctions, this one is being caused by the destructive actions of one species: humans (Pievani, 2014). In fact, some studies show that the anthropogenic impacts on the environment, such as global climate change, altering the composition of the atmosphere, and degrading the landscape through resource extraction and population growth, have created the perfect storm for the next big extinction (Pievani, 2014). However, despite the common use of the word "Anthropocene," some claim that a more apt name would be the "Capitalocene," expressing that it is not humans but rather our economic system that is primarily responsible for the current ecological crisis (Moore, 2017). It is imperative for capitalist economies to grow: positive growth rates are necessary for firms to make profits and for individuals to prosper financially in a capitalist society. If the growth rate falls below a positive threshold, firms will incur losses, go out of business, and move the economy into a downward spiral. According to this model, capitalist economies can either grow at a sufficiently high rate or shrink if the growth rate falls; in the long run, a zero or negative average growth rate is not feasible (Binswanger, 2009). This poses a problem: a system premised on constant growth is not sustainable within the context of our planet's finite resources.
"Some studies show that the anthropogenic impacts on the environment, such as global climate change, altering the composition of the atmosphere, and degrading the landscape from resource extraction and population, has created the perfect storm for the next big extinction"
1.3 Potential Solutions
Despite the seeming paradox of capitalism and conservation, there are many sustainability initiatives based within the capitalist system. This eco-capitalism movement hopes to harness the market, innovation, and productivity in the construction of a sustainable future. Economic growth can increase income levels and standards of living, making individuals secure in the present and giving them the freedom to focus on the future. This stability can lead to increased demand for environmental protection, lower birth rates, and higher expectations for environmental quality
Figure 1. Marginal Social Cost vs. Marginal Private Cost. Source: Wikimedia Commons
"Carbon emissions are undoubtedly the largest contributor to global climate change, and the most crucial target for securing a renewable future."
(Reilly, 1990). To have a chance at stopping the next mass extinction, a multifaceted approach to species conservation must be taken. This includes direct measures, such as planting new vegetation and maintaining intact tropical and temperate forests, which hold much of the planet's biodiversity (Gray, 2019). But there are also other, perhaps more unconventional, approaches that have become vital components of combating species loss, such as taxes on carbon and plastics, big game hunting, ecotourism, payment for ecosystem services, and more. With scientists predicting grave consequences if current climate change trends continue, it is critical to analyze the feasibility of these eco-capitalist solutions in order to take appropriate action in combating climate change.
Taxes, Trading, and Governmental Regulation
2.1 Carbon Cap-and-Trade Schemes and Carbon Taxes
Carbon emissions are undoubtedly the largest contributor to global climate change, and the most crucial target for securing a renewable future. Presently, there are two main systems for managing emissions. The first is a cap-and-trade system, in which governments set a total quantity of emissions and individual stakeholders trade units of emissions in a free market. The second is a carbon tax policy, in which firms are taxed on each unit of emissions. The ultimate goal of both systems is to limit the externality, the negative consequence of high emissions
created by each firm pursuing its own interests. A simplified economic model demonstrates that, with no regulation, the market equilibrium of carbon emissions will exceed the societal equilibrium. In Figure 1, it is clear that the Marginal Private Cost (MPC), the cost to an individual firm of emitting another unit of CO2 at the current production level, is much less than the Marginal Social Cost (MSC), the cost to society of an additional unit of CO2 at the current production level. If firms supply up to their MPC, the overall impact will be a negative externality for society: global warming, smog, preventable deaths from air pollution, and many other environmental problems. This stems from the fact that the negative externality does not factor into a firm's decision-making but is reflected in the MSC. Cap-and-trade policies and carbon taxes realign individual incentives with societal incentives. Cap-and-trade policy reduces this negative externality by setting a limit on the total allowable quantity of CO2 emissions and allowing firms to purchase rights to a certain quantity. Carbon taxes seek to reduce the quantity of emissions by penalizing large CO2 emitters, shifting firms' production from MPC to MSC. Both policies have their advantages and disadvantages. One of the major benefits of a cap-and-trade system is that it allows the market to efficiently allocate resources. For example, traditional standards may force all firms to upgrade their air-conditioning systems, which would regulate the output of ozone-depleting
Figure 2: Accumulation of SUPBs in man-made landfills has tarnished the natural ecosystem services that organisms rely upon for their wellbeing. Source: Flickr
chemicals from these systems. For smaller firms, this would be particularly costly because they bear the same fixed costs (i.e., purchasing and installing the same air-conditioning system) but cannot produce as much output as larger firms to cover them. As a result, a smaller firm may have to shut down completely because it can no longer remain profitable. Under a cap-and-trade system, a smaller firm may instead choose not to install the new air-conditioning system, but rather limit its emissions and sell the remainder of its emissions rights to a larger firm, thus allocating resources efficiently. One benefit of both cap-and-trade and carbon taxes is the long-term incentives such systems create. Under traditional standards, whereby governments set uniform regulations, firms are encouraged to cut emissions just enough to meet those standards each year rather than investing in long-term, more efficient technologies (Stewart et al., 1988). In contrast, cap-and-trade creates an incentive to lower future emissions because the quota quantity is often set years in advance (e.g., firm X has the rights to emit 25,000 metric tons of CO2 by 2025). Carbon taxes create a similar incentive, since investments that cut emissions also lower a firm's future tax burden (Stavins, 2008). Another major benefit of cap-and-trade policy is that it ensures an absolute maximum on CO2 emissions by setting an exact quota. Knowing the maximum CO2 a firm can emit is very important for
effective climate policy and maintaining exact data on emissions. Unfortunately, this certainty comes with a major drawback for cap-and-trade. Certainty in supply means that supply is perfectly inelastic (any change in price has no effect on the quantity emitted). As a result, demand shocks can and often do have a massive impact on the cost of CO2 and the price of future emissions. In turn, this can lead to businesses shutting down and a far-reaching unintended negative externality (Fell et al., 2020).
"...carbon taxes allow for the quantity of emissions to shift to a new market equilibrium, as there is no set quantity for each firm."
Alternatively, carbon taxes allow for the quantity of emissions to shift to a new market equilibrium, as there is no set quantity for each firm. As a result, there is much less price volatility, although taxes are often adjusted over time to keep pace with increasing output as firms adopt more efficient technology. Another benefit of taxes is that, at the most basic level, they affect firms of all sizes proportionally. The effects depend on the design of the taxes, of course, but unlike cap-and-trade policy, taxes do not allocate most or all emissions to the largest firms that can afford to buy permits. Consequently, taxes create industries that are more competitive and do not push smaller businesses toward shutting down and selling their emission permits to bigger firms (The FASTER Principles, 2005).
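To make the logic of Figure 1 concrete, consider the following minimal sketch in Python. It assumes stylized linear curves, and every number in it is an illustrative assumption rather than a value from any cited study; the point is only that an unregulated market emits where marginal benefit meets MPC, while a tax equal to the marginal external damage moves firms to the socially optimal quantity.

    # Stylized externality model (all curves and numbers are hypothetical).
    # MB(q): marginal benefit of the q-th unit of emissions
    # MPC(q): marginal private cost; MSC(q) = MPC(q) + marginal external damage
    MED = 30.0                       # assumed constant marginal external damage

    def mb(q):  return 100.0 - q     # assumed demand for emissions
    def mpc(q): return 20.0 + q      # assumed private cost curve
    def msc(q): return mpc(q) + MED

    # Unregulated market: emit until MB = MPC  ->  100 - q = 20 + q  ->  q = 40
    q_market = (100.0 - 20.0) / 2.0
    # Social optimum: MB = MSC  ->  100 - q = 50 + q  ->  q = 25
    q_social = (100.0 - 20.0 - MED) / 2.0
    # A per-unit tax equal to MED makes private cost coincide with MSC,
    # so the taxed market reproduces the social optimum.
    tax = MED
    print(f"market: {q_market}, optimum: {q_social}, corrective tax: {tax}")

Under a cap-and-trade scheme, the regulator would instead fix total emissions at the socially optimal quantity and let permit trading determine the price.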
Figure 3. The black rhino is native to eastern and southern Africa. Due largely to poaching, the black rhino population dropped by 98% between 1960 and 1995. The species remains critically endangered to this day, with its total population between 5,000 and 5,500. Source: Wikimedia Commons
2.2 Taxes on Plastic
"Using the power of money to impact consumer behavior is not a novel concept. “Sin taxes” imposed on items deemed unhealthy such as alcohol or nicotinebased products are levied to counteract consumption."
A plastic bag is never truly free. In 2014 alone, the United States used over 100 billion single-use plastic bags (SUPBs) (Wagner, 2017). These bags decompose slowly and have accumulated as litter in landfills and natural environments (Wagner, 2017). As SUPBs degrade, they diminish the quality of shared ecosystem services; the consumer of the SUPB imposes that cost, and the people and organisms that rely upon these littered environments endure it (Figure 2). However, this true cost to the environment is not always captured in consumer behavior when SUPB usage carries no monetary cost. To remedy this disconnect, local governments in the United States have started to impose per-use taxes or outright bans on SUPBs. The purpose of these policies is to alter consumer behavior and help individuals realize that unsustainable actions have a shared environmental cost. When there is no per-use tax, consumers have no direct incentive to avoid overconsuming SUPBs when they shop; a per-use tax would, in theory, help mitigate this behavior (Wagner, 2017). Using the power of money to impact consumer behavior is not a novel concept. "Sin taxes" imposed on items deemed unhealthy such as alcohol or nicotine-based products are levied to counteract consumption (O'Donoghue and
Rabin, 2006). While SUPBs may not directly impose damages on the consumer, the environment benefits from reduced consumption, and individuals benefit from a cleaner environment. Long-term environmental degradation stemming from an individual consumer's actions is difficult to grasp at a store checkout, but there is no doubt that SUPBs make a considerable impact on the environment. This is where the emerging use of taxation on SUPBs holds potential: it puts a price on the externalities associated with SUPB consumption and makes consumers consider the full cost of a plastic bag.
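As a rough illustration of the mechanism, the sketch below applies a constant-elasticity demand model to a hypothetical per-bag fee. The baseline of 100 billion bags comes from the Wagner (2017) figure cited above, but the perceived baseline cost, the fee, and the elasticity are assumptions chosen purely for illustration.

    # Back-of-the-envelope effect of a per-bag fee (inputs mostly assumed).
    baseline_bags = 100e9     # annual U.S. SUPB consumption (Wagner, 2017)
    perceived_cost = 0.02     # assumed cost per bag as perceived by shoppers
    fee = 0.05                # assumed per-bag tax
    elasticity = -0.8         # assumed price elasticity of demand

    # Constant-elasticity demand: quantity scales with the price ratio
    # raised to the elasticity.
    price_ratio = (perceived_cost + fee) / perceived_cost
    bags_after_fee = baseline_bags * price_ratio ** elasticity
    print(f"before: {baseline_bags:.1e} bags, after: {bags_after_fee:.1e} bags")

The exercise shows only the direction and rough scale of the effect; actual responses depend on how salient the fee is at checkout and how easily shoppers can substitute reusable bags.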
Sustainable Monetization of Natural Resources
3.1 Big Game Hunting
In 2014, professional hunter Corey Knowlton attended a hunting convention in Dallas. He had not planned to attend, but after a friend asked him to place the opening bid on an item, he found himself immersed in the chaos of a big game hunting auction. The "lowest bid" that he had promised to provide was priced at $350,000. And the prize of this auction? A "conservation tag" allowing the highest bidder to kill a critically endangered black rhino in Namibia. Despite having placed the lowest bid, Knowlton won the license to kill. Soon after winning the chance to kill an endangered species, he became an endangered species in his own right: the subject of countless death threats from livid
conservationists, threats aimed not only at Knowlton's life but also at the lives of his wife and children. As counterintuitive as it may seem, some conservationists see transactions such as these as vital components of Africa's conservation effort. In fact, some Namibian officials believe that Namibia is one of the gold standards of African wildlife conservation precisely because of these "conservation tags." In the eyes of the Namibian government, the benefit of this legal monetization of hunting is twofold. First, the license to kill was not for just any black rhino: it was for elderly, aggressive bulls. In fact, the bull that Knowlton ended up killing had already attacked and killed at least two other rhinos. Second, all of the profits from the trophy hunting bids funnel directly back into anti-poaching efforts. Since the Namibian government allowed people to buy, sell, and shoot wildlife on their land in the early 1980s, the wildlife population in Namibia has increased by 80 percent, and black rhino populations have increased by 30 percent (Grobler, 2019; Adler, 2015). Mike Norton-Griffiths, an economist and conservationist, has long argued that the revenue from the trophy hunting industry is vital to the conservation of endangered species in Africa. Though this approach to conservation may be controversial, it is generally agreed within the conservation community that action must be taken to conserve Africa's most endangered species. For decades, many African countries (such as Namibia, Kenya, and South Africa) have been overwhelmed by poachers, largely driven by Chinese market demand for the ivory in elephant tusks and rhino horns (Nuwer, 2016). Between 1969 and 1973, market demand increased ivory prices tenfold, spurring sharp increases in poaching, and between 2006 and 2016, one in five elephants was slaughtered for its ivory (Nuwer, 2016). Black rhinos remain critically endangered, with their global population hovering around 5,500 (Save the Rhino, 2020). Not all African countries support the Namibian model for animal conservation. Many academics are skeptical of monetizing ecosystem services, and many African countries do not follow this model (Temel, 2008). Kenya, for example, banned the utilization of wildlife for profit in 1977 (Norton-Griffiths, 2007). Kenyan government officials, along with many other conservation groups across
Africa, believe that the monetization of big game hunting sends the wrong message: that the only way to save endangered species is to create an industry around hunting them. Although Kenyan wildlife populations have decreased dramatically following the ban, government officials have found other means of combating poaching. In 1989, Richard Leakey, the director of Kenya's wildlife program, was the steward of 12 tons of ivory and rhino horn that had been intercepted from the illegal poaching trade. This was not an unusual occurrence; many African countries had stockpiles of intercepted ivory. However, the criminal attention the ivory attracted put the lives of security guards at risk, and the ivory often found its way onto the black market. Leakey was therefore presented with a choice: sell it, or destroy it. Even though the ivory would have brought him a windfall of three million dollars, Leakey decided to burn it. This alternate conservation approach, shaming the buyers instead of trying to profit from them, worked. Demand for ivory dwindled and illegal poaching decreased by about 99 percent (Adler, 2015). This burning event also contributed significantly to the decision by the Convention on International Trade in Endangered Species of Wild Fauna and Flora to ban all international trade in ivory (Nuwer, 2016). In 2016, the burning was repeated in Kenya, and 105 tons of tusks and horns went up in flames (Warner, 2016).
"Even though the ivory would have brought him a windfall of three million dollars, Leakey decided to burn the ivory. This alternate conservation approach—to shame the buyers instead of trying to profit from them—worked. Demand for ivory dwindled and illegal poaching decreased by about 99 percent."
But no matter how many tusks are burned or "conservation tags" sold, these are only temporary solutions to a much deeper problem. Even a million $350,000 bids will not stem the tide of climate change. According to Leakey, the biggest threat to African wildlife is not hunting and poaching but dwindling water availability due to climate change (Anderson, 2020). "The fact is," said Leakey in a recent interview for The New Yorker, "the problems we all face now are far beyond the power of individual conservationists to cope with. The mean temperature is getting warmer, the rainfall is getting less, the snowmelt is increasing, the ice formations are less, oceans are rising. It's a strangulation grip on the environment, and there's nothing Kenya can do to arrest climate change globally" (Anderson, 2019, p. 8). Therefore, the conservation game and the carbon-dioxide game are inextricably linked. If we want to save our species, we must curb our CO2 emissions.
Figure 4. Ethical Consumption depends on four main pillars: economics, ecology, politics and culture Source: Wikimedia Commons
"A 2019 metaanalysis of studies on ecotourism showed that ecotourism had positive impacts both on conservation efforts and a financial benefit for local families."
3.2 Ecotourism
Ecotourism, or traveling to regions of natural beauty, has often been proposed as an effective model for sustainably monetizing some of the world's most endangered regions. Ecotourism originated in the 1980s as a way to channel tourism revenues into conservation efforts (Stronza et al., 2019). However, the effects of ecotourism become much more complex in practice. The International Ecotourism Society now defines ecotourism as "responsible travel to natural areas that conserves the environment, sustains the well-being of the local people, and involves interpretation and education" (International Ecotourism Society). Just as the monetization of hunting involves much more than the revenues from one "conservation tag," ecotourism is only successful if its net effect on the environment is positive. That said, ecotourism has a host of benefits if it is approached correctly. Ecotourism companies can essentially become "an independently financed partner to the conservation community" (Kirby et al., 2011). A 2019 meta-analysis of studies on ecotourism showed that ecotourism had positive impacts on both conservation efforts and the finances of local families (Stronza et al., 2019). Buckley and colleagues (2016) found that in most instances, the conservation benefits of ecotourism resulted in increased survivorship of highly threatened species. These positive effects on conservation are not limited to individual species. Ecotourism in Peru, Tanzania, and the Galapagos has helped finance landscape conservation (Kirkby et al., 2010; Charnley, 2005; Stronza & Durham, 2008). In Costa Rica, ecotourism has led to a reduction in land degradation and an increase in reforestation (Stronza et al., 2019). However, these positive results are only possible if a specific set of criteria is met. First, there must already exist a specific forest conservation mechanism, as well as a spatial mechanism delineating the boundaries of a "protected area." In other words, ecotourism companies are much more effective in conserving species when they are not the only regulatory entity defining and conserving the area. In addition, the definition of ecotourism requires that local communities "receive direct economic benefits" (Stronza et al., 2019). According to the "alternative income hypothesis," local residents who enter the ecotourism economy become less reliant on jobs in natural resource extraction, thus reducing the degradation
of the landscape. It is also hypothesized that increasing local jobs in ecotourism can lead communities to take action on their own to promote the preservation of their surrounding ecosystems: "ecotourism has been associated with communities setting aside tracts of land and vital habitats, with rules assigned to protect resources and species" (Stronza et al., 2019, p. 238). Despite these positive effects, more rigorous analyses are essential to ascertain the potentially harmful impacts of ecotourism (Stronza et al., 2019). Since the late 1980s, many conservation biologists have doubted the efficacy of ecotourism, some even arguing that it is harmful to wildlife. Biologists are concerned that ecotourism habituates animals to humans and that it may increase prey vulnerability to predators (Stronza et al., 2016). According to Geoffry and colleagues (2016), "it is essential to identify all potential costs to properly evaluate the net benefits." Therefore, further studies must be conducted to ensure a sustainable future both for ecotourism companies and for the ecosystems they seek to protect.
3.3 Payment for Ecosystem Services
Ecosystems provide humans with countless benefits, including water and nutrient cycling, pest control, climate regulation, and spiritual enrichment (Garbach et al., 2012). These "ecosystem services" (ES) can broadly be defined as any benefits people obtain from nature (Schomers et al., 2013). From an economic perspective, degradation occurs because
many ES exhibit the characteristics of public goods, resulting in externalities. Payments for ecosystem services (PES) are an innovative conservation approach that aims to internalize the value of the services provided by intact ecosystems (Schomers et al., 2013). PES bridge the private interests of landowners and the public benefits of conservation management by providing private landowners with financial incentives to implement conservation practices that preserve ES. Agriculture is an especially promising arena, as farming landscapes have the potential to both provide ES and sustain food production (Garbach et al., 2012). Beneficiaries of environmental services can pay land stewards to implement land use practices (such as rotating crops, reducing tillage, and adopting agroforestry practices) that maintain the environmental quality of the area and provide the contracted ES (Schomers et al., 2013). According to the Coase Theorem, given low to no transaction costs on exchanges of goods with clearly defined and enforceable property rights, governmental authority is unnecessary to internalize externalities: individuals will incorporate social costs into their personal cost calculations without governmental intervention, and private market negotiations among firms can produce the optimal allocation of resources (Schomers et al., 2013). An example of a purely Coasean PES scheme involves upstream-downstream watershed management, whereby downstream water users pay upstream landowners to increase water quality and quantity. This practice is used in the Paso de Caballos River Basin in Nicaragua, where upstream landowners are paid by private, downstream households for reforestation efforts (Schomers et al., 2013). By accepting financial incentives tied to water quality and quantity, upstream landowners now face a tangible personal cost for failing to maintain them. Conversely, the Pigouvian conceptualization involves governmental payment programs and is based on the Pigouvian philosophy of taxing negative externalities and subsidizing positive ones. In this case, the direct beneficiary of ES is not paying the service provider; instead, the state acts as a third party on behalf of service buyers. While Coasean PES schemes often work at local scales and focus on ES that are characterized as club goods, Pigouvian PES schemes operate
on a larger scale and provide public goods (Schomers et al., 2013). Costa Rica's national PES program, Pagos por Servicios Ambientales, targets four ES: greenhouse gas mitigation, hydrological services, scenic beauty, and biodiversity. In the program, private forest landowners are paid for forest conservation or reforestation efforts. However, this scheme does not perfectly fit the PES definition, as commitment is not voluntary due to Costa Rica's legal restrictions on forest clearing; the payments simply reduce landowners' opposition to the legal restrictions (Schomers et al., 2013). Despite its potential, there are some limitations to PES schemes. Not all ecosystem processes sustain and fulfill human life; natural fires, for example, are often vital for ecosystem function and provide services to nonhumans. Focusing exclusively on services valuable to humans excludes nonhuman needs (Redford & Adams, 2009). Additionally, focusing on maximizing a single service could justify replacing native species with exotic ones that are more effective at providing it but have other negative impacts on their surroundings. Zebra mussels, for example, are highly efficient at filtering particulates from water but harm native organisms by starving out other filter feeders and attaching to turtles, crustaceans, and other large animals. Markets also exist for only a certain range of ES, and some services are difficult to value and therefore difficult to regulate financially (such as the fertilizing effect of atmospheric dust carried from the African Sahel across the Atlantic) (Redford & Adams, 2009). Additionally, conserving the functional attributes of ecosystems does not ensure that the full spectrum of biodiversity will be protected (Redford & Adams, 2009).
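The contrast between the two schemes can be reduced to a simple feasibility condition. In the Coasean watershed example, a private contract can form only when the downstream beneficiaries' value of the service exceeds the upstream landowners' cost of providing it; when it cannot, a Pigouvian scheme has the state pay the provider instead. The sketch below encodes that condition with purely hypothetical numbers.

    # Hypothetical Coasean bargain for an upstream-downstream PES contract.
    downstream_benefit = 12_000.0  # assumed annual value of cleaner water downstream
    upstream_cost = 7_000.0        # assumed annual cost of reforestation upstream

    if downstream_benefit > upstream_cost:
        # Any payment between cost and benefit leaves both sides better off;
        # splitting the surplus evenly is one focal choice among many.
        payment = (downstream_benefit + upstream_cost) / 2.0
        print(f"feasible contract, e.g. an annual payment of ${payment:,.0f}")
    else:
        # With low benefits or high transaction costs, no private deal forms,
        # which is where state-funded Pigouvian PES schemes step in.
        print("no mutually beneficial private contract exists")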
"Payments for ecosystem services (PES) are an innovative conservation approach that aims to internalize the value of the services provided by intact ecosystems."
Poverty, biodiversity hotspots, and environmental degradation are also linked, such that impoverished people often suffer the brunt of ES loss. Existing biodiversity hotspots such as Madagascar, the Guinean forests of West Africa, and Brazil's Cerrado region are disproportionately concentrated in poorer countries of the Global South, as many developed countries have already decimated their natural resources. ES most directly affect the quality of life of local people, but it would be unfair to expect them to bear the brunt of the economic cost of preserving these lands while more indirect global benefits (like carbon sequestration) are
Figure 5: Global primary energy consumption by source. Primary energy is calculated with the 'substitution method,' which accounts for the inefficiencies of fossil fuel production by converting non-fossil energy into the energy inputs that would be required if it had the same conversion losses as fossil fuels. The image on the left is broken down by specific energy source, while the image on the right shows broader categories: fossil fuels, renewables, and nuclear energy. Source: Our World in Data
enjoyed by all. Coasean PES approaches, in which the most direct ES beneficiaries are accountable for their provision, may end up burdening poor communities in low-income countries (Schomers et al., 2013). Despite these limitations, PES is an attractive economy-driven strategy for incentivizing environmental protection and engaging money-driven stakeholders who are not swayed by arguments about the intrinsic value of nature (the perspective that nature has value in its own right, independent of human uses) (Rea et al., 2017).
Green Consumption and Production
4.1 Ethical Consumption
The demand for products and services that respect the environment and society has become intense. Citizens are beginning to seek consumption alternatives that bring benefits to the community and environment, not only in the short term but also in the long term.
Responsible consumption, conscious consumption, or ethical consumption constitute a set of alternative consumer behaviors used to combat the ecological consequences of overconsumption. These modes of consumption have been around since the post-war period and include fair trade and organic or community-supported agriculture. The push toward growth in capitalist societies has put a tremendous strain on natural resources; this development model is unsustainable, and the best way to transform it is through the practice of another type of consumption (Delistavrou, 2017). Ethical consumption is a set of habits and practices that reduce social inequality and negative environmental impacts. The model seeks to align production, distribution, and acquisition of products and services with positive environmental and social impacts (Delistavrou, 2017). It is worth mentioning that consumption has been classified in many ways, including conscious, sustainable, critical, ethical, and responsible. These different adjectives indicate different ways of understanding the problem of over-consumption. In general, "sustainable consumption" reflects a greater concern with environmental issues, but it does not necessarily consider social issues. On the other hand, "conscious consumption" is often used by large companies to promote corporate social responsibility without decreasing consumption. These modes of consumption differ from ethical consumption, as they fail to capture the full effects of consumer behavior (Eleni et al., 2012).
"A sustainable company is one that not only values the preservation of the environment but also has a management system that allows it to be profitable."
4.2 Corporate Sustainability
Climate change has drawn the attention of business academics and corporations across a wide array of industries, including energy, oil and gas, finance, and pharmaceuticals. Corporations that implement sustainable initiatives in their business practices are engaging in “corporate sustainability.” According to Wilson (2013), “while corporate sustainability recognizes that corporate growth and profitability are important, it also requires the corporation to pursue societal goals, specifically those relating to sustainable development — environmental protection, social justice and equity, and economic development.” Corporations' concern with the environment is growing due to pressure from society and investors (Eccles and Klimenko, 2019). More and more, organizations have adapted their
Figure 6: Renewable energy technologies, such as the solar panels and windmills pictured here, are a promising way to mitigate fossil fuel use and reduce carbon emissions. Source: Wikimedia Commons
operations to current sustainability standards. But what exactly is a sustainable company? A sustainable company is one that not only values the preservation of the environment but also has a management system that allows it to be profitable. A company that cares about the environment but adopts very expensive processes that compromise its profitability will certainly not survive in the long term. Corporate sustainability must attend to environmental, human, and financial factors alike. A well-known acronym in the corporate world is ESG: Environmental, Social, and Governance. According to Eccles, Ioannou, and Serafeim (2014), “high sustainability companies are more likely to have established processes for stakeholder engagement, to be more long-term oriented, and to exhibit higher measurement and disclosure of nonfinancial information.” With a working definition of corporate sustainability in hand, it is necessary to evaluate whether a company fits the definition. From a financial point of view, the rational allocation of resources and the prioritization of profitable sources of revenue will ensure the success of the company. From the environmental point of view, however, the company must evaluate not only its own processes but also those of its suppliers. Are there excessive emissions of pollutants? Is material being disposed of in inadequate locations? The adoption of cleaner and more energy-efficient practices will help the company achieve both
financial and environmental sustainability. At first, sustainable processes may seem more costly, such as the use of solar energy or the adoption of more efficient waste disposal methods; Koerth-Baker (2010), citing figures from the U.S. Department of Energy's Energy Information Administration, put the cost of producing a single photovoltaic cell at $396. In spite of the many challenges posed by sustainable initiatives, however, the company ends up achieving economic gains, such as energy savings and revenue from the sale of recyclable materials. Another important point is the intangible gain to the sustainable company's image in society, which can help a business thrive (Montiel and Delgado-Ceballos, 2014). One way to meet environmental requirements is to form partnerships for the company's sustainability projects. A good example is using one company's waste as raw material for other industries, known as co-processing. Other examples involve the sharing of resources, such as freight and even physical spaces, thus reducing energy costs.
"Currently, fossil fuels supply approximately 80% of the nation’s energy demand."
4.3 Renewable Inputs: Sustainable Energy
Currently, fossil fuels supply approximately 80% of the nation’s energy demand (Desilver, 2020). Fossil fuels, including coal, crude oil, and natural gas, are “non-renewable,” meaning that they are not readily replaced: they formed from the fossilized remains of plants
Figure 7: A farmworker spreads pesticides on onion crops. Industrial farmworkers in the U.S., who are predominantly Latinx migrant workers and undocumented immigrants, are disproportionately exposed to synthetic pesticides and herbicides that may be linked to cancer and other long-term health issues, due to the heavy use of these chemicals in farming. Source: Flocks, 2012
and animals over millions of years. After the link between carbon emissions, fossil fuels, and climate change was recognized, several steps were taken to mitigate carbon dioxide emissions. These actions included the 2015 Paris Agreement, in which the global community agreed to limit emissions and hold average global temperatures as close as possible to preindustrial levels (Johnsson et al., 2018). Because the only two ways to reduce carbon dioxide emissions are to capture carbon or to leave fossil fuels in the ground, there has been a large push to replace dependence on fossil fuels with renewable energy sources (Johnsson et al., 2018).
"Renewable energy sources include solar, thermal, photovoltaics, bioenergy, hydro, tidal, wind, wave, and geothermal energy."
Renewable energy sources include solar thermal, photovoltaics, bioenergy, hydro, tidal, wind, wave, and geothermal energy (Boyle, 2004). Over the past decade, solar power has experienced the largest percentage growth of any U.S. energy source, from generating just over 2 billion kilowatt-hours of electricity in 2008 to more than 93 billion kilowatt-hours in 2018, an almost 46-fold increase. Despite this growth, however, solar accounted for only 1% of the nation's total energy production in 2018; the largest renewable energy source remained hydropower at 2.8% of total production, followed by wind, wood, and biofuels (Desilver, 2018). So why, despite the clear need, does renewable energy still make up such a small share of total energy production? There are many social, economic, technological, and regulatory barriers across a variety of sectors that hinder the large-scale deployment of renewable
energy (Seetharaman et al., 2019). Social barriers include a lack of public awareness and uncertainty about the benefits of renewable energy and its financial feasibility. They also include "not-in-my-backyard" syndrome, whereby people support renewable energy but do not want large wind turbines or solar panel fields in their own area. Economic barriers include the lower economic returns of renewable energy compared to fossil fuels, which are generally the cheaper alternative. Additionally, renewable energy projects carry high initial capital costs, and the return on investment has a long horizon, making it more difficult to find investors. Technological barriers include the limited availability of infrastructure and facilities for developing renewable energy compared to traditional energy sources, as well as challenges in "grid integration," the connection of new power-production facilities to old infrastructure. This is especially problematic given the remote location of many renewable energy power plants. Finally, regulatory barriers include a gap between policy targets set by governments and actual implementation, inadequate fiscal incentives, and a lack of certifications to ensure that targets are appropriately met (Seetharaman et al., 2019). To address these barriers, making procedures more user-friendly could improve the complex bureaucratic process involved in deploying renewable energy. Cost savings are also critical, as the largest barrier to renewable energy investment is competition with generally less expensive fossil fuels. This could potentially be addressed
by successful research and development ventures that increase the efficiency of renewable energy technologies, thereby making them more competitive with traditional energy sources (Seetharaman et al., 2019).
Despite the obstacles to renewable energy, capitalist systems can help drive progress if properly incentivized. “Under capitalism, innovative activity—which in other types of economy is fortuitous and optional—becomes mandatory, a life-and-death matter for the firm. And the spread of new technology, which in other economies has proceeded at a stately pace, often requiring decades or even centuries, under capitalism is speeded up remarkably because, quite simply, time is money” (Baumol, 2004). Although the competitive nature of the market causes problems when renewable energy systems are pitted against fossil fuels, competition within the renewable energy industry itself can drive efficiency, creativity, and rapid advancements in research and technology. This competition between renewable energy sources will push the technology closer to becoming a feasible, widespread alternative to fossil fuels. Although capitalist systems vary greatly across nations, capitalism has driven great leaps in material advancement and technological innovation across the entire economy, particularly within the energy industry. If these advancements can be properly harnessed and directed toward a sustainable future, they would be a powerful force in combating climate change.
4.4 Carbon Capture
Some recent innovations in sustainable technology take a different approach to reducing greenhouse gas emissions: carbon capture and storage. Carbon capture and storage (CCS) technologies aim to trap and contain atmospheric carbon dioxide as a way of removing greenhouse gases from the atmosphere. At present, CCS systems are appended to factories or other manufacturing plants; they filter carbon dioxide out of the air or from a direct pipeline, and the CO2 is then stored underground (Rubin et al., 2012). While in concept this technology offers a novel mechanism for curbing atmospheric carbon dioxide levels, present CCS systems require significant energy to operate, and there is little incentive to invest in CCS technologies unless they are required (Rubin et al., 2012). However, it is expected that CCS will become more accessible, affordable, and energy-efficient with the innovations being made in this sector of green technology (Rubin et al., 2012).
4.5 Industrial Agriculture and New Alternatives
Agriculture began to industrialize with large-scale monoculture systems on European colonial plantations in the 16th and 17th centuries, in which acres of land were used to cultivate a single type of crop (Kremen et al., 2012). With new industrial inventions, agriculture was mechanized, so that by the late 1800s steam tractors instead of horses were used to plow fields. The advancements made possible by improving technology continued into the "Green Revolution" of the mid-20th century. The Green Revolution introduced "an integrated system of pesticides, chemical fertilizers, and genetically uniform and high-yielding crop varieties" (Kremen et al., 2012, p. 44). In the 1940s, Norman Borlaug, an American scientist now known as "the father of the Green Revolution," developed new varieties of wheat and rice that were high-yielding, disease-resistant, and bred specifically to respond well to fertilizers (Rhodes, 2017). These crops were also heavily dependent on pesticides to increase their yield. Throughout the 1960s, these new seed and agrochemical technologies were promoted and adopted around the world, predominantly in Mexico and India, as a way to feed a rapidly growing population. While the global human population more than doubled from 1960 to 2010, the Green Revolution allowed the production of cereal crops to triple while the land area cultivated for agriculture increased by only 30% (Pingali, 2012). The Green Revolution thus averted hunger for millions around the world.
"Carbon capture and storage (CCS) technologies aim to trap and contain atmospheric carbon dioxide as a way of removing greenhouse gases from the atmosphere."
However, this revolution did not come without costs. During the Green Revolution, global nitrogen and phosphorus usage for fertilizer increased eightfold and threefold, respectively, while pesticide production increased elevenfold. Today, intensive use of synthetic fertilizers is linked to air and water pollution as well as soil depletion. Excessive chemical fertilizer can make soil more acidic, destroying the soil microorganisms that are essential for building soil health. Excess nutrients from fertilizers also leach into groundwater and run off into waterways, creating algal blooms that produce
mass fish die-offs. Pesticides are poisonous to pests but can be harmful to humans as well. While research is still ongoing, some synthetic pesticides have been linked to cancer and other long-term health impacts (Horrigan et al., 2002). Humans, especially farmworkers, are directly exposed to pesticides while applying the chemicals, but anyone can come into contact with pesticides by spending time where the chemicals have been applied for landscaping, or even by ingesting produce that was sprayed with them.
"In response to the environmental concerns of industrial agriculture, new, more sustainable farming methods have emerged. These include organic farming, which prohibits the use of synthetic fertilizers and pesticides, and agroecology methods, which promote viewing farms as ecosystems that are best managed with holistic and integrated farming practices."
Industrial agriculture is characterized by intensive pesticide and fertilizer usage, monoculture cropping and diminished biodiversity, and heavy dependence on fossil-fuel-powered machinery. These features, along with many others, have allowed industrial agriculture to thrive under capitalist forces of production and profit maximization. Large seed, fertilizer, pesticide, and machinery companies market their products directly to farmers, who benefit from efficient production, lower costs, larger yields, and increased profits. Agriculture is estimated to contribute 24% of global greenhouse gas emissions (US EPA, 2016) and accounts for about two-thirds of all water usage worldwide (US EPA, 2015). Ultimately, industrial agriculture practices have led to widespread soil degradation and the consumption of water and fossil fuels at unsustainable rates. In response to the environmental concerns of industrial agriculture, new, more sustainable farming methods have emerged. These include organic farming, which prohibits the use of synthetic fertilizers and pesticides, and agroecology methods, which promote viewing farms as ecosystems that are best managed with holistic and integrated farming practices (Kremen et al., 2012). This approach ensures that farming solutions are environmentally and ecologically sound by considering their effects on the wider ecosystem. For example, instead of applying a chemical that will have harmful effects elsewhere, farmers can diminish pest populations by planting crop varieties in different plots of land every year, a practice called crop rotation. This way, cucumber beetles that have laid their eggs in cucumber plots one year will not have cucumber plants to feed on the following year. Crop rotation also benefits soil fertility because it prevents one type of crop from depleting too much of a nutrient in one area. Other sustainable farming practices include using manure and
compost to fertilize soil, covering the ground with plants so that no bare soil is exposed to erosion, minimizing mechanical soil disturbance to protect soil structure and microbiology, and increasing plant diversity, among others. The regenerative agriculture movement promotes these sustainable farming methods and takes things a step further; “its guiding principle is not just to farm sustainably—that implies mere maintenance of what might, after all, be a degraded status quo—but to farm in such a way as to improve the land” (Velasquez-Manoff, 2018). The regenerative agriculture movement wants to shift farming from being a contributor to climate change to being a solution. The movement places a large emphasis on rebuilding soil health and building up carbon in soil, a practice known as “carbon farming,” to counteract the negative effects of industrial agriculture. This rests on the fact that, through photosynthesis, plants naturally sequester carbon into the soil, a process enhanced by sustainable methods like rebuilding soil health and eliminating agrochemicals (Velasquez-Manoff, 2018). There has been a movement among consumers to support small farmers who practice regenerative, organic farming by shopping at local farmers’ markets and buying directly from farmers. However, critics of regenerative agriculture argue that it is not feasible for widespread adoption because it runs against the forces of capitalism. Critics argue that regenerative, organic agriculture methods decrease the efficiency of production, require more labor, and will never produce enough yield to feed a growing population. Furthermore, the increased production costs would raise food prices for consumers (Velasquez-Manoff, 2018). Vertical farming and hydroponics are newer innovations growing in popularity as alternatives to industrial agriculture. Vertical farming is urban farming done in large, controlled indoor environments, growing crops in stacked vertical layers to save land. It entails growing fruits, vegetables, and grains using hydroponics (water and nutrients) instead of soil as a growing medium (Al-Chalabi, 2015). Vertical farming is touted as highly productive, profitable, and efficient, with less impact on the environment. While the practice is still in its infancy, many vertical farms have
Figure 8. The Whanganui river in New Zealand was granted legal human rights in 2017, due largely to the conservation efforts of indigenous Māori activists. According to Māori ideology, the river is not a resource, but an ancestor, worthy of both protection and respect. Source: Wikimedia Commons
been built in cities across the U.S., and they are seen by many as the future of urban agriculture. Vertical farming, however, lacks the potential of regenerative farming to restore soils and ecosystems. The academic community is debating the best method for feeding a growing population. Many argue that innovative methods like vertical farming are needed to increase food production. On the other hand, some social scientists argue that the current global food system already produces enough food but is inequitable and wasteful; in the U.S., an estimated 30 to 40 percent of the food supply is wasted (U.S. FDA, 2020). These scientists do not believe in increasing food production; instead, they believe there needs to be better distribution of and access to food for all.
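Before leaving the topic of agriculture, the “carbon farming” idea above can be made concrete with the textbook summary equation for photosynthesis (included here for illustration; it is not drawn from the cited reporting). Plants fix atmospheric carbon dioxide into biomass, a portion of which is then transferred to the soil through roots, exudates, and crop residues:

\[
6\,\mathrm{CO_2} + 6\,\mathrm{H_2O} \;\xrightarrow{\;h\nu\;}\; \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]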
Ethical Considerations, the Green New Deal, and Conservation Theories
5.1 Ethical Issues of Capitalism: Environmental Justice and the Dakota Pipeline
The clashes between capitalism and environmentalism can be best understood through the idea of environmental justice, which is often pursued through the very justice systems that capitalism has created. Environmental justice is “the fair treatment and meaningful involvement of all people regardless of race, color, national origin, or income, with respect to the development, implementation, and enforcement of
environmental laws, regulations, and policies” (EPA, 2020). When it comes to enacting purposeful action against environmental degradation, it is the identity of those voicing grievances that can determine whether effective action is taken. In April of 2016, individuals led by the Standing Rock Sioux Tribe of North Dakota gathered in protest against the construction of the Dakota Access Pipeline (DAPL) (Whyte, 2017). The goal of DAPL was to expedite the transport of crude oil from refineries in North Dakota to processing terminals in Illinois (Whyte, 2017). However, the pipeline was planned to run through lands and water sources of the Standing Rock Sioux Tribe, threatening water and soil quality and destroying culturally significant lands (Whyte, 2017). The Army Corps of Engineers (ACE) originally blocked the construction of DAPL by Energy Transfer Partners (ETP) but reversed its decision under the Trump Administration (Johnson, 2017).
"In April of 2016 individuals led by the Standing Rock Sioux Tribe of North Dakota gathered in protest against the construction of the Dakota Access Pipeline (DAPL)."
The consistent conflict between indigenous communities and large corporations exemplifies the dynamics of environmental justice (Proulx and Crane, 2019). While the government must uphold an adequate standard of living for all, its inaction in protecting the wellbeing of the Standing Rock Sioux Tribe demonstrates a hierarchy of urgency that works against those of a minority identity. Members of this minority have raised their voices against the environmental injustices they must endure but have been selectively silenced by those in power. This reflects broader themes
that have prevailed throughout the history of capitalism and colonialism.
5.2 The Green New Deal
"The human-nature divide (HND) examines how the human race perceives itself in coexistence with other organisms."
The Green New Deal (GND) is a nonbinding congressional resolution that lays out an ambitious plan for tackling climate change through economic policy. Proposed in 2019 by Representative Alexandria Ocasio-Cortez of New York and Senator Edward J. Markey of Massachusetts (both Democrats), the GND aimed to transition the United States away from fossil fuels towards clean, renewable energy while creating new jobs in the energy industry (Friedman, 2019). The proposal has the goal of reaching net-zero emissions by 2050, with an intensive “10-year mobilization” plan to reduce carbon emissions in the United States; proposed actions include investment in renewable energy, upgrading the electricity grid, renovating buildings to be energy efficient, and investing in electric vehicles and high-speed rail. There is also an emphasis on social justice, as those most affected by climate change are poor and marginalized communities. These communities are disproportionately exposed to pollution through tainted natural resources, though they contribute the least to it. Therefore, the bill states that clean air, water, and energy are basic human rights, and the government will have to provide the training necessary to support fossil fuel industry workers in transitioning to renewable industries (H.Res.109 - 116th Congress, 2019). The Green New Deal has been polarizing, with many deeming it “radical” (Friedman, 2019). Ocasio-Cortez has acknowledged that the plan will be expensive, estimating that it will cost at least $10 trillion, but she has argued that it will pay for itself through economic growth in renewable energy (Relman, 2019). However, right-leaning critics of the GND have estimated that the plan would cost anywhere from $51 to $93 trillion (Natter, 2019). In March of 2019, the Senate ultimately rejected the proposal (Grandoni & Sonmez, 2019). Still, the GND has massive support from groups like the Sunrise Movement, a national youth-led political movement that advocates for political action on climate change. The GND could continue to see a surge in support in the upcoming years, especially if conservation problems grow more severe. In July of 2020, Democratic presidential nominee Joe Biden announced his climate plan, “A Clean
Energy Revolution,” which draws largely from the principles of the Green New Deal, including the goal to “ensure the U.S. achieves a 100% clean energy economy and net-zero emissions no later than 2050” (Plan for Climate Change, n.d.). Also like the Green New Deal, Biden’s climate plan has an emphasis on economic and environmental justice. The plan outlines extensive ways to build a more resilient nation, rally the rest of the world to address climate change, “stand up to the abuse of power by polluters who disproportionately harm communities of color and low-income communities,” and support fossil fuel industry workers in the transition to clean energy jobs (Plan for Climate Change, n.d.). According to the Biden Campaign, “Biden’s climate and environmental justice proposal will make a federal investment of $1.7 trillion over the next ten years, leveraging additional private sector and state and local investments to total to more than $5 trillion” (Plan for Climate Change, n.d.). The plan would also incentivize the adoption of clean energy technology across the country. It was drafted by a task force that included Representative Alexandria Ocasio-Cortez as well as Varshini Prakash, co-founder and director of the Sunrise Movement (Prakash, 2020). Ultimately, the Green New Deal provides an influential framework that, at its core, ties climate change to transforming the economy, an idea that is becoming more mainstream.
5.3 Conservation Theories: The Human/Nature Divide
The human-nature divide (HND) examines how the human race perceives itself in coexistence with other organisms. Two primary schools of thought on the HND, anthropocentrism and ecocentrism, interpret the HND in polarizing manners. Anthropocentrism claims that humans are the supreme species among all living organisms on Earth (Kortenkamp and Moore, 2001). This self-perceived superiority stems from humans’ cognitive capabilities relative to other species. These perceptions have led humans to assume the responsibility of saving species from problems that arise from humans’ own actions. This egotistical dogma has called into question the role of humans in deciding the fates of other species. On the other hand, ecocentrism assumes that humans are equal to all other species; ecocentrism is considered a more environmentalist outlook on the HND (Kortenkamp and Moore, 2001). In short, an anthropocentric understanding of nature would value the protection of nature due to the
ways in which nature affects humans, while an ecocentric ethic would desire to protect nature due to its shared value to all species, including humans (Kortenkamp and Moore, 2001). Dynamic perspectives on how one views the HND may govern stances on global environmental problems such as anthropogenic climate change. Typically, sustainability initiatives rooted within the capitalist system rely upon an anthropocentric view of nature. The role of humans in protecting nature from the effects of climate change is shaped by an understanding of the HND, and ecocentric perspectives may demand different solutions than anthropocentric ones.
5.4 New Animism and Earth Jurisprudence
Earth Jurisprudence is a field of thought that focuses on the interconnectedness of humans and non-human beings. This way of thinking reasons that the natural world has rights, just like humans. This ideology has legal precedent. The Ecuadorian constitution states that nature “has the right to exist, persist, maintain, and regenerate its vital cycles” (Turkewitz, 2017, pp. 19). The practice of Earth Jurisprudence has a rich history that long predates its characterization by Western academics. A river on the north island of New Zealand that is traditionally used by the indigenous Māori people is now a ‘legal human,’ due largely to 140 years of conservation efforts by Māori activists (Roy, 2017). This legal protection does not just shield natural features from degradation; it allows the natural world to fight back if these policies are violated. If New Zealand’s river is illegally polluted, the river itself, not environmental justice organizations, has the right to sue. To date, American attempts at this “new animism” have failed. In 2017, a far-left environmental group called Deep Green Resistance tried to give the Colorado River legal human rights. Senator Steve Daines of Montana scathingly commented in The New York Times that “radical obstructionists who contort common sense with this sort of nonsense undercut credible conservationists” (Turkewitz, 2017, pp. 8). The idea of new animism, of imbuing the natural world with legal human rights, seems to undercut many conservationists’ objectives of “overcoming human primacy” (Ferrando, 2013, p. 29). To award legal human rights to non-human entities seems, at the surface, to directly feed
into the idea of human centrality. Even Mr. Flores-Williams, the lawyer fighting for the protection of the Colorado River, explicitly promotes a “human/nature dualism,” in which humans are separate from (and perhaps superior to) the natural world: “The ultimate disparity exists between humans that are using nature and nature itself” (Turkewitz, 2017, pp. 11). However, new animism has a clever way of dissolving this fissure: by integrating just enough anthropocentrism to get our attention. “It’s not pie in the sky,” says Flores-Williams. “It’s pragmatic.” When a nonhuman entity is granted legal human rights, those rights are granted not because the entity is viewed as a person, but because it is viewed as something inextricably connected to humans’ well-being. Protecting its well-being is therefore essentially protecting the well-being of humans. If the Colorado River, as a legal human, had won the case, Americans might have become more aware of the fact that the river supplies drinking water to millions. This strand of law may be not just the most pragmatic but also the most powerful tool we now have to conserve wildlands.
"This new animist idea of human entanglement with the ecosystem has long been present in many indigenous ideologies. The Māori peoples, for example, believe themselves to be part of the universe, not only equal to the mountains and rivers, but inextricably entangled with This new animist idea of human entanglement with the ecosystem has long been present them." in many indigenous ideologies. The Māori peoples, for example, believe themselves to be part of the universe, not only equal to the mountains and rivers, but inextricably entangled with them. According to the lead Māori negotiator for the Whanganui river, giving the river legal human rights is simply an attempt “to find an approximation in law so that all others can understand that from our perspective treating the river as a living entity is the correct way to approach it, as an indivisible whole” (Roy, 2017, pp. 5). Therefore, to introduce the idea of Earth Jurisprudence and New Animism into Western ideology is not to present a new idea, but an effort to pay due attention to a sustainable development model that has existed for centuries in indigenous thought. In political theorist Jane Bennet’s paper “Steps Toward an Ecology of Matter,” she argues for “a renewed emphasis on our entanglement with things.” This, she believes, will allow us to “tread more lightly upon the earth, both because things are alive and have value and because things have the power to do us harm” (Bennett 2004, p. 375-376). In this sense, Earth Jurisprudence may be a powerful tool in the conservationist’s toolbox, “a suitable way of
departure to think in relational and multi-layered ways, expanding the focus of the non-human realm” (Ferrando, 2013, p. 30). This is what new animism begs people to do – to re-introduce to Western belief the indigenous ideology that the natural world is not something separate, but something enmeshed with human life.
Conclusion
Climate change is the defining crisis of our time. From deep ocean trenches to the highest mountain tops, there is evidence of human impact. Nothing is impervious to the large-scale, far-reaching effects of climate change, which is occurring at an unprecedented speed. Rising temperatures are fueling environmental degradation, natural disasters, weather extremes, food and water insecurity, economic disruption, conflict, and terrorism (UN, 2020). Although extinction is a natural process, current rates of biodiversity loss are hugely elevated. Recent historical species losses have been up to 100 times higher than normal background rates, and there is concern that the current biodiversity crisis constitutes a sixth mass extinction comparable in magnitude and rate to previous events in the deep-time fossil record (Turvey et al., 2019). As the human population has exploded, economic systems have also grown in complexity and scale, evolving from simple, community-level trading schemes to the interconnected, global system that exists today. Although innovation in technology and subsequent improvements in quality of life have occurred within capitalistic systems, these benefits are certainly not distributed equitably to all, and economic inequality continues to grow. Contrary to popular anthropocentric views, humans are embedded in an intricate web of life, and the impacts of capitalism and the consumption it stimulates have reverberated across this entire web. It is clear that capitalism has transformed both human and nonhuman existence across the globe. Finding solutions that adequately address environmental degradation caused by anthropogenic climate change is challenging given our evolving understanding of whom climate change affects and where and how it acts on global and local systems. What is necessary moving forward is interdisciplinary collaboration to derive and implement pragmatic solutions to climate change that will be effective in the long term. This means more than coordination among the physical and life sciences. Delving into present market conditions and industry
trends using macro- and microeconomic theory allows individuals, firms, and policy-makers to understand the fiscal considerations of cutting carbon emissions. This is also visible at the consumer level: individuals only began to recognize the true cost of plastic bags after they were taxed. Taxing actions that generate negative environmental externalities can be supplemented by subsidizing positive externalities, such as those generated through renewable energy production. Alternatively, emerging political actions, such as the Green New Deal, aim to revolutionize how climate change is managed and prioritized in the United States and around the world. Without a doubt, the future state of climate change depends on the actions taken by individuals and industry within this generation. If anything should be gathered from this article, it is that climate change is a nexus of many contributing factors. If we do not address each and every one, we will be faced with a crisis even more multifaceted than that which currently confronts us. However, this idea works both ways: the contributing factors to environmental degradation are plentiful, but so are the solutions. Some solutions, such as profiting from limited big game hunting or promoting sustainable ecotourism efforts, are more straightforward and may be achieved to some extent in the next few years. Other solutions, such as carbon capture and drastic reductions in emissions, will take more time and money, along with further technological innovation, to achieve. But no matter how overwhelming the solutions may be, each is an essential piece of the puzzle in combating the climate crisis. Our biosphere is a complex web of interconnected pushes and pulls between organisms, landscape, air, and water. So, it may be unsurprising that our solutions must be as multifaceted as that which we wish to conserve.
References
Adler, Simon. “The Rhino Hunter.” Radiolab, WNYC Studios, 7 Sept. 2015, www.wnycstudios.org/podcasts/radiolab/articles/rhino-hunter.
Al-Chalabi, M. (2015). Vertical farming: Skyscraper sustainability? Sustainable Cities and Society, 18, 74–77. https://doi.org/10.1016/j.scs.2015.06.003
Anderson, Jon. “Can the Wildlife of East Africa Be Saved? A Visit with Richard Leakey.” New Yorker, 20 Feb. 2020, www.newyorker.com/culture/culture-desk/can-the-wildlife-of-east-africa-be-saved-a-visit-with-richard-leakey.
Barnosky, A. D., Matzke, N., Tomiya, S., Wogan, G. O., Swartz, B., Quental, T. B., . . . Ferrer, E. A. (2011). Has the Earth’s sixth mass extinction already arrived? Nature, 471(7336), 51-57. doi:10.1038/nature09678
Baumol, W. J. (2004). The free-market innovation machine: Analyzing the growth miracle of capitalism (4th printing). Princeton Univ. Press.
Bennett, J. (2004). The Force of Things. Political Theory, 32(3), 347-372. doi:10.1177/0090591703260853
Binswanger, M. (2009). Is there a growth imperative in capitalist economies? A circular flow perspective. Journal of Post Keynesian Economics, 31(4), 707–727. https://doi.org/10.2753/PKE0160-3477310410
Boyle, G., & Open University (Eds.). (2004). Renewable energy (2nd ed). Oxford University Press in association with the Open University.
Buckley RC, Morrison C, Castley JG. 2016. Net effects of ecotourism on threatened species survival. PLOS ONE 11(2):e0147988
Center for Food Safety and Applied Nutrition. (2020a). Food Loss and Waste. FDA. https://www.fda.gov/food/consumers/food-loss-and-waste
Charnley, S. (2005). From nature tourism to ecotourism? The case of the Ngorongoro Conservation Area, Tanzania. Hum. Organ. 64(1):75–88.
Clark, B., & York, R. (2005). Carbon metabolism: Global capitalism, climate change, and the biospheric rift. Theory and Society, 34(4), 391–428. https://doi.org/10.1007/s11186-005-1993-4
Comninel, G. C. (2000). English feudalism and the origins of capitalism. The Journal of Peasant Studies, 27(4), 1–53. https://doi.org/10.1080/03066150008438748
Delistavrou, A. (2017). Understanding Ethical Consumption: Types and Antecedents. 1-23.
Desilver, D. (2020). Renewable energy is growing fast in the U.S., but fossil fuels still dominate. Pew Research Center.
Durham WH. 2008. The challenge ahead: reversing vicious cycles through ecotourism. In Ecotourism and Conservation in the Americas, ed. A Stronza, WH Durham, pp. 265–71. Wallingford, UK: CABI
Dyllick, T., & Hockerts, K. (2002). Beyond the business case for corporate sustainability. Business Strategy and the Environment, 11(2), 130-141. doi:10.1002/bse.323
Eccles, R. G., Ioannou, I., & Serafeim, G. (2014). The Impact of Corporate Sustainability on Organizational Processes and Performance. Management Science, 60(11), 2835–2857. https://doi.org/10.1287/mnsc.2014.1984
Eleni P., Paparoidamis N.G., Chumpitaz R. (2015) Understanding Ethical Consumers: A New Approach Towards Modeling Ethical Consumer Behaviours. In: Robinson, Jr. L. (eds) Marketing Dynamism & Sustainability: Things Change, Things Stay the Same…. Developments in Marketing Science: Proceedings of the Academy of Marketing Science. Springer, Cham. https://doi.org/10.1007/978-3-319-10912-1_72
EPA. (2020). Environmental Justice. United States Environmental Protection Agency.
Fell, H., Burtraw, D., & Morgenstern, R. (2020). Climate Policy Design with Correlated Uncertainties in Offset Supply and Abatement Cost. 24.
Ferrando, F. (2013). Posthumanism, Transhumanism, Antihumanism, Metahumanism, and New Materialisms. Existenz, 8(2), 26-32.
Flocks, J. (2012). The Environmental and Social Injustice of Farmworker Pesticide Exposure. UF Law Faculty Publications. https://scholarship.law.ufl.edu/facultypub/268
Saltré, F., & Bradshaw, C. J. A. (2020, July 9). What is a 'mass extinction' and are we in one now? The Conversation. https://theconversation.com/what-is-a-mass-extinction-and-are-we-in-one-now-122535
Friedman, L. (2019, February 21). What Is the Green New Deal? A Climate Proposal, Explained. The New York Times. https://www.nytimes.com/2019/02/21/climate/green-new-deal-questions-answers.html
Garbach, K., Lubell, M., & DeClerck, F. A. J. (2012). Payment for Ecosystem Services: The roles of positive incentives and information sharing in stimulating adoption of silvopastoral conservation practices. Agriculture, Ecosystems & Environment, 156, 27–36. https://doi.org/10.1016/j.agee.2012.04.017
Wall, G. (1997). Is ecotourism sustainable? Environmental Management, 21(4), 483-491.
Goulder, L. H., & Schein, A. R. (2013). Carbon taxes versus cap and trade: A critical review. Climate Change Economics, 04(03), 1350010. https://doi.org/10.1142/S2010007813500103
Grandoni, D., & Sonmez, F. (2019, March 26). Senate defeats Green New Deal, as Democrats call vote a ‘sham.’ Washington Post. https://www.washingtonpost.com/powerpost/green-new-deal-on-track-to-senate-defeat-as-democrats-call-vote-a-sham/2019/03/26/834f3e5e-4fdd-11e9-a3f778b7525a8d5f_story.html
Gray, R. (2019, March 4). Sixth mass extinction could destroy life as we know it – biodiversity expert. Retrieved from https://horizon-magazine.eu/article/sixth-mass-extinction-could-destroy-life-we-know-it-biodiversity-expert.html
Horrigan, L., Lawrence, R. S., & Walker, P. (2002). How sustainable agriculture can address the environmental and human health harms of industrial agriculture. Environmental Health Perspectives, 110(5), 445–456. https://doi.org/10.1289/ehp.02110445
H.Res.109 - 116th Congress: Recognizing the duty of the Federal Government to create a Green New Deal. (2019, February 12). https://www.congress.gov/bill/116th-congress/house-resolution/109/text
Montiel, I., & Delgado-Ceballos, J. (2014). Defining and Measuring Corporate Sustainability: Are We There Yet? Retrieved
September 07, 2020, from https://journals.sagepub.com/doi/10.1177/1086026614526413
Johnson, T. (2019, January 25). The Dakota Access Pipeline and the Breakdown of Participatory Processes in Environmental Decision-Making. Environmental Communication. 13(3), pp. 335-352. https://doi.org/10.1080/17524032.2019.1569544.
Pingali, P. L. (2012). Green Revolution: Impacts, limits, and the path ahead. Proceedings of the National Academy of Sciences, 109(31), 12302–12308. https://doi.org/10.1073/pnas.0912953109
Johnsson, F., Kjärstad, J., & Rootzén, J. (2019). The threat to climate change mitigation posed by the abundance of fossil fuels. Climate Policy, 19(2), 258–274. https://doi.org/10.1080/14693062.2018.1483885
Plan for Climate Change and Environmental Justice, Joe Biden. (n.d.). Joe Biden for President: Official Campaign Website. Retrieved August 15, 2020, from https://joebiden.com/climate-plan/
Koerth-Baker, M. (2010, November 06). Shining Light on the Cost of Solar Energy. Retrieved September 07, 2020, from https://www.nationalgeographic.com/news/energy/2010/11/101105-cost-of-solar-energy/
Prakash, V. (2020, May 13). I’m Joining the Sanders-Biden Taskforce on Climate. Here’s why. Medium. https://medium.com/sunrisemvmt/im-joining-the-sanders-biden-taskforce-on-climate-here-s-why-90a3dd0ff546
Kolbert, E. (2009, May 25). The Sixth Extinction? Retrieved from https://www.newyorker.com/magazine/2009/05/25/the-sixth-extinction
Proulx, G., & Crane, N. J. (2019, September 16). “To see things in an objective light”: the Dakota Access Pipeline and the ongoing construction of settler colonial landscapes. Journal of Cultural Geography, 37(1), pp. 46-66. https://doi.org/10.1080/08873631.2019.1665856.
Kortenkamp, K. V., & Moore, C. F. (2001). Ecocentrism and anthropocentrism: Moral reasoning about ecological commons dilemmas. Journal of Environmental Psychology, 21(3), 261–272. https://doi.org/10.1006/jevp.2001.0205
Kirkby CA, Giudice R, Day B, Turner K, Silvera Soares-Filho B, et al. (2011). Closing the ecotourism-conservation loop in the Peruvian Amazon. Environ. Conserv. 38(01): 6–17
Kremen, C., Iles, A., & Bacon, C. (2012). Diversified Farming Systems: An Agroecological, Systems-based Alternative to Modern Industrial Agriculture. Ecology and Society, 17(4), art44. https://doi.org/10.5751/ES-05103-170444
Living Planet Report. (2018). Retrieved from https://livingplanetindex.org/projects?main_page_project=LivingPlanetReport&home_flag=1
Moore, J. W. (2017). The Capitalocene, Part I: On the nature and origins of our ecological crisis. The Journal of Peasant Studies, 44(3), 594–630. https://doi.org/10.1080/03066150.2016.1235036
Redford, K., & Adams, W. (2009). Payment for Ecosystem Services and the Challenge of Saving Nature. Conservation Biology, 23(4), 785–787. https://doi.org/10.1111/j.1523-1739.2009.01271.x
Reilly, W. K. (1990). The green thumb of capitalism. Policy Review, 54, 16. Business Source Ultimate.
Relman, E. (2019, June 5). Alexandria Ocasio-Cortez says Green New Deal would cost $10 trillion. Business Insider. https://www.businessinsider.com/alexandria-ocasio-cortez-says-green-new-deal-cost-10-trillion-2019-6
“Rhino Populations: Rhino Facts: Save the Rhino International.” Save The Rhino, www.savetherhino.org/rhino-info/population-figures/.
Natter, A. (2019, February 25). Green New Deal Would Cost $93 Trillion, Ocasio-Cortez Critics Say. Fortune. https://fortune.com/2019/02/25/the-green-new-deal-ocasio-cortez/
Rhodes, C. J. (2017). The Imperative for Regenerative Agriculture. Science Progress, 100(1), 80–129. https://doi.org/10.3184/003685017X14876775256165
Norton-Griffiths, Mike. “Whose Wildlife Is It Anyway?” New Scientist, vol. 193, no. 2596, 2007, p. 24, doi:10.1016/s0262-4079(07)60723-4.
Stewart, R. B. (1988). Controlling Environmental Risks through Economic Incentives. Columbia Journal of Environmental Law, 13(2), 153-170.
Nuwer, Rachel. “Kenya Sets Ablaze 105 Tons of Ivory.” National Geographic, 30 Apr. 2016, www.nationalgeographic.com/news/2016/04/160430-kenya-record-breaking-ivory-burn.
Eccles, R. G., & Klimenko, S. (2019, April 26). Shareholders Are Getting Serious About Sustainability. Harvard Business Review. Retrieved September 07, 2020, from https://hbr.org/2019/05/the-investor-revolution
O’Donoghue T, & Rabin M. (2006, November 1). Optimal sin taxes. Journal of Public Economics, 90(10), 1825-1849. https://doi.org/10.1016/j.jpubeco.2006.03.001.
Pievani, T. (2013). The sixth mass extinction: Anthropocene and the human impact on biodiversity. Rendiconti Lincei, 25(1), 85-93. doi:10.1007/s12210-013-0258-9
Pimm, S. L., et al. (2014). The biodiversity of species and their rates of extinction, distribution, and protection. Science, 344(6187).
Rea, A. W., & Munns, W. R. (2017). The value of nature: Economic, intrinsic, or both? Integrated Environmental Assessment and Management, 13(5), 953–955. https://doi.org/10.1002/ieam.1924
Roy, E. A. (2017, March 16). New Zealand river granted same legal rights as human being. Retrieved from https://www.theguardian.com/world/2017/mar/16/new-zealand-river-granted-same-legal-rights-as-human-being
Rubin E. S., Mantripragada H., Marks A., Versteeg P., & Kitchin J. (2012). The outlook for improved carbon capture technology. Progress in Energy and Combustion Science, 38(5), pp. 630-671. https://doi.org/10.1016/j.pecs.2012.03.003.
Schomers, S., & Matzdorf, B. (2013). Payments for ecosystem services: A review and comparison of developing and
industrialized countries. Ecosystem Services, 6, 16–30. https://doi.org/10.1016/j.ecoser.2013.01.002
Seetharaman, Moorthy, K., Patwa, N., Saravanan, & Gupta, Y. (2019). Breaking barriers in deployment of renewable energy. Heliyon, 5(1), e01166. https://doi.org/10.1016/j.heliyon.2019.e01166
Stavins, R. N. (2008). A meaningful U.S. cap-and-trade system to address climate change. St. Louis: Federal Reserve Bank of St Louis. Retrieved from https://search.proquest.com/docview/1698040382?accountid=10422
Stronza, A., Hunt, C., & Fitzgerald, L. (2019). Ecotourism for Conservation? Annual Review of Environment and Resources, 44, 229-253.
Temel, J., et al. (2018). “Limits of Monetization in Protecting Ecosystem Services.” Conservation Biology, vol. 32, no. 5, pp. 1048–1062, doi:10.1111/cobi.13153.
Wilson, M. (2003). Corporate Sustainability: What Is It and Where Does It Come From? Ivey Business Journal. https://iveybusinessjournal.com/publication/corporate-sustainability-what-is-it-and-where-does-it-come-from/
Wood, E. (1998). The Agrarian Origins of Capitalism. Monthly Review, 50(3), 14. https://doi.org/10.14452/MR-050-03-1998-07_2
Zalasiewicz, J., Williams, M., Smith, A., Barry, T. L., Coe, A. L., Bown, P. R., Brenchley, P., Cantrill, D., Gale, A., Gibbard, P., Gregory, F. J., Hounslow, M. W., Kerr, A. C., Pearson, P., Knox, R., Powell, J., Waters, C., Marshall, J., Oates, M., … Stone, P. (2008). Are we now living in the Anthropocene? GSA Today, 18(2), 4. https://doi.org/10.1130/GSAT01802A.1
“The FASTER Principles for Successful Carbon Pricing: An Approach Based on Initial Experience.” The Organization for Economic Cooperation and Development and The World Bank, 2015.
Turkewitz, J. (2017, September 26). Corporations Have Rights. Why Shouldn’t Rivers? New York Times. Retrieved from https://www.nytimes.com/2017/09/26/us/does-the-colorado-river-have-rights-a-lawsuit-seeks-to-declare-it-a-person.html
Turvey, S. T., & Crees, J. J. (2019). Extinction in the Anthropocene. Current Biology, 29(19), R982–R986. https://doi.org/10.1016/j.cub.2019.07.040
UN. (2020). The Climate Crisis - A Race We Can Win. Shaping Our Future: UN75.
US EPA, OAR. (2016, January 12). Global Greenhouse Gas Emissions Data [Overviews and Factsheets]. US EPA. https://www.epa.gov/ghgemissions/global-greenhouse-gas-emissions-data
US EPA, OECA. (2015, August 17). Agriculture and Land Use [Overviews and Factsheets]. US EPA. https://www.epa.gov/agriculture/agriculture-and-land-use
U.S. Food and Drug Administration. (2020b). Food Loss and Waste. FDA. https://www.fda.gov/food/consumers/food-loss-and-waste
Velasquez-Manoff, M. (2018, April 18). Can Dirt Save the Earth? The New York Times. https://www.nytimes.com/2018/04/18/magazine/dirt-save-earth-carbon-farming-climate-change.html
Wagner, T. (2017, December 1). Reducing single-use plastic shopping bags in the USA. Waste Management, 70, 3-12. https://doi.org/10.1016/j.wasman.2017.09.003.
Warner, Gregory. “Up In Flames: Kenya Burns More Than 100 Tons Of Ivory.” National Public Radio, 30 Apr. 2016, www.npr.org/sections/parallels/2016/04/30/476031765/up-in-flames-kenya-burns-more-than-100-tons-of-ivory.
What Is Ecotourism? (n.d.). Retrieved from https://ecotourism.org/what-is-ecotourism/.
Whyte, K. (2017, May 20). The Dakota Access Pipeline, Environmental Injustice, and U.S. Colonialism. Red Ink: An International Journal of Indigenous Literature, Arts, & Humanities, Issue 19.1. Available at SSRN: https://ssrn.com/abstract=2925513.
The Chemistry of Cosmetics
STAFF WRITERS: ANNA KOLLN '22, ANAHITA KODALI '23, MADDIE BROWN '22
BOARD WRITER: NISHI JAIN '21
Cover Image: The cosmetics industry is an incredibly complex, nuanced, and powerful industry that accounts for about $60 billion a year. It varies per region, per culture, and per person – in this paper, we try to elucidate its chemical underpinnings in an attempt to understand the basic building blocks that make up this giant of an industry. Source: Wikimedia Commons
What does the cosmetics industry look like?
The global cosmetics industry is one of the largest industries in the world. Before the coronavirus pandemic hit, it was expected to reach $429.8 billion in revenue within two years and was growing rapidly (Rajput et al., 2019). The biggest consumer of cosmetics products in the world is the United States, while France is the largest exporter (Kumar et al., 2005). Over the past decades, the growth of the global economy, as well as increases in disposable incomes, has led to rising demand for cosmetics products. General market growth has shifted from the west to the east; however, western nations are currently experiencing increasing demand for herbal, natural, and organic products, which has contributed to rapid growth in the cosmetics industry and also offers potential areas for further growth in the coming years (Kumar et al., 2005, Rajput et al., 2019). Undoubtedly, the
global pandemic of 2020 has had a significant impact on the industry, as it is so heavily reliant on disposable income. Despite initial cuts to profit, beauty companies have reported significant growth in e-commerce. Additionally, experts believe that the industry will recover within the next 5 years, as the beauty industry is relatively more stable and secure than other consumer industries (Utroske et al., 2020). There are hundreds of brands of cosmetics products around the world. However, a few mega-companies control most of the industry. In fact, together, L’Oreal, Estee Lauder, Procter & Gamble, Coty Inc., Shiseido, and Johnson & Johnson own 182 cosmetics brands (Willett). All of these companies but Johnson & Johnson are also in the top ten list of global cosmetics companies in 2020. The number one company is L’Oreal, with sales of $29.4 billion annually. The company owns several luxury brands and has acquired many
Figure 1: Benzoyl Peroxide Source: Wikimedia Commons
global beauty brands, including Valentino, Nyx Cosmetics, and Ralph Lauren, which are what currently fuel the company’s overall growth. The next top 9 companies, in order, are: Estee Lauder, Procter & Gamble, Coty Inc., Cosmax, Shiseido, Beiersdorf, Amore Pacific, Kao Corporation, and Intercos S.p.A. (Cosmetics ODM Market, 2019). Over the past decade, skincare has consistently represented the biggest percentage of products sold in the cosmetics market; in 2019, skincare represented about 40% of all products sold. The skincare industry has benefited significantly from movements towards natural products and has a much faster growth rate than the overall market. The most popular brand in the 2010s was Olay Regenerist, which sells a variety of lotions and creams and is owned by Procter & Gamble. The makeup industry is made up of several different products, with foundations making up the biggest proportion of the market share in the United States (Shahbandeh et al., 2019).
Chemistry of Skincare
Skincare is a powerful, lucrative industry. Cosmetics can be broken down into a few different sectors: skincare (including acne medication, wrinkle creams, and skin pigment creams) and makeup. Skincare products receive more limited FDA oversight than more aggressive pharmaceutical products, so their efficacy is not verified as rigorously as it is for drugs. As a result, understanding the underlying chemistry can be especially helpful in assessing the relative effectiveness of various products and techniques.
Benzoyl peroxide is one of the most widely used over-the-counter treatments for acne and is regarded as safe and effective by the FDA. When applied to the skin, it enters the sebum-secreting pores (Dutil et al., 2010). There, it breaks down into free radicals that oxidize and kill the acne-inducing bacterium Propionibacterium acnes, or P. acnes. Due to this mechanism of action, P. acnes is unable to develop resistance to benzoyl peroxide as it can to antibiotic treatments (Tanghetti & Popp et al., 2009). The harmless product benzoic acid is excreted through the urine (Dutil et al., 2010). Benzoyl peroxide is commercially available in many topical forms, with gels being the most common, in concentrations ranging from 2.5% to 10%. Currently, there is no statistically significant evidence that efficacy increases with concentration, although lower concentrations have been shown to cause fewer negative side effects such as skin irritation or dryness (Brandstetter & Maibach et al., 2013).
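A simplified reaction scheme may help make this mechanism concrete (this is our illustrative sketch, not a scheme taken from the cited sources): the weak O–O bond of benzoyl peroxide cleaves homolytically into two benzoyloxy radicals, which oxidize bacterial components and, after abstracting a hydrogen atom, end up as benzoic acid:

\[
(\mathrm{C_6H_5COO})_2 \;\longrightarrow\; 2\,\mathrm{C_6H_5COO^{\bullet}} \;\xrightarrow{\;+\,\mathrm{H^{\bullet}}\;}\; 2\,\mathrm{C_6H_5COOH}
\]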
Figure 2: Salicylic Acid Source: Wikimedia Commons
Figure 3: Tazarotene Source: Wikimedia Commons
Another common topical treatment for acne, among other skin conditions such as warts,
"Topical retinoids are widely considered some of the most effective treatments of acne, as they target multiple contributing factors of acne."
lesions and calluses, is salicylic acid. Although its chemical structure resembles that of a β-hydroxy acid, its aromaticity gives it unique properties, including lipophilicity, which allows it to dissolve in sebum. Once absorbed in the skin, salicylic acid is thought to disrupt intracellular connectors in the outer layer of the skin, resulting in an exfoliating effect while removing excess sebum. This mechanism can result in skin irritation and dryness (Arif et al., 2015). Toxic amounts of salicylic acid in the bloodstream can result in salicylism, which, while rare, has in extreme cases resulted in death (Madan & Levitt et al., 2014). As a result, over-the-counter acne treatments only contain concentrations ranging from 0.5% to 2%, although prescribed acne medications can be as high as 10% and treatments for other skin conditions can be as high as 40% (Akhadan & Bershad et al., 2003). Topical retinoids are widely considered some of the most effective treatments of acne, as they target multiple contributing factors of acne. Retinoids are derivatives of vitamin A. Within cells, retinoids bind to nuclear hormone receptors for retinoic acid, a metabolite of vitamin A, and regulate the expression of certain genes (Thielitz & Gollnick et al., 2008). One result is increased surface skin cell turnover, which clears the skin of clogged pores and impedes new clogs from forming. This effect also discourages the proliferation of P. acnes, which thrives in closed, anaerobic pores (Wolf et al., 2002). Additionally, retinoids exhibit an anti-inflammatory effect by inhibiting certain immune-response receptors and pathways. As with many acne treatments, irritation and dryness are common side effects (Thielitz & Gollnick et al., 2008). The retinoids currently approved by the FDA for topical use are tretinoin, adapalene, and tazarotene. They are offered in concentrations ranging from 0.02% to 0.3% depending on the retinoid (Akhadan & Bershad et al., 2003), and only adapalene is available over the counter. Clinical studies have
shown tazarotene to be the most effective against acne, and adapalene to have the fewest adverse effects on the skin (Thielitz & Gollnick et al., 2008). Despite their success as acne treatments, retinoids are known to have several more severe side effects. Orally prescribed retinoids are well-established teratogens due to their effects on cell growth. Although no link has been made between topical retinoids and birth defects, it is not advisable to use them during pregnancy due to the potential risk (Panchaud et al., 2012). Isotretinoin is controversial for its possible association with depression and suicide. The drug’s ability to cross the blood-brain barrier suggests its potential to interfere with brain receptors. Despite its reputation, reviews have not found a statistically significant link between the medication and adverse mental health effects, although monitoring patients is recommended (Huang & Cheng 2017). Other retinoids have not been associated with increased risk of depression. Due to the wide range of physical manifestations and severity of acne, there is no catch-all treatment. Acne is a multifaceted condition with several targets for medication. An overproduction of sebum and skin cells clogs skin pores, causing P. acnes to flourish, triggering an immune response and inflammation (Leyden et al., 2003). In mild to moderate cases, combination treatments of the aforementioned topical medications are often recommended to treat multiple aspects of acne formation. For example, benzoyl peroxide in combination with tretinoin has proven to be an effective treatment (Leyden et al., 2003). Tretinoin works to prevent initial pore clogging while benzoyl peroxide targets P. acnes, resulting in a multipronged prevention of acne. Clinical studies have shown that increased risk of skin irritation from treatments like this can
Figure 4: Radicals that are present on the skin can be identified in an NMR machine and produce scans resembling this one Source: Wikimedia Commons
be averted by applying topical treatments at different times of day (Leyden et al., 2003). Topical antibacterial medications such as clindamycin are often prescribed alone or in combination with other treatments to combat acne, as they target P. acnes proliferation. However, the development of bacterial resistance to these treatments is a concern. Combination treatment of antibiotics and benzoyl peroxide can combat this problem, as P. acnes cannot become resistant to benzoyl peroxide (Seidler & Kimball et al., 2010). Acne is generally very treatable if patients are able to match the correct available medications to their individual condition. Alpha lipoic acid, an antioxidant, was discovered and isolated in 1951 as a part of the enzymatic complex involved in oxidative metabolism (Perricone et al., 2000, Sherif et al., 2014). When alpha lipoic acid is applied to the skin topically, the substance is reduced to dihydrolipoate, which is itself an effective reducing agent that can eliminate toxic superoxide, hydroxyl, and nitric oxide radicals (Matsugo et al., 2011). This reducing agent can also increase the production of antioxidants and prevent lipid peroxidation (Podda et al., 2001, Zhang et al., n.d.). It is a powerful agent that acts not only against UV light, owing to its protection against radicals, but also inhibits NFkB signaling, giving it potent anti-inflammatory capabilities as well (Puizina-Ivić et al., 2010). Alpha hydroxy acids are a class of compounds
that consist of carboxylic groups substituted with hydroxyl groups on the alpha (adjacent) carbon (Babilas et al., 2012). These organic acids occur naturally in many fruits but can also be created synthetically, as they are in many skincare products. Alpha hydroxy acids are commonly used in skin moisturizing serums or wrinkle reduction creams due to their ability to increase water holding capacity, thereby also increasing skin hydration and skin turgor (Edison et al., 2004, Green et al., 2009). AHAs also induce desquamation, plasticization, and normalization of epidermal differentiation (through interference with intercellular ionic bonding), which can reduce corneocyte cohesion and facilitate keratolysis (Kornhauser et al., 2012). Alpha hydroxy acids are used in home-use skin peels, and their common forms include lactic acid, citric acid, mandelic acid, glycolic acid, tartaric acid, ascorbic acid, and malic acid (Tung et al., 2000).
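Schematically, the defining feature of an alpha hydroxy acid is a hydroxyl group on the carbon adjacent to the carboxylic acid group. A generic structural formula (included here purely for illustration) is:

\[
R\!-\!\mathrm{CH(OH)}\!-\!\mathrm{COOH}
\]

For glycolic acid, R is a hydrogen atom; for lactic acid, R is a methyl group.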
"Copper peptides, an anti-aging component, the most common of which is glycyl-lhistidyl-l-lysine or GHK, stimulates blood vessel and nerve outgrowth, and supports the function of dermal fibroblasts."
Copper peptides are an anti-aging component; the most common, glycyl-l-histidyl-l-lysine or GHK, stimulates blood vessel and nerve outgrowth and supports the function of dermal fibroblasts (Li et al., 2016, Pickart et al., 2008, Pickart et al., 2018). It additionally has potent anti-cancer and anti-inflammatory capabilities (through inhibition of NFkB signaling) (Pickart et al., 2015). Dermatologists have conducted multiple controlled studies on aged skin showing that GHK has potent effects in tightening skin, improving elasticity and firmness, and reducing fine lines, wrinkles, photodamage, and hyperpigmentation (Mazurowska et al., 2008). GHK complexes with copper activate
many remodeling processes, including those related to macrophages and mast cells, and also stimulate the synthesis of collagen, elastin, metalloproteinases, anti-proteases, vascular endothelial growth factor, fibroblast growth factor 2, nerve growth factor, neurotrophins 3 and 4, and erythropoietin (Pickart et al., 2015).
"...eye makeup has existed for thousands of years. In Ancient Egypt, men and women used kohl - a paint like substance containing lead, metal, and ash- to paint dark circles around their eyes to ward off disease."
Dimethylaminoethanol (DMAE), an agent commonly used in anti-wrinkle medications, is an analog of the B vitamin choline and a precursor of acetylcholine (Liu et al., 2014). DMAE is a potent anti-inflammatory agent that affects acetylcholine synthesis, storage, secretion, metabolism, and receptivity (Clares et al., 2010). When evaluated in a placebo-controlled trial, DMAE was shown to be efficacious (as well as safe) in mitigating forehead lines and periorbital fine lines, improving lip shape and fullness, and improving the overall appearance of aging skin (Tadini et al., 2009).
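For context, DMAE's relationship to acetylcholine can be sketched as a standard two-step sequence from biochemistry (our illustration, not a pathway described in the cited trial): methylation of DMAE yields choline, and choline is then acetylated using acetyl-CoA to form acetylcholine:

\[
\mathrm{(CH_3)_2NCH_2CH_2OH} \;\xrightarrow{\text{methylation}}\; \mathrm{(CH_3)_3N^{+}CH_2CH_2OH} \;\xrightarrow{\text{acetyl-CoA}}\; \text{acetylcholine}
\]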
Figure 5: Alpha hydroxy acids are commonly used in wrinkle reduction skincare creams Source: Flickr
Hydroquinone was used beginning in the 1950s in over-the-counter skin lightening serums, but its use was halted in the early 2000s due to health concerns (Boyle et al., 1986), mainly over the presence of arbutin. Although many other products on the market contain arbutin, many of which are hair products, commercial availability for skin lightening was discontinued (Matsumoto et al., 2016, O’Donoghue et al., 2006). Hydroquinone’s skin lightening capabilities stem from its use as a polymerization inhibitor, which removes circulating melanin and lightens skin (Andersen et al., 2010, Schwartz et al., 2020). Kojic acid is a naturally occurring metabolite produced by fungi that can inhibit catecholase and tyrosinase activity (Burnett et al., 2010). Kojic acid functions as an antioxidant in skin lightening and acts in a time-dependent fashion which, like hydroquinone, reduces the amount of circulating melanin (Cabanes et al., 1994). This time-dependence, which is unaltered by prior incubation of the enzyme with the inhibitor, is consistent with a first-order chemical reaction involving catecholase inhibition. In addition to skin lightening, kojic acid has been used in antioxidant, anti-proliferative, anti-inflammatory, and radioprotective capacities (Saeedi et al., 2019). L-ascorbic acid is a water-soluble enantiomer of vitamin C that has several proven functions within the skincare industry (Crisan et al., 2015). It has proven to be an effective antioxidant which destroys free radicals and strengthens
protection against UV light, as well as removing discoloration and helping fight melasma, post-acne discoloration, and pigmentation (Dulińska-Molak et al., 2019). L-ascorbic acid also functions as an immunostimulant by strengthening the immunity of the skin, which is weakened under the influence of UV rays, meaning that it also prevents carcinogenic changes to the skin (Al-Niaimi et al., 2017). However, the most prolific quality of this molecule is its ability to stimulate collagen synthesis, which declines with age (Fitzpatrick et al., 2002). It additionally increases the density of skin, improves skin elasticity, and smooths minor surface wrinkles (Elmore et al., 2005). On a more biochemical level, it inhibits the activity of MMP-1, an enzyme of the metalloproteinase class that causes collagen and elastin degeneration (Telang et al., 2013).
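The antioxidant chemistry described above can be summarized by the standard redox half-reaction of vitamin C (a textbook equation, shown here for illustration): in quenching free radicals, ascorbic acid donates two electrons and is oxidized to dehydroascorbic acid:

\[
\mathrm{C_6H_8O_6} \;\longrightarrow\; \mathrm{C_6H_6O_6} + 2\,\mathrm{H^{+}} + 2\,e^{-}
\]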
Chemistry of Makeup
Eye makeup comes in a variety of bright colors, everything from neutral browns to neon pinks and greens, but eye makeup has existed for thousands of years. In Ancient Egypt, men and women used kohl, a paint-like substance containing lead, metal, and ash, to paint dark circles around their eyes to ward off disease (Long, 2017). Kohl is not common in the world today, as lead can be extremely toxic. Today, eye makeup, including eyeshadow and eyeliner, is made from a wide variety of ingredients and varies depending on the brand. However, while the ingredients list for these products can be extremely long, there are a series of similar
Figure 6: Copper peptide GHK Source: Wikimedia Commons
ingredients they have in common. A quick scan of a standard eyeshadow palette will most likely reveal that the top ingredient is either talc or mica. Talc is a naturally occurring mineral made from magnesium, silicon, and oxygen. It is the lowest mineral on the Mohs scale, making it one of the softest minerals in the world (King, Talc: The Softest Mineral, n.d.). Talc is added to powders and creams as a filler. In cosmetics, talc is ground into a fine powder that can be added to eye makeup in order to ensure the product slides on smoothly and to make colors more opaque (Goins, 2012). However, talc, while not inherently harmful, has the potential to be contaminated with a carcinogen. Johnson & Johnson was recently in the news due to lawsuits claiming that its talc baby powder resulted in ovarian cancer (Rabin, 2020). Talc, like all minerals, is harvested from deposits in the Earth. However, talc deposits often run near or intersect with asbestos deposits. Asbestos is a group of minerals known to cause lung, throat, and ovarian cancer. If talc is not carefully inspected, asbestos can contaminate cosmetic products (Asbestos Exposure and Cancer Risk, 2017). On the other hand, mica is a metallic sheet mineral. Like talc, mica can be added to makeup as a filler and to help products apply smoothly. However, mica can also be used to help add color to the makeup, since mica comes in a variety of natural colors (Goins, 2012). It
also does not risk the same contamination issues as talc. Another common ingredient in eye makeup is zinc stearate. Zinc stearate is a zinc salt of a fatty acid. Fatty acids are carboxylic acids that contain a long chain of carbons and hydrogens. In the case of a salt, part of this fatty acid is negatively charged and associated with a positive ion such as zinc. Zinc derivatives are often added to eye makeup to act as adhesives as well as thickening agents (Zinc Stearate, Cosmetics Information). Some makeup may use magnesium derivatives instead of zinc, but the effects are the same. Cosmetic companies also usually add a “slip” to eye makeup in order to improve the texture. A common slip in the eye
"A quick scan of a standard eyeshadow palette will most likely reveal the top ingredient is either talc or mica."
Figure 7: DMAE Source: Wikimedia Commons
Figure 8: Hydroquinone Source: Wikimedia Commons
"Although there has been a recent push to minimize the amount of preservatives in products, preservatives are vital to ensuring cosmetics are not contaminated with bacteria."
Figure 9: Kojic Acid Source: Wikimedia Commons
makeup is dimethicone. Dimethicone is a man-made silicone polymer. Silicones are a family of polymers made from siloxane monomers whose backbones consist of a long chain of non-carbon atoms. Silicones have many useful properties, including high heat resistance (Britannica, 2020). As such, they are found in everything from medicine to cookware. However, in the case of cosmetics, silicones are valued for their flexibility. The backbone of a silicone polymer consists of a central silicon atom bound to an oxygen atom. The silicon-oxygen bond has a very low rotational barrier, meaning that the bond can rotate ‘freely’ in space (Polymer Properties Database). As a result, silicone products are often very flexible and smooth. This unique flexibility means that silicones can vastly improve the texture of eye makeup, allowing eye makeup to glide onto the eyelid with relative ease. As with much of the cosmetics industry, the exact slip varies by company and there is no universal formula. Some companies may elect to use silicone alternatives such as the fatty ester ethylhexyl palmitate. In modern cosmetics, the bright colors in eye makeup come from color additives. The Federal Food, Drug, and Cosmetic Act of 1938 regulates the color additives that can be used in cosmetics, and only a subportion of this list is approved for use on the eye. The list is extensive and includes everything from aluminum powder to FD&C Yellow No. 5 (Center for Food Safety and Applied Nutrition, 2017). However, some companies have found that the pigments approved by the FDA cannot give them all the colors they desire. Therefore, in recent years, there has been an increase in eye makeup marked “not safe for use around the eyes.” In this case, these pigments have been approved for use in cosmetics but are not approved for use around the eye due to increased risk of staining and allergic reactions
(Lebsack, 2019). The last component of eye makeup is preservatives. Although there has been a recent push to minimize the amount of preservatives in products, preservatives are vital to ensuring cosmetics are not contaminated with bacteria. A common family of preservatives in cosmetics is parabens, with methylparaben and butylparaben being widely distributed. Parabens are currently approved for use by the FDA (and are often found in haircare and some skincare due to their effectiveness and low price), but concerns have been raised about correlations between parabens and cancer (Ross, 2019). In 2004, researchers found a concentrated number of parabens in breast cancer tissue, launching debates about whether parabens were promoting cancer growth (Harvey, 2004). Since parabens can act as hormone disruptors, some researchers are concerned about the potential effects of increased paraben levels in the body (Ross, 2019). However, recent human clinical studies have found no correlation between parabens and cancer, and the CDC has declared there is insufficient evidence to be concerned about paraben use (Ross, 2019). Nonetheless, parabens have begun to fall out of favor and are being replaced with other preservatives. Two popular alternatives are glycol, a water-soluble preservative that can also act as a moisturizer, and tocopherol, a vitamin that is also found in skincare (Seladi-Schulman, 2018). Many of the ingredients of eye makeup carry over into face makeup. Face makeup comes in many different forms and there is large variation in formulas. This section will focus on liquid face products, such as foundation or concealer. Much like eye makeup, face makeup begins with a base to help the ingredients stay together and apply smoothly on the skin. In the modern cosmetics market, most face makeup uses a water-silicone base. As in eye makeup,
dimethicone is most commonly used as the silicone component due to its ability to cover skin imperfections and improve the texture of the product (Kimbrough, 2013). However, dimethicone is a hydrophobic molecule. As a result, a water-silicone base should begin to separate as silicone molecules repel the water molecules. To combat this, face makeup does not simply contain silicone and water mixed together in solution. Instead, silicone and water are bound through emulsifiers, preventing the product from breaking up (Kimbrough, 2013). A common emulsifier is dimethicone crosspolymer. In dimethicone crosspolymer, dimethicone and water are linked through covalent bonds, preventing the components of the base from separating even if they repel one another (Dimethicone Crosspolymer, 2020). Dimethicone crosspolymer is specifically useful for face makeup because the crosslinked polymers form a film over the skin that keeps the active ingredients in contact with the skin (Dimethicone Crosspolymer, 2020). Face makeup has a much wider array of possible ingredients compared to eye makeup, and other emulsifiers such as polysilicone-11 may be used for similar effects. Aside from texture, the pigment of face makeup is vital. Consumers are searching for the perfect shade and won’t buy face makeup that doesn’t match their skin tone. The most common sources of pigment in face products are iron oxides and titanium dioxide. Iron oxide is the main colorant used in face makeup and naturally occurs in several colors, primarily red, yellow, and black (Iron Oxides, 2020). However, iron oxide is produced synthetically for cosmetics, allowing a much wider shade range. Synthetic iron oxides can be mixed in different color combinations or added to other colorants like titanium dioxide until the desired shade is reached (Iron Oxides, 2020).
The last major category of cosmetics is lip makeup. The first lip product most people think of is a traditional lipstick that spins up from a small compartment; however, lip products also include lip gloss, lip balm, bullet lipsticks, and multiple other “specialized” products that companies market to consumers. For simplicity, the traditional lipstick will be examined. Lipsticks can be broken down into three main ingredients: waxes, oils, and emollients (Freeman, 2009). The waxes are the foundation of any lipstick and allow lipstick to be molded into the well-known cylindrical shape. The most common waxes for lipstick are beeswax, paraffin, and carnauba wax (Freeman, 2009). The next ingredients are oils, such as lanolin oil or cocoa butter. The oils allow lipsticks to deposit color onto the lips without crumbling and falling apart (Freeman, 2009). However, the real fun in lipstick is the color. The color in lipsticks can come from a variety of natural or synthetic ingredients similar or identical to the color additives found in eye makeup. Perhaps the most popular lipstick color is red. The red coloring in lipstick most commonly comes from a compound called carmine. Carmine is a deep red pigment produced from carminic acid. Carmine is not just found in makeup but is used for food coloring as well (Yoquinto, 2013). However, some consumers have begun to avoid carminic acid because it is produced by crushing and soaking cochineal beetles in an acidic solution. Companies that want to be vegan or cruelty-free turn to other synthetic
Figure 10: L-ascorbic acid Source: Wikimedia Commons
"Lipsticks can be broken down into three main ingredients: waxes, oils, and emollients."
Figure 11: A siloxane monomer with the Si atom bound to an oxygen atom. The R groups represent different substituents, which vary based on silicone type. Source: Wikimedia Commons
red dyes. However, some synthetic dyes such as Red No. 6 are derived from petroleum, which raises its own concerns (Yoquinto, 2013).
Conclusion "The cosmetics industry commands billions of dollars a year has products that appear to target the inherent machinery of the cells in order to achieve a result that is altered from the pretreatment condition."
The cosmetics industry commands billions of dollars a year and offers products that appear to target the inherent machinery of cells in order to achieve a result that is altered from the pre-treatment condition. Of the multi-billion-dollar market – much of which relies on heavy social media marketing through ads, celebrity endorsement, and other testimonials – 23% is skincare (second only to haircare) (Dobric, 2020). The skincare market is largely dominated by conglomerates that offer at-home treatments for conditions that range from acne to wrinkles to dark spots, among others. The beauty segment is not far behind, however, as it is the industry’s most profitable branch – with makeup and eyeshadow contributing most to this trend (Dobric, 2020). While there has been a recent push toward eco-friendliness among products, there has been more limited effort to understand just how the products are eco-friendly. Along the same lines, there has been little attempt to understand the chemistry behind the cosmetics industry. This limited understanding, together with the limited FDA oversight of topical treatments (which make up the majority of the cosmetics industry), has allowed these massive makeup empires to arise (Cosmetics ODM Market, 2019). Understanding the inherent cellular machinery and its medical manipulation is key to making sound cosmetic investments – more widespread knowledge of the methods may shed light on their efficacy and may result in altered consumer choices.

References

Akhavan, A., & Bershad, S. (2003). Topical Acne Drugs. American Journal of Clinical Dermatology, 4(7), 473–492. https://doi.org/10.2165/00128071-200304070-00004
Arif, T. (2015). Salicylic acid as a peeling agent: A comprehensive review. Clinical, Cosmetic and Investigational Dermatology, 8, 455–461. https://doi.org/10.2147/CCID.S84765
Asbestos Exposure and Cancer Risk Fact Sheet. (2017). Retrieved September 03, 2020, from https://www.cancer.gov/about-cancer/causes-prevention/risk/substances/asbestos/asbestos-fact-sheet
Bakkali, F., Averbeck, S., Averbeck, D., & Idaomar, M. (2008). Biological effects of essential oils – A review. Food and Chemical Toxicology, 46(2), 446–475. https://doi.org/10.1016/j.fct.2007.09.106
Brandstetter, A. J., & Maibach, H. I. (2013). Topical dose justification: Benzoyl peroxide concentrations. Journal of Dermatological Treatment, 24(4), 275–277. https://doi.org/10.3109/09546634.2011.641937
Castañeda-Ovando, A., Pacheco-Hernández, Ma. de L., Páez-Hernández, Ma. E., Rodríguez, J. A., & Galán-Vidal, C. A. (2009). Chemical studies of anthocyanins: A review. Food Chemistry, 113(4), 859–871. https://doi.org/10.1016/j.foodchem.2008.09.001
Center for Food Safety and Applied Nutrition. (2017). Summary of Color Additives for Use in the United States. Retrieved September 03, 2020, from https://www.fda.gov/industry/color-additive-inventories/summary-color-additives-use-united-states-foods-drugs-cosmetics-and-medical-devices
Clares, B., Ruíz, M. A., Morales, M. E., Tamayo, J. A., & Lara, V. G. (2010). Structural characterization and stability of dimethylaminoethanol and dimethylaminoethanol bitartrate for possible use in cosmetic firming. Journal of Cosmetic Science, 61(4), 269–278.
Cosmetics ODM Market: Information by Application (Skincare, Makeup, Haircare, Others)—Forecast Till 2026 (Rep. No. SR1404). (2019, December 2). Retrieved July 25, 2020, from Straits Research website: https://straitsresearch.com/report/cosmetics-odm-market
Del Valle, E. M. M. (2004). Cyclodextrins and their uses: A review. Process Biochemistry, 39(9), 1033–1046. https://doi.org/10.1016/S0032-9592(03)00258-9
Dimethicone Crosspolymer. (2020, January 10). Retrieved September 03, 2020, from https://thedermreview.com/dimethicone-crosspolymer/
Dutil, M. (2010). Benzoyl Peroxide: Enhancing Antibiotic Efficacy in Acne Management. Skin Therapy Letter, 15(10). https://www.skintherapyletter.com/acne/benzoyl-peroxide-antibiotic-efficacy/
Editors of Encyclopaedia Britannica. (2020, March 04). Silicone. Retrieved September 03, 2020, from https://www.britannica.com/science/silicone
Frith, D. K. T. (n.d.). Globalizing Beauty: A Cultural History of the Global Beauty Industry. 33.
Freeman, S. (2009, March 09). How Lipstick Works. Retrieved September 03, 2020, from https://health.howstuffworks.com/skin-care/beauty/skin-and-makeup/lipstick2.htm
Gardner, T. L., & BèE, C. (2020). THE COSMETIC INDUSTRY. 13.
Goins, L. (2012, November 15). The Makeup of Makeup: Decoding Eye Shadow. Retrieved September 04, 2020, from https://www.webmd.com/beauty/features/decoding-eyeshadow
Green, B. A., Yu, R. J., & Van Scott, E. J. (2009). Clinical and cosmeceutical uses of hydroxyacids. Clinics in Dermatology, 27(5), 495–501. https://doi.org/10.1016/j.clindermatol.2009.06.023
Grimes, P. E., Green, B. A., Wildnauer, R. H., & Edison, B. L. (2004). The use of polyhydroxy acids (PHAs) in photoaged skin. Cutis, 73(2 Suppl), 3–13.
Harvey, P. W. (2004). Discussion of concentrations of parabens
in human breast tumours. Journal of Applied Toxicology, 24(4), 307–310. https://doi.org/10.1002/jat.991
Huang, Y.-C., & Cheng, Y.-C. (2017). Isotretinoin treatment for acne and risk of depression: A systematic review and meta-analysis. Journal of the American Academy of Dermatology, 76(6), 1068–1076.e9. https://doi.org/10.1016/j.jaad.2016.12.028
Iron Oxides (CI 77491, CI 77492, CI 77499). (2020, February 24). Retrieved September 03, 2020, from https://thedermreview.com/iron-oxides-ci-77491-ci-77492-ci-77499/
Jain, N., & Chaudhri, S. (2009). History of cosmetics. Asian Journal of Pharmaceutics, 3(3), 164. https://doi.org/10.4103/0973-8398.56292
Kim, S.-H., Shum, H. C., Kim, J. W., Cho, J.-C., & Weitz, D. A. (2011). Multiple Polymersomes for Programmed Release of Multiple Components. Journal of the American Chemical Society, 133(38), 15165–15171. https://doi.org/10.1021/ja205687k
Kimbrough, S. (2013). Anatomy of a Beauty Product: Liquid Foundations. Retrieved September 03, 2020, from https://www.beautylish.com/a/vxais/anatomy-of-liquid-foundations
King, H. (n.d.). Talc: The Softest Mineral. Retrieved September 03, 2020, from https://geology.com/minerals/talc.shtml
Krakowski, A. C., Stendardo, S., & Eichenfield, L. F. (2008). Practical Considerations in Acne Treatment and the Clinical Impact of Topical Combination Therapy. Pediatric Dermatology, 25(s1), 1–14. https://doi.org/10.1111/j.1525-1470.2008.00667.x
Kumar, S. (2005). Exploratory analysis of global cosmetic industry: Major players, technology and market trends. Technovation, 25(11), 1263–1272. https://doi.org/10.1016/j.technovation.2004.07.003
Lebsack, L. (2019). Neon Eyeshadow Has Never Been More Popular, But Is It Safe? Retrieved September 03, 2020, from https://www.refinery29.com/en-us/2019/07/238234/neon-eyeshadow-makeup-palette-pigment-safety
Leyden, J. J. (2003). A review of the use of combination therapies for the treatment of acne vulgaris. Journal of the American Academy of Dermatology, 49(3, Supplement), S200–S210. https://doi.org/10.1067/S0190-9622(03)01154-X
Lummiss, J. A. M., Oliveira, K. C., Pranckevicius, A. M. T., Santos, A. G., dos Santos, E. N., & Fogg, D. E. (2012). Chemical Plants: High-Value Molecules from Essential Oils. Journal of the American Chemical Society, 134(46), 18889–18891. https://doi.org/10.1021/ja310054d
Madan, R. K., & Levitt, J. (2014). A review of toxicity from topical salicylic acid preparations. Journal of the American Academy of Dermatology, 70(4), 788–792. https://doi.org/10.1016/j.jaad.2013.12.005
Mazurowska, L., & Mojski, M. (2008). Biological activities of selected peptides: Skin penetration ability of copper complexes with peptides. Journal of Cosmetic Science, 59(1), 59–69.
O’Donoghue, J. L. (2006). Hydroquinone and its analogues in dermatology—A risk-benefit viewpoint. Journal of Cosmetic Dermatology, 5(3), 196–203. https://doi.org/10.1111/j.1473-2165.2006.00253.x
Panchaud, A., Csajka, C., Merlob, P., Schaefer, C., Berlin, M., Santis, M. D., Vial, T., Ieri, A., Malm, H., Eleftheriou, G., Stahl, B., Rousso, P., Winterfeld, U., Rothuizen, L. E., & Buclin, T. (2012). Pregnancy Outcome Following Exposure to Topical Retinoids: A Multicenter Prospective Study. The Journal of Clinical Pharmacology, 52(12), 1844–1851. https://doi.org/10.1177/0091270011429566
Pickart, L., & Margolina, A. (2018). Regenerative and Protective Actions of the GHK-Cu Peptide in the Light of the New Gene Data. International Journal of Molecular Sciences, 19(7). https://doi.org/10.3390/ijms19071987
Podda, M., Zollner, T. M., Grundmann-Kollmann, M., Thiele, J. J., Packer, L., & Kaufmann, R. (2001). Activity of alpha-lipoic acid in the protection against oxidative stress in skin. Current Problems in Dermatology, 29, 43–51. https://doi.org/10.1159/000060652
Polymer Properties Database. Retrieved September 03, 2020, from https://polymerdatabase.com/polymer classes/Silicone type.html
Polysaccharide Applications: Cosmetics and Pharmaceuticals. ACS Symposium Series 737. Edited by Magda A. El-Nokaly and Helena A. Soini (The Procter and Gamble Company). American Chemical Society: Washington, DC (Distributed by Oxford University Press). 1999. xvi + 348 pp. $135. ISBN 0-8412-3641-0. (2000). Journal of the American Chemical Society, 122(50), 12614–12614. https://doi.org/10.1021/ja004825k
Rabin, R. (2020, June 23). Women With Cancer Awarded Billions in Baby Powder Suit. Retrieved September 03, 2020, from https://www.nytimes.com/2020/06/23/health/baby-powder-cancer.htm
Rajput, N. (2019). Cosmetics Market by Category (Skin & Sun Care Products, Hair Care Products, Deodorants, Makeup & Color Cosmetics, Fragrances) and by Distribution Channel (General departmental store, Supermarkets, Drug stores, Brand outlets) – Global Opportunity Analysis and Industry Forecast, 2014–2022 (pp. 1–137, Rep.). Portland, OR: Allied Market Research.
Ross, R. (2019, February 26). What Are Parabens? Retrieved September 03, 2020, from https://www.livescience.com/64862-what-are-parabens.html
Saeedi, M., Eslamifar, M., & Khezri, K. (2019). Kojic acid applications in cosmetic and pharmaceutical preparations. Biomedicine & Pharmacotherapy, 110, 582–593. https://doi.org/10.1016/j.biopha.2018.12.006
Seidler, E. M., & Kimball, A. B. (2010). Meta-analysis comparing efficacy of benzoyl peroxide, clindamycin, benzoyl peroxide with salicylic acid, and combination benzoyl peroxide/clindamycin in acne. Journal of the American Academy of Dermatology, 63(1), 52–62. https://doi.org/10.1016/j.jaad.2009.07.052
Seladi-Schulman, J. (2018, September 29). Tocopheryl Acetate: Uses, Benefits, and Risks. Retrieved September 03, 2020, from https://www.healthline.com/health/tocopheryl-a
Shahbandeh, M. (2019, October 24). Cosmetics Industry Statistics and Facts. Retrieved July 25, 2020, from Statista website: https://www.statista.com/topics/3137/cosmetics-industry/
Tanghetti, E., & Popp, K. (2009). A Current Review of Topical Benzoyl Peroxide: New Perspectives on Formulation and Utilization. Dermatologic Clinics, 27(1), 17–24. https://doi.org/10.1016/j.det.2008.07.001
Telang, P. S. (2013). Vitamin C in dermatology. Indian Dermatology Online Journal, 4(2), 143–146. https://doi.org/10.4103/2229-5178.110593
Thielitz, A., & Gollnick, H. (2008). Topical Retinoids in Acne Vulgaris. American Journal of Clinical Dermatology, 9(6), 369–381. https://doi.org/10.2165/0128071-200809060-00003
Toxic Potential of Materials at the Nanolevel | Science. (n.d.). Retrieved July 7, 2020, from https://science.sciencemag.org/content/311/5761/622.abstract
Utroske, D. (2020, June 1). Coronavirus Impact: What the market research says. Retrieved July 25, 2020, from https://www.cosmeticsdesign.com/Article/2020/04/20/Coronavirus-Impact-what-market-research-says-about-beauty
Willett, M. (2017, July 29). These 7 companies control almost every single beauty product you buy. Retrieved July 25, 2020, from https://www.businessinsider.com/companies-beauty-brands-connected-2017-7
Wolf, J. E. (2002). Potential anti-inflammatory effects of topical retinoids and retinoid analogues. Advances in Therapy, 19(3), 109–118. https://doi.org/10.1007/BF02850266
Worret, W.-I., & Fluhr, J. W. (2006). Acne therapy with topical benzoyl peroxide, antibiotics and azelaic acid. JDDG: Journal der Deutschen Dermatologischen Gesellschaft, 4(4), 293–300. https://doi.org/10.1111/j.1610-0387.2006.05931.x
Yoquinto, L. (2013, May 30). The Truth About Red Food Dye Made from Bugs. Retrieved September 03, 2020, from https://www.livescience.com/36292-red-food-dye-bugs-cochineal-carmine.html
Zinc Stearate. (2016). Retrieved September 04, 2020, from https://cosmeticsinfo.org/ingredient/zinc-stearate
Inoculation to Operation Warp Speed: The Evolution of Vaccines STAFF WRITERS: ANDREW SASSER '23, ANNA LEHMANN '23, CAROLINA GUERRERO '23, MICHAEL MOYO '22, SOPHIA ARANA '22, SOPHIA KOVAL '21, SUDHARSAN BALASUBRAMANI '22
BOARD WRITERS: ANNA BRINKS '21
Cover image: Vaccines have revolutionized modern medicine. From Edward Jenner’s smallpox vaccine to the current race for a COVID-19 vaccine, some of the brightest minds in science have investigated this remarkable technology. Source: Pixabay
Introduction
1.1 Impact of Vaccines on Modern Medicine and Global Health
An injection of attenuated or killed microorganisms (bacteria or viruses) is all it takes to awaken the immune system, allowing it to produce and recruit antibodies – proteins that patrol the body via the blood, recognize foreign substances, and annihilate them. Even after exposure to foreign substances, antibodies continue to circulate, providing protection against future exposure to pathogens, which include causative agents of catastrophic diseases such as polio and measles. This process of administering a vaccine to initiate immunity against a disease has made an enormous contribution to global health, with both humans and other animals benefiting, especially in the developing world. Mortality from smallpox and measles was
massive in the pre-vaccination period, and an epidemic could wipe out up to half of an affected population (Greenwood, 2014). Fortunately, through vaccination, smallpox was completely eradicated in 1979, becoming the only human infection to be eradicated through vaccination. Local transmission of measles, another potential candidate for eradication, is being disrupted in the Americas by intensive surveillance campaigns and rapid responses following detection of cases. The eradication of the rinderpest virus in 2011 represented another major milestone in the control of infectious diseases and the continued contribution of vaccination to global health (Greenwood, 2014). Rinderpest, closely related to measles, can cause high mortality in cattle, impoverishing families in developing countries dependent upon these animals and making them susceptible to malnutrition and various infectious diseases. Recent close interactions between research groups developing
Figure 1: Global number of child deaths per year, by cause of death. The number of children younger than 5 years old who died in a year is depicted in the graph. The height of the bar shows the total number of deaths with colored sections showing the number of children who died of diseases that are wholly or partially preventable by vaccines. The number of child deaths for which there are vaccines available declined from 5.5 million deaths in 1990 to 1.8 million deaths 27 years later. Graphic: Our World in Data, Data Source: Samantha Vanderslott and Bernadeta Dadonaite, 2013
human and veterinary vaccines, facilitated by organizations such as the Jenner Vaccine Institute, have prompted further positive developments. For example, researchers have found that the tuberculosis vaccine could be used for both humans and their domestic animals (Greenwood, 2014). Overall, vaccines have revolutionized global health and the practice of medicine, preventing an estimated 2 to 3 million deaths each year (WHO, 2020). In fact, between 1990 and 2017, the number of child deaths for which there are vaccines available dropped a staggering amount, from 5.5 million to 1.8 million deaths, as shown by Figure 1.
1.2 Basic Overview of the Immune System
Humans and other mammals live in a world that is heavily populated by both pathogenic and non-pathogenic microbes, which harbor an array of toxins that can potentially threaten human health (Chaplin, 2010). To ensure that bodily function is maintained, the immune system, which consists of two arms – a nonspecific, innate arm and a more specific, acquired arm – holds these microbes in check by supporting normal tissue and organ function using a complex array of protective mechanisms. These mechanisms control and
eliminate pathological microbes and toxic or allergenic proteins while avoiding responses that produce excessive damage to the body’s tissues or that might eliminate beneficial microbes (Chaplin, 2010). Essentially, the immune system utilizes an exquisite feature that relies on detecting structural characteristics of the pathogen or toxin that mark it as distinct from host cells. This host-pathogen or host-toxin discrimination is essential to permit the host to eliminate the threat without damaging its own tissues (Chaplin, 2010).
"Essentially, the immune system utilizes an exquisite feature that relies on detecting structural characteristics of the pathogen or toxin that mark it as distinct from host cells."
Both the innate and adaptive immune systems exhibit self and non-self discrimination (Gonzalez et al., 2011). On the one hand, the innate immune system is characterized by hardwired responses that are encoded by genes in the host’s germ line and that recognize molecular patterns shared by many microbes and toxins that are not present in the mammalian host. The innate immune system consists of physical barriers, such as epithelial cell layers bound up by tight cell-cell contacts, the secreted mucus layer that overlays the epithelium in the respiratory, gastrointestinal, and genitourinary tracts, and the epithelial cilia that sweep away this mucus layer and permit it to be constantly refreshed after it has been contaminated with inhaled or ingested particles (Chaplin, 2010).
The innate response also includes soluble proteins and bioactive small molecules that are either constantly present in biological fluids or that are released from cells as they are activated by foreign molecules (Chaplin, 2010). Finally, the innate immune system includes membrane-bound receptors and cytoplasmic proteins that bind molecules distinctly expressed on the surfaces of invading microbes.
"...viruses can contain anywhere from three to more than 100 unique antigens whereas protozoa, fungi and bacteria, which are larger, more complex organisms, contain hundreds to thousands of different antigens."
On the other hand, the adaptive immune system is characterized by responses that are encoded by gene elements that somatically rearrange to assemble antigen-binding molecules with finely tuned specificity for unique foreign structures (Chaplin, 2010). The adaptive immune system produces long-lived cells that persist in an apparently dormant state with the potential to re-express effector functions swiftly after another encounter with their specific antigen, thereby permitting a more effective host response against specific pathogens or toxins when they are encountered a second time, even decades after the initial sensitizing encounter (Chaplin, 2010). This ability to re-express effector functions is the basis of immune memory – a feature that vaccination relies on to trigger protection against a disease. Since the adaptive immune system consists of a small number of cells with specificity for any individual pathogen, toxin, or allergen, the responding cells must proliferate after encountering the antigen in order to attain sufficient numbers to mount an effective response against the microbe or the toxin. Therefore, the adaptive response generally expresses itself temporally after the innate response in host defense (Chaplin, 2010).
1.3 Vaccines: Mechanism of Action
The surfaces of pathogens contain antigens – proteins or polysaccharides attached to the outer surface of the pathogen. An antigen is a molecule that binds to antibody proteins in the body and initiates an immune response. Any given pathogen may contain many different antigens on its surface. For instance, viruses can contain anywhere from three to more than 100 unique antigens, whereas protozoa, fungi, and bacteria, which are larger, more complex organisms, contain hundreds to thousands of different antigens.
Upon exposure, the immune system produces antibodies that can bind to particular antigens on pathogens, leading to the activation of white blood cells, the most important of which are macrophages, B-lymphocytes, and T-lymphocytes. Macrophages swallow up and digest the pathogens in addition to dead or dying cells, leaving behind antigens. B-lymphocytes then detect the produced antigens and assemble antibodies, which bind to the antigens. Finally, T-lymphocytes attack cells in the body that have already been infected by the pathogen (CDC, 2018). Antibodies attack antigens by specifically binding to a part of an antigen called the epitope or antigenic site. These antibodies flag pathogens, directing the immune system to destroy them. Prior to pathogen invasion, antibody concentration in the body is relatively low. However, greater quantities are produced and recruited following immune activation. Like pathogen invasion, vaccination also triggers an upsurge in antibody concentration (CDC, 2018). A vaccine may be either live or dead. Live bacterial or viral vaccines are typically attenuated, incapable of causing disease but capable of triggering an immune response. When a vaccine is administered into the body, the immune system recognizes it as foreign, thereby initiating an immune response and increasing the production of antibodies that attack the administered vaccine. Subsequent doses of the vaccine act to boost this response, resulting in the production of long-lived antibodies and memory cells. Thus, vaccines act to prime the body, so that when it is exposed to the live, unattenuated disease-causing organism, the immune system is able to respond rapidly at a high level of activity. This process destroys the pathogen before it causes disease and reduces the risk of it spreading to other people (CDC, 2018).
Vaccines vary in how they stimulate the immune system, as some provide a broader response than others. Vaccines influence the immune response through the nature of the antigens they contain, including their number and various other characteristics, or through the route of administration, such as oral, intramuscular, or subcutaneous injection. The use of adjuvants (immune response boosters) in vaccines helps to determine the type, duration, and intensity of the immune response and the characteristics of the resulting antigen-specific memory (CDC, 2018).
For most vaccines, more than one dose may be required to provide sustained protection. Why? Firstly, for some vaccines (primarily
attenuated vaccines), the first dose provides insufficient immunity, so more than one dose is needed to build more complete immunity. The vaccine that protects against the bacteria Hib, which causes meningitis, is a good example of this principle (CDC, 2018). Secondly, for other vaccines, immunity begins to wear off after a while. At that point, a “booster” dose is needed to bring immunity levels back up. This booster dose usually occurs several years after the initial series of vaccine doses is given. For example, in the case of the DTaP vaccine, which protects against diphtheria, tetanus and pertussis, the initial series of four shots that children receive as part of their infant immunizations helps build immunity. But a first booster dose is needed at four to six years old and a second booster is needed at 11 years or 12 years of age. This booster for older children, teens, and adults is called Tdap (CDC, 2018). Thirdly, for some vaccines (primarily live vaccines), studies have shown that more than one dose is needed for everyone to develop the best immune response. For example, after one dose of the MMR vaccine, some people may not develop enough antibodies to fight off infection. The second dose helps maximize coverage across a population (CDC, 2018). Finally, in the case of flu vaccines, adults and children (six months and older) need to get a dose every year. An annual flu vaccine is needed because the flu viruses causing disease differ from season to season. Every year, flu vaccines are made to protect against the viruses that research suggests will be most common. Furthermore, the immunity a child gets from a flu vaccine wears off over time. Getting a flu vaccine every year helps keep a child protected, even if the vaccine viruses don’t change from one season to the next. Children six months through eight years old who have never gotten a flu vaccine in the past or have only gotten one dose in past years need two doses the first year they are vaccinated (CDC, 2018). In addition to individual protection, vaccination also leads to herd immunity: immunization of large portions of the population to protect the unvaccinated, immunocompromised, and immunologically naive by reducing the number of susceptible hosts to a level less than the threshold needed for transmission. For example, immunization of greater than 80% of the global population against smallpox virus reduced transmission rates to uninfected subjects to a point low enough to achieve eradication of the virus (Mallory et al., 2018). Herd immunity has proved to be extremely
effective, especially in developing countries where vaccination resources are scarce or populations outweigh available resources.
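The greater-than-80% figure quoted above is consistent with the standard herd-immunity threshold from introductory epidemiology. The short derivation below is a supplementary sketch rather than material from the cited sources, and the smallpox reproduction number it uses is an assumed range based on commonly cited estimates:

% Herd-immunity threshold: a minimal sketch, not taken from the cited sources.
% R_0 (the basic reproduction number) is the average number of secondary
% infections caused by one case in a fully susceptible population.
% If a fraction q of the population is immune, each case infects about
% R_0(1 - q) others, so transmission fades once R_0(1 - q) < 1:
\[
  R_0 (1 - q) < 1
  \quad\Longrightarrow\quad
  q_c = 1 - \frac{1}{R_0}
\]
% Assuming the commonly cited smallpox range R_0 \approx 5\text{--}7:
\[
  q_c \approx 1 - \tfrac{1}{5} = 0.80
  \qquad\text{to}\qquad
  q_c \approx 1 - \tfrac{1}{7} \approx 0.86
\]

In practice the threshold also depends on vaccine efficacy and population mixing, so the formula should be read as a first approximation rather than a precise coverage target.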
History of Vaccine Development
2.1 Smallpox and Inoculation
Smallpox is a devastating disease that ravaged humanity for centuries. It is caused by the variola virus, and it becomes contagious once the first sores appear in the mouth and throat. It can be spread through droplets from the nose or mouth that travel through the air when people cough or sneeze. Contact with the scabs, fluid within the sores, or contaminated bedding and clothing materials can also spread the virus. The virus remains contagious until the last smallpox scabs fall off (CDC, 2016). Initial symptoms include high fever followed by the appearance of a rash that begins as small red spots on the tongue and in the mouth that develop into sores. The rash then appears on the face and spreads outwards to the arms and legs and finally the hands and feet. The rash changes into sores and then finally pustules, which eventually form a crust, scab over, and fall off. These symptoms birthed smallpox’s alternative moniker, the “speckled monster,” which was commonly used in 18th-century England (Riedel, 2005). Smallpox results in death for approximately 3 out of 10 of those affected, and historically the case-fatality rate in infants was even higher, approaching 80% in London and 98% in Berlin during the late 1800s (CDC, 2016; Riedel, 2005). Survivors can suffer from permanent scars over large areas of their body and may even be left blind (CDC, 2016).
"Smallpox is a devastating disease that ravaged humanity for centuries. It is caused by the variola virus, and it becomes contagious once the first sores appear in the mouth and throat."
Smallpox arose sometime around 10,000 B.C.E., at the time of the first agricultural settlements in northeastern Africa. The devastation of smallpox is woven into history: early evidence of smallpox exists on the faces of mummies from the 18th and 20th Egyptian Dynasties (1570-1085 B.C.E.) as well as in ancient Sanskrit texts of India (Riedel, 2005). It was introduced to Europe between the fifth and seventh centuries and frequently caused epidemics during the Middle Ages. The beginnings of the decline of the Roman Empire in 108 C.E. coincided with a particularly large epidemic which caused the deaths of almost 7 million people. The Arab expansion, the Crusades, and the discovery of the West Indies all carried the deadly disease across the globe (Riedel, 2005). Infamously, smallpox was used during the French-Indian War in one of the first instances of biological warfare by British commander
Figure 2: Smallpox lesions Source: Wikimedia Commons
"...2-3% of inoculated patients died from the disease, became the source of a new epidemic, or suffered from other diseases such as syphilis that could be transmitted during the inoculation process."
Sir Jeffrey Amherst against Native Americans (Riedel, 2005).
Early treatments included largely ineffective herbal remedies and cold treatments. Inoculation, or variolation, was the most successful weapon against smallpox before the use of vaccines (Riedel, 2005). Inoculation involves taking a sample, often in the form of a scab or pus, from a sick patient and administering the sample to a healthy individual. Samples were commonly administered into small cuts, inhaled through the nose, or simply rubbed on the skin. Inoculation was first widely practiced with smallpox. Those who were inoculated would typically experience a mild form of the disease which had a significantly reduced death rate compared to a natural smallpox infection – one British ambassador who witnessed the use of inoculation in Northern Africa reported that mortality rates for natural smallpox were about 30%, while inoculation death rates were estimated to be 2% (Boylston, 2012).
There is no historical consensus regarding where smallpox inoculation began, but it was certainly not, as is popularly believed, in Britain. 16th century accounts place the invention of inoculation in India in 1580 and in China as early as 1000 (Boylston, 2012). These accounts were from people who estimated how far in the past inoculation had begun – the practices were already widespread and trusted across several parts of Asia. Archaeologists have only found definitive documentation of inoculation from the mid 1500s forward, which has left the origin of inoculation a mystery. The words used to describe the practice are similar across languages, leading historians to believe that there was a single origin for the practice, and that the name and practice spread together (Boylston, 2012). Before reaching Britain, inoculation was practiced in areas of North Africa, Asia, and Europe, specifically in the Ottoman Empire.
Since the patient received a small dose of virus into the skin instead of inhaling a large dose, the mortality rates for inoculation were far below those of smallpox (Smith, 2011). However, inoculation was not without risks: 2-3% of inoculated patients died from the disease, became the source of a new epidemic, or suffered from other diseases such as syphilis that could be transmitted during the inoculation process (Riedel, 2005).
2.2 Edward Jenner and the Smallpox Vaccine
Edward Jenner was one of the most prolific scientists in the field of immunization, and his work on eradicating smallpox is widely regarded as the origin of immunology (Smith, 2011). Jenner was born on May 17, 1749, the eighth child of the Reverend Stephen Jenner (Dunn, 1996). Orphaned at a young age, he was apprenticed at age 13 to a country surgeon. Jenner was an avid scientist, and he studied a variety of subjects including research on cuckoo hatchlings and the hibernation of hedgehogs. In 1796, after hearing that dairy maids were protected from smallpox after suffering from the milder affliction of cowpox, Jenner decided to carry out an experiment. He used material from the fresh cowpox lesions of a young dairymaid named Sarah Nelms to inoculate a young boy. Then, he inoculated the boy with smallpox, and observed that he was unaffected. Although it took several decades, vaccination eventually became widely recognized, officially replacing inoculation in 1840. While Jenner was not the first to discover vaccination, his meticulous research and persistent advocacy for the practice allowed it to become widespread (Riedel, 2005).
In 1953, the first proposal to undertake a global smallpox eradication campaign was made by the WHO. Deemed unrealistic, it would not be until 1966 that a plan to eradicate the disease in 10 years would be approved, calling for the WHO to contribute $2.4 million per year with additional cooperation from countries around the world. The largest obstacle was producing a heat-stable, fully potent vaccine; initially, less than 10% of the vaccine batches met these standards, but after improved production methods, more than 80% of the vaccine needed was produced in developing countries. The invention of the bifurcated needle, which increased the rate of successful vaccination, also enhanced vaccination efforts. On May 8, 1980, the WHO officially announced the successful eradication of smallpox (Henderson, 2011).
2.3 Louis Pasteur – Development of Cholera, Anthrax, and Rabies Vaccines
French scientist Louis Pasteur succeeded in creating vaccines against fowl cholera, anthrax, and rabies. In the 19th century, fowl cholera was killing thousands of chickens across the country. Pasteur cultured the bacterium, Pasteurella multocida, which caused the disease and noticed that when he injected the cultures into a chicken, the chicken would develop cholera. However, old cultures no longer had that effect on chickens. When a chicken was injected with the old cultures, it could be exposed to a virulent strain and survive. Pasteur called this process “vaccination” and presented his results to other scientists in the Académie des Sciences (Berche, 2012). Subsequently, Pasteur focused his efforts on preventing anthrax, which is caused by Bacillus anthracis. At the time, anthrax was widely infecting livestock across France. Pasteur began culturing Bacillus anthracis and found that these cultures lost their virulence as time went on. He vaccinated animals with an attenuated culture, and then with a virulent culture 12 days later. He was asked to perform his experiments for the public, so he inoculated 31 animals with the same procedure. When these animals and a control group were exposed to a highly virulent strain two weeks later, all of the control livestock died or became very ill, while all the inoculated animals survived and stayed healthy. Word spread throughout France of this success, and in 1894, 3.4 million cattle were vaccinated against anthrax (Berche, 2012).
Pasteur is also credited with the invention of the vaccination against rabies. He attenuated the virus by inserting it into a rabbit spinal cord and then removing the spinal cord and hanging it in a glass flask for 15 days. Pasteur had the opportunity to test whether or not this vaccine worked when a nine-year-old boy who had been bitten by a rabid dog was brought to him. Pasteur was reluctant to test the vaccine on the child but knew the boy would most likely die from rabies if nothing was done. The boy received 12 injections using attenuated virus from the rabbit spinal cords and survived (Berche, 2012).
2.4 Typhoid
In 19th century Britain, pathologist Almroth Wright decided to use killed pathogens for vaccines, instead of the method of using attenuated pathogens that Pasteur had favored. Wright believed that killed preparations were less risky and just as effective (Chakrabarti, 2010). He focused his efforts on making a vaccine for typhoid fever, caused by Salmonella enterica serovar Typhi (S. Typhi). The bacterium typically enters the body after an individual eats or drinks contaminated food or water, and the mortality rate ranges from 5-30% if the patient is not treated. Wright, along with Richard Pfeiffer and Wilhelm Kolle, developed the typhoid vaccine in 1896 using heat-killed, phenol-preserved, and acetone-killed bacteria. The vaccine was used in England and Germany but is no longer used due to the side effects it causes (Sahastrabuddhe and Saluja, 2019). These side effects include inflammation, pain, and fever in 9-34% of those who receive the vaccine (Marathe et al., 2012). Currently, there are improved typhoid vaccines in use, including the live attenuated Ty21a vaccine (administered orally) and the Vi-polysaccharide vaccine (administered subcutaneously or intramuscularly) (Syed et al., 2020).
"In 19th century Britain, pathologist Almroth Wright decided to use killed viruses for vaccines, instead of the method of using attenuated viruses that Pasteur had favored."
2.5 Bubonic Plague
Waldemar Haffkine was a bacteriologist who worked with Louis Pasteur at the Pasteur Institute in Paris. He was working in India when there was an outbreak of the bubonic plague, otherwise known as the “Black Death”. He was sent by the government of India to Bombay in 1896 to study the illness and find treatments for it. He discovered that heat-killed plague bacillus protected rabbits from succumbing to the plague. Haffkine injected himself with heat-
killed plague bacillus in 1897 to prove the safety of the vaccine, and then went on to vaccinate prisoners in a jail in Bombay (Bannerman, 1904). More than 20 million people were vaccinated against the plague with Haffkine’s vaccine in the years following, but the vaccine fell out of favor due to side effects such as fever (Butler, 2014). Currently, modern sanitation and other public health practices have largely mitigated the impact of the disease, and antibiotics are also available to treat it.
2.6 Diphtheria
"In the late 19th century, scientists began developing serum therapies (a therapy that uses the serum of animals that have been immunized) to provide immunity against diphtheria and tetanus."
In the late 19th century, scientists began developing serum therapies (a therapy that uses the serum of animals that have been immunized) to provide immunity against diphtheria and tetanus. At the time, these two diseases were very dangerous. During the American Civil War, there was a 90% mortality rate among soldiers infected with tetanus (Kaufmann, 2017). In 1892, 50,000 German children died of diphtheria (Winau and Winau, 2002). German physician Emil von Behring and Japanese physician Baron Kitasato Shibasaburō pioneered the development of serum therapy for the treatment of diphtheria, a disease caused by the Corynebacterium diphtheriae bacterium, and tetanus, caused by Clostridium tetani. They performed experiments in which they injected serum from mice that had recovered from tetanus infections into mice that had not yet been exposed. When the second group of mice was subsequently exposed to the bacterium, they did not become infected (Kaufmann, 2017). Soon after, Behring performed a similar experiment on guinea pigs. He infected healthy guinea pigs with the diphtheria bacteria and then injected them with the serum of guinea pigs that had survived diphtheria and were now immune. The recovery rate with this therapy was high, leading Behring to conclude that the sera of animals that were immune to the disease could induce disease resistance in other animals as well (Winau and Winau, 2002). Soon, this technique was developed for use in humans. In the 1890s, Paul Ehrlich developed a method to produce large quantities of anti-diphtheria serum from horses (Bosch and Rosich, 2008; Kaufmann, 2017). The horses were made immune to diphtheria through controlled exposure to the bacteria. At the beginning, they were administered a small dose, but the doses increased in size as the horses built up a tolerance and ultimately became immune (Winau and Winau, 2002). Serum would then be harvested for use in humans and other horses.
When serum therapy was used in children with diphtheria within two days of their diagnosis, the recovery rate was near 100% (Kaufmann, 2017).
2.7 Pertussis
Whooping cough, the illness caused by Bordetella pertussis, was a major cause of childhood death at the time Dr. Pearl Kendrick and Dr. Grace Eldering began their work on developing a vaccine. They began by conducting research on pertussis patients in the town of Grand Rapids, Michigan during an outbreak in 1932 (Shapiro-Shapin, 2010). They collected cough plates from infected people in the town to analyze for their research. They then designed a better cough plate growth medium than what was currently in use to make the bacteria grow faster and allow for more rapid diagnosis of those with the disease (Shapiro-Shapin, 2010). This new method of rapid testing also allowed for the determination of safe quarantine lengths (Shapiro-Shapin, 2010). At the time, there was no established protocol for conducting clinical trials, and most tests involving human subjects used orphans or institutionalized patients treated against their will (Shapiro-Shapin, 2010). Kendrick and Eldering instead relied on the trust of doctors and parents who volunteered to have their kids vaccinated. The original vaccine was a whole-cell vaccine made of the inactivated Bordetella pertussis administered in four doses of increasing bacteria content (Kendrick, 1942; Shapiro-Shapin, 2010). The results of the trial showed that the vaccinated group presented significantly lower rates of infection compared to the control group. As a consequence, Kendrick and Eldering’s pertussis vaccine was in regular use throughout the country by the 1940s (Kendrick, 1942; Shapiro-Shapin, 2010).
2.8 Polio
Poliomyelitis, commonly known as polio, had been endemic to the United States for some time before an uptick in cases starting in the 1940s. Scholars attribute the increase in polio cases to the onset of denser living conditions within the United States, as well as the hygiene hypothesis (Mnookin, 2012). The hypothesis states that as hygiene standards increased in the country, children were exposed to fewer diseases when they were young and protected by IgA antibodies delivered through their mothers’ breast milk; this lack of exposure to disease in infancy ultimately led to more cases
Figure 3: A UNICEF officer administering the oral polio vaccine in Hawassa, Ethiopia in 2010. Source: Flickr
of serious illness, such as polio, later in life (Colt, 2009). Jonas Salk, an American researcher at the University of Pittsburgh, was the first to create a polio vaccine, allowing the United States to effectively eliminate the disease by 1979. While the vaccine was widely considered a success, the case of the Salk vaccine exemplifies a number of key issues in the history of vaccination. Perhaps most pertinent is the story of how a disastrous failure of a vaccine manufacturer to deactivate the virus in a batch of polio vaccines led to the beginning of modern vaccine regulation. With the American public yearning for a reprieve from the often-paralytic polio virus, Salk’s research was heavily funded by the government despite criticism of his methods by other scientists (Offit, 2007). While the vaccine was found to be safe and effective when properly prepared, there was an incident where a batch of vaccine that contained live poliovirus was administered to over 200,000 American children. The incident, which caused 51 people to be paralyzed and left ten dead, led to the formation of the Division of Biologics Standards, the federal vaccine regulator whose functions now reside within the Food and Drug Administration (Offit, 2007). The division required larger portions of vaccine batches to be quality tested before administration and continues to outline procedures for safe vaccine testing and distribution today.
The Salk vaccine also has a counterpart: the Sabin polio vaccine. While the Salk vaccine includes a deactivated version of a highly virulent strain of poliovirus, the Sabin vaccine includes a live, attenuated form of the virus. Both of these vaccines are still administered today, as both have distinct advantages and disadvantages. A dead virus vaccine is considered less dangerous, since the dead virus has no potential to cause disease. In rare cases, live, attenuated viruses may mutate to become virulent again and cause the disease they were meant to prevent. That being said, live vaccines have the benefit of eliciting a more robust immune response as the virus is alive and replicating in the body. The Salk vaccine only elicits systemic immunity (IgG antibodies), while the Sabin vaccine elicits both systemic and mucosal immunity (both IgG and IgA antibodies) (Baicus, 2012). With the dead virus vaccine, multiple doses are needed to generate protective antibody titers, and those titers decrease over a patient’s lifetime. The live Sabin vaccine was the vaccine of choice for the World Health Organization (WHO) when it resolved to eliminate polio globally in 1988. It is cheap, administered orally, and has the additional benefit of a herd effect (Baicus, 2012). The attenuated virus, like the virulent virus, spreads through the fecal-oral route, meaning that vaccinated members of a community shed the attenuated virus in their stool, and anyone who comes in contact with it could be effectively immunized (Altamirano et
"While the vaccine was found to be safe and effective when properly prepared, there was an incident where a batch of vaccine that contained live poliovirus was administered to over 200,000 American children. The incident, which caused 51 people to be paralyzed and left ten dead, led to the formation of the Division of Biological Standards within the Food and Drug Administration."
al., 2018). WHO has since switched to the dead Salk vaccine, however, due to concerns about the live vaccine’s capacity to mutate. While effective, this dead vaccine is more costly and must be administered through injection into the muscle (Baicus, 2012). Polio has been eliminated in nearly every country across the globe, yet the disease remains endemic to Afghanistan, Nigeria, and Pakistan. The push for the global elimination of polio is ongoing with a current goal of elimination by 2023 (KFF, 2020).
Current Innovations in Vaccine Therapies
3.1 Administration Route
"The most wellknown and conventional method of vaccine delivery is injection via hypodermic needle."
Choosing an effective administration route for vaccination is critical for initiating the desired immune response. The most well-known and conventional method of vaccine delivery is injection via hypodermic needle. In this route, a liquid-based vaccine is typically injected intramuscularly with a syringe. However, despite its widespread use, this method has many shortcomings. For instance, many children and adults suffer from trypanophobia, or the fear of needles, making vaccination a stressful ordeal (Mitragotri, 2005). In addition, needle-based vaccinations pose a risk for healthcare workers worldwide: an estimated 5% of injections result in accidental needle-stick injuries (Mitragotri, 2005). Moreover, in developing countries, the high cost of hypodermic needles encourages the reuse of syringes – a dangerous practice that promotes the spread of diseases (Mitragotri, 2005). These challenges have encouraged the development of alternative administration routes that do not require needles, including cutaneous and mucosal methods. Cutaneous administration routes include the liquid-jet, ballistic, and topical application methods. In the liquid-jet method, a needleless injector generates a high-velocity liquid vaccination jet to penetrate the skin. This enables the delivery of a vaccine to the intradermal or intramuscular regions without a needle (Mitragotri, 2005). Besides avoiding the use of a hypodermic needle, an additional benefit of this method is that it spreads the vaccine over a larger region than a standard intramuscular injection. Moreover, the liquid-jet targets the skin, which is highly involved in the immune response (Mitragotri, 2005). Thus, a lower dose is needed than for needle injections to generate adequate immunity. However, the liquid jet method has its own drawbacks – the vaccination is often more painful than needle-based injections and
blood can contaminate the nozzle, enabling the spread of diseases between patients if the nozzle is reused (Mitragotri, 2005). In the ballistic method, also known as epidermal powder immunization (EPI), powdered vaccines are accelerated to penetrate the stratum corneum – the outermost layer of the skin (Mitragotri, 2005). The stratum corneum is enriched with Langerhans cells, which promote the immune response. Additionally, powdered vaccines are easier to ship and store than liquid-based vaccines, which could make them especially appealing in developing countries or remote areas (Mitragotri, 2005). Though this is a relatively new method and mainly used in animals, it is being studied for more widespread use in humans. There are numerous topical application methods, including adjuvant patches, colloidal carriers, ultrasound techniques, and microneedles. These methods have been widely studied for general drug delivery but are new for vaccine administration. Overall, while topical application methods are easily deliverable and avoid painful needle injections, they often do not yield an effective enough immune response by themselves since the stratum corneum is difficult to permeate (Mitragotri, 2005). As such, supplementary techniques to increase the permeability of the stratum corneum are needed. One such method is tape stripping, which involves using commercially available tape or rubbing the skin with abrasive emery paper to peel layers from the stratum corneum before vaccinating (Mitragotri, 2005). Researchers are still investigating topical vaccination routes, since such vaccines would be easily administered and would avoid many issues encountered with other routes, provided that they are effective in generating an immune response. Beyond cutaneous administration methods, vaccines are also delivered via mucosal routes. These include delivery via the oral, nasal, ocular, pulmonary, vaginal, and rectal mucosal membranes. The most common mucosal vaccination routes are oral and nasal; for instance, FluMist is delivered as a nasal spray, and there are widely used oral vaccines for polio, typhoid fever, cholera, and rotavirus (Mitragotri, 2005). These vaccines are easily deliverable and avoid cross-contamination by bypassing the need for a needle or nozzle. However, a drawback of oral administration routes is that these vaccines encounter
Figure 4: Researchers at the Texas Center for Cancer Nanomedicine (TCCN) are working on the development of nano-vaccines for cancer therapy. In this research, bone marrow cells were stimulated with cytokines (signaling molecules used extensively for intercellular communication) to favor differentiation into antigen presenting cells, known as dendritic cells. These dendritic cells are then presented with the nano-vaccines (as shown in this image), which are porous silicon particle discs loaded with immune-stimulating molecules and tumor antigens. These now activated cells are then injected back into the host to stimulate an anti-tumor response. Creator: Brenda Melendez and Rita Serda, Ph.D., Source: NCI Visuals Online, public domain
regions with high enzymatic activity and harsh chemical environments, such as the highly acidic gastrointestinal tract (Mitragotri, 2005). Thus, the oral delivery of non-living vaccines is difficult since DNA and proteins denature and break down in such environments. As a result, most orally-delivered vaccines are live, attenuated pathogens (Mitragotri, 2005). However, researchers are investigating the use of mediums to shield antigens from these harsh environments until they reach their target, such as polymer microspheres and bacterial ghosts (Mitragotri, 2005).
3.2 Target: Beyond Viruses
Although vaccines have traditionally been used to combat viruses, recent research has explored their potential to be used against other targets such as cancer, allergies, and addiction. Therapeutic cancer vaccines, unlike normal vaccines which are administered to healthy individuals, are used to strengthen cancer patients’ own immune responses in order to help them better attack cancer cells. Vaccines are also important as a preventative measure in cancer. For example, the human papillomavirus accounts for about 70% of cervical cancers and the hepatitis B virus can cause liver cancer (Guo et al., 2013). Vaccines against these viruses
can therefore reduce the prevalence of their associated cancers. However, other vaccines can be used to directly target cancer itself. Examples of cancer vaccines include tumor cell vaccines, which may be prepared using irradiated patient-derived tumor cells or mixtures of two or three established human tumor cell lines (Guo et al., 2013). Dendritic cell (DC) based vaccines are another option: DCs are the most effective antigen-presenting cells (APCs) for sensitizing naive T cells to specific antigens and are therefore appealing vehicles for antitumor vaccines (Cintolo et al., 2012). Figure 4 shows the development of a cancer nano-vaccine that relies on dendritic cells. Peptide-based cancer vaccines can be used to target specific tumor-associated antigens or to stimulate the immune system with immunostimulatory adjuvants (Guo et al., 2013). Recent developments include a new cancer vaccine developed by the Mater Research team that has the potential to treat a variety of blood cancers, including myeloid leukemia, non-Hodgkin's lymphoma, multiple myeloma, and pediatric leukemias, plus solid malignancies including breast, lung, renal, ovarian, and pancreatic cancers, and glioblastoma. The vaccine is made of human antibodies linked to a tumor-specific protein
"Although vaccines have traditionally been used to combat viruses, recent research has explored their potential to be used against other targets such as cancer, allergies, and addiction."
263
(Pearson et al., 2020). Allergies are typically caused by a hyper-immune response in which immunoglobulin E (IgE) is produced against harmless environmental antigens (Linhart and Valenta, 2012). Antibodies can be either beneficial or detrimental, depending on their epitope specificity. Allergen-specific immunotherapy (SIT) aims to induce antibodies that block, rather than enhance, the allergic reaction. IgG antibodies may block IgE binding to allergens, interfering with allergen-specific IgE responses and blocking the anaphylactic reaction (Knittelfelder et al., 2009). Natural allergen extracts have traditionally been used for the preparation of these vaccines, and multiple applications of increasing allergen doses are required for them to become therapeutically effective. Recently, an improved understanding of allergen structures and epitopes has promoted the engineering of new vaccines that are safer and more effective (Linhart and Valenta, 2012). Mimotopes – peptides that mimic protein, carbohydrate, or lipid epitopes – are also being investigated as a way to achieve immunogenicity and induce epitope-specific antibody responses upon vaccination (Knittelfelder et al., 2009).
"Recombinant DNA vaccines take advantage of genetic engineering techniques to target the production of desired antigens while simultaneously removing potential co-contaminants."
Vaccines can also be used to combat addiction: anti-addiction vaccines can produce antibodies to block the effects of drugs on the brain. An estimated 149 to 272 million people, or 3.3–6.1% of the population aged 15–64, have used illicit substances at least once in the previous year; addiction poses a significant social and medical problem, and current treatments have limited success (Shen et al., 2011). Drugs of abuse are generally small molecules that can readily cross the blood-brain barrier, while antibodies are larger molecules that cannot get into the brain. Therefore, in order to be an effective form of therapy, the antibodies must bind to illicit drugs and prevent them from entering the brain. Drugs do not usually provoke an immune response on their own, so in order to induce the production of antibodies, the drug must be chemically linked to an immunogenic carrier such as a toxin protein (Kosten, 2005). Alternatively, passive immunotherapy uses monoclonal antibodies that are generated in a laboratory and administered intravenously (Kosten, 2005). In 1994, a cocaine vaccine was produced by attaching cocaine to the surface of an antigenic carrier protein – for this first-generation vaccine, a deactivated cholera toxin B subunit protein combined with the FDA-approved human adjuvant alum. The vaccine has continued to be refined and has
undergone several clinical trials. There are also vaccines in development that target nicotine, opiates, and methamphetamines. While further research is required in order to make these vaccines sufficiently strong and long-lasting, if successful, they have enormous potential to ameliorate the morbidity and mortality associated with illicit drug use (Shen et al., 2011). Antibodies can be used to treat drug overdose, reduce the incidence of relapse, or protect at-risk populations from becoming drug dependent (Kosten, 2005).

3.3 Types of Vaccines

Recent developments in vaccine technology have resulted in many different types of targeted vaccines. Although traditional vaccination methods such as Pasteur's attenuated viruses and Wright's use of inactivated viruses are still employed, modern biotechnology techniques have allowed researchers to take advantage of newer and more effective routes to vaccine creation. Examples of these new types of vaccines include vaccines that use recombinant DNA, modified toxins ("toxoids"), and RNA. Recombinant DNA vaccines take advantage of genetic engineering techniques to target the production of desired antigens while simultaneously removing potential co-contaminants. Produced from the isolation of desirable DNA fragments via restriction enzymes, recombinant DNA is typically propagated through the insertion of plasmids into sample cells – a process called transformation (Griffiths et al., 2000). The success of recombinant DNA vaccines was first demonstrated in the 1980s with the production of vaccinia virus recombinants for hepatitis B and herpes: genetically engineered poxviruses encoding the hepatitis B surface antigen and the herpes simplex virus glycoprotein D raised the survival rate of mice infected with these viruses to 100% (Paoletti et al., 1984). Recombinant vaccines have also been demonstrated to boost long-term immunity. For example, a recombinant DNA vaccine encoding the Hantavirus Gn protein and LAMP-1 produced an antibody titer of 102,000 after 28 weeks in mice, whereas the inactivated virus produced a titer of only 6,400; histological tissue analysis also demonstrated no significant toxic impacts when compared to healthy mice (Jiang et al., 2017).
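To put those numbers in perspective, a quick calculation, using only the titer figures from the Jiang et al. study cited above, shows roughly a sixteen-fold difference (the log2 view reflects the two-fold serial dilutions conventionally used in titer assays):

```python
import math

# Antibody titers after 28 weeks in mice, from Jiang et al. (2017) as cited above.
recombinant_titer = 102_000   # Gn + LAMP-1 recombinant DNA vaccine
inactivated_titer = 6_400     # inactivated-virus comparison

fold_change = recombinant_titer / inactivated_titer
print(f"Fold difference: {fold_change:.1f}x")             # ~15.9x
# Titers come from two-fold serial dilutions, so log2 gives "extra doublings".
print(f"Extra doublings: {math.log2(fold_change):.1f}")   # ~4.0
```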
Another modern development is the toxoid vaccine, produced through the deactivation of toxins secreted by certain kinds of bacteria. Toxoids are produced through the purification and denaturation of toxic proteins, either by high temperatures or by the addition of formaldehyde. This allows the toxic particle to provoke an immune response without causing damage (Yadav et al., 2014). For example, the toxoids for tetanus and diphtheria, first discovered in 1927, are capable of provoking cellular immune responses in up to 90% of all patients (Blencowe et al., 2010). Additionally, unlike attenuated vaccines, toxoid vaccines generally last longer and are incapable of causing symptoms of the disease (Baxter, 2007). However, similar to inactivated vaccines, toxoid vaccines require multiple doses; the tetanus vaccine, for example, typically requires at least 2 doses of toxoid (Blencowe et al., 2010). Finally, RNA-based vaccines have also shown potential as a novel alternative to traditional vaccination techniques. RNA-based vaccines depend upon the delivery of mRNA molecules that encode desirable antigens. These vaccines can be delivered ex vivo, through the injection of dendritic cells infused with mRNA, or in vivo, typically by packaging the mRNA in lipid nanoparticles (Verbeke et al., 2019). Although there are no mRNA vaccines currently approved for human usage, candidates have been developed for several types of viruses, including Moderna's COVID-19 vaccine, which is currently in phase 1 of clinical trials (Garde, 2020). mRNA vaccines have also been suggested as a potential immunotherapy treatment for some types of cancer (McNamara et al., 2015). These vaccines are believed to possess several advantages over their standard vaccine counterparts. For example, they avoid the risk of genomic integration (where viral DNA becomes integrated into the host's DNA) and are capable of encoding any protein desired. Furthermore, because mRNA-based vaccines do not depend on viral growth, they can be produced en masse without requiring extensive containment protocols (Armbruster et al., 2019).
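As a toy illustration of the principle behind mRNA vaccines (the delivered mRNA is read codon by codon by the host ribosome to produce the encoded antigen), the sketch below translates a short, invented mRNA string using a deliberately partial codon table; a real antigen sequence and the full 64-codon genetic code are, of course, far larger:

```python
# Minimal sketch of ribosomal translation: mRNA codons -> amino acids.
# The codon table is a small, accurate subset of the standard genetic code;
# the mRNA string itself is invented purely for illustration.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "GGC": "Gly", "UUU": "Phe", "AAA": "Lys", "GAU": "Asp",
    "UGG": "Trp", "CAC": "His", "GCA": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Translate from the first AUG until a stop codon is reached."""
    start = mrna.find("AUG")
    if start == -1:
        return []
    protein = []
    for i in range(start, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate("GGAUGGGCUUUAAAGAUUGGUAA"))
# ['Met', 'Gly', 'Phe', 'Lys', 'Asp', 'Trp']
```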
Modern Development of Vaccines

Vaccine development is a long and complex process, often involving 10-15 years of private and public involvement, with plenty of oversight from the Center for Biologics Evaluation and Research (CBER) within the FDA. The federal government has been overseeing approval of vaccines since 1902, after 13 children were
killed by contaminated diphtheria antitoxin. This tragic incident led Congress to pass the Biologics Control Act, which mandated facility inspections and other certification guidelines (Marshall and Baylor, 2011). The current system of vaccine development derives from two 20th-century laws: the U.S. Public Health Service Act of 1944 and the Food, Drug, and Cosmetic Act of 1938. These acts defined biologics "intended for diagnosis, cure, mitigation, treatment, or prevention of disease" as drugs (Gruber and Marshall, 2018). This definition makes vaccines a unique class of pharmaceuticals, falling under both drug and biological product, and subjects them to rigorous tests and trials.

4.1 Exploratory Stage

Development begins with the exploratory stage, sometimes referred to as the pre-Investigational New Drug (pre-IND) phase. During this phase, scientists spend two to four years assessing and developing a procedure for vaccine development based on the disease in question (FDA, 2019). Proper planning at this phase directs scientists toward identifying or developing the correct immunogens or synthetic antigens. Once the antigens that might help prevent or treat the disease of interest have been identified, private companies begin the development of the candidate vaccine to be used in the preclinical stage.
"The preclinical stage lasts approximately one to two years; however, despite being one of the shorter phases, most vaccines never progress beyond this stage, for they fail to produce the desired immune response."
4.2 Pre-clinical Stage

The preclinical stage lasts approximately one to two years; however, despite being one of the shorter phases, most vaccines never progress beyond this stage, for they fail to produce the desired immune response. The pre-clinical stage employs animal testing or testing in live non-human cell-culture systems to assess the safety of the vaccine along with its immunogenicity (ability to provoke an immune response) (Gruber and Marshall, 2018). The vaccine is also tested for its ability to provoke an immune response at various dosages and under different methods of administration (Gruber and Marshall, 2018). If the response is unsatisfactory and unable to substantiate the initial exploratory work, scientists will use the information gained and return to the exploratory stage for additional research on a candidate vaccine. However, if the response goes as intended, then results will be summarized in a report to a sponsor and
Figure 5: Traditional timeline of the vaccine development stages, with an accelerated timeline for the current COVID-19 pandemic. Source: Flickr
development will progress towards the clinical phase.
"Before beginning any official phases of the vaccine development process, the private company in practice must apply for an Investigational New Drug (IND) to the FDA and be approved."
4.3 Clinical Development

Before beginning any official phases of the vaccine development process, the private company must submit an Investigational New Drug (IND) application to the FDA and be approved (FDA, 2019). The company will often have a sponsor approach the FDA and carry out the IND application process. The FDA encourages the sponsor to request a meeting with its review board before applying in order to discuss any pre-clinical developments, study designs, data requirements, and potential issues that may arise during the trials (Gruber and Marshall, 2018). This pre-IND meeting essentially serves as an oral IND application in front of the review board and often catches small concerns that can be addressed before submitting the IND. When applying for an IND license, the sponsor must provide three specific descriptions: (i) a description of the composition and method of manufacture of the vaccine, along with the methods of testing for its safety, purity, and potency; (ii) a summary of the preclinical and exploratory experiments and results with the candidate vaccine; and (iii) a proposal of the
clinical study and the names and qualifications of each investigator in the private company (Gruber and Marshall, 2018).

4.4 Phases

Clinical development comprises three separate phases in the evaluation of the candidate vaccine; however, there is often overlap between the phases. Testing is also highly iterative: the first couple of phases may be repeated continuously with the influx of more data and research behind the vaccine (Gruber and Marshall, 2018). Clinical trials start with Phase I, which is mainly employed to get a preliminary evaluation of the safety and immunogenicity of the candidate vaccine. Phase I can be thought of as a second preclinical trial conducted in humans, which requires it to be a much more careful and rigorous process (Gruber and Marshall, 2018). Phase I trials generally involve 20-80 individuals; if the target group of the vaccine is children, trials will start with adults and gradually decrease in age until the target group is reached. Phase I trials have no blinding component, meaning that both the researchers and subjects may know whether a vaccine or a placebo is used (Gruber and Marshall,
2018). It is important to note that any positive response expressed by individuals, though it may indicate satisfaction with the vaccine, should not be considered scientific proof of its efficacy; only later do larger trials determine whether the candidate vaccine truly protects against the disease of interest. The goal of Phase II is to substantiate the findings of Phase I with a larger group of people. Companies will ask hundreds of volunteers to participate in clinical trials, some of whom may be selected because they are at risk of acquiring the disease (National Vaccine Advisory Committee, 1997). These trials are randomized, well-controlled, placebo-controlled, and often double-blinded (neither the participants nor the experimenters know who is receiving a particular treatment) (Gruber and Marshall, 2018). In addition to confirming the safety and immunogenicity of the candidate vaccine from Phase I, Phase II also tests proposed dosages, schedules of immunization, and methods of delivery. Phase II will often fall back into Phase I with new findings about dosage amounts and delivery methods. If results from these trials are promising, some companies will also conduct human challenge trials on a small number of people. Human challenge trials are trials in which volunteers, regardless of immunization status, are deliberately exposed to an infectious disease. The challenge pathogen may be attenuated or kept as close to wild-type and pathogenic as possible (WHO, 2016); to minimize risk, however, most challenge trials genetically modify the pathogen with the volunteers' health in mind. These challenge trials may provide preliminary data on a vaccine's activity against infectious diseases but must be conducted within an ethical framework in which truly informed consent is given (WHO, 2016).
Phase III trials serve as the true test of whether a vaccine can truly protect against an infection or disease. The scientific method and experimental procedures are heavily stressed during these trials, as Phase III evaluates an experimental vaccine by comparing the rate of infection in individuals given the experimental vaccine with that in a group given a placebo. These trials enroll thousands to tens of thousands of people in order to test safety in large groups (Gruber and Marshall, 2018), since certain side effects of the candidate vaccine may only surface when a large group is tested (FDA, 2019). These trials also test the efficacy of the vaccine, asking questions about whether the vaccine prevents the disease, whether it leads to the production of antibodies or other immune responses, and/or whether it will prevent infection (Gruber and Marshall, 2018). Human challenge trials may sometimes occur here too, to provide additional corroboration of the vaccine's safety and efficacy. Only when a vaccine shows satisfactory results in Phase III can it move on to the post-marketing phase, where it becomes available to the general population.
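The headline number from a Phase III trial is vaccine efficacy, conventionally estimated as one minus the ratio of attack rates in the vaccinated and placebo arms. The sketch below applies that standard formula to invented trial counts; the figures are placeholders, not data from any real trial:

```python
def vaccine_efficacy(cases_vax: int, n_vax: int,
                     cases_placebo: int, n_placebo: int) -> float:
    """Standard estimate: VE = 1 - (attack rate vaccinated / attack rate placebo)."""
    ar_vax = cases_vax / n_vax
    ar_placebo = cases_placebo / n_placebo
    return 1 - ar_vax / ar_placebo

# Hypothetical Phase III trial with 15,000 participants per arm.
ve = vaccine_efficacy(cases_vax=11, n_vax=15_000, cases_placebo=95, n_placebo=15_000)
print(f"Estimated vaccine efficacy: {ve:.1%}")  # ~88.4%
```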
"A biologics license application (BLA) must be submitted to the CBER Office of Vaccines Research and Review, which needs to include data proving safety, efficacy, and a process to manufacture the vaccine in a consistent way."
4.6 Manufacturing
Once vaccines have been approved for distribution, the next phase of development is mass manufacture. The first step in this stage of development is the production of the desirable antigen - either from growth and inactivation of the pathogen or the production of a desirable recombinant DNA fragment. Vaccines are typically grown in a variety of different media; influenza is grown in chicken eggs, while Hepatitis A is propagated in diploid cells (Gomez et al., 2013). The viral particles or toxoids produced are then collected, purified, and deactivated, typically through extensive heating or the addition of formaldehyde.
Figure 6: A computer rendering of the COVID-19 virus. Source: Pxhere
"Operation Warp Speed aims to deliver 300 million doses of a safe, effective COVID-19 by January 2021."
268
The formaldehyde is later removed through extensive purification to eliminate potential harm to humans (WHO, 2020). Next, for pathogens that cannot generate a sufficient immune response on their own, adjuvants are added to stimulate the response. Examples of these adjuvants include aluminum salts for the pertussis vaccine and squalene for influenza (Di Pasquale et al., 2015). Finally, some vaccines may have stabilizers and preservatives added to maintain their effectiveness while in storage. Such agents include sorbitol for the yellow fever vaccine, potassium glutamate for the rabies vaccine, and 2-phenoxyethanol for the polio vaccine (Gomez et al., 2013).
4.7 Quality Control

Finally, up until vaccines are distributed, they are extensively evaluated for quality control purposes. As many vaccines are composed of multiple ingredients, several different types of assays are typically required, depending on the type of vaccine in question. One of the most important factors considered is vaccine efficacy, as measured through the quantity and quality of antigens. Techniques like mass spectroscopy and the enzyme-linked immunosorbent assay, otherwise known as ELISA, work to quantify the number of antibodies and identify any defects in the structure of antigen proteins (Metz et al., 2009; Engvall and Perlmann, 1972). Other techniques, like isoelectric focusing and reversed-phase chromatography, serve to ensure the purity of vaccine material (Metz et al., 2009). To ensure that product quality is always up to standard, national regulatory agencies follow an "independent lot release" system, under which individual "lots" of manufactured vaccines are evaluated independently from the manufacturer to ensure quality control (WHO, 2013). Ultimately, after distribution has begun, national regulators are empowered to engage in postmarket surveillance: the regulator collects data on the distribution of a vaccine, noting any unexpected increase in adverse reactions (Raj et al., 2019).

COVID-19 Vaccine Case Study: Operation Warp Speed

5.1 Infrastructure and Current Research

Figure 7: A chart displaying the percentage of children 12-23 months of age who have received a diphtheria, pertussis, and tetanus vaccine (DPT). The size of the circle denotes the country's population. Multiple DPT doses are required to induce immunity, so the DPT vaccination rate is considered a good indicator of the strength of a country's vaccination program. Image source: Our World in Data (Vanderslott et al., 2013)

Operation Warp Speed aims to deliver 300 million doses of a safe, effective COVID-19 vaccine by January 2021. This initiative is part of a broader strategy to accelerate the development, manufacturing, and distribution of COVID-19 vaccines, therapeutics, and countermeasures (HHS, 2020). Operation Warp Speed is a comprehensive effort involving partnerships among components of the Department of Health and Human Services (HHS), including the Centers for Disease Control and Prevention (CDC), the Food and Drug Administration (FDA), the National Institutes of Health (NIH), and the Biomedical Advanced Research and Development Authority (BARDA), as well as the Department of Defense (DoD). There is also collaboration with private firms and other federal agencies. To expedite the vaccine development process, rather than eliminating steps, several phases will proceed simultaneously. Millions of dollars have been invested in promising vaccine candidates and manufacturing infrastructure (HHS, 2020). Multiple approaches to developing a vaccine against SARS-CoV-2 are currently underway. Vaccine development focuses on the S-protein of the virus, which directly attaches to the host cell (Padron-Regalado, 2020). One method being explored is the use of inactivated and live-attenuated viruses (Padron-Regalado, 2020). As a modern advance on Pasteur's vaccine development method, live attenuated coronavirus vaccines contain viruses that have been genetically altered to reduce their infective and replicative abilities (Padron-Regalado, 2020). Another method being explored is recombinant viral vectors. This involves genetically engineering another virus to express the SARS-CoV-2 S-protein in order to generate immunity against this component of the coronavirus without introducing the infective virus into patients (Padron-Regalado, 2020). The approach that is currently in the furthest stage is an mRNA-based vaccine developed by Moderna and the NIH Vaccine Research
Center (Amanat and Krammer, 2020). This vaccine consists of mRNA encoding the virus's spike protein encased in a lipid capsule (Amanat and Krammer, 2020; Jackson et al., 2020). So far, in its phase I and II clinical trials, the vaccine has had promising results; trial participants developed antibodies against the coronavirus (Jackson et al., 2020). A vaccine developed at Oxford, which has also just entered phase III trials, uses a cold-like chimpanzee virus modified to express the SARS-CoV-2 S-protein (Folegatti et al., 2020). This vaccine has also had promising results in both safety and development of an immune response (Folegatti et al., 2020).

5.2 Challenges

Besides the record speed needed to develop a COVID-19 vaccine in the midst of a pandemic, there are numerous challenges facing the development and dissemination of such a vaccine. First, there is evidence to suggest that immunity to SARS-CoV-2 may not be long lasting and that antibody loss seems to be faster for SARS-CoV-2 than has previously been determined for SARS-CoV-1 (Ibarrondo, 2020). Researchers are still investigating the timeline for antibody decay, but this could pose a challenge for ensuring that a COVID-19 vaccine
"...there is evidence to suggest that immunity to SARSCoV-2 may not be long lasting and that antibody loss seems to be faster for SARS-CoV-2 than has previously been determined for SARSCoV-1."
269
is durable (Ibarrondo, 2020). Also, regarding immunity concerns, the virus seems to express disproportionately harsh phenotypes in older populations, perhaps due to the waning immune response with age (Heaton, 2020). For influenza vaccinations, older adults typically need a larger dose to trigger an adequate immune response; as such, researchers must determine appropriate dosages for different age groups to generate immunity (Heaton, 2020).
"...since the virus does not grow in wild mice, researchers have struggled to induce disease in animals, thus making animal trials challenging."
Another challenge to the development of a SARS-CoV-2 vaccine is the lack of any previously approved coronavirus vaccine. Because many vectors being studied for COVID-19, including DNA- and RNA-based technologies, have never been approved as vaccines before, there are no preexisting large-scale manufacturing capacities (Amanat & Krammer, 2020). Additionally, since the virus does not grow in wild mice, researchers have struggled to induce disease in animals, thus making animal trials challenging (Amanat & Krammer, 2020). Finally, even if a vaccine is approved, there is the additional challenge of getting the vaccine to the public and deciding who should get the vaccine first. One poll showed that in the United States, only 49% of the population planned to get vaccinated – this is problematic in terms of establishing herd immunity (Mello et al., 2020). Furthermore, since the US Constitution largely relegates public health legislation to the state level, a cohesive vaccine mandate would likely be difficult to coordinate nationwide (Mello et al., 2020).
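The concern about 49% uptake can be made concrete with the textbook herd-immunity threshold, 1 - 1/R0. The formula is standard epidemiology rather than something from the article, and the R0 values below are illustrative assumptions, since estimates for SARS-CoV-2 vary:

```python
def herd_immunity_threshold(r0: float) -> float:
    """Classic threshold: fraction immune needed so each case infects fewer than one other."""
    return 1 - 1 / r0

planned_uptake = 0.49  # poll figure cited above (Mello et al., 2020)
for r0 in (2.0, 2.5, 3.0):  # assumed reproduction numbers, for illustration only
    threshold = herd_immunity_threshold(r0)
    shortfall = threshold - planned_uptake
    print(f"R0={r0}: need {threshold:.0%} immune; 49% uptake falls short by {shortfall:.0%}")
```

Even this crude view understates the gap, since uptake only translates into immunity in proportion to the vaccine's efficacy.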
Conclusion

Despite the impressive scientific advances made in the last two centuries, there are still significant economic barriers to vaccine development. Developing a vaccine is a cost-intensive process, and vaccines have only a 6% chance of making it from the preclinical phase to market (Pronker et al., 2013). Although estimates vary widely, the average vaccine costs between $200 and $500 million and takes an average of 10 years to develop. Manufacturing costs are also high: it can cost up to $700 million to commission and equip a manufacturing facility (Kis et al., 2018). Because of these high costs and the high failure rate, there are only five companies producing vaccines within the United States (Institute of Medicine, 2004). Market incentives for vaccine development are minimal, since vaccine development is such an expensive and risky process, offering little gain even for successful vaccines.
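Those figures imply a sobering expected cost per licensed vaccine. The back-of-the-envelope calculation below follows directly from the 6% success rate and the cost range quoted above; it is deliberately naive, ignoring when in development failures occur (early failures are far cheaper than late ones):

```python
success_rate = 0.06                  # preclinical-to-market probability (Pronker et al., 2013)
cost_range = (200e6, 500e6)          # per-candidate development cost in USD, as quoted above

candidates_needed = 1 / success_rate # expected candidates per licensed vaccine (~16.7)
print(f"Expected candidates per success: {candidates_needed:.1f}")
for cost in cost_range:
    # Crude upper bound: assumes every failed candidate incurs the full development cost.
    total = candidates_needed * cost
    print(f"At ${cost / 1e6:.0f}M per candidate: ~${total / 1e9:.1f}B per licensed vaccine")
```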
Additionally, although vaccines are hailed as the most effective public health intervention in history, portions of the world's population remain unvaccinated or under-vaccinated. The primary reason for under-vaccination is a lack of vaccine equity – not all people have equal access to vaccines. Like most health issues, being under-vaccinated is associated with the social determinants of health. Most notably, "parental socioeconomic status, number of years in education and/or ethnicity" correlate with the likelihood of a person being fully vaccinated (Boyce et al., 2019). Even wealthy countries have under-vaccinated populations, partially because many countries do not collect enough data on vaccine administration to identify which populations lack access to vaccines. In order to address vaccine inequity, countries and global health initiatives must address inequity at large, which is no easy task. There is no singular solution to the issue of vaccine inequity: "in some countries it may be necessary to develop policies, in others to adapt services, in others to develop systems to analyze and disaggregate data....Addressing inequities is not a one-off action, it is a shift in conceptualizing how services are delivered and how the goals and targets are set" (Boyce et al., 2019). A less common reason for under-vaccination is anti-vaccine (anti-vaxx) sentiment. Anti-vaxx sentiment began growing in the US in the 1980s but vastly expanded in 1998, when former gastroenterologist Andrew Wakefield published a study suggesting that the MMR vaccine caused autism in children. The study was riddled with ethical violations and conflicts of interest, namely that Wakefield was exclusively testing children who were already experiencing neurological issues. Additionally, Wakefield was hoping to market a new measles vaccine of his own and was funded by a lawyer who defended parents suing vaccine manufacturers (Rao & Andrade, 2011; Dyer, 2010). Wakefield has since had his medical license revoked but maintains a small but ardent following of anti-vaxxers. Today, wealthier countries tend to harbor more anti-vaxx sentiment. According to a study by Wellcome, a British research charity, 95% of those surveyed in low-income countries had a high level of trust in vaccine safety, while in high-income countries only 82% did ("Wellcome Global Monitor", 2018). Anti-vaxxers are a small but vocal and dangerous threat to herd immunity in high-income countries. Measles was considered
eliminated from the US in 2000, but voluntary vaccine refusal has caused a resurgence of the disease and several outbreaks (CDC, 2019). In 2019, measles cases worldwide increased 300% relative to the first three months of 2018 (CDC, 2019). One cannot help but wonder – if people refuse tried-and-true vaccines like MMR, will they voluntarily receive a newly made COVID-19 vaccine once one becomes available? Vaccines have undoubtedly transformed medical practice and quality of life worldwide. In a hypothetical world in which vaccines were never invented, smallpox alone would kill 5 million people every year, implying that between 1980 and 2018, around 150 to 200 million lives have been saved (UNICEF, 2020). Despite these successes, there are still barriers preventing vaccines from reaching their full potential: approximately 6.6 million children still die each year, and about half of these deaths are caused by infections, including pneumonia and diarrhea, which could be prevented by vaccination (Greenwood, 2014). Clearly, there is still progress to be made, and the rate of innovation and current research is a powerful force that continues to drive advancements in vaccine technology and improve health outcomes. The collective effort of scientists, policy advocates, and public health educators has saved millions of lives and continues to be crucial in combating the current COVID-19 pandemic.

References

Altamirano, J., Sarnquist, C., Behl, R., García-García, L., Ferreyra-Reyes, L., Leary, S., & Maldonado, Y. (2018). OPV Vaccination and Shedding Patterns in Mexican and US Children. Clinical Infectious Diseases, 67(Suppl 1), S85–S89. https://doi.org/10.1093/cid/ciy636

Amanat, F., & Krammer, F. (2020). SARS-CoV-2 Vaccines: Status Report. Immunity, 52(4), 583–589. https://doi.org/10.1016/j.immuni.2020.03.007

Armbruster, N., Jasny, E., & Petsch, B. (2019). Advances in RNA Vaccines for Preventive Indications: A Case Study of a Vaccine against Rabies. Vaccines, 7(4). https://doi.org/10.3390/vaccines7040132

Baicus, A. (2012). History of polio vaccination. World Journal of Virology, 1(4), 108–114. https://doi.org/10.5501/wjv.v1.i4.108

Bannerman, W. B. (1904). The Plague Research Laboratory of the Government of India, Parel, Bombay. Proceedings of the Royal Society of Edinburgh, 24, 113–144. https://doi.org/10.1017/S0370164600007781

Baxter, D. (2007). Active and passive immunity, vaccine types, excipients and licensing. Occupational Medicine, 57(8), 552–556. https://doi.org/10.1093/occmed/kqm110
Berche, P. (2012). Louis Pasteur, from crystals of life to vaccination. Clinical Microbiology and Infection, 18, 1–6. https://doi.org/10.1111/j.1469-0691.2012.03945.x

Blencowe, H., Lawn, J., Vandelaer, J., Roper, M., & Cousens, S. (2010). Tetanus toxoid immunization to reduce mortality from neonatal tetanus. International Journal of Epidemiology, 39(Suppl 1), i102–i109. https://doi.org/10.1093/ije/dyq027

Bosch, F., & Rosich, L. (2008). The Contributions of Paul Ehrlich to Pharmacology: A Tribute on the Occasion of the Centenary of His Nobel Prize. International Journal of Experimental and Clinical Pharmacology, 82(3), 171–179. https://doi.org/10.1159/000149583

Boyce, T., Gudorf, A., de Kat, C., Muscat, M., Butler, R., & Habersaat, K. B. (2019). Towards equity in immunisation. Eurosurveillance, 24(2). https://doi.org/10.2807/1560-7917.ES.2019.24.2.1800204

Boylston, A. (2012). The origins of inoculation. Journal of the Royal Society of Medicine, 105(7), 309–313. https://doi.org/10.1258/jrsm.2012.12k044

Butler, T. (2014). Plague history: Yersin's discovery of the causative bacterium in 1894 enabled, in the subsequent century, scientific progress in understanding the disease and the development of treatments and vaccines. Clinical Microbiology and Infection, 20(3), 202–209. https://doi.org/10.1111/1469-0691.12540

CDC. (2016). Smallpox. Centers for Disease Control, NCEZID, DHCPP.

CDC. (2018). Understanding How Vaccines Work.

CDC. (2019, April 26). CDC Media Statement: Measles cases in the U.S. are highest since measles was eliminated in 2000. https://www.cdc.gov/media/releases/2019/s0424-highest-measles-cases-since-elimination.html

Chakrabarti, P. (2010). "Living versus dead": The Pasteurian paradigm and imperial vaccine research. Bulletin of the History of Medicine, 84(3), 387–423. https://doi.org/10.1353/bhm.2010.0002

Chaplin, D. D. (2010). Overview of the immune response. Journal of Allergy and Clinical Immunology, 125(2), S3–S23. https://doi.org/10.1016/j.jaci.2009.12.980

Cintolo, J. A., Datta, J., Mathew, S. J., & Czerniecki, B. J. (2012). Dendritic cell-based vaccines: Barriers and opportunities. Future Oncology, 8(10), 1273–1299. https://doi.org/10.2217/fon.12.125

Colt, S. (2009). The Polio Crusade (No. 2) [TV Episode]. In American Experience. PBS.

Di Pasquale, A., Preiss, S., Tavares Da Silva, F., & Garçon, N. (2015). Vaccine Adjuvants: From 1920 to 2015 and Beyond. Vaccines, 3(2), 320–343. https://doi.org/10.3390/vaccines3020320

Dunn, P. M. (1996). Dr Edward Jenner (1749-1823) of Berkeley, and vaccination against smallpox. Archives of Disease in Childhood - Fetal and Neonatal Edition, 74(1), F77–F78. https://doi.org/10.1136/fn.74.1.F77

Dyer, C. (2010). Lancet retracts MMR paper after GMC finds Andrew Wakefield guilty of dishonesty. BMJ: British Medical Journal, 340(7741), 281. JSTOR.
Engvall, E., & Perlmann, P. (1972). Enzyme-Linked Immunosorbent Assay, Elisa: III. Quantitation of Specific Antibodies by Enzyme-Labeled Anti-Immunoglobulin in Antigen-Coated Tubes. The Journal of Immunology, 109(1), 129–135. FDA. (2019). Vaccine Product Approval Process. Folegatti, P. M., Ewer, K. J., Aley, P. K., Angus, B., Becker, S., Belij-Rammerstorfer, S., Bellamy, D., Bibi, S., Bittaye, M., Clutterbuck, E. A., Dold, C., Faust, S. N., Finn, A., Flaxman, A. L., Hallis, B., Jenkin, D., Lazarus, R., Makinson, R., Minassian, A. M., … et al. (2020). Safety and immunogenicity of the ChAdOx1 nCoV-19 vaccine against SARS-CoV-2: A preliminary report of a phase 1/2, single-blind, randomised controlled trial. The Lancet, 396(10249), 15–21. https://doi.org/10.1016/S01406736(20)31604-4 Garde, D. (2020, July 2). Trial of Moderna Covid-19 vaccine delayed, investigators say, but July start still possible. STAT. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source =web&cd=&ved=2ahUKEwjB7q-5x9jrAhWJU80KHTwGA_MQ FjAAegQIARAB&url=https%3A%2F%2Fwww.statnews. com%2F2020%2F07%2F02%2Ftrial-of-moderna-covid-19vaccine-delayed-investigators-say-but-july-start-still-possible% 2F&usg=AOvVaw1GMZRkwYlLbs7L32piqFW1 Gomez, P. L., Robinson, J. M., & Rogalewicz, J. A. (2013). 4— Vaccine manufacturing. In S. A. Plotkin, W. A. Orenstein, & P. A. Offit (Eds.), Vaccines (Sixth Edition) (pp. 44–57). W.B. Saunders. https://doi.org/10.1016/B978-1-4557-0090-5.00019-7 González, S., González-Rodríguez, A. P., López-Soto, A., Huergo-Zapico, L., López-Larrea, C., & Suárez-Álvarez, B. (2011). Conceptual aspects of self and nonself discrimination. Self/ Nonself, 2(1), 19–25. https://doi.org/10.4161/self.2.1.15094 Greenwood, B. (2014). The contribution of vaccination to global health: Past, present and future. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1645), 20130433. https://doi.org/10.1098/rstb.2013.0433 Griffiths, A. J., Miller, J. H., Suzuki, D. T., Lewontin, R. C., & Gelbart, W. M. (2000). Making recombinant DNA. An Introduction to Genetic Analysis. 7th Edition. https://www.ncbi.nlm.nih.gov/ books/NBK21881/ Gruber, M. F., & Marshall, V. B. (2018). Regulation and Testing of Vaccines. Plotkin’s Vaccines, 1547-1565.e2. https://doi. org/10.1016/B978-0-323-35761-6.00079-1 Guo, C., Manjili, M. H., Subjeck, J. R., Sarkar, D., Fisher, P. B., & Wang, X.-Y. (2013). Therapeutic Cancer Vaccines. In Advances in Cancer Research (Vol. 119, pp. 421–475). Elsevier. https://doi. org/10.1016/B978-0-12-407190-2.00007-1 Heaton, P. (2020). The Covid-19 Vaccine-Development Multiverse. The New England Journal of Medicine. https://doi. org/10.1056/NEJMe2025111 Henderson, D. A. (2011). The eradication of smallpox – An overview of the past, present, and future. Vaccine, 29, D7–D9. https://doi.org/10.1016/j.vaccine.2011.06.080 HHS. (2020). Fact Sheet: Explaining Operation Warp Speed. U.S. Department of Health & Human Services. Ibarrondo, F. (2020). Rapid Decay of Anti–SARS-CoV-2 Antibodies in Persons with Mild Covid-19. The New England Journal of Medicine. https://doi.org/10.1056/NEJMc2025179
Institute of Medicine. (2004). Financing Vaccines in the 21st Century: Assuring Access and Availability. The National Academies Press. https://doi.org/10.17226/10782

Jackson, L. A., Anderson, E. J., Rouphael, N. G., Roberts, P. C., Makhene, M., Coler, R. N., McCullough, M. P., Chappell, J. D., Denison, M. R., Stevens, L. J., Pruijssers, A. J., McDermott, A., & et al. (2020). An mRNA Vaccine against SARS-CoV-2 – Preliminary Report. The New England Journal of Medicine. https://doi.org/10.1056/NEJMoa2022483

Jiang, D.-B., Sun, L.-J., Cheng, L.-F., Zhang, J.-P., Xiao, S.-B., Sun, Y.-J., Yang, S.-Y., Wang, J., Zhang, F.-L., & Yang, K. (2017). Recombinant DNA vaccine of Hantavirus Gn and LAMP1 induced long-term immune protection in mice. Antiviral Research, 138, 32–39. https://doi.org/10.1016/j.antiviral.2016.12.001

Kaufmann, S. H. E. (2017). Remembering Emil von Behring: From Tetanus Treatment to Antibody Cooperation with Phagocytes. American Society for Microbiology, 8(1), 1–6. https://doi.org/10.1128/mBio.00117-17

Kendrick, P. L. (1942). Use of Alum-Treated Pertussis Vaccine, and of Alum-Precipitated Combined Pertussis Vaccine and Diphtheria Toxoid, for Active Immunization. American Journal of Public Health and the Nation's Health, 32(6), 615–626. https://doi.org/10.2105/ajph.32.6.615

KFF. (2020, April 28). The U.S. Government and Global Polio Efforts. KFF. https://www.kff.org/global-health-policy/fact-sheet/the-u-s-government-and-global-polio-efforts/

Kis, Z., Shattock, R., Shah, N., & Kontoravdi, C. (2018). Emerging Technologies for Low-Cost, Rapid Vaccine Manufacture. Biotechnology Journal, 1800376. https://doi.org/10.1002/biot.201800376

Knittelfelder, R., Riemer, A. B., & Jensen-Jarolim, E. (2009). Mimotope vaccination – from allergy to cancer. Expert Opinion on Biological Therapy, 9(4), 493–506. https://doi.org/10.1517/14712590902870386

Kosten, T. R. (2005). Future of anti-addiction vaccines. Studies in Health Technology and Informatics, 118, 177–185.

Linhart, B., & Valenta, R. (2012). Vaccines for allergy. Current Opinion in Immunology, 24(3), 354–360. https://doi.org/10.1016/j.coi.2012.03.006

Mallory, M. L., Lindesmith, L. C., & Baric, R. S. (2018). Vaccination-induced herd immunity: Successes and challenges. Journal of Allergy and Clinical Immunology, 142(1), 64–66. https://doi.org/10.1016/j.jaci.2018.05.007

Marathe, S. A., Lahiri, A., Negi, V. D., & Chakravortty, D. (2012). Typhoid fever & vaccine development: A partially answered question. The Indian Journal of Medical Research, 135, 161–169.

Marshall, V., & Baylor, N. W. (2011). Food and Drug Administration Regulation and Evaluation of Vaccines. Pediatrics, 127(Supplement), S23–S30. https://doi.org/10.1542/peds.2010-1722E

McNamara, M. A., Nair, S. K., & Holl, E. K. (2015). RNA-Based Vaccines in Cancer Immunotherapy. Journal of Immunology Research, 2015. https://doi.org/10.1155/2015/794528

Mello, M., Silverman, R., & Omer, S. (2020). Ensuring Uptake of Vaccines against SARS-CoV-2. The New England Journal of Medicine. https://doi.org/10.1056/NEJMp2020926
Metz, B., van den Dobbelsteen, G., van Els, C., van der Gun, J., Levels, L., van der Pol, L., Rots, N., & Kersten, G. (2009). Quality-control issues and approaches in vaccine development. Expert Review of Vaccines, 8(2), 227–238. https://doi.org/10.1586/14760584.8.2.227

Mitragotri, S. (2005). Immunization without needles. Nature Reviews Immunology, 5(12), 905–916. https://doi.org/10.1038/nri1728

Mnookin, S. (2012). The Panic Virus: The True Story Behind the Vaccine-Autism Controversy. Simon & Schuster.

National Vaccine Advisory Committee. (1997). United States Vaccine Research: A Delicate Fabric of Public and Private Collaboration. Pediatrics, 100(6), 1015–1020. https://doi.org/10.1542/peds.100.6.1015

Offit, P. (2007). The Cutter Incident. Yale University Press.

Padron-Regalado, E. (2020). Vaccines for SARS-CoV-2: Lessons from Other Coronavirus Strains. Infectious Diseases and Therapy, 9(2), 255–274. https://doi.org/10.1007/s40121-020-00300-x

Paoletti, E., Lipinskas, B. R., Samsonoff, C., Mercer, S., & Panicali, D. (1984). Construction of live vaccines using genetically engineered poxviruses: Biological activity of vaccinia virus recombinants expressing the hepatitis B virus surface antigen and the herpes simplex virus glycoprotein D. Proceedings of the National Academy of Sciences of the United States of America, 81(1), 193–197.

Pearson, F. E., Tullett, K. M., Leal-Rojas, I. M., Haigh, O. L., Masterman, K., Walpole, C., Bridgeman, J. S., McLaren, J. E., Ladell, K., Miners, K., Llewellyn-Lacey, S., Price, D. A., Tunger, A., Schmitz, M., Miles, J. J., Lahoud, M. H., & Radford, K. J. (2020). Human CLEC9A antibodies deliver Wilms' tumor 1 (WT1) antigen to CD141+ dendritic cells to activate naïve and memory WT1-specific CD8+ T cells. Clinical & Translational Immunology, 9(6). https://doi.org/10.1002/cti2.1141
Pronker, E. S., Weenen, T. C., Commandeur, H., Claassen, E. H. J. H. M., & Osterhaus, A. D. M. E. (2013). Risk in Vaccine Research and Development Quantified. PLoS ONE, 8(3), e57755. https://doi.org/10.1371/journal.pone.0057755

Raj, N., Fernandes, S., Charyulu, N. R., Dubey, A., G. S., R., & Hebbar, S. (2019). Postmarket surveillance: A review on key aspects and measures on the effective functioning in the context of the United Kingdom and Canada. Therapeutic Advances in Drug Safety, 10. https://doi.org/10.1177/2042098619865413

Rao, T. S. S., & Andrade, C. (2011). The MMR vaccine and autism: Sensation, refutation, retraction, and fraud. Indian Journal of Psychiatry, 53(2), 95–96. https://doi.org/10.4103/0019-5545.82529

Riedel, S. (2005). Edward Jenner and the history of smallpox and vaccination. Baylor University Medical Center Proceedings, 18(1), 21–25. https://doi.org/10.1080/08998280.2005.11928028

Sahastrabuddhe, S., & Saluja, T. (2019). Overview of the Typhoid Conjugate Vaccine Pipeline: Current Status and Future Plans. Clinical Infectious Diseases, 68(Supplement_1), S22–S26. https://doi.org/10.1093/cid/ciy884

Shapiro-Shapin, C. G. (2010). Pearl Kendrick, Grace Eldering, and the Pertussis Vaccine. Emerging Infectious Diseases, 16(8), 1273–1278. https://doi.org/10.3201/eid1608.100288
Shen, X., Orson, F. M., & Kosten, T. R. (2011). Anti-addiction vaccines. F1000 Medicine Reports, 3. https://doi.org/10.3410/M3-20

Smith, K. A. (2011). Edward Jenner and the Small Pox Vaccine. Frontiers in Immunology, 2. https://doi.org/10.3389/fimmu.2011.00021

Syed, K. A., Saluja, T., Cho, H., Hsiao, A., Shaikh, H., Wartel, T. A., Mogasale, V., Lynch, J., Kim, J. H., Excler, J.-L., & Sahastrabuddhe, S. (2020). Review on the Recent Advances on Typhoid Vaccine Development and Challenges Ahead. Clinical Infectious Diseases, 71(Supplement_2), S141–S150. https://doi.org/10.1093/cid/ciaa504

UNICEF. (2020). Small Pox Deaths. United Nations Children's Fund.

Verbeke, R., Lentacker, I., De Smedt, S. C., & Dewitte, H. (2019). Three decades of messenger RNA vaccine development. Nano Today, 28, 100766. https://doi.org/10.1016/j.nantod.2019.100766

Wellcome Global Monitor 2018 | Reports | Wellcome. (n.d.). Retrieved August 17, 2020, from https://wellcome.ac.uk/reports/wellcome-global-monitor/2018

WHO. (2016). Human Challenge Trials for Vaccine Development: Regulatory Considerations. Expert Committee on Biological Standardization.

WHO. (2020). Immunization Coverage.

Winau, F., & Winau, R. (2002). Emil von Behring and serum therapy. Microbes and Infection, 4(2), 185–188. https://doi.org/10.1016/S1286-4579(01)01526-X

Yadav, D. K., Yadav, N., & Khurana, S. M. P. (2014). Chapter 26 – Vaccines: Present Status and Applications. In A. S. Verma & A. Singh (Eds.), Animal Biotechnology (pp. 491–508). Academic Press. https://doi.org/10.1016/B978-0-12-416002-6.00026-2
The Genetic Engineering Revolution
STAFF WRITERS: BRYN WILLIAMS '23, DEV KAPADIA '23, SAI RAYASAM (WAUKEE HIGH SCHOOL JUNIOR), SUDHARSAN BALASUBRAMANI '22 BOARD WRITER: SAM NEFF '21 Cover Image: With modern genetic engineering techniques, it has become possible to rewrite the genetic code – restoring mutant genes to a fully functional state. Source: Needpix, Public Domain
Introduction: What is Genetic Engineering?

Ever since its origin in 1973, genetic engineering has occupied a prominent space in the scientific field (Encyclopedia Britannica, 2020). Genetic engineering centers around the use of recombinant DNA (rDNA) – DNA molecules from two species that are inserted into a host organism – to artificially alter an organism's genetic material (NIH, 2020; Encyclopedia Britannica, 2020). Recombinant DNA technology involves the use of restriction enzymes that cut DNA at certain specified sequences (DDC, 2020). The excised gene is then placed in a vector – typically a virus, yeast cell, or circular piece of bacterial DNA called a plasmid (Encyclopedia Britannica, 2020; DDC, 2020). Once the gene is inserted in the vector, it is placed into bacteria or eukaryotic cells where it is multiplied, resulting in large numbers of that gene which can be used in
multiple ways (DDC, 2020). Each human cell contains approximately six feet of genetic material, and rDNA technology allows scientists to scan this large amount of DNA and identify, isolate, and manipulate specific sequences (Encyclopedia Britannica, 2020). The ability to alter genes opens doors in multiple scientific fields, solving a variety of problems. Foods can be altered to contain more nutrients, insulin can be mass produced, and new cures for devastating genetic diseases can be explored (DDC, 2020). The possibilities resulting from genetic engineering are vast and will continue to impact people’s everyday lives as new applications are discovered and ethical debates arise around issues like the genetic alteration of human DNA.
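As a concrete illustration of the cutting step described above, the sketch below digests an invented DNA string at EcoRI sites. EcoRI's recognition sequence, GAATTC, is real, and the enzyme cuts between the G and the first A, leaving "sticky" AATT overhangs that let a fragment anneal into a plasmid cut by the same enzyme; the sequence itself is made up for the example:

```python
# Toy restriction digest: cut a DNA sequence wherever EcoRI's site occurs.
# EcoRI recognizes GAATTC and cleaves between G and A on each strand,
# leaving complementary single-stranded AATT overhangs ("sticky ends").
SITE = "GAATTC"
CUT_OFFSET = 1  # EcoRI cuts after the first base of its site (G / AATTC)

def digest(seq: str) -> list[str]:
    fragments, start = [], 0
    pos = seq.find(SITE)
    while pos != -1:
        fragments.append(seq[start:pos + CUT_OFFSET])
        start = pos + CUT_OFFSET
        pos = seq.find(SITE, pos + 1)
    fragments.append(seq[start:])
    return fragments

# Invented sequence containing two EcoRI sites.
dna = "ATCGGAATTCTTAGGCCGAATTCAAT"
print(digest(dna))  # ['ATCGG', 'AATTCTTAGGCCG', 'AATTCAAT']
```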
A Brief History of the Genome: From DNA Discovery to Sanger Sequencing

The scientific community's understanding of DNA has evolved dramatically in the past century. This now culturally commonplace biological construct, which holds the genetic information encoding every feature of living organisms, was a mystery molecule just 100 years ago. Gregor Mendel's work led to the first notion that there was some sort of genetic material. His painstaking efforts at cross-breeding pea plants and noting the characteristics of offspring over multiple generations demonstrated that parent plants pass visible traits on to their offspring, but not to equal extents (i.e. some traits are dominantly expressed and others are recessive – their expression masked by the dominant ones). Mendel's work, which would produce the modern field of genetics, was scarcely known until it was replicated and verified by other scientists at the dawn of the 20th century ("Gregor Mendel," n.d.). This included the independent efforts of scientists Erich von Tschermak, Hugo de Vries, and Carl Correns ("Hugo de Vries," n.d.). Mendel's life of relative solitude surely helped him focus on his painstaking, concentrated efforts, but it unfortunately did not facilitate the publicization of his work. After the rediscovery of Mendel's laws, the field of genetics proceeded at a rapid pace. The first major question that had to be answered was this: what was the identity of the genetic material, passed on from parents to offspring over successive generations, that is capable of determining their physical traits? In the 20th century, the locus of discovery shifted from Europe to the United States. In the aftermath of the Spanish Flu, which battered an already battle-weary global population at the conclusion of World War I, scientists were focused on the development of vaccines and the study of epidemiology. After all, it was the third deadliest pandemic in human history, after the Black Plague and the smallpox pandemic of the 16th century (LePan, 2020).
Working in the spirit of the times, the English microbiologist Frederick Griffith succeeded, through a series of experiments, in making key discoveries about the deadly bacterium Streptococcus pneumoniae. Streptococcus infection is now prevented quite easily by vaccination (although it still affects low-income countries without sufficient access to vaccines), but it was not in Griffith's time ("Pneumococcal Disease," n.d.). Griffith's landmark experiments, published in 1928, demonstrated the principle of bacterial transformation – that virulent but dead (heat-killed) bacteria could pass on their virulence to living, non-virulent bacteria. This information was valuable from an epidemiological perspective as well as to the field of genetics: if dead bacteria could still transform the living, then there must be some material (the "transforming principle") that they pass on (O'Connor, 2008). Determining the molecular nature of this "transforming principle" followed in a series of scientific advances. The molecule DNA, or deoxyribonucleic acid, was actually known before Mendel conducted his pea plant experiments; it was discovered in 1869 by the Swiss scientist Friedrich Miescher (Pray, 2008). But it was a team of researchers in New York – Oswald Avery, Colin MacLeod, and Maclyn McCarty – who finally showed, in 1944, that DNA was the "transforming principle" that Griffith had proposed. They built upon Griffith's work by taking the extract from the heat-killed bacteria and subjecting it to various enzymes – a protease and two nucleases (an RNase and a DNase) – that dismantled molecules of protein, RNA, and DNA, respectively. In finding that the capacity of the heat-killed bacteria to transform non-virulent bacteria was annulled by the DNase, the scientists demonstrated that DNA was the transforming principle (O'Connor, 2008). A subsequent experiment by Alfred Hershey and Martha Chase (1952) would confirm that DNA is indeed the genetic material and not protein. Hershey and Chase studied the dynamics of viral infection of bacterial cells, proving that the viral coat, made of protein, was shed in the process of infection, and that the internal payload of the virus, DNA, entered bacterial cells and acted to produce new copies of the virus. The experiment is well known as the "Waring blender experiment" because of the blender used to shake empty viral coats
Figure 1: The Griffith Experiment - Frederick Griffith demonstrated in 1928 that the genetic material of dead, heat-killed bacteria could transform living bacteria to be virulent (disease-causing). The discovery that genetic material could be transferred between cells and alter their appearance or activity was a significant one, and it held implications for later theories of the genome. Source: Wikimedia Commons
"Mendel’s work, which would produce the modern field of genetics, was scarcely known until it was replicated and verified by other scientists at the dawn of the 20th century."
275
Figure 2: The structure of DNA. The DNA double helix is made up of two rope-like strands of phosphate-sugar repeats (referred to as the backbone), bridged by pairs of nucleotide bases. One base is covalently bound to each strand, and the pairs are joined in the center of the helix by hydrogen bonds. Source: Wikimedia Commons
"Precisely, a genome is the complete genetic material and sum of DNA found in an organism. The term is normally applied to the genetic material found in nuclear DNA, but it can also be applied to the organelles that have their own DNA."
from the surface of the cells (O’Connor, 2008).
The Modern Landscape

This all leads to the well-known work of James Watson and Francis Crick, who spelled out the structure of DNA at Cambridge University. The picture of a double helix – a twisted ladder made up of two long strands (the DNA backbone of alternating pentose sugar and phosphate groups) linked by bridges (of deoxyribonucleotides connected by hydrogen bonds) – was set forth by these two scientists and has stood the test of time (Pray, 2008). But the theory of Watson and Crick would not have emerged without the help of other scientists – lab director Maurice Wilkins and Rosalind Franklin, who painstakingly imaged the x-ray diffraction structure of DNA and provided the experimental evidence that informed Watson and Crick's model. All her work with x-rays probably cost Franklin her life (she died of ovarian cancer at age 37 in 1958), and she didn't receive much credit for her essential contributions until long after her death ("Rosalind Franklin," n.d.). It was Watson, Crick, and Wilkins who would get the Nobel Prize, but, like Mendel, she is now rightfully recognized as a landmark figure in the history of genetics. With knowledge of the DNA structure in hand, scientists sought to determine how the "transforming principle" operated – how its internal modules, genes, could be decoded and translated to produce changes within cells. Precisely defined, a genome is the complete genetic material and sum of DNA found in an organism. The term is normally applied to the genetic material found in nuclear DNA, but it can also be applied to organelles that have their own DNA (like mitochondria or chloroplasts). Furthermore, the genome also comprises non-chromosomal genetic elements, such as plasmids, viruses, and transposable elements (DNA sequences that can change their position within a genome) (Gao et al., 2015). The study of the global properties of genomes is known as genomics, and for decades geneticists have strived to "sequence" entire genomes – that is, to determine the complete order of their genes and how those genes affect the organism. This goal is not a recent one, and the history of genome sequencing is quite compelling ("All about Genes," 2009). The first use of the word genome was in 1920, when Hans Winkler, a professor of botany at the University of Hamburg, created the word by combining the German root word gene with the Greek suffix -ome (which means body). For the
subsequent five decades, the word was used sporadically, and genomics was not a popular field of science (Gao et al., 2015). That all changed in 1965, when Robert Holley and colleagues produced the first complete nucleic acid sequence: that of an alanine tRNA from Saccharomyces cerevisiae. They did this by using RNase enzymes to cut the tRNA at specific sites and analyzing the resulting fragments. However, mapping even this small molecule took years, because the sequencing techniques available at the time were slow and measured little more than nucleotide composition. Furthermore, the method could only sequence small, pure RNA species, not eukaryotic DNA molecules. The approach therefore held limited promise, and a far more powerful method was needed. But few scientists were willing to specialize in sequencing techniques, so DNA sequencing research stalled for the next ten years (Heather and Chain, 2016). In 1975, Frederick Sanger and Alan Coulson came up with the "plus and minus" method of DNA sequencing. This method worked by creating a series of DNA molecules that varied in length, which could be separated using gel electrophoresis. By reading the positions of the resulting bands on the gel, the experimenter could infer the identity of the nucleotide at each position in the sequence. Not only did this allow for full genome sequencing, but it was also much faster than previous sequencing methods. The scientists then used the technique to deduce the DNA sequence of a bacteriophage called ΦX174, the first complete genome ever sequenced.
Figure 3: Chain-termination (Sanger) sequencing, the successor to the "plus and minus" method. This diagram outlines the steps by which ddNTP-terminated fragments are generated and analyzed in order to read out a sequence. Source: Wikimedia Commons
Although the technique was highly effective, a few problems still remained with the "plus and minus" method, so Sanger and Coulson set out to create a more accurate and rapid technique. They joined British colleague Steve Nicklen to develop a technique similar to the "plus and minus" method that uses dideoxynucleotide (ddNTP) chain-terminating inhibitors, chemical analogues of the deoxyribonucleotides that are the monomers of a DNA strand. These chain terminators are used because they stop the polymerization of DNA during replication. DNA strands are elongated in four separate reactions, each containing one of the four ddNTP bases. When the four reactions are run side by side on a gel and exposed to autoradiography, the sequence can be read directly from the film. With this method, Sanger's group sequenced human mitochondrial DNA, a landmark achievement that contributed to Sanger's second Nobel Prize. In honor of his prominent role in developing this technique, it is now called Sanger sequencing, and it is so rapid and effective that it is still used to this day (Jeffers, 1998; Heather and Chain, 2016).
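To make the gel readout concrete, here is a minimal sketch in Python (not from the original article) of how a sequence can be reconstructed from band positions in the four chain-termination reactions; the lane data below are hypothetical.

```python
# Minimal sketch of reading a Sanger sequencing gel.
# Each "lane" lists the fragment lengths observed in the reaction that
# terminates at one specific base (hypothetical data for illustration).
lanes = {
    "A": [1, 5],      # fragments ending in A at positions 1 and 5
    "C": [3],
    "G": [2, 6],
    "T": [4, 7],
}

def read_gel(lanes: dict[str, list[int]]) -> str:
    """Each fragment length corresponds to one position in the template;
    sorting all bands by length recovers the base at every position."""
    position_to_base = {}
    for base, lengths in lanes.items():
        for length in lengths:
            position_to_base[length] = base
    return "".join(position_to_base[p] for p in sorted(position_to_base))

print(read_gel(lanes))  # -> "AGCTAGT"
```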
Sanger and his techniques jump-started the next era of DNA sequencing. In 1990, the U.S. Department of Energy and the National Institutes of Health initiated the Human Genome Project (HGP), an international effort to fully sequence the human genome. The HGP utilized a multitude of DNA sequencing methods, including Sanger sequencing. The project was split into two phases: the shotgun phase and the finishing phase. In the shotgun phase, the scientists used an approach called the hierarchical shotgun method, which generated overlapping clones of DNA from individual human chromosomes. These DNA clones were then transferred to bacterial artificial chromosome vectors, which could be amplified using a technique called polymerase chain reaction (PCR). The fragments were then separated using gel electrophoresis, which allowed researchers to generate contigs (sets of overlapping DNA) on each of the 24 chromosomes (22 autosomes plus the X and Y chromosomes). After generating the sequence, there were still ambiguous regions of DNA that could not be resolved during the shotgun phase, so the scientists moved on to the finishing phase, which consisted of filling in those gaps with greater amplification. Ultimately, by the end of 2002, the full human genome had been sequenced to about 99% accuracy. The final form of the human genome consisted of around 2.85 billion nucleotides, and to this day the map of the human genome is constantly being refined (Chial, 2008).
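The contig-building step amounts to merging reads wherever the end of one overlaps the start of another. Below is a toy greedy version of that idea; real assemblers are far more sophisticated, and the reads here are invented for illustration.

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of `a` that matches a prefix of `b`."""
    for size in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:size]):
            return size
    return 0

def assemble(reads: list[str]) -> str:
    """Greedily merge the pair of reads with the largest overlap until
    no overlaps remain; the longest surviving contig is returned."""
    contigs = reads[:]
    while len(contigs) > 1:
        best = (0, 0, 1)  # (overlap length, index i, index j)
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    olen = overlap(a, b)
                    if olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:  # reads can no longer be joined
            break
        merged = contigs[i] + contigs[j][olen:]
        contigs = [c for k, c in enumerate(contigs) if k not in (i, j)]
        contigs.append(merged)
    return max(contigs, key=len)

# Hypothetical overlapping reads from one region of a chromosome
print(assemble(["ATTAGACCTG", "CCTGCCGGAA", "GCCGGAATAC"]))
# -> "ATTAGACCTGCCGGAATAC"
```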
Although we have fully mapped the human genome, genomics will keep making huge strides in the future. In fact, many new sequencing technologies, called next-generation sequencing (NGS), offer the possibility of sequencing an entire genome within a single day. There are two main categories of NGS: long-read sequencing (LRS), which allows for production of very
large reads of DNA, and short-read sequencing (SRS), which breaks DNA up into short fragments and amplifies them to read the full sequence (Johnson and Raza, 2018). In recent years, however, it has been found that SRS methods have major shortcomings, including difficulty in phasing alleles and trouble discriminating paralogous sequences. Ultimately, LRS, if done properly, can be much more effective, and the most famous LRS platform is Oxford Nanopore ("Genomeweb," 2019).
"In Nanopore sequencing, a very tiny hole, generally less than one nanometer in diameter, is embedded into a synthetic membrane using an electric beam. This creates an ionic current, so when nucleobases pass through the hole, the current is altered."
In Nanopore sequencing, a very tiny hole, generally less than one nanometer in diameter, is embedded in a synthetic membrane using an electron beam. A voltage applied across the membrane drives an ionic current through the hole, and when nucleobases pass through, the current is altered. Each nucleobase has a unique effect on the current, which can be measured to read out the whole sequence (Nanopore, 2020). Ultimately, DNA sequencing has come a long way from Sanger sequencing. Although Sanger sequencing is still used today, it will inevitably be replaced by NGS and LRS methods.
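As a toy illustration of the signal-decoding step (this is not Oxford Nanopore's actual basecalling algorithm, and the current levels are entirely made up), one can imagine classifying each measured current dip by its nearest reference level:

```python
# Toy nanopore decoder: map each measured current level (pA) to the base
# whose reference level it most closely matches. Reference values and
# measurements are invented; real basecallers use neural networks over
# k-mer-dependent signals rather than single-base lookups.
REFERENCE_LEVELS = {"A": 70.0, "C": 55.0, "G": 48.0, "T": 62.0}

def decode(currents: list[float]) -> str:
    return "".join(
        min(REFERENCE_LEVELS, key=lambda b: abs(REFERENCE_LEVELS[b] - i))
        for i in currents
    )

print(decode([69.2, 54.1, 63.0, 47.5]))  # -> "ACTG"
```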
History of Genetic Engineering: From Selective Breeding to CRISPR Gene Editing

Before the recent discoveries of modern gene editing, "genetic engineering" referred to the manipulation of genetic material through heredity and reproduction, and later through recombinant DNA technology. The notable developments in biotechnology described below have since altered our comprehension of nature and its applications in the public arena. One of the first developments in genetic engineering was the process of selective breeding. Though human ancestors did not possess any knowledge of modern-day genetics, they were still able to manipulate the genetic content of their food and livestock through selective breeding. Selective breeding, or artificial selection, is a technique in which parent organisms with desired characteristics are selected and bred together to combine those traits and propagate them to their progeny (Encyclopedia Britannica, 2020). This process is repeated over many generations until the desired characteristics dominate. Such characteristics are selected for their potential to increase food production, usefulness, appearance, pest resistance in plants, and more.
The first uses of artificial selection trace back over 32,000 years to East Asia, where humans practiced hunting and gathering (Zimmer, 2013). Wolves in China started to linger around human hunters to maintain a stable food source, effectively domesticating themselves. Scientists have found that wolves likely started transforming into dogs at this time, as hunters valued the tame and docile nature of some wolves. Over generations, humans have selectively bred for various traits, including hair length, color, body shape, and personality, leading to the array of domesticated dogs we see today. In addition, artificial selection has been used with plants and livestock, often to increase yield and feed mass populations. A major example is the evolution of corn. Corn began as a small wild grass known as teosinte, with very few rows of kernels (Doebley, 2013). Plants that grew larger or tasted better were selected in the growing process; cobs became larger over time, eventually taking on the form of modern maize (Doebley, 2013). Though there is a stark difference in the appearance of teosinte and modern corn, they differ by only five main genes (Doebley, 2013).

More recently, the term "genetic engineering" has shifted into the realm of recombinant DNA technology. In 1973, scientists Herbert Boyer and Stanley Cohen made a breakthrough in genetic engineering by discovering a way to express antibiotic resistance in a strain of E. coli cells that had not previously been resistant (Cohen et al., 1973). This process of recombinant DNA technology is still widely used today by engineers who wish to express certain traits within organisms. Boyer and Cohen took advantage of plasmids to accomplish this feat. Plasmids are small circular pieces of DNA that replicate independently from the host's chromosomal DNA; they are nonetheless capable of directing protein synthesis and are passed on to the next generation (Encyclopedia Britannica, 2020). Boyer and Cohen also used type II restriction endonucleases, enzymes that cleave DNA at a specific site, to cut open the plasmid and insert a piece of foreign DNA coding for antibiotic resistance at that site (Cohen et al., 1973). The pieces of DNA were then joined with ligase, an enzyme that recombines DNA, to complete the construct used for transformation. When the transformed cells were cloned and the new gene expressed, they were resistant to the antibiotic.
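The cut-and-paste logic of Boyer and Cohen's experiment can be sketched as simple string operations; the plasmid and insert sequences below are invented, and real cloning additionally involves sticky ends, ligation conditions, and antibiotic selection.

```python
# Schematic recombinant-DNA workflow as string manipulation.
# EcoRI recognizes GAATTC and cuts between the G and the first A;
# plasmid and insert sequences here are invented for illustration.
SITE, CUT_OFFSET = "GAATTC", 1

def digest_and_insert(plasmid: str, insert: str) -> str:
    """Cut the plasmid at its first EcoRI site and splice in the insert."""
    pos = plasmid.find(SITE)
    if pos == -1:
        raise ValueError("no EcoRI site in plasmid")
    cut = pos + CUT_OFFSET
    return plasmid[:cut] + insert + plasmid[cut:]

plasmid = "ATGCGAATTCGGCTA"      # carries one EcoRI site
resistance_gene = "AAACCCGGG"    # stand-in for an antibiotic-resistance gene
print(digest_and_insert(plasmid, resistance_gene))
# -> "ATGCGAAACCCGGGAATTCGGCTA"
```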
Figure 4: A summarizing timeline of the discovery of three major gene editing tools. Created by author Bryn Williams, based on sources Cassandri et al., 2017; Chandrasegaran and Carroll, 2016
Since its initial discovery, the recombinant DNA method of genetic engineering has been widely used in food production and also has major applications in the creation of medications. The first applications in medicine included synthesizing human insulin and growth hormone (Fried, 2019). The technology often serves to ameliorate specific genetic diseases and has also inspired similar technologies using viral vectors, in which a virus is used to introduce foreign DNA into a host cell. The first targeted genetic mutations were made in yeast and mice in the 1970s and 1980s. These edits were made using homologous recombination, which was more precise than previous techniques but inefficient and not readily applicable to other organisms (Carroll, 2017). By examining data from DNA damage and repair research, scientists discovered that genetic editing could be made more efficient by making a targeted DNA double-stranded break (DSB) (Carroll, 2017). Several nucleases are now known to make these DSBs: zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the clustered regularly interspaced short palindromic repeats (CRISPR) system (Carroll, 2017).
A Vision for the Future

Before the discovery of CRISPR, genome editing technology revolved around engineering existing DNA endonucleases to cut a unique sequence of interest. However, most restriction enzymes could not be adapted, since they have very short recognition sequences. Even when one was engineered, there was usually a high chance that it would cut only weakly, since the engineering changed the enzyme's conformation to the point where it no longer functioned normally. Type IIS restriction enzymes, however, are an exception: they have separate cleaving and DNA-recognition domains. The enzyme FokI is a major example of a Type IIS restriction enzyme (Wah et al., 1998). In addition to being separate, FokI's cleaving domain is a dimer that has no sequence specificity and can readily function on
its own; all the protein needs is two dimerized halves (Wah et al., 1998). Scientists therefore need only take FokI's cleaving domain and fuse it to a binding domain of their choice. Zinc finger nucleases (ZFNs) are one application of this technique; zinc finger proteins were first characterized in the 1980s. Of the various DNA-binding domains in the human genome, zinc fingers are the most common, with nearly 1,500 human genes expressing the motif (Kim and Kim, 2014). This high frequency allows scientists to readily find and engineer zinc fingers of their choice. Structurally, zinc fingers adopt a simple ββα fold and bind to the major groove of DNA, and their crystal structure is well understood. Each fold is held together by a single zinc ion, hence the name (Carroll, 2011). Based on the position of certain residues on the α-helix of the zinc finger, scientists can engineer zinc finger proteins that will bind to a site of their choice. Each zinc finger binds a three-base-pair stretch of DNA (Carroll, 2011). To create a ZFN, three to four of these engineered zinc fingers are fused to half of the heterodimeric FokI cleaving domain. Supplying only half of the dimer is advantageous because there is less chance for off-target cleavage. Additionally, since FokI must dimerize to cut, the design forces more specificity: the second half of the dimer has to be carried by three or four more zinc fingers bound on the complementary strand of the DNA (Carroll, 2011). In total, the dimerization allows for up to 24 base pairs of specificity, making ZFNs a strong tool for gene editing.
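That 24-base-pair figure follows from simple counting (assuming four fingers per half-nuclease, the upper end of the three-to-four range above):

```latex
% Recognition length of a ZFN pair: two FokI half-nucleases,
% four zinc fingers each, three base pairs per finger
2 \times 4 \times 3\ \text{bp} = 24\ \text{bp}
```

Since a specific 24-bp site is expected to occur by chance only about once every $4^{24} \approx 2.8 \times 10^{14}$ bases, vastly more than the roughly $3 \times 10^9$ bases of the human genome, a well-designed ZFN pair can in principle target a unique genomic site.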
"The first targeted genetic mutations occurred in yeast and mice in the 1970s and 1980s. The edits were made using homologs which was more precise than previous techniques, but it was inefficient and not applicable to other organisms."
The second, more widely used class of Type IIS-based nucleases is transcription activator-like effector nucleases (TALENs), first developed in 2009. TALENs employ TALE proteins, which are commonly found in plant-pathogenic bacteria, where they activate plant genes to support the virulence of the pathogen (Bogdanove et al., 2010). Similar to zinc fingers, TALE proteins are a class of binding protein with highly predictable specificity, binding in the major groove of DNA, and a very simple code, making them easy to mutate and adapt. Structurally, TALE proteins are made up of repeating
34-amino-acid domains that are identical except at residues 12 and 13, known as repeat-variable diresidues (RVDs); these two residues are responsible for the binding specificity of the TALE protein (Joung and Sander, 2012). However, unlike zinc fingers, TALE repeats bind to just one base pair instead of three, allowing for very high specificity (up to 30 to 40 base pairs). An array of these TALE repeats is attached to a nuclease, such as half of the FokI dimer, and a second array carrying the other half of FokI is used on the complementary strand to once again decrease the number of off-target DSBs. ZFNs and TALENs remain the most specific gene-editing tools; however, this technology requires the creation of engineered proteins (Mussolino and Cathomen, 2012). Therefore, the quick and reliable CRISPR is often employed in place of TALENs.

Though ZFNs, TALENs, and meganucleases all have their individual pros and cons, the most commonly used genetic editing technique by far is the CRISPR-Cas9 system. First demonstrated as a programmable editing tool in 2012 by researchers Jennifer Doudna and Emmanuelle Charpentier, the application of the Cas9 enzyme to CRISPR is considered one of the great discoveries of recent decades. Generally, CRISPR-Cas9 is used to edit the genetic material of a target cell, and because it uses "guide" RNAs (gRNAs) to find the target sequence, CRISPR-Cas9 is often far simpler, more efficient, cheaper, and more versatile than its gene-editing counterparts (Chaudhary et al., 2018). The main components of the CRISPR-Cas9 system are the Cas9 enzyme, which cuts the DNA strands to allow for gene addition or deletion, and the gRNA, which guides the Cas9 enzyme to the target sequence. The CRISPR-Cas9 system was originally observed in bacteria, where it protects against invading viral DNA: when a virus enters a bacterium, the bacterium uses Cas enzymes to cut up parts of the viral DNA, rendering it harmless, while saving fragments to incorporate into its own genome so that it can "remember" the virus and be better protected from future invasion (Ratner et al., 2016). The general editing process proceeds as follows: the gRNA binds to the target sequence, the Cas9 enzyme cuts both strands of the DNA, and the DNA repairs itself while incorporating the new piece of DNA. The gRNA consists of two types of RNA: CRISPR RNA (crRNA) and trans-activating CRISPR RNA (tracrRNA). The crRNA is the RNA sequence that researchers use to identify the target sequence. The tracrRNA, on the other hand, provides the structural scaffold of the gRNA that the Cas9 enzyme binds to
when trying to attach to the target sequence (Ratner et al., 2016). Another vital component in the CRISPR-Cas9 process is the Protospacer Adjacent Motif (PAM), a short DNA sequence that immediately follows the target sequence recognized by the Cas9 enzyme. The PAM is needed because the gRNA-Cas9 complex samples many sequences resembling the target; requiring an adjacent PAM helps ensure that Cas9 cuts true targets rather than, for example, the matching spacer stored in the bacterium's own CRISPR locus, which lacks a PAM (Rein et al., 2018).

There are two primary uses for the CRISPR-Cas9 system: gene knockouts and gene knockins. In gene knockout protocols, the Cas9 complex binds to the gene and cleaves out the desired target sequence (or enough of it to disrupt expression), and the cell is left to repair the cleaved site. Gene knockin technology, on the other hand, either substitutes base pairs to produce the desired protein or cleaves the site and adds another sequence that the cell then incorporates into the genome (Rein et al., 2018). Once the target sequence has been identified by the gRNA and the Cas9 enzyme is bound, Cas9 cleaves the DNA and the cell has two choices of DNA repair: non-homologous end joining (NHEJ) or homology-directed repair (HDR). In NHEJ-mediated repair, the cell immediately rejoins the broken ends of DNA without the need for a homologous template. This error-prone process can introduce small mutations that change one or many of the proteins that the genetic sequence encodes; NHEJ is therefore often used when a gene is to be neutralized to test the effects on a system without it. In contrast, HDR uses homologous DNA sequences around the cleavage site as a template to make a more accurate repair. With HDR, the cell can incorporate a DNA template provided alongside the CRISPR-Cas9 system (Rein et al., 2018). There are several methods that researchers use to push cells toward HDR instead of NHEJ; the most common are introducing a mutation into the Cas9 protein so that only one of the two DNA strands is cleaved, or simply inhibiting the NHEJ pathway with a small-molecule inhibitor. In either scenario, the cell repairs the break via HDR, allowing for greater adoption of the intended mutation.
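As a concrete, heavily simplified illustration of target recognition, the sketch below scans a DNA string for 20-nucleotide sites followed by the NGG PAM used by the common S. pyogenes Cas9, scoring each site by its mismatches against the guide; the sequences and mismatch threshold are invented for illustration, and real target-finding tools model much more (both strands, bulges, position-weighted penalties).

```python
# Simplified Cas9 target scan: find 20-nt sites followed by an NGG PAM
# and score them by mismatches against the guide. Toy sequences only.
GUIDE = "GACGTTACCGGATTACCGTA"  # 20-nt guide (hypothetical)

def mismatches(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def find_sites(genome: str, guide: str, max_mm: int = 3):
    """Yield (position, site, mismatch count) for each PAM-adjacent site."""
    n = len(guide)
    for i in range(len(genome) - n - 2):
        site, pam = genome[i:i + n], genome[i + n:i + n + 3]
        if pam[1:] == "GG":          # NGG: any base, then two Gs
            mm = mismatches(site, guide)
            if mm <= max_mm:
                yield i, site, mm

genome = "TTGACGTTACCGGATTACCGTATGGAA"  # contains guide + "TGG" PAM
for pos, site, mm in find_sites(genome, GUIDE):
    print(pos, site, mm)  # zero mismatches -> the on-target site
```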
Therefore, many gene knockouts use NHEJ to incorporate their mutations, because for many the goal is simply to change the sequence so that the proteins produced are different and the gene's regular function is disrupted. In contrast, gene knockins must use HDR, because a specific substitution or addition is being made by the CRISPR-Cas9 system; if NHEJ repair were used, the resulting sequence and function would be different (Ratner et al., 2016).

With the recent advancements in CRISPR, many biotech companies are utilizing it for numerous applications, especially in medicine. One of the biggest companies employing CRISPR is CRISPR Therapeutics. Partnering with Vertex Pharmaceuticals, another contender in gene editing, CRISPR Therapeutics has developed a program to treat sickle cell disease. Sickle cell disease is an inherited blood disorder caused by mutations in the beta-globin gene, which lead to abnormal and deficient hemoglobin. As a result, red blood cells take on a sickle-like shape that can block small blood vessels, causing acute pain, organ damage, and vaso-occlusive crises (VOCs). CRISPR Therapeutics and Vertex Pharmaceuticals began their collaboration in 2015 and have developed a treatment called CTX001. CTX001 was first tested on two patients, and it works by using CRISPR to engineer the patients' hematopoietic cells to produce high levels of fetal hemoglobin (HbF), a form of hemoglobin present at birth that is eventually replaced by the adult form. The hope was that by elevating the amount of HbF, the patients could be relieved of their painful and debilitating crises (Vertex, 2019). Although the trial is still ongoing, the patients have already seen immense improvements: their hemoglobin levels have roughly doubled, and they were free of their VOCs. The trial will end around May 2022 to fully assess whether CTX001 can treat sickle cell disease (Vertex, 2020). CRISPR Therapeutics also reported licensing lipid nanoparticle technology from MIT to optimize its delivery processes (CRISPR Therapeutics Announces Exclusive License of Lipid Nanoparticle…, 2017). Furthermore, Sangamo Therapeutics, another gene editing company, and Pfizer Pharmaceuticals are currently in phase three clinical trials for a hemophilia A therapy that delivers genetically altered DNA by a recombinant viral vector ("Sangamo Therapeutics," 2017).
Another frontrunner in CRISPR is Editas Medicine. Partnering with Allergan, Editas Medicine is striving to treat LCA10, an inherited type of blindness. On March 4, 2020, the two companies treated the first patient in the clinical trial of AGN-151587. In this trial, physicians injected microscopic droplets carrying a hollowed-out virus engineered to deliver the genetic instructions for manufacturing the CRISPR editing machinery. The CRISPR system will cut out the genetic mutation that causes blindness, which should prevent retinal cell death and restore vital proteins. So far, the treatment has been used on only one patient, and the results will become apparent in the near future. The scientists in charge of this study believe the treatment is incredibly promising, and if it works, the two companies would officially have found a way to cure LCA10 (Terry, 2020).

Clearly, CRISPR is widely used for medical applications, but its application in agriculture is equally promising. Although a great deal of misinformation has been spread about GMOs, CRISPR can benefit the world's agriculture in many ways. According to an IBISWorld report, twenty-six countries were planting genetically modified crops across 185.1 million hectares of land in 2016, and another 31 countries were importing these biotech crops for food or feed use (Leach, 2019). In fact, Bayer, one of the world's largest life sciences companies, specializing in both healthcare and agriculture, has been dedicated to genetically modified plants for years. The biggest names in agricultural biotech include Caribou Biosciences, Bayer CropScience, and Monsanto. However, because agricultural biotech is younger than its medicinal counterpart, many smaller startup companies are taking their shot at CRISPR-modified agriculture, including Pairwise.
Figure 5: The above diagram depicts a simplified version of the CRISPR process. The CRISPR target sequence is identified and copied into a crRNA, along with the genetic material being incorporated (if this is a gene knockin), and transported with the tracrRNA. Together, the complex binds the plasmid until the target sequence is identified, and the Cas9 enzyme subsequently cleaves the target plasmid and incorporates the transported genetic material. Source: Wikimedia Commons
"Clearly CRISPR is widely used for medical applications, but its application in agriculture is equally promising."
281
Figure 6: CRISPR-based biotech companies are on the rise, but, just like any other industry, this one has its giants. This image shows the biggest players in the CRISPR industry spread throughout the world. Source: Flickr
"Recombinetics is another company that tried to introduce the benefits of CRISPR to the public while they used CRISPR to breed hornless cows. This was advantageous to farmers, as normally they have to mechanically remove the horns for safety reasons."
282
In 2018, Pairwise announced a $125 million investment from Monsanto for a project to use CRISPR to genetically modify many different agricultural products, ranging from sweeter strawberries with a longer shelf life to corn that can tolerate drought and flooding. The two companies are still laying out the plans for their five- to ten-year project, so the exact details on how CRISPR will be used are unavailable. They hope that once this huge project comes to an end and the benefits of CRISPR in agriculture are apparent, the public's perception of the technology will change (Brodwin, 2018). Recombinetics is another company that tried to introduce the benefits of CRISPR to the public when it used gene editing to breed hornless cows. Hornlessness is advantageous to farmers, who normally have to mechanically remove the horns for safety reasons. Recombinetics used gene editing to "turn off" the gene responsible for horn growth. At first, the treatment seemed like a success: the cow rapidly produced hornless offspring, and everything else about the animals appeared identical. However, that turned out not to be the case. Scientists found foreign DNA in the genetic sequence of the cow, most likely genes that were incorporated during the repair phase of the editing process. While there is no evidence that the mutation was unsafe, no one could guarantee that it had
no effect either. The publicizing of this mistake ultimately reinforced the cloud of doubt that surrounds CRISPR today (Bloch, 2019). In a literature review of studies on the safety assessment of genetically modified plants, researchers concluded that a growing number of studies have published results pointing to harmful consequences of human consumption of these plants (Domingo and Bordonaba, 2011). More concerning, many of the studies that published results supporting the safety of these plants were funded by companies that produce or market them, indicating a potential conflict of interest. Therefore, the credibility of these studies, and the decisions of health organizations worldwide to allow these plants, should be questioned, since they affect the safety of human diets (Domingo and Bordonaba, 2011). The team also notes that the studies almost exclusively revolve around three crops: soybean, corn, and, to a much lesser extent, rice. Although these crops have the widest use of genetic modification, this focus overlooks genetically modified potatoes, peas, peppers, and more, which are not getting a suitable amount of attention (Leach, 2019). The effects of genetically modified plants can
be further extended to the environment. While Bayer claims that these plants do not have harmful effects on the environment, this claim has also come under scrutiny in recent years. A study analyzing the research on the environmental impact of such crops found that the exact environmental effects of genetically modified organisms are still unknown. Plants' environmental effects depend on a myriad of factors, but it is clear that independent analyses and toxicology reports are crucial to determining the environmental harm resulting from the cultivation of genetically modified plants. While these crops may have a great positive impact on the agricultural industry, their potential harm cannot be ignored, especially for crops that are lightly studied in the research community (Tsatsakis et al., 2017).

Although the debate on CRISPR and other gene editing technologies is unresolved, many biotech companies are still utilizing CRISPR and striving to make an impact. Whether it is helping to treat or cure human diseases or making healthier food and livestock, CRISPR offers many positive applications, regardless of the population's perception of it, so the future looks bright for firms leveraging these novel technologies. There is currently a shift in the industry away from gene-based diagnostics toward gene-based therapies, simply because the therapy and treatment market is far more lucrative than the diagnostics market (Curran, 2020). The genetic editing market is expected to grow at a compound annual growth rate of 25.7% from 2018 to 2023, making it an extremely high-growth industry over the next three years (Fan, 2019). While more stringent government regulation is a challenge to the market, this headwind might prove to be less of a concern for optimists of the market than it would have seemed a year ago (Ugalmugle & Swain, 2019). The emergence of the coronavirus pandemic illustrates how governments can be highly amenable to relaxing regulations, as evident from the suspension of HIPAA regulations along with many of the rules surrounding the pharmaceutical development process. While it is true that these are exceptional times that call for exceptional measures, these emergency events could reveal the benefits of less regulated research markets. Especially with rates of antimicrobial resistance rising, gene editing might prove to be an extremely effective protection against resistant bacteria.
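For scale, a 25.7% compound annual growth rate sustained over the five years from 2018 to 2023 implies the market roughly triples:

```latex
% Cumulative growth factor implied by a 25.7% CAGR over five years
(1 + 0.257)^{5} \approx 3.14
\quad\Longrightarrow\quad
M_{2023} \approx 3.1 \times M_{2018}
```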
Some of the biggest biotechnology companies in the world today are incorporating gene editing into their day-to-day work, and entire companies are even being built around these tools. Horizon Discovery, a gene editing and cell engineering company valued at over $160 million, is one of the largest players in the gene editing market. In its gene editing workflow, CRISPR-Cas9 technology is fundamental in every scenario: from single-gene knockout to loss-of-function screening of multiple genes to gene knockin, Cas9 reagents are used as the main vehicle for increasing editing efficiency. However, Horizon's reliance on Cas9 does not stop it from continuing to innovate. At the beginning of July of this year, in fact, Horizon introduced a new CRISPR activation system using dCas9. Paired with a guide RNA, the dCas9 system can reach the promoter region and activate transcription much more easily, facilitating successful delivery and expression in the target cells (Edit R dCas9 VPR, n.d., p. 9). With the genetic engineering capabilities of gene editing tools like CRISPR, many wonder whether this could lead to the end of genetic diseases. Gene editing has the potential to help cure any genetic disease, but currently there are only a few top candidates on which research has already begun (Fernandez, 2019). As mentioned previously, the first CRISPR trials in the U.S. and Europe focus on treating blood disorders, particularly sickle cell anemia and beta-thalassemia, which affect oxygen transportation in the blood (Fernandez, 2019).
"...the first CRISPR trial in the U.S. and Europe focuses on treating blood disorders, particularly sickle cell anemia and beta-thalassemia which affect oxygen transportation in the blood."
Additionally, the potential use of CRISPR-Cas9 to treat cancer is currently being researched in China, where 86 patients diagnosed with esophageal cancer are being treated with CRISPR (Fernandez, 2019). The treatment centers on the removal of the PD-1 gene, which codes for a checkpoint protein on immune cells that tumors can engage to evade attack (Fernandez, 2019). The first U.S. CRISPR cancer study began in early 2019 at the University of Pennsylvania and involved the treatment of three patients: one man with sarcoma and two women with multiple myeloma (Couzin-Frankel, 2020; NPR, 2019). Several months after the trial began, the research team reported that, although the CRISPR technique did not aid in cancer treatment, with one patient dying and two others worsening, the technology proved safe and practical, making future studies with gene editing technology more plausible (Couzin-Frankel, 2020).
Figure 7: Nanoparticles are believed to be one of the most promising methods of drug delivery. Because of their small size and large surface area, nanoparticles can cross tight endothelial barriers in the bloodstream while carrying a variety of materials, such as targeting peptides, antibodies, or drugs. CRISPR Therapeutics recently entered into a licensing agreement with MIT to use and further develop lipid nanoparticles, which could transport RNA sequences more reliably, leading to a higher success rate of gene editing. Source: Wikimedia Commons
"Nanoparticle delivery is relatively simple to understand: a nanoparticle coat is designed to encapsulate gene therapy, with molecules on its surface called ligands that bind to receptors on the surface of specific cells."
The scientists at the University of Pennsylvania stated that the main purpose of the study was to prove the possibility of using CRISPR in humans, and the study fulfilled its purpose (Couzin-Frankel, 2020). This marks a significant turning point in using CRISPR to treat genetic disease, because it supports the feasibility and safety of the technique. Most recently, CRISPR has been used in a clinical trial to treat blindness resulting from genetic mutations (Khanna, 2020). Research teams in the U.S. and Ireland have administered CRISPR for the first time to a person suffering from childhood blindness, using it to attack a mutated gene that impairs the functioning of the retina (Khanna, 2020). In the U.S. alone, almost 200,000 people suffer from genetic retinal diseases for which there are currently no cures (Khanna, 2020). Some scientists consider the eye an optimal testing ground for clinical CRISPR use because retinal tissue does not elicit the same immune response as the rest of the body, so the injection of foreign material such as gene editing machinery would be less likely to trigger a detrimental immune response (Khanna, 2020). Relatedly, Luxturna became the first FDA-approved gene therapy drug to treat childhood blindness (Khanna, 2020). Research has also recently begun on treatments for several other genetic conditions, including HIV/AIDS, cystic fibrosis, muscular dystrophy, and Huntington's disease (Fernandez, 2019). The research surrounding these diseases seems promising, but there are no FDA-approved clinical trials using gene-editing technology to treat them yet (Fernandez, 2019). Though clinical trials with CRISPR-Cas9 have begun, the research process with tools like CRISPR remains slow due to the uncertainty surrounding the new technology. CRISPR's complications are not well understood, and the possibility of harmful side effects, like those caused by off-target mutations, significantly delays the widespread use of gene editing tools to treat human genetic diseases (Molteni, 2019). In the past few years, scientists have been exploring ways to more precisely control CRISPR-Cas9 technology, making its use in medical treatments more likely (Molteni, 2019). Scientists are declaring that now is the time when "the training wheels come off" for gene editing in humans (NPR, 2019). While CRISPR is the most popular gene editing tool used today, it is certainly not without its faults. CRISPR continues to be researched
by academic groups, corporate labs, and independent researchers seeking to address its drawbacks. These developments usually center on three main fields of study: specificity, delivery, and ethics (Ju et al., 2018). These are all areas where CRISPR can be greatly improved. One of the biggest challenges in using CRISPR as a therapy is delivering it to cells. As it stands, researchers must change the delivery method based on whether ribonucleoproteins, plasmid DNA, or mRNA is being delivered. These genetic materials are traditionally packaged in adeno-associated viral (AAV) vectors, which have worked well in mice (Vassalli et al., 2003). However, human genomes are much more complex and require much more genetic material, which can cause material to be lost if it is all transported via AAV. Nanoparticles have been proposed as one method to ensure accurate delivery of genetic material to the target cells, but the safety of this method is not entirely understood due to its infancy. Nanoparticle delivery is relatively simple to understand: a nanoparticle coat is designed to encapsulate the gene therapy, with molecules on its surface called ligands that bind to receptors on the surface of specific cells. This manufactured mechanism for targeting specific cells is attractive as an alternative to viral delivery, given that the risk of a virus-based immune response is negated. However, no nanoparticle-based gene therapies have yet been approved by the FDA, and nanoparticles pose their own unique problems (e.g., aggregation and entry into unintended tissues). There is also still some risk of immune rejection (Chen et al., 2016). Furthermore, what happens to the elements of the nanoparticle after delivery is still unknown; this matters for ensuring that these components do not affect biological processes after delivery (Lino et al., 2018).
Scientists are also working to augment the specificity of the process in order to mitigate off-target effects, which can arise if the Cas9 system misidentifies the target sequence and cuts the incorrect part of the genome. This is especially a concern in HDR techniques, where the process is intended to be much more controlled than NHEJ (Lino et al., 2018). Some improvements to CRISPR have come simply from better protein engineering, leading to higher specificity; others have come from methods that require binding of Cas9 at sites adjacent to the target sequence, thereby increasing the matching requirement and lowering the chances of off-target effects (Ju et al., 2018).

As great as the promise of gene therapy is for those suffering from genetic diseases, in vivo gene therapy is fundamentally limited by the ability to deliver the treatment only to the cells that need it. For individuals with cystic fibrosis, the challenge is getting the drug distributed effectively throughout the convoluted network of bronchioles, navigating through airways that are damaged by inflammation and obstructed by mucus, not to mention all the other regions of the body affected by the disease, including the intestines, pancreas, liver, and more (Neff, 2020). For brain diseases like Huntington's, the drug would also need to traverse the highly secure blood-brain barrier (Di Marco et al., 2019). Clearly, the problem of drug delivery manifests at several levels. The first issue is one that confronts all drug candidates, not just gene therapies: optimizing bioavailability, or the amount of drug that gets into the blood. Of course, delivering the therapy into the blood via needle would result in a bioavailability of 100%; however, patients typically prefer to take their medicine orally, so that is the way most drugs are manufactured. Several considerations are essential. First, the gene therapy must be both water and lipid soluble; in other words, it must be capable of dissolving across the epithelial and endothelial cell layers that divide the digestive tract from the bloodstream as well as traveling through the bloodstream itself. It must also be non-toxic, affecting only the cells in the organs it is intended to treat, without wreaking havoc in other regions of the body (Loftsson, 2015).
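For reference (this definition is standard pharmacology rather than specific to the cited sources), absolute oral bioavailability compares drug exposure, measured as the area under the plasma concentration-time curve (AUC), after oral versus intravenous dosing:

```latex
% Absolute bioavailability F: oral exposure normalized to an IV dose,
% which is 100% bioavailable by definition
F = \frac{\mathrm{AUC}_{\mathrm{oral}}}{\mathrm{AUC}_{\mathrm{IV}}}
    \times \frac{\mathrm{Dose}_{\mathrm{IV}}}{\mathrm{Dose}_{\mathrm{oral}}}
    \times 100\%
```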
As previously mentioned, the CRISPR-Cas9 system, or any other form of gene therapy, is not simply packaged into a pill as many pharmaceutical compounds are. Genetic material, if administered unprotected into the body, would be degraded quickly by enzymes in the blood and the protective cells of the immune system (Chen et al., 2016). It takes a special type of vessel to deliver gene therapy; the most common containers are viral vectors and the aforementioned nanoparticle carriers. Viral particles are naturally suited to infiltrate human cells, and in the lab a virus can be put to a more positive purpose. With its reproductive capabilities removed and its payload of viral DNA replaced with a drug (but its capacity to infect cells left intact), a virus becomes a highly effective vehicle for delivering gene therapy. These so-called viral vectors come in a variety of forms: adenoviruses, retroviruses, lentiviruses, and even altered forms of the herpes simplex and measles viruses are being employed for drug delivery. Of nearly 3,000 clinical trials for gene therapy conducted by 2017, almost 70% involved viral vector delivery. Nonetheless, there has always been concern about the adverse effects of viral drug delivery, and the number of approved drugs relying on viral delivery can be counted on one hand. AAV vectors, for example, trigger an immune response when administered repeatedly. Consequently, non-viral vectors have also been explored (Lundstrom, 2018).

Ethics are also paramount to consider. When performing genome editing on humans, researchers can target either somatic cells or the germline (which includes embryos, sperm, and eggs). Though there are certainly concerns around gene editing of somatic cells, it is much less controversial than editing germline cells, because edits to somatic cells are isolated to the specific individual. Germline modifications, however, could be inherited by offspring, which could prove troublesome if noticeable phenotypic modifications result (Lino et al., 2018). This could also lead to researchers attempting to optimize the genomes of embryos to create customized "designer babies." By enabling scientists to modify our DNA and subsequent phenotypes, there is fear that humans could bestow upon themselves a "god-like" responsibility of dictating which human receives what characteristics (Mulvihill et al., 2017). Additional pushback against this application of genetic engineering cites how its allowance would most likely favor the rich and powerful, who can afford it or have enough
"The first issue is one that confronts all drug candidates, not just gene therapies: optimizing bioavailability, or the amount of drug that gets into the blood."
285
"Today, anyone could theoretically participate in doit-yourself (DIY) biohacking by simply buying a PCR machine, reagents, primers, and other materials necessary for genetic engineering online."
influence to decide who gets it. Even those who are not of higher social status would likely gain much more social and economic mobility if they were genetically engineered, putting those who are not at an ironic inherent disadvantage in life (Mulvihill et al., 2017). Thus, researchers have mostly tested on nonviable embryos instead, and the controversy remains over whether society will ever be able to accept true germline mutations.

Beyond human ethics, the genetic modification of animals also has its controversies. One of the most common arguments in support of the practice is the utilitarian approach that is also used to justify human gene editing, though with more success in this case. By allowing the genetic modification of animals, humans will have not only healthier food but also more of it. If animals are allowed to be genetically modified, then researchers can simulate human-like conditions within the body of the animal, allowing more rapid development of drugs and medical products that could benefit humans, our environments, and even the animals, depending on the products being developed (Almond, 2000). The main opposition to the utilitarian argument focuses on the rights of the animals themselves: since animals do not have a say in whether they are genetically modified, it is not our place to force it upon them. However, because of the wide benefits of animal gene editing, along with the assumed superiority of humans over animals, the utilitarian argument has gained much more support than the animal rights argument, though both sides have been gaining followers who could shift regulation in either direction (Ormandy et al., 2011). Another argument that can be used in the debate over both human and animal gene editing is the virtue approach, which assumes that each practice or organism has a function that it must perform and that should not be hindered (Almond, 2000). For instance, a water bottle is for holding water, and a chair is for support. This argument can be used on either side of the debate: one could state that gene editing is meant to edit the genes of humans, plants, and animals, so it should be allowed; on the other hand, one could state that human and animal purposes would be altered by the tampering of characteristics, thereby disturbing that virtue (Almond, 2000). While the conversation on the ethics of genetic
engineering has traditionally been reserved for the labs of corporations and academic institutions, new advances in the technology have allowed genetic engineering enthusiasts to perform their own experiments from the comfort of their own homes (Zettler et al., 2019). Today, anyone could theoretically participate in do-it-yourself (DIY) biohacking by simply buying a PCR machine, reagents, primers, and other materials necessary for genetic engineering online (Gruber, 2019). In fact, "the Odin" is a now-infamous company that was selling full DIY genetic engineering kits for only $1,849. If home genetic engineering sounds too good to be true, it is; if it sounds terrifying, lawmakers agree. In 2017, the FDA released a statement that put the development of DIY genetic engineering kits in a legal gray zone. It is not clear even to lawmakers what constitutes a legal kit and what constitutes an illegal one, but Josiah Zayner, founder of the Odin, was nevertheless investigated under these rules. In 2019, California made its stance clear by making all home genetic engineering kits illegal, and other states are expected to follow suit (Gent, 2019). Ultimately, for many companies, the legal repercussions that could arise if an individual were injured by a misstep in a home experiment with a DIY biohacking kit far outweigh the benefits. Therefore, there has not been much government regulation in the field, as there are not many individuals willing to take on that risk (Zettler et al., 2019). Many of the regulations are privatized, with individual research labs holding strict safety standards on what constitutes proper biosafety. Even competitions such as the International Genetically Engineered Machine (iGEM) competition require participants to abide by a strict program of bioethics. The privatized regulation can even be seen at the top of the industry's supply chain, where the genetic engineering supplier group International Gene Synthesis Consortium maintains strict screening guidelines to ensure that products do not end up in the hands of those who would use them for destructive purposes (Zettler et al., 2019). While genetic engineering is a highly researched field of study, there is still much to be discovered. ZFNs and TALENs are great alternatives to CRISPR-Cas9 technology but are still much too complex and expensive in their current state to replace CRISPR-Cas9.
There is still much to be improved upon in CRISPR-Cas9 as well. Nevertheless, it will be interesting to see where the industry goes given the wide uses of genetically modified products. Numerous academic institutions and corporations have licensed technology to share the industry's cutting-edge research, and the market for genetic engineering techniques continues to grow.
References

A Long-Read vs. Short-Read Platform Comparison (Part 2): Best NGS Approaches for Human Transcriptome Sequencing. (n.d.). GenomeWeb. Retrieved August 1, 2020, from https://www.genomeweb.com/resources/webinars/long-read-vs-short-read-platform-comparison-part-2-best-ngs-approaches-human
All about genes. (n.d.). Retrieved August 1, 2020, from http://www.beowulf.org.uk/
Allergan and Editas Dose First Patient in Historic CRISPR Trial for Inherited Blindness. (n.d.). BioSpace. Retrieved August 1, 2020, from https://www.biospace.com/article/allergan-and-editas-dose-1st-patient-in-crispr-trial/
Almond, B. (2000, March 1). Commodifying animals: Ethical issues in genetic engineering of animals. http://web.a.ebscohost.com/ehost/pdfviewer/pdfviewer?vid=1&sid=3207ba42-d6da-4a8b-9145-fb58f93ee1f5%40sessionmgr4007
An introduction to nanopore sequencing. (2018, November 2). Oxford Nanopore Technologies. http://nanoporetech.com/resource-centre/introduction-nanopore-sequencing
Application of an in Vitro Blood–Brain Barrier Model in the Selection of Experimental Drug Candidates for the Treatment of Huntington's Disease. (n.d.). Molecular Pharmaceutics. Retrieved August 1, 2020, from https://pubs.acs.org/doi/10.1021/acs.molpharmaceut.9b00042
Bogdanove, A. J., Schornack, S., & Lahaye, T. (2010). TAL effectors: Finding plant genes for disease and defense. Current Opinion in Plant Biology, 13(4), 394–401. https://doi.org/10.1016/j.pbi.2010.04.010
Brodwin, E. (n.d.). A new Monsanto-backed company is on the verge of producing the first fruit made with a blockbuster gene-editing tool that could revolutionize agriculture. Business Insider. Retrieved August 1, 2020, from https://www.businessinsider.com/monsanto-gmo-gene-editing-crispr-produce-2018-3
Carroll, D. (2011). Genome Engineering With Zinc-Finger Nucleases. Genetics, 188(4), 773–782. https://doi.org/10.1534/genetics.111.131433
Carroll, D. (2017). Genome Editing: Past, Present, and Future. The Yale Journal of Biology and Medicine, 90(4), 653–659.
Cassandri, M., Smirnov, A., Novelli, F., Pitolli, C., Agostini, M., Malewicz, M., Melino, G., & Raschellà, G. (2017). Zinc-finger proteins in health and disease. Cell Death Discovery, 3(1), 1–12. https://doi.org/10.1038/cddiscovery.2017.71
Chandrasegaran, S., & Carroll, D. (2016). Origins of Programmable Nucleases for Genome Engineering. Journal of Molecular Biology, 428(5, Part B), 963–989. https://doi.org/10.1016/j.jmb.2015.10.014
Chaudhary, K., Chattopadhyay, A., & Pratap, D. (2018). The evolution of CRISPR/Cas9 and their cousins: Hope or hype? Biotechnology Letters, 40(3), 465–477. https://doi.org/10.1007/s10529-018-2506-7
Chen, J., Guo, Z., Tian, H., & Chen, X. (2016). Production and clinical development of nanoparticles for gene delivery. Molecular Therapy - Methods & Clinical Development, 3. https://doi.org/10.1038/mtm.2016.23
Cohen, S. N., Chang, A. C. Y., Boyer, H. W., & Helling, R. B. (1973). Construction of Biologically Functional Bacterial Plasmids In Vitro. Proceedings of the National Academy of Sciences of the United States of America, 70(11), 3240–3244.
Couzin-Frankel, J. (2020, February 6). Cutting-edge CRISPR gene editing appears safe in three cancer patients. Science | AAAS. https://www.sciencemag.org/news/2020/02/cutting-edge-crispr-gene-editing-appears-safe-three-cancer-patients
CRISPR Therapeutics and Vertex Announce Progress in Clinical Development Programs for the Investigational CRISPR/Cas9 Gene-Editing Therapy CTX001. (n.d.). Vertex Pharmaceuticals. Retrieved August 1, 2020, from https://investors.vrtx.com/news-releases/news-release-details/crispr-therapeutics-and-vertex-announce-progress-clinical
CRISPR Therapeutics Announces Exclusive License of Lipid Nanoparticle…. (2017, May 8). CRISPR. http://www.crisprtx.com/about-us/press-releases-and-presentations/crispr-therapeutics-announces-exclusive-license-of-lipid-nanoparticle-technologies-developed-at-mit-1
Curran, J. (2020, May). Biotechnology in the US. IBISWorld. https://my-ibisworld-com.dartmouth.idm.oclc.org/us/en/industry/nn001/about
Discovery of DNA Double Helix: Watson and Crick. (n.d.). Learn Science at Scitable. Retrieved July 26, 2020, from https://www.nature.com/scitable/topicpage/discovery-of-dna-structure-and-function-watson-397/
Doebley, J., Stec, A., & Hubbard, L. (1997). The evolution of apical dominance in maize. Nature, 386(6624), 485–488. https://doi.org/10.1038/386485a0
Domingo, J. L., & Giné Bordonaba, J. (2011). A literature review on the safety assessment of genetically modified plants. Environment International, 37(4), 734–742. https://doi.org/10.1016/j.envint.2011.01.003
Edit R dCas9 VPR. (n.d.). Horizon Discovery. Retrieved July 25, 2020, from https://horizondiscovery.com/en/products/gene-modulation/overexpression-reagents/crispra/edit-r-dcas9-vpr#overview
Fan, M. (2019, March). Global Genome Editing Market Size, Technology & Industry Report. https://www.bccresearch.com/market-research/biotechnology/genome-editing.html
FDA finds surprise in gene-edited cattle: Antibiotic-resistant, non-cow DNA. (2019, August 15). The Counter. https://thecounter.org/fda-gene-edited-cattle-antibiotic-resistant-crispr-dna/
Feynman, R. P. (n.d.). Plenty of Room at the Bottom.
First U.S. Patients Treated With CRISPR As Human Gene-Editing Trials Get Underway. (n.d.). NPR.org. Retrieved July 26, 2020, from https://www.npr.org/sections/health-shots/2019/04/16/712402435/first-u-s-patients-treated-with-crispr-as-gene-editing-human-trials-get-underway
Frederick Sanger | Biography & Facts. (n.d.). Encyclopedia Britannica. Retrieved August 1, 2020, from https://www.britannica.com/biography/Frederick-Sanger
Fried, G. H., & Hademenos, G. J. (2019). The Nature of the Gene. McGraw-Hill Education. /content/book/9781260120783/chapter/chapter7
Gao, D., Jiang, N., Wing, R. A., Jiang, J., & Jackson, S. A. (2015). Transposons play an important role in the evolution and diversification of centromeres among closely related species. Frontiers in Plant Science, 6. https://doi.org/10.3389/fpls.2015.00216
Genetic Engineering. (n.d.). Genome.gov. Retrieved July 22, 2020, from https://www.genome.gov/genetics-glossary/Genetic-Engineering
Genetic engineering | Definition, Process, & Uses. (n.d.). Encyclopedia Britannica. Retrieved July 22, 2020, from https://www.britannica.com/science/genetic-engineering
Gent, E. (2019, August 19). California Passed the Country's First Law to Prevent Genetic Biohacking. Singularity Hub. https://singularityhub.com/2019/08/19/california-passed-the-countrys-first-law-to-prevent-genetic-biohacking/
Gruber, K. (2019). Biohackers: A growing number of amateurs join the do-it-yourself molecular biology movement outside academic laboratories. EMBO Reports, 20(6). https://doi.org/10.15252/embr.201948397
Heather, J. M., & Chain, B. (2016). The sequence of sequencers: The history of sequencing DNA. Genomics, 107(1), 1–8. https://doi.org/10.1016/j.ygeno.2015.11.003
Hemoglobinopathies. (n.d.). CRISPR Therapeutics. Retrieved July 25, 2020, from http://www.crisprtx.com/programs/hemoglobinopathies
Hsu, P. D., Lander, E. S., & Zhang, F. (2014). Development and Applications of CRISPR-Cas9 for Genome Engineering. Cell, 157(6), 1262–1278. https://doi.org/10.1016/j.cell.2014.05.010
Hugo de Vries (1848-1935). (n.d.). CSHL DNA Learning Center. Retrieved July 26, 2020, from https://dnalc.cshl.edu/view/16222-Biography-6-Hugo-de-Vries-1848-1935-.html
Human Genome Project: Sequencing the Human Genome. (n.d.). Learn Science at Scitable. Retrieved August 1, 2020, from https://www.nature.com/scitable/topicpage/dna-sequencing-technologies-key-to-the-human-828/
Ishino, Y., Krupovic, M., & Forterre, P. (2018). History of CRISPR-Cas from Encounter with a Mysterious Repeated Sequence to Genome Editing Technology. Journal of Bacteriology, 200(7). https://doi.org/10.1128/JB.00580-17
Isolating the Hereditary Material. (n.d.). Learn Science at Scitable. Retrieved July 26, 2020, from https://www.nature.com/scitable/topicpage/isolating-hereditary-material-frederick-griffith-oswald-avery-336/
Joung, J. K., & Sander, J. D. (2013). TALENs: A widely applicable technology for targeted genome editing. Nature Reviews Molecular Cell Biology, 14(1), 49–55. https://doi.org/10.1038/nrm3486
Ju, X.-D., Xu, J., & Sun, Z. S. (2018). CRISPR Editing in Biological and Biomedical Investigation. Journal of Cellular Biochemistry, 119(1), 52–61. https://doi.org/10.1002/jcb.26154
Khanna, H. (2020, June 25). Gene therapy and CRISPR strategies for curing blindness. University of Massachusetts Medical School. https://www.umassmed.edu/news/news-archives/2020/06/gene-therapy-and-crispr-strategies-for-curing-blindness/
Kim, H., & Kim, J.-S. (2014). A guide to genome engineering with programmable nucleases. Nature Reviews Genetics, 15(5), 321–334. https://doi.org/10.1038/nrg3686
Leach, N. (2019, August). Global Fertilizers & Agricultural Chemicals Manufacturing. IBISWorld. https://my.ibisworld.com/gl/en/industry/c1932-gl/about
LePan, N. (2020, March 14). Visualizing the History of Pandemics. Visual Capitalist. https://www.visualcapitalist.com/history-of-pandemics-deadliest/
Lino, C. A., Harper, J. C., Carney, J. P., & Timlin, J. A. (2018). Delivering CRISPR: A review of the challenges and approaches. Drug Delivery, 25(1), 1234–1257. https://doi.org/10.1080/10717544.2018.1474964
Lundstrom, K. (2018). Viral Vectors in Gene Therapy. Diseases, 6(2). https://doi.org/10.3390/diseases6020042
Molteni, M. (n.d.). A New Crispr Technique Could Fix Almost All Genetic Diseases. Wired. Retrieved July 26, 2020, from https://www.wired.com/story/a-new-crispr-technique-could-fix-many-more-genetic-diseases/
Mulvihill, J. J., Capps, B., Joly, Y., Lysaght, T., Zwart, H. A. E., & Chadwick, R. (2017). Ethical issues of CRISPR technology and gene editing through the lens of solidarity. British Medical Bulletin, 122(1), 17–29. https://doi.org/10.1093/bmb/ldx002
Mussolino, C., & Cathomen, T. (2012). TALE nucleases: Tailored genome engineering made easy. Current Opinion in Biotechnology, 23(5), 644–650. https://doi.org/10.1016/j.copbio.2012.01.013
Neff, S. (2020). The Evolution of Cystic Fibrosis Therapy—A Triumph of Modern Medicine. Dartmouth Undergraduate Journal of Science, 21(2), 90–99.
Nemudryi, A. A., Valetdinova, K. R., Medvedev, S. P., & Zakian, S. M. (2014). TALEN and CRISPR/Cas Genome Editing Systems: Tools of Discovery. Acta Naturae, 6(3), 19–40.
Nirenberg History—Gregor Mendel. (n.d.). Office of NIH History and Stetten Museum. Retrieved July 26, 2020, from https://history.nih.gov/display/history/Nirenberg+History+-+Gregor+Mendel
Ormandy, E. H., Dale, J., & Griffin, G. (2011). Genetic engineering of animals: Ethical issues, including welfare concerns. The Canadian Veterinary Journal, 52(5), 544–550.
Ormond, K. E., Mortlock, D. P., Scholes, D. T., Bombard, Y., Brody, L. C., Faucett, W. A., Garrison, N. A., Hercher, L., Isasi, R., Middleton, A., Musunuru, K., Shriner, D., Virani, A., & Young, C. E. (2017). Human Germline Genome Editing. American Journal of Human Genetics, 101(2), 167–176. https://doi.org/10.1016/j.ajhg.2017.06.012
Payne, A., Holmes, N., Rakyan, V., & Loose, M. (2018). Whale watching with BulkVis: A graphical viewer for Oxford Nanopore bulk fast5 files [Preprint]. Genomics. https://doi.org/10.1101/312256
Petit, L., Khanna, H., & Punzo, C. (2016). Advances in Gene Therapy for Diseases of the Eye. Human Gene Therapy, 27(8), 563–579. https://doi.org/10.1089/hum.2016.040
Pneumococcal Disease (Streptococcus pneumoniae). (n.d.). Travelers' Health, CDC. Retrieved July 26, 2020, from https://wwwnc.cdc.gov/travel/diseases/pneumococcal-disease-streptococcus-pneumoniae
Ratner, H. K., Sampson, T. R., & Weiss, D. S. (2016). Overview of CRISPR–Cas9 Biology. Cold Spring Harbor Protocols, 2016(12). https://doi.org/10.1101/pdb.top088849
Recombinant DNA | Definition, Steps, Examples, & Invention. (n.d.). Encyclopedia Britannica. Retrieved July 22, 2020, from https://www.britannica.com/science/recombinant-DNA-technology
Recombinant DNA Technology for Health & Nutrition. (2016, January 30). DDC. https://dnacenter.com/blog/creating-solutions-health-nutrition/
Vassalli, G., Büeler, H., Dudler, J., von Segesser, L. K., & Kappenberger, L. (2003). Adeno-associated virus (AAV) vectors achieve prolonged transgene expression in mouse myocardium and arteries in vivo: A comparative study with adenovirus vectors. International Journal of Cardiology, 90(2– 3), 229–238. https://doi.org/10.1016/S0167-5273(02)00554-5 Wah, D. A., Bitinaite, J., Schildkraut, I., & Aggarwal, A. K. (1998). Structure of FokI has implications for DNA cleavage. Proceedings of the National Academy of Sciences of the United States of America, 95(18), 10564–10569. Yeadon, J., & Ph.D. (n.d.). Pros and cons of ZNFs, TALENs, and CRISPR/Cas. The Jackson Laboratory. Retrieved July 25, 2020, from https://www.jax.org/news-and-insights/jax-blog/2014/ march/pros-and-cons-of-znfs-talens-and-crispr-cas Zettler, P. J., Guerrini, C. J., & Sherkow, J. S. (2019). Regulating genetic biohacking. Science (New York, N.Y.), 365(6448), 34–36. https://doi.org/10.1126/science.aax3248 Zimmer, C. (2013, May 16). From Fearsome Predator to Man’s Best Friend. The New York Times. https://www.nytimes. com/2013/05/16/science/dogs-from-fearsome-predator-tomans-best-friend.html
Rein, L. A. M., Yang, H., & Chao, N. J. (2018). Applications of Gene Editing Technologies to Cellular Therapies. Biology of Blood and Marrow Transplantation, 24(8), 1537–1545. https:// doi.org/10.1016/j.bbmt.2018.03.021 Ridley, M. (2006). Genome: The autobiography of a species in 23 chapters (First Harper Perennial edition). Harper Perennial. Rosalind Franklin: DNA from the Beginning. (n.d.). Retrieved July 26, 2020, from http://www.dnaftb.org/19/bio-3.html Sangamo Therapeutics And Pfizer Announce That SB-525 Investigational Hemophilia A Gene Therapy Receives Orphan Medicinal Product Designation From The European Medicines Agency | Sangamo Therapeutics, Inc. (2017, June 7). https://investor.sangamo.com/news-releases/newsrelease-details/sangamo-therapeutics-and-pfizer-announcesb-525-investigational Seven Diseases That CRISPR Technology Could Cure. (n.d.). Retrieved July 26, 2020, from https://www.labiotech.eu/ crispr/crispr-technology-cure-disease/ Silva, G., Poirot, L., Galetto, R., Smith, J., Montoya, G., Duchateau, P., & Pâques, F. (2011). Meganucleases and Other Tools for Targeted Genome Engineering: Perspectives and Challenges for Gene Therapy. Current Gene Therapy, 11(1), 11–27. https://doi.org/10.2174/156652311794520111 Thorsteinn Loftsson. (2015). Essential pharmacokinetics: A primer for pharmaceutical scientists. Elsevier/AP, Academic Press is an imprint of Elsevier. Tsatsakis, A. M., Nawaz, M. A., Kouretas, D., Balias, G., Savolainen, K., Tutelyan, V. A., Golokhvast, K. S., Lee, J. D., Yang, S. H., & Chung, G. (2017). Environmental impacts of genetically modified plants: A review. Environmental Research, 156, 818–833. https://doi.org/10.1016/j. envres.2017.03.011 Ugalmugle, S., & Swain, R. (2019, January). Gene Editing Market Share Analysis 2018-2024 Size Forecasts Report. Global Market Insights, Inc. https://www.gminsights.com/ industry-analysis/gene-editing-market
SUMMER 2020
289
An investigation into the field of Genopolitics

STAFF WRITERS: GRACE LU '23, JENNY SONG '23, ZOE CHEN '23, NEPHI SEO '23
BOARD WRITER: NISHI JAIN '21

Cover Image: Genetic influence on political behavior has seen a significant increase in academic interest following recent technological advances within the genomics industry. Source: Public Domain Pictures
What is Genopolitics?

Since the COVID-19 outbreak, the United States has witnessed an unprecedented outpouring of divisive political statements and actions. The upcoming 2020 presidential election has invigorated this trend and helped produce a growing public interest in the underpinnings of varying political behaviors and orientations. One tool to explain the diversity of political opinion is genopolitics, a discipline that draws evidence from behavioral genetics, neuroscience, and evolutionary psychology to explain political trends (Beauchamp, 2011). The field has been growing, albeit with significant criticism, since its origin in 2005 — when the American Political Science Review published an article investigating identical and fraternal twins' political orientations. Advocates of genopolitics claim that there are genetic underpinnings to one's political attitude, orientation, and behaviors. While
initial studies focused heavily on investigating whether or not there was a genetic influence in politics, current scholars are more drawn to examining the exact mechanism by which genes affect political traits (Fowler & Dawes, 2013). Past genopolitics studies have probed voter turnout, political ideology, and civic engagement, all aiming to provide convincing evidence of genetic determinants underlying political affiliation. The first genopolitics study — about identical and fraternal twins' political orientations — was a twin study, a research method that is often used in health science and psychology (Beauchamp, 2011). A twin study is an approach that measures identical and fraternal twins' differences and similarities with regard to a single trait, determining to what extent that trait is shaped by genetic versus environmental factors. Another research model, the candidate gene association study (CGA), is performed
by focusing on a select few genetic variants that are likely to be associated with a single trait. An approach often contrasted with CGA is the genome-wide association study (GWAS). A GWAS is conducted by studying variants across whole genomes, rather than a few genes, to determine their association with a specific trait (Charney & English, 2012). The aforementioned research methods carry limitations that have been targeted by critics ever since the advent of the discipline. A current challenge for the discipline of genopolitics is identifying the exact connection, not just the apparent correlation, between genetic influences and political traits (Dawes et al., 2014). Critics have indicated that significant research limitations have affected study results and conclusions. Because researchers have yet to provide the reasoning behind such correlations, critics continue to claim that there are fundamental flaws in the study of genopolitics that make the results untrustworthy (Charney & English, 2013).
Introduction to Twin Studies

One method that has been used to investigate the extent of genetic and environmental influences on particular traits and behaviors is the twin study. Twin studies are accomplished through comparisons of identical, or monozygotic (MZ), and fraternal, or dizygotic (DZ), twins (Segal, 1985). MZ twins share 100% of their genes since they are born of a single fertilized egg that divided into two early in development. DZ twins, on the other hand, only share 50% of their genes since each twin developed from a separate egg fertilized by a separate sperm. If MZ twins show more similarity on a given trait compared to DZ twins, this increased similarity is most likely due to genetic influence. However, if MZ and DZ twins share the same amount of similarity on a given trait, it is likely that the environment plays a greater role than genetic factors (Scarr & McCartney, 1983). Twin studies have been used in many disciplines, particularly biology and psychology. One research question where twin studies have been employed is whether or not sexual orientation is inherited. In a review of multiple twin studies conducted on this topic, Mustanski and colleagues found that the heritability of sexual orientation is greater for males than females, although there was a significant genetic component for both sexes (Mustanski et al., 2002). Another area where twin studies have been applied is to
assess the underpinnings of personality traits. Researchers have found that genes account for 40-60% of the variance between individuals for all personality traits (Olson et al., 2001). Some of the most notable twin studies in psychology have been conducted at the Minnesota Center for Twin and Family Research (MCTFR). The center has produced a series of longitudinal studies, a set of studies that involve repeated observations. One of these studies examined both identical and nonidentical twins, some raised together and some raised apart. Researchers in this project examined many characteristics, including intelligence, personality, alienation, and aggression, finding that identical twins, whether they were raised together or not, were likely to be similar (Miller et al., 2012). Despite the usefulness of twin studies, there are a few limitations. The first is that while this type of study can help researchers understand the extent to which a particular trait, behavior, or disorder is influenced by genetic factors, it doesn't identify the specific genes involved. Furthermore, twin studies assume that the equal environments assumption (EEA) holds true. This means that twin studies assume that pairs of MZ and DZ twins are exposed to shared environmental factors to similar degrees. However, it is likely that MZ twins are exposed to shared environmental factors to a greater degree than DZ twins. As a result, violating the EEA would lead to an overemphasis on the genetic contribution to the subject's behavior (Iacono et al., 2006).
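Before turning to specific findings, the arithmetic behind the MZ-DZ comparison can be made concrete with the classic Falconer decomposition, in which heritability is estimated as twice the gap between MZ and DZ correlations. The sketch below is purely illustrative – the correlations are invented, not drawn from any study cited here – and real analyses fit such models by maximum likelihood rather than by these closed-form formulas.

```python
def falconer_ace(r_mz: float, r_dz: float) -> dict:
    """Estimate variance components from twin-pair correlations.

    Falconer's formulas: additive genetic variance (A) is twice the
    MZ-DZ gap, shared environment (C) is the MZ similarity left over,
    and unique environment (E) is whatever MZ twins do not share.
    """
    a2 = 2 * (r_mz - r_dz)   # heritability estimate (h^2)
    c2 = r_mz - a2           # shared (common) environment
    e2 = 1 - r_mz            # unique environment + measurement error
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical correlations for a political-interest score:
# MZ pairs correlate 0.60, DZ pairs 0.35.
print(falconer_ace(r_mz=0.60, r_dz=0.35))
# -> roughly {'A': 0.50, 'C': 0.10, 'E': 0.40}: about half the
#    variation attributed to genes.
```

In sample data these estimates can even fall outside the 0-1 range, which is one reason published studies rely on full model-fitting rather than this shortcut.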
"Twin studies are accomplished through comparisons of identical, or monozygotic (MZ) and fraternal, or dizygotic (DZ) twins."
Twin Study Findings in Genopolitics One notable twin study on genopolitics was conducted by a team of researchers from America and Sweden (Dawes et al., 2014). Prior research had illustrated that voter turnout was influenced by genetic variation (Fowler, Baker, and Dawes, 2008) and demonstrated that political attitudes had a genetic basis (Alford, Funk, and Hibbing, 2005; Martin et al., 1986). However, it was unclear if there was an empirical link between genes, personality traits, and political participation. In this study, researchers focused on three potential intermediate psychological traits: cognitive ability, personal control, and extraversion. The team looked at cognitive ability with the hypothesis that political participation required a variety of cognitive abilities. Furthermore, individuals with a higher cognitive ability may be better at absorbing political information from both print and online media sources and 291
Figure 1: Twin studies help elucidate differences between genetic and environmental factors. Source: Wikimedia Commons
"...genetic factors accounted for approximately half the variation in interest in politics. However, the researchers failed to find significant heritability for participating in a protest, boycotting, making a financial contribution, and signing a petition."
prior research has established a link between cognitive ability and voter turnout (Deary, Batty, and Gale, 2008; Denny and Doyle, 2008; Hauser, 2000). The team examined the characteristic of 'personal control,' or accountability for one's own actions, because previous research has shown that personal control influences political participation via political efficacy, or the sense that individual citizens have the capacity to influence political events (Cohen, Vigoda, and Samorly, 2001; Guyton, 1998). Lastly, the team selected extraversion as an intermediate psychological trait because it has been theorized that since social interaction is essential for many acts of political participation, one's eagerness to engage with others should influence willingness to participate in politics, especially for attending rallies, signing petitions, and participating in political discussions (Mondak and Halperin, 2008). In this study, the authors analyzed results from SALTY (Screening Across the Life-Span Twin Study Younger Cohort), a survey administered in Sweden to twins born between 1943 and 1958. This survey collected answers about political attitudes, predispositions, and behaviors and measured personal control and extraversion. To gather data about cognitive ability, the authors matched the social security numbers of men in the sample to the Archives of Sweden and collected results from the cognitive ability tests they took (Dawes et al., 2014).
After collecting the data, the researchers employed two models: a univariate as well as a bivariate ACE model. Both of these models assume that the variance in observed behavior can be split into additive genetic factors, environmental factors that are shared or common to co-twins, and unique environmental factors (Dawes et al., 2014). Using the first model, the researchers estimated how much of the variation in political participation could be attributed to genetic and environmental factors. Using the second model, they estimated the amount of genetic and environmental variation that political behaviors shared with psychological traits (Dawes et al., 2014). One significant result produced by the univariate ACE model was that the heritability estimates for the following acts of political participation were significantly greater than zero: voting in the Swedish parliamentary election, contacting a politician, and contacting a public sector official. Additionally, genetic factors accounted for approximately half the variation in interest in politics. However, the researchers failed to find significant heritability for participating in a protest, boycotting, making a financial contribution, and signing a petition (Dawes et al., 2014). Furthermore, the bivariate ACE model demonstrated that personal control, extraversion, and cognitive ability (in males)
were correlated with different acts of political participation, such as voting and, on occasion, voting with a specific affiliation, although the correlations were relatively moderate. This suggests that the majority of the heritable variation in political participation is likely mediated by traits other than cognitive ability, personal control, and extraversion (Dawes et al., 2014). Furthermore, the researchers found that most of the relationship between psychological traits and political participation can be explained by the same set of genes, with genetic factors accounting for 50%-100% of the total correlation between political predispositions and psychological traits (Dawes et al., 2014). In another study, researchers sought to find whether or not voting, volunteering, and donating money can be explained by the same genetic factors (Dawes et al., 2015). Previous research on this subject had only considered voting, even though civic participation can include a number of other behaviors – for example, donating money or volunteering for a political cause. The material costs of voting are lower, while the cognitive costs are higher, than those of donating time or money. Furthermore, voting and donating money are more private in nature, while volunteering tends to be social, producing deeper social connections and leading to greater social rewards. Lastly, voting is inherently a conflictual and zero-sum action, an action in which what is gained by one side is lost by the other, while donating time and money are not (Dawes et al., 2015). To study their question, the researchers relied on a dataset from a longitudinal study conducted by the Minnesota Center for Twin and Family Research (MCTFR), with a study population consisting of both MZ and DZ twins. Participants completed a personality questionnaire as well as the Wechsler Adult Intelligence Scale-Revised, a measure of IQ (Miller et al., 2012). At age 29, participants were also asked to rate the extent to which they agreed with statements concerning their civic engagement, such as "I volunteer my time for community or public service activities" and "I vote in national or state elections" (Miller et al., 2012). Similar to the SALTY study, the authors used both a univariate and a bivariate twin model. The former was used to measure how much of the variation in the overall measure of civic
engagement, as well as in individual acts, can be attributed to genetic and environmental factors. The latter was used to see whether two traits are influenced by the same set of genes. Researchers found that the heritability estimates for all three acts of civic engagement (voting, volunteering, and donating money) are significantly different from zero. Moreover, they found that the genetic correlations between donating and volunteering, as well as between donating and voting, were high (Miller et al., 2012). On the other hand, the genetic correlation between voting and volunteering was low, which suggests that any observed phenotypic correlation between these two actions is not strongly influenced by genetic factors. The researchers also found that genetic factors accounted for 57% to 71% of the correlation between positive emotionality, the tendency to experience positive emotions, and the three acts of participation, which helps explain the nature of the relationship between personality traits and civic engagement. Furthermore, cognitive ability was related only to voting and the overall measure of engagement, which is consistent with the idea that voting is a cognitively demanding task. Overall, this study suggests that a significant portion of the relationship between psychological traits and political participation can be explained by the same set of genes (Miller et al., 2012). Both of the studies just described used the ACE model, which assumes that the EEA holds true. However, if the EEA is violated, the heritability estimates found by the researchers would be too high. Additionally, the ACE model assumes that the variance in observed behavior can be split into three distinct factors. However, if this is not true, the estimates of heritability and covariance would be called into question. Given these limitations, it would be valuable to recreate these studies using different models and see if similar results for heritability and covariance are produced (Berck & Chalfant, 1990).
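The bivariate logic described above admits the same kind of back-of-the-envelope sketch as the univariate case. Under a Falconer-style approximation, doubling the gap between the MZ and DZ cross-twin cross-trait correlations estimates the genetic contribution to the correlation between two traits; the numbers below are invented for illustration and are not taken from either study discussed above.

```python
def genetic_share_of_correlation(r_pheno: float,
                                 ctct_mz: float,
                                 ctct_dz: float) -> float:
    """Falconer-style bivariate decomposition.

    ctct_mz / ctct_dz are cross-twin cross-trait correlations, e.g.
    twin 1's extraversion with twin 2's political participation,
    computed separately for MZ and DZ pairs. The genetic contribution
    to the phenotypic correlation is approximated as twice the MZ-DZ
    gap, then reported as a share of the full correlation r_pheno.
    """
    genetic_component = 2 * (ctct_mz - ctct_dz)
    return genetic_component / r_pheno

# Hypothetical values: the traits correlate 0.20 within a person, and
# cross-twin cross-trait correlations are 0.14 (MZ) vs. 0.07 (DZ).
share = genetic_share_of_correlation(r_pheno=0.20, ctct_mz=0.14, ctct_dz=0.07)
print(f"{share:.0%} of the trait correlation attributed to shared genes")
# -> 70%, within the 50%-100% range reported in the studies above.
```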
"Researchers found that the heritability estimates for all three acts of civic engagement (voting, volunteering, and donating money) are significantly different from zero."
In addition, it is important to note that these heritability estimates are specific to the time and population of the sample. While the results from the study with Swedish participants are similar to those based on samples in the US and Denmark, and the participants in the MCTFR study were comparable to the overall Minnesota population, it may not be possible to extrapolate these results to other regions or time periods.
Figure 3: An example of a GWAS analysis. Source: Wikimedia Commons
"GWAS techniques essentially compare many healthy people against ill patients to determine how the ill patients differ. For people with the same disease, if the same SNPs repeatedly occur, scientists are able to draw an association between the SNP and the condition."
Intro to GWAS

GWAS, or genome-wide association studies, are a relatively new research method that rose to prominence following the 2003 completion of the Human Genome Project. A powerful method of creating a connection between a physical or psychological trait and a patient's genome, GWAS studies have made a significant impact not only in the field of genetic and hereditary diseases but also in oncology, cardiology, and nephrology (Beauchamp et al., 2011). By studying a patient's individual single-nucleotide polymorphisms (SNPs), or genetic variations that result from the substitution of a single nucleotide at a specific point in the genome, scientists are able to ascertain the impact of individual genetic sequences as they relate to the onset of a particular disease (Benjamin et al., 2012). GWAS techniques essentially compare many healthy people against ill patients to determine how the ill patients differ. For people with the same disease, if the same SNPs repeatedly occur, scientists are able to draw an association between the SNP and the condition (Beauchamp et al., 2011). This allows future scientists and physicians who are looking to diagnose patients to determine – with a fair amount of certainty – how strongly a patient is predisposed to a particular kind of illness based on their DNA. A GWAS is essentially conducted in four steps, the first of which is to note the phenotypic qualities of the diseased population versus the healthy population (Benjamin et al., 2012). Second, the genotypes of all the involved subjects are analyzed, with a special eye towards some of the common SNPs (pre-determined based on the disease under investigation). Third, the SNPs are compared across the healthy and diseased populations using software packages, many of which are now open source (Charney & English, 2012). Finally, a p-value is computed for each SNP, and the results are analyzed to find the handful of SNPs that differ between the diseased and healthy populations with statistical significance.
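As a toy illustration of steps two through four, the sketch below simulates genotypes, runs an allelic chi-square test at each SNP, and applies a Bonferroni correction for multiple testing. Everything here is simulated and simplified; real pipelines (PLINK and similar tools) add quality control, covariates, and corrections for population structure.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_cases, n_controls, n_snps = 500, 500, 1_000

# Simulated minor-allele counts (0, 1, or 2 copies) per person per SNP.
freqs = rng.uniform(0.05, 0.5, n_snps)
cases = rng.binomial(2, freqs, size=(n_cases, n_snps))
controls = rng.binomial(2, freqs, size=(n_controls, n_snps))

alpha = 0.05 / n_snps   # Bonferroni-adjusted significance threshold
hits = []
for j in range(n_snps):
    # 2x2 allele-count table: (cases vs. controls) x (minor vs. major).
    case_minor = cases[:, j].sum()
    ctrl_minor = controls[:, j].sum()
    table = [[case_minor, 2 * n_cases - case_minor],
             [ctrl_minor, 2 * n_controls - ctrl_minor]]
    _, p_value, _, _ = chi2_contingency(table)
    if p_value < alpha:
        hits.append((j, p_value))

# With no true signal simulated, essentially no SNP should pass.
print(f"{len(hits)} of {n_snps} SNPs pass the corrected threshold")
```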
GWAS Findings in Genopolitics

While GWAS has an important place in medicine, in recent years it has also found a place in politics. Genomics experts, neurobiologists, and political scientists have attempted to determine how genetics might affect how people associate with particular political views and how that might manifest itself in voting patterns and political
affiliation (Rietveld et al., 2014). One of the most prominent genopolitical GWAS studies came about in 2008 and suggested that voter turnout is significantly affected by two genes, 5-HTT and MAOA, both of which significantly affect the serotonin system. 5-HTT and MAOA encode proteins that regulate serotonin, a neurotransmitter implicated in fear, trust, and social interaction. SNPs that result in less transcriptionally efficient versions of these genes have been associated with a variety of antisocial behaviors. Since voting is considered a social activity, the SNPs that resulted in less efficient transcription were associated with lowered voting intention (Fowler et al., 2008). While the aforementioned study found that certain SNPs significantly contributed to voter turnout, another study conducted in 2012 could not definitively determine that there was a correlation between such SNPs and economic and political preferences. This Swedish study drew from dense SNP data to estimate the proportion of variance in the phenotypic traits (the political affiliations) that was associated with a particular set of SNPs governing similar neurochemical pathways. This study used a special method called
genomic-relatedness-matrix restricted maximum likelihood (GREML), a GWAS-adjacent method that produces a lower-bound estimate of the heritability of political behavior without resting on the same set of assumptions used in twin studies or in traditional GWAS studies (Benjamin et al., 2012).
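Fitting GREML itself requires restricted-maximum-likelihood machinery (dedicated software such as GCTA is typically used), but its key input, the genomic relatedness matrix, is easy to sketch. Below is a minimal, simulated version of the standard construction from a standardized genotype matrix; the data and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_people, n_snps = 100, 5_000

# Simulated genotypes: minor-allele counts (0/1/2) at each SNP.
freqs = rng.uniform(0.1, 0.5, n_snps)
genotypes = rng.binomial(2, freqs, size=(n_people, n_snps)).astype(float)

# Standardize each SNP (mean 2p, variance 2p(1-p) under Hardy-Weinberg),
# then average the cross-products over all SNPs.
p_hat = genotypes.mean(axis=0) / 2
Z = (genotypes - 2 * p_hat) / np.sqrt(2 * p_hat * (1 - p_hat))
grm = Z @ Z.T / n_snps   # n_people x n_people relatedness matrix

# Diagonal entries hover near 1 (self-relatedness); off-diagonals sit
# near 0 for unrelated individuals. GREML then asks how well these tiny
# genomic similarities predict phenotypic similarity, bounding how much
# trait variance the measured SNPs can explain.
print(round(grm.diagonal().mean(), 2), round(grm[0, 1], 3))
```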
Another study, conducted in 2014, measured risk aversion as the key phenotype and found, similar to the 2012 study, no statistically significant correlation between SNPs and risk aversion. Risk aversion relates to political affiliation through reliance on the markets, which dictate investment decisions – traditionally, more conservative perspectives value market stability and are much more risk averse, while liberal individuals have been shown to continue to invest despite more volatile market conditions. In a group of over 10,000 people, a select number of SNPs related to risk aversion were tested, and none were found to be statistically significant determinants of risk preferences. While this could reveal the polygenic nature of this phenotype (meaning that multiple genes are implicated), it also suggests that GWAS may be inaccurate or limited in scope when attempting to prove genetic influence on political affiliation. It may even indicate that there is no connection at all, and that in the question of nature versus nurture, nurture proves far more influential (Harrati et al., 2014).

Implications

When investigating any form of social science, deciding on the proper methodology is key to developing appropriate conclusions. Current research in the realm of genopolitics is confined to bivariate studies, but the pathways contributing to traits and behaviors are complex and involve multiple genes (Charney & English, 2013). The impact of these genes is also complex, affecting psychological characteristics, political predispositions, and political participation. Political traits appear to be polygenic in nature, involving a large number of genes with separate and small effects. Thus, to produce a reliable understanding of the effect of genes on political affiliation and political action, large sample sizes are a must. Twin studies also come under scrutiny for their assumption of equal environments, comparing identical twins to fraternal ones on the basis that observed differences would be due only to genetic differences and not to the environment. Their success hinges on assuming that the environments for both types of twins are equal, when in reality, identical twins might grow up in a more equal environment because of how similar they are (i.e., their similar genetic predispositions dictate that they are comfortable in similar environments) (Dawes et al., 2014).

From existing findings within the field of genopolitics, a future in which ubiquitous environmental factors like education shape a fully liberal or conservative population appears unlikely. Genopolitics lends insight into why both ends of the American political spectrum remain entrenched (at times generationally, regionally, etc.) in certain views (Mondak et al., 2010). That being said, 40% of the population describe themselves as politically undecided, suggesting that genetic effects on politics are not absolute. An over-zealous belief in the genetic determination of political affiliation could have stark consequences for the modern political landscape. The rift between left- and right-wing politics may only grow wider if it appears that certain people are 'predestined' to believe certain ideas and act in certain ways. Efforts to communicate or bridge the gap may appear futile (Mondak et al., 2010). In general, what is certain is that providing a genetic basis for political behavior integrates the social sciences with biology. A biological legitimacy in the social sciences granted through these genetic underpinnings offers views of a future where the social and natural sciences are united. Genopolitics gives a potential scientific legitimacy to human behaviors and attitudes; whether that should be the aim or not remains a topic of contention (Kinder & Kiewiet, 1979).
"In a group of over 10,000 people, a select number of SNPs related to risk aversion were tested and none were found to be statistically significant determinants of risk preferences..."
Another area touched by the findings of genopolitics, but not yet defined within the academic field, is single-issue voting. How are voter blocs who tend to align across parties based only on specific issues (such as abortion, animal rights, the Second Amendment, etc.) affected genetically? Although research establishing a tendency for fundamental conservative or liberal beliefs speaks to consistency over a range of issues, the passion these voters have for one issue and the relative indifference they have for others is yet to be elucidated by genopolitics. In addition to single-issue voting, genopolitics has shown some potential to explain religious affiliation. Religion could be suspected to act as a major influence in the processing of emotion and environmental stimuli, which in turn affects
voting behavior (Fowler et al., 2013). Religion could also prove a powerful environmental factor. Beliefs that are positively reinforced, especially from an early age, can create political patterns that interfere with one's genetic predispositions. Religion is also intimately tied to secular issues: abortion, the LGBTQ+ community, and more remain contentious topics in certain religious institutions. Thus, religion becomes an important factor to consider in the creation of one's political identity. However, researchers must also acknowledge the baselines for certain attitudes, as these are affected by the inertia of certain psychological traits.
"...genetics could help predict the success of important government and campaign events such as voting, donating to a campaign, volunteering, attending rallies, and vocally broadcasting beliefs to the community."
The ongoing nature vs. nurture debate informs many of the perspectives researchers hold. The future of genopolitics research could inspire vast opportunities. Obtaining more concrete correlations between genetic data and political traits could then lead to predicting other genetic events. Or, even more directly, genetics could help predict the success of important government and campaign events such as voting, donating to a campaign, volunteering, attending rallies, and vocally broadcasting beliefs to the community. Though larger samples are currently needed, researchers contend that there could be enough scientific footing to link traits to single-nucleotide polymorphisms (SNPs), zeroing in on the roles of specific genes. As with any realm of science that seeks to investigate the genetic foundations of human behavior, a host of ethical implications arise. Leaning too heavily on genetics as an explanation for any cognitive behavior may revive questions about eugenics and scientific racism. Scientists set on giving political attitudes a genetic basis must tread carefully so as not to reduce the human experience to a matter of what is contained in our DNA. Efforts to understand the dynamic complexities between genes, environments, and identities should not be reductive, but rather open-ended and humbling.

References

Beauchamp, J. P., Cesarini, D., Johannesson, M., van der Loos, M. J. H. M., Koellinger, P. D., Groenen, P. J. F., Fowler, J. H., Rosenquist, J. N., Thurik, A. R., & Christakis, N. A. (2011). Molecular Genetics and Economics. Journal of Economic Perspectives, 25(4), 57–82. https://doi.org/10.1257/jep.25.4.57

Benjamin, D. J., Cesarini, D., van der Loos, M. J. H. M., Dawes, C. T., Koellinger, P. D., Magnusson, P. K. E., Chabris, C. F., Conley, D., Laibson, D., Johannesson, M., & Visscher, P. M. (2012). The genetic architecture of economic and political preferences. Proceedings of the National Academy of Sciences, 109(21), 8026–8031. https://doi.org/10.1073/pnas.1120666109
Charney, E., & English, W. (2012). Candidate Genes and Political Behavior. American Political Science Review, 106(1), 1–34. https://doi.org/10.1017/S0003055411000554

Charney, E., & English, W. (2013). Genopolitics and the Science of Genetics. American Political Science Review, 107(2), 382–395. https://doi.org/10.1017/S0003055413000099

Dawes, C., Cesarini, D., Fowler, J. H., Johannesson, M., Magnusson, P. K. E., & Oskarsson, S. (2014). The Relationship between Genes, Psychological Traits, and Political Participation. American Journal of Political Science, 58(4), 888–903. https://doi.org/10.1111/ajps.12100

Dawes, C. T., Settle, J. E., Loewen, P. J., McGue, M., & Iacono, W. G. (2015). Genes, psychological traits and civic engagement. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 370(1683), 20150015. https://doi.org/10.1098/rstb.2015.0015

Fowler, J. H., & Dawes, C. T. (2013). In Defense of Genopolitics. The American Political Science Review, 107(2), 362–374. JSTOR.

Fraga, M. F., Ballestar, E., Paz, M. F., Ropero, S., Setien, F., Ballestar, M. L., Heine-Suñer, D., Cigudosa, J. C., Urioste, M., Benitez, J., Boix-Chornet, M., Sanchez-Aguilera, A., Ling, C., Carlsson, E., Poulsen, P., Vaag, A., Stephan, Z., Spector, T. D., Wu, Y.-Z., … Gartler, S. M. (2005). Epigenetic Differences Arise during the Lifetime of Monozygotic Twins. Proceedings of the National Academy of Sciences of the United States of America, 102(30), 10604–10609. JSTOR.

Haidt, J. (2013). The righteous mind: Why good people are divided by politics and religion.

Harrati, A. (2014). Characterizing the Genetic Influences on Risk Aversion. Biodemography and Social Biology, 60(2), 185–198. https://doi.org/10.1080/19485565.2014.951986

Harrati, A. C. (n.d.). Understanding Risk Aversion in Older Americans: New Approaches Using Genetic Data.

Iacono, W. G., McGue, M., & Krueger, R. F. (2006). Minnesota Center for Twin and Family Research. Twin Research and Human Genetics: The Official Journal of the International Society for Twin Studies, 9(6), 978–984. https://doi.org/10.1375/183242706779462642

Kinder, D. R., & Kiewiet, D. R. (1979). Economic Discontent and Political Behavior: The Role of Personal Grievances and Collective Economic Judgments in Congressional Voting. American Journal of Political Science, 23(3), 495–527. JSTOR. https://doi.org/10.2307/2111027

Larregue, J. (2018). « Une bombe dans la discipline »: L'émergence du mouvement génopolitique en science politique ("A bomb in the discipline": The emergence of the genopolitical movement in political science). Social Science Information. https://doi.org/10.1177/0539018418763131

Miller, M. B., Basu, S., Cunningham, J., Eskin, E., Malone,
S. M., Oetting, W. S., Schork, N., Sul, J. H., Iacono, W. G., & McGue, M. (2012). The Minnesota Center for Twin and Family Research Genome-Wide Association Study. Twin Research and Human Genetics: The Official Journal of the International Society for Twin Studies, 15(6), 767–774. https://doi.org/10.1017/thg.2012.62

Mondak, J. J., Hibbing, M. V., Canache, D., Seligson, M. A., & Anderson, M. R. (2010). Personality and Civic Engagement: An Integrative Framework for the Study of Trait Effects on Political Behavior. The American Political Science Review, 104(1), 85–110. JSTOR. https://doi.org/10.2307/27798541

Rietveld, C. A. (2014). Essays on the intersection of economics and biology. ERIM. http://hdl.handle.net/1765/76907

Scarr, S., & McCartney, K. (1983). How People Make Their Own Environments: A Theory of Genotype → Environment Effects. Child Development, 54(2), 424–435. JSTOR. https://doi.org/10.2307/1129703

Segal, N. L. (1985). Monozygotic and Dizygotic Twins: A Comparative Analysis of Mental Ability Profiles. Child Development, 56(4), 1051–1058. JSTOR. https://doi.org/10.2307/1130115

Dulesh, S. (2014). Genopolitics and the Future of Secular Humanism. Humanist Perspectives, 189. https://www.humanistperspectives.org/issue189/dulesh.html

The Relationship between Genes, Psychological Traits, and Political Participation—Dawes—2014—American Journal of Political Science—Wiley Online Library. (n.d.). Retrieved August 31, 2020, from https://onlinelibrary-wiley-com.dartmouth.idm.oclc.org/doi/full/10.1111/ajps.12100

Transcription Impacts the Efficiency of mRNA Translation via Co-transcriptional N6-adenosine Methylation. (n.d.). Retrieved August 31, 2020, from https://www-ncbi-nlm-nih-gov.dartmouth.idm.oclc.org/pmc/articles/PMC5388891/
Mastering the Microbiome

STAFF WRITERS: ANAHITA KODALI '23, AUDREY HERRALD '23, CAROLINA GUERRERO '23, SOPHIA KOVAL '21, TARA PILLAI '23
BOARD WRITER: SAM NEFF '21

Cover Image: The human microbiome is home to whole ecosystems of microscopic organisms, unique in its composition for each human host. Source: Pixabay
Introduction

A two-liter bottle of soda weighs approximately five pounds. So does a sack of potatoes, a pile of textbooks – and the human microbiome. Most individuals harbor around five pounds' worth of tiny living organisms in their gut, on their skin, and throughout the body. These organisms help digest food, regulate the immune system, protect against disease-causing microbes, and produce essential vitamins, among other functions (Hair and Sharp, 2014). Despite such a universally critical set of functions, the types of bacteria that make up an individual's microbiome differ significantly between individuals. Diet, activity levels, geographic location, and mental state all influence the makeup of the microbiome. Interactions between the microbiome and these outside factors are critical to understand, especially as the microbiome is becoming a central element in diagnosing and treating disease. Research is ongoing, and much about the microbiome
remains to be understood; after all, scientists only discovered the microbiome in the late 1990s. This article traces the story of the microbiome from the discovery of germs up through present-day investigations of its role in disease diagnosis and treatment.
Understanding the Microbiome: A Brief History

This story of the human microbiome begins at a grim moment in medical history; the years prior to the institution of antiseptic techniques were dangerous ones for anyone unlucky enough to be afflicted with illness or injury. Patients who could afford it were better off seeking treatment at home, secluded from exposure to others with disease, than in a hospital. The conditions in hospitals were so wretched and unsanitary that the preponderance of disease rendered all but the most necessary treatments undesirable as a pathway to regaining good health. Infections of the blood and skin, bouts of gangrene that
ate away skin, muscle, and bone, and waves of childbirth fever in maternity wards killed countless patients post-operation. Yet despite these dark times, a stream of innovations in medical practice slowly improved this state of affairs. Hospitals were ultimately transformed, in the words of renowned author Lindsey Fitzharris, from 'houses of death' to 'houses of healing' (Fitzharris, 2017). Perhaps the greatest medical innovator of the 19th century was the surgeon, educator, and inventor Joseph Lister. Lister had decades of experience as a surgeon to inform him of the sorry state of hospital care. A surgeon and teacher at hospitals in London, Edinburgh, and Glasgow, he watched countless patients die at increasing rates under the grip of postoperative infection. A startling truth confronts students of 19th century medical history: the invention of anesthesia (ether), despite improving and controlling what was undoubtedly a horrible experience, led to an increase in the rate of postoperative deaths. Why? More people were willing to undergo surgery for their medical afflictions, and surgeons were willing to try more risky procedures (Fitzharris, 2017). And despite the relative peace of the 19th century after the Napoleonic Wars, compared with both the preceding and the following century, there were still significant conflicts such as the Crimean War (1853-1856) and the American Civil War (1861-1865). The evolution of surgical techniques was further stimulated by the increased demand for their use resulting from
these conflicts, yet scientific progress was hindered by an understanding of infectious disease that was still stuck in a primitive mode. This is the backdrop against which Joseph Lister pitched his antiseptic principles.
Figure 1: Joseph Lister, British surgeon, teacher, and scientific researcher, revolutionized the way in which surgical treatment was performed by introducing carbolic acid as the first widely used antiseptic.
Lister was handy with a microscope. Ever since his father had bought him one as a child, Lister was fascinated by the contraption, made famous by Dutch scientist Antonie van Leeuwenhoek in the 17th century. His observations of hospital infection were supplemented by experimental tinkering in his laboratory–examining the microscopic characteristics of infected blood and other samples from the body. Lister's surgical and scientific research experience both prompted his development of antiseptic techniques: the use of carbolic acid to sterilize wounds and the recommendation that surgeons wash their hands before undertaking surgery (Fitzharris, 2017). But Lister was not alone as a proponent of sterile surgery. The Hungarian physician Ignaz Semmelweis, though not well recognized in his lifetime, dramatically reduced the rates of childbirth fever in his own hospital by recommending practices similar to Lister's (Zoltán, 2020).
Source: Flickr
Lister was quite popular in his own time–and for good reason. He was a gifted educator, and he knew how to spread the word about his own views. Lister went on speaking tours throughout Europe and America, spreading his antiseptic technique abroad and influencing countless physicians who crossed his path (Fitzharris, 2017). He even got the opportunity to serve as royal physician to Queen Victoria in 1871, sterilizing the surgical space with carbolic acid before draining a large abscess from her armpit (Fawcett, 2017). This is not to say that there were not many who saw Lister's innovations as useless, or even harmful. Lister's antisepsis entered a world where the prevailing view was that infectious disease was spread by miasma ("foul air"). The idea that there were living microscopic agents that caused disease, the so-called 'germ theory', was still a novel one at the time, and it was hard to accept. But Lister was not alone in searching for a new, more scientific explanation of infectious disease.
"Perhaps the greatest medical innovator of the 19th century was the surgeon, educator, and inventor Joseph Lister..."
One fellow traveler was a man with whom Lister corresponded frequently on scientific matters – Louis Pasteur. Pasteur worked as a scientist in the 'city of light' (Paris adopted electric streetlights as early as 1878), but early 19th century medical practice in Paris was as dark and gloomy as it was across the English
Figure 2: Pictured here is Dr. Alexander Fleming, the scientist credited with finding the first antibiotic. Source: Wikimedia Commons
"He [Ehrlich] described the idea of a 'magic bullet,' medicines that targeted only disease-causing microbes and did not affect a patient's own cells."
Channel. A deadly cholera epidemic in 1832 had crippled the city, which surely shaped Pasteur's later focus on infectious disease, alongside those of other French doctors and scientists (Guinness, 2010). Pasteur specifically worked on developing a vaccine for chicken cholera in the 1870s, as well as vaccines for anthrax, erysipelas (a skin infection), and rabies throughout that and the following decade. But he is perhaps most famous for disproving the theory of spontaneous generation, showing that microorganisms would grow in sterile flasks that were open to the air but not in sterile flasks that were closed. He developed pasteurization, his namesake technique, to prevent wine and milk from becoming contaminated by bacteria (Ullmann, 2009). Like Lister and his contemporary Robert Koch (whose postulates laid out a framework for isolating and identifying disease-causing pathogens), Pasteur contributed mightily to the germ theory of disease, laying out a new science that provided the theoretical framework for the astounding development of antibiotics in the 20th century ("Koch's Postulates," n.d.). Following the advancements of Pasteur, Lister, and Koch, the discovery of antibiotics in the early 20th century was one of the biggest milestones in the history of bacterial research. Though it was Dr. Alexander Fleming who is credited with discovering penicillin, the world's first antibiotic, in 1928, there were researchers in the late 1800s who laid the foundation for his studies. Sir John Scott Burdon-Sanderson, an English physiologist, published work in 1870 that described the inhibition of bacterial growth when cultured fluid (fluid containing bacteria) was covered in mold, demonstrating the antibacterial effects of mold. In 1871, Lister found that the fungus Penicillium glaucum, when applied to human tissue, had an antibacterial effect (Gould, 2016). In 1874, Dr. William Roberts, a Welsh physician, observed that Penicillium glaucum in laboratory samples prevented bacterial contamination (Foster & Raoult, 1974). Then in 1875, Dr. John Tyndall, an Irish physicist, showed that Penicillium notatum had antimicrobial properties (Gould, 2016). Drs. Louis Pasteur and Jules Joubert, both French biologists, found that anthrax bacteria could be inhibited by molds, which may have been Penicillium notatum (Sams et al., 2014). Dr. Ernest Duchesne, a French physician, used Penicillium notatum to treat guinea pigs with typhoid (Gould, 2016). The "modern" antibiotic era–the years when researchers finally formally discovered antibiotics and started producing them–did not begin
until the start of the 1900s, with the research into syphilis by the German physicians Paul Ehrlich and Alfred Bertheim and the Japanese physician Sahachiro Hata. Dr. Ehrlich observed that synthetic dyes were only able to stain specific microbes, a principle he believed could be applied to the treatment of microbes in humans. He described the idea of a "magic bullet," medicines that targeted only disease-causing microbes and did not affect a patient's own cells (Bosch & Rosich, 2008). In 1904, he, along with Drs. Bertheim and Hata, started to search for a cure for syphilis, which at the time was almost incurable. Their approach involved a novel large-scale systematic screening of candidate compounds; this approach has since become a staple of pharmaceutical research. The research that the team did became the foundation for later antibiotics (Aminov, 2010). Despite the careful standards set by Ehrlich, Bertheim, and Hata, which revolutionized pharmaceutical research and made testing much more methodical, Fleming stumbled upon his finding in 1928 accidentally (Figure 2). While conducting experiments on staphylococcal bacteria, Fleming left a petri dish unattended, and it became contaminated with mold. He observed that the bacteria near the mold were dying and isolated the mold for further study. After identifying it as a member of the genus Penicillium, he found that it was effective against pathogens that caused pneumonia, meningitis, scarlet fever, gonorrhea, and diphtheria (Tan & Tatsumura, 2015).
Figure 3: Pictured are the differences between gram-positive (left) and gram-negative (right) bacterial cell walls. These differences allow researchers to classify bacteria through their staining properties in the Gram stain. Gram-positive bacteria may be gram-stained because they possess a thick, exposed peptidoglycan layer, whereas gram-negative bacteria possess an outer phospholipid cell membrane instead. Source: Wikimedia Commons
With the standards created by Ehrlich's team and Fleming's discovery of penicillin, the pharmaceutical research community exploded. The 1940s to 1960s have been deemed the "golden era" of antibiotic discovery. Researchers found sixteen major antibiotic classes; today, the most popular are tetracyclines, quinolones, macrolides, glycopeptides, aminoglycosides, and carbapenems (Ribeiro da Cunha et al., 2019; Anderson, 2003). Additionally, much was discovered about how antibiotics work. For example, antibiotic-mediated cell death is a complex process and is still being researched today. Generally, it begins with the interaction of the antibiotic drug molecule with its bacterial target; the drug then alters the bacteria on a biochemical and molecular level, including the formation of breaks along DNA double-strands, the arrest of RNA synthesis, and the mistranslation of proteins at the ribosome (Kohanski et al., 2010). By the advent of the 21st century, research on the microbiome was well underway. Yet 2001 is often regarded as the birth-year of microbiome investigation (Prescott, 2017). Why then? One likely answer lies in the emergence of increasingly efficient and accessible genetic sequencing technologies. A novel technique called high-throughput sequencing, for example, enabled researchers to sequence the DNA and RNA of microbiota in a rapid and cost-effective manner (Ellermann et al., 2017). This expedited sequencing ability gave rise to the most comprehensive microbiome
studies ever performed; the NIH-funded Human Microbiome Project uses sequencing techniques to characterize the genetic fingerprint of the human microbiome and to analyze its role in human health and disease (Human Microbiome Project, 2020). The Human Microbiome Project is still underway, now in a post-characterization phase that focuses on the microbiome’s role in diseases and disorders. For many years, scientists have understood that the microbiomes of diseased subjects tend to differ from those of healthy subjects. One of the many remaining questions, however, is whether that microbiome disruption is a cause of disease, a symptom, or both. The second phase of the Human Microbiome Project seeks to answer this question through the lens of three cases in particular: the onset of Inflammatory Bowel Disease, the onset of Type-2 Diabetes, and preterm births (Human Microbiome Project, 2020). While these studies have not yet concluded, related analyses suggest that a loss in enteric microbiome diversity is implicated in many diseases and is even associated with increased mortality rates among older individuals (Durack & Lynch, 2019). Interestingly, this relationship works both ways; in certain cases, the same bacterial clade whose loss is associated with the onset of disease can actually exert a protective effect when supplied in excess (Breyner et al., 2017). Evidently, the microbiome is a dynamic, personalized mechanism for both diagnosis and treatment of disease.
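To give a sense of how "diversity" is quantified in such analyses, the sketch below computes the Shannon index, one common diversity metric, from taxon abundance counts. The counts are invented for illustration; real studies derive them from processed sequencing reads.

```python
import math

def shannon_index(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over taxa present."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Hypothetical genus-level read counts for two gut samples.
healthy = [900, 850, 700, 650, 500, 400, 300, 200]   # evenly spread taxa
disrupted = [4000, 300, 100, 50, 30, 20]             # one taxon dominates

print(f"healthy sample:   H = {shannon_index(healthy):.2f}")
print(f"disrupted sample: H = {shannon_index(disrupted):.2f}")   # lower H
```

A lower index reflects the kind of diversity loss described above, though no single metric captures community health on its own.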
“For many years, scientists have understood that the microbiomes of diseased subjects tend to differ from those of healthy subjects.”
Figure 4: Staphylococcus aureus biofilm growing on the surface of a catheter. The web-like structure between bacteria is the extracellular matrix. When biofilms develop on medical implants like catheters, they often cannot be treated, and the implant must be removed (CDC, 2005). Source: CDC PHIL
"While bacteria are often conceived of in their single-celled or planktonic forms, most bacteria in nature live in the form of biofilms. It is estimated that biofilms account for “80% of chronic and recurrent microbial infections.'"
As understanding of the microbiome expands, the list of diseases and disorders in which it is thought to play a role has grown accordingly. Among the most recent research hotspots is the field of neurology; clinical evidence implicates the microbiome in Alzheimer's disease, autism spectrum disorder, multiple sclerosis, Parkinson's disease, and stroke (Cryan et al., 2019). Though researchers are not yet certain of the specific molecular pathways through which gut microbiota affect the brain, it is clear that both direct pathways (e.g., the vagus nerve) and indirect pathways (e.g., neuroactive metabolites produced in the gut) facilitate gut-brain interaction (Cryan et al., 2019). Neurological disorders, however, are only the beginning: lung, liver, metabolic, respiratory, and autoimmune diseases have also been linked to the microbiome (Wang et al., 2017). As sequencing techniques and computational tools continue to develop, the role of the microbiome in disease will grow clearer. What's certain now is the gut microbiome's unquestionable effect on health, even in processes far removed from the gut itself.
Viewing the Microbiome: A Look at the Major Players

Most bacteria can be classified as gram-positive or gram-negative depending on the composition of the peptidoglycan polymer in their cell wall. The cell wall of gram-positive bacteria has a variety of carbohydrate polymers in addition to a thick layer of peptidoglycan. A gram-negative cell wall, on the other hand, has a thinner peptidoglycan layer and also an outer phospholipid bilayer membrane (Gram Positive vs Gram Negative, n.d.). These differences in
cell wall composition affect staining properties, which is how bacteria are classified as either gram-positive or gram-negative in the Gram stain. These cell wall differences allow for a reliable method of classifying bacteria, but it is a broad biochemical characterization. Bacteria can be more specifically identified by mapping phylogenetic relationships. Phylogeny refers to the evolutionary relationships and connections among organisms. These phylogenetic relationships are determined by comparing small subunit rRNA sequences (a minimal sketch of this comparison appears at the end of this passage). This enhances our knowledge of bacteria by allowing researchers to test hypotheses on microbial evolution and determine the genes that contribute to how bacteria are classified. While bacteria are often conceived of in their single-celled, or planktonic, forms, most bacteria in nature live in the form of biofilms. It is estimated that biofilms account for "80% of chronic and recurrent microbial infections" (Sharma et al., 2019). Biofilms are collaborative communities of microbes that can be made up of one or many different species of microorganisms, and which adhere to and grow on surfaces. The biofilm formation process begins with the attachment of planktonic bacteria to a surface—first reversibly, attached to the surface by one pole, then irreversibly, attached along its axis with reduced flagellar rotation (Petrova and Sauer, 2012). At this point, gene expression in the microbes changes, and the biofilm forms an extracellular matrix (ECM) made of exopolysaccharides, extracellular DNA, and various proteins, which protects the microbes during the maturation of the biofilm (Sharma et al., 2019).
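Returning to the sequence comparison mentioned above, its simplest form is a pairwise distance matrix over aligned small subunit rRNA fragments; a tree-building algorithm such as neighbor joining would then turn those distances into a phylogeny. The toy sequences and species labels below are invented and assumed to be pre-aligned.

```python
def p_distance(seq_a: str, seq_b: str) -> float:
    """Proportion of mismatched positions between two aligned sequences."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned to equal length"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return mismatches / len(seq_a)

# Toy pre-aligned 16S rRNA fragments (hypothetical species).
seqs = {
    "species_A": "ACGTGGCTAACGTTAGCCGA",
    "species_B": "ACGTGGCTAACGTTAGCTGA",   # one mismatch vs. species_A
    "species_C": "ACTTGGATAACGATAGCTGC",   # more divergent
}

names = list(seqs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} vs {b}: distance = {p_distance(seqs[a], seqs[b]):.2f}")
```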
The microbes within a biofilm communicate with one another through quorum sensing: the release and detection of signaling molecules between cells. In a biofilm, quorum sensing begins after irreversible attachment and is central to the ability of microbes to collaborate with one another (Petrova and Sauer, 2012; Hibbing, 2010). Quorum sensing alters gene transcription and allows for bacterial conjugation, in which bacteria transfer genetic material directly between organisms. These processes in turn cause the biofilm to alter its communal pH and osmolality, among other physical and chemical features. These chemical changes, in addition to the protection that the ECM provides, are two of the most likely factors contributing to a biofilm’s increased antimicrobial resistance (Sharma et al., 2019). Quorum sensing also allows microbes to communicate with one another when their environment is no longer suitable for biofilm maintenance; they signal to one another to detach from the surface and return to their planktonic form (Petrova & Sauer, 2012). The microbiome consists of a community made up of many different types of microorganisms, including viruses, fungi, protozoa, and bacteria. The interactions between these microbes are an important research area for the microbiome field, especially when looking at the development and function of the host’s immune system. Bacteria, fungi, and protozoa interact with each other and rely on other microbes in order to survive or reproduce, through relationships that range from symbiotic cooperation to antagonistic competition.
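As a toy illustration of the quorum-sensing threshold just described, the sketch below (every number is invented for illustration) flips a gene-expression switch once the shared signal concentration, which scales with cell density, crosses a threshold.

```python
# Quorum sensing in caricature: each cell secretes autoinducer at a fixed
# rate, and collective behaviors switch on once the shared concentration
# crosses a threshold. All values are arbitrary illustration units.
SECRETION_PER_CELL = 1e-9  # signal contributed per cell
THRESHOLD = 0.5            # concentration at which gene expression flips

for density in (1e6, 1e7, 1e8, 1e9):  # cells per mL
    signal = SECRETION_PER_CELL * density
    state = "biofilm program ON" if signal >= THRESHOLD else "planktonic program"
    print(f"density={density:.0e} cells/mL -> signal={signal:.2f} -> {state}")
```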
Viruses, however, differ from the other microorganisms in that they cannot survive on their own and must take over the processes of other cells to reproduce. This results in different interactions between viruses and the rest of the microbiome. For example, bacteria can often be hosts to viruses through the lytic cycle. In this cycle, a virus called a bacteriophage infects a bacterium and reproduces inside of it. After replication, lytic phages lyse the bacterial cells and look for new hosts to infect, repeating the cycle. It has been observed that phages can regulate bacterial populations through the lytic life cycle while also being able to facilitate and exacerbate bacterial infection. Additionally, the human virome plays a role in disseminating antibiotic-resistance genes through horizontal gene transfer (Rowan-Nash et al., 2019). Currently, there is a large knowledge gap concerning the roles of bacteriophages and the impact of these interactions in the human body and microbiome. Because a disruption in the balance of the different microorganisms in the microbiome can contribute to disease and the disruption of normal bodily functions, studying the interactions between these microorganisms will be vital to fully understanding how the microbiome affects human health and disease.
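The claim that phages can regulate bacterial populations is easy to see in a toy predator-prey model. The sketch below is a deliberately crude simulation with invented rates; real phage-host dynamics involve latent periods, burst-size variation, and spatial structure.

```python
# Lytic phage holding a bacterial population in check (toy Euler simulation).
GROWTH = 0.7        # bacterial growth rate (per hour)
CAPACITY = 1e8      # carrying capacity (cells/mL)
ADSORPTION = 1e-9   # phage-bacterium infection rate
BURST = 50          # phage released per lysed cell
DECAY = 0.1         # phage decay rate (per hour)
DT = 0.01           # time step (hours)

bacteria, phage = 1e6, 1e4
for step in range(int(24 / DT) + 1):
    if step % int(4 / DT) == 0:
        print(f"t={step * DT:4.0f} h  bacteria={bacteria:9.2e}  phage={phage:9.2e}")
    infections = ADSORPTION * bacteria * phage          # lytic infections/hour
    bacteria += DT * (GROWTH * bacteria * (1 - bacteria / CAPACITY) - infections)
    phage += DT * (BURST * infections - DECAY * phage)  # burst minus decay
    bacteria, phage = max(bacteria, 0.0), max(phage, 0.0)
```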
Figure 5: Rahnella aquatilis growing on a MacConkey agar plate. Since the bacteria can grow, we know this species is gram-negative. The lower right quadrant, where no bacteria are growing, is still pink, while the quadrants with growing bacteria are yellow. This color change from pink to yellow indicates that this species of bacteria is able to ferment lactose (CDC, 1985). Source: CDC PHIL
Analyzing the Microbiome - Evolution of Scientific Techniques
Microbiology began as an observational practice: scientists would observe bacteria that they found growing in nature or on food. It was not until 1860 that Louis Pasteur created the first artificial liquid culture medium out of “‘yeast soup’, ashes, sugar and ammonium salts” (Bonnet et al., 2020). The ability to culture bacteria is crucial, as it allows microbiologists to grow large quantities of bacteria in a replicable manner, paving the way for advancements in bacterial knowledge. A limitation of liquid culture, however, is that it does not allow scientists to isolate different species of microbes from one another; since bacterial samples were collected from nature, food, or the body, there were almost always multiple types of microbes present in these samples. Only a solid growth medium allows individual colonies to grow and permits species isolation, as it keeps bacteria spread apart and stationary. In 1882, Fannie Hesse recommended the use of agar to solidify media, as she used the substance to solidify jams. Robert Koch took her advice and popularized the use of agar plates (Bonnet et al., 2020).
"The microbiome consists of a community made up of many different types of microorganisms that include viruses, fungi, protozoa, and bacteria."
Figure 6: 16S Sequencing, in tandem with whole genome sequencing, allows researchers to infer the identity of bacterial species present in a sample and begin constructing their evolutionary relationships. Source: Wikimedia Commons
"While growth factors are used to promote bacterial growth, “selective media” is created with the purpose of only growing bacteria with certain characteristics."
304
Microbial culture media must include a few key components to grow bacteria: “water, a carbon source, a nitrogen source and some mineral salts” (Bonnet et al., 2020). Some bacteria can grow in this minimal medium, while others require additional elements to grow. These elements, called growth factors, can include amino acids, antioxidants, blood derivatives, purine and pyrimidine bases, and more (Bonnet et al., 2020). While growth factors are used to promote bacterial growth, “selective media” are created with the purpose of only growing bacteria with certain characteristics. Selective media are made with one or more elements that limit the growth of microbes based on their ability to tolerate that element. In many laboratory procedures, antibiotics are included in media to ensure that only bacteria with resistance to that antibiotic will grow. Before techniques like DNA sequencing were widely available and affordable, microbiologists used selective media
to identify and characterize unknown microbes (Bonnet et al., 2020). Whether a microbe is aerobic or anaerobic, or whether a microbe can break down glucose or lactose, can all be determined by tests conducted in selective media. A popular type of selective medium is MacConkey agar, which both selects for gram-negative bacteria and differentiates lactose fermenters. The medium is formulated with bile salts to prevent gram-positive bacteria from growing. While non-lactose-fermenting gram-negative bacteria can still grow on the agar, those that do ferment lactose will form pink colonies due to the presence of a pH indicator that reacts to the lowering of local pH after lactose fermentation (Smith, 2019). MacConkey agar is just one of many selective media still in use today.
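The logic of reading such a plate is simple enough to write down. Below is a toy encoding of the MacConkey rules just described, with hypothetical isolates; real identification combines many such biochemical tests.

```python
def interpret_macconkey(growth: bool, pink_colonies: bool) -> str:
    """Toy decision rules for reading a MacConkey plate, per the text:
    bile salts block gram-positive growth, and the pH indicator turns
    lactose-fermenting colonies pink."""
    if not growth:
        return "no growth: likely gram-positive (inhibited by bile salts)"
    if pink_colonies:
        return "growth with pink colonies: gram-negative lactose fermenter"
    return "growth with colorless colonies: gram-negative non-fermenter"

# Hypothetical plate reads:
print(interpret_macconkey(growth=True, pink_colonies=True))    # e.g., E. coli
print(interpret_macconkey(growth=False, pink_colonies=False))  # e.g., S. aureus
```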
In the last few years, the long-used classical culture techniques for isolating and identifying bacterial species have been supplemented, if not yet supplanted, by modern molecular identification techniques. Though not yet validated for use in the clinical setting, certain labs have begun using 16S sequencing and mass spectrometry in tandem to identify bacteria (often at the genus level) and show what molecules are being produced. The full community of the bacterial microbiome, as well as its modes of molecular exchange, is starting to be illuminated (Janda and Abbott, 2007). How do these techniques work? 16S sequencing identifies bacterial species by targeting the 16S rRNA gene that encodes the small ribosomal subunit in prokaryotic species (bacteria and archaea). Ribosomes are the cellular factories that churn out new proteins based on the template provided in DNA and transcribed into mRNA. Eukaryotic species (including fungi, fish, birds, and mammals) do not have a 16S rRNA gene; they possess an 18S rRNA gene instead, and therefore will not be identified during sequencing. That being said, even though each bacterial species possesses the same 16S gene, they can still be distinguished: while the gene is shared, its DNA sequence differs in certain hypervariable regions (Osman et al., 2018). In contrast, instead of identifying bacteria by their DNA, mass spectrometry aims to uncover the identity of other small molecules produced by bacteria, such as bacterial toxins like botulinum neurotoxin (the cause of botulism) (Barr et al., 2005). At the fundamental level, mass spectrometry involves the fragmentation of the molecules in a sample into a multitude of tiny charged fragments. These fragments are shot at high speed through a chamber and subjected to an electromagnetic field. Due to their charge, fragments are deflected by the electromagnetic field, but to different degrees based on their mass-to-charge ratio, and ultimately smash into a detector at the end of the chamber. The resulting pattern of deflected fragments, to the trained scientist, is a puzzle to be solved. Using the identity of the fragments and the pattern on the detector, a scientist can piece the original molecules back together and determine what molecules were initially present in the sample (Urban, 2016).
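In simplified form, the detector-ordering idea reduces to sorting fragments by mass-to-charge ratio. The fragment masses below are invented placeholders; real spectra contain thousands of peaks and require careful calibration.

```python
# Toy mass-spectrometry bookkeeping: fragments separate by m/z, not by
# mass alone, so two different fragments can land on the same peak.
fragments = [
    {"name": "frag_1", "mass_da": 120.0, "charge": 1},
    {"name": "frag_2", "mass_da": 240.0, "charge": 2},  # same m/z as frag_1
    {"name": "frag_3", "mass_da": 90.0,  "charge": 1},
]

for f in fragments:
    f["mz"] = f["mass_da"] / f["charge"]

# Lower-m/z fragments are deflected more strongly, so sorting by m/z
# reproduces the order of peaks arriving at the detector.
for f in sorted(fragments, key=lambda frag: frag["mz"]):
    print(f"{f['name']}: m/z = {f['mz']:.1f}")
```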
A recent paper developed by an international team of scientists at Carnegie Mellon and the National Research University Higher School of Economics in St. Petersburg, Russia demonstrates just how valuable these experimental techniques can be. The team essentially developed an experimental pipeline that takes existing data sets (with both 16S sequencing and mass spectrometry data) and subjects them to further analysis to determine how specific bacterial species are connected with the production of different molecules. The researchers tested their pipeline on data produced by the American Gut Project and the HUMAN-CF initiative, which provided gut microbiome samples from cystic fibrosis patients. Using certain analytical tools, the researchers were not only able to show how the different bacterial species present in the samples contributed to the production of various molecules; they were also able to show how the bacteria broke down metabolites produced by the human host and transformed them into unique products (Cao, Shcherbin, and Mohimani, 2019). Beyond 16S sequencing and mass spectrometry, other experimental techniques go even deeper in their analysis, allowing experimenters to see how bacterial gene expression changes amidst variable circumstances, such as the administration of antibiotics or the emerging presence of another microbial species. RNA sequencing (RNA-Seq) technology is an incredibly powerful research tool that allows scientists to precisely characterize RNA transcripts. Since its creation in the mid-2000s, researchers have made several key discoveries into the extent and complexity of eukaryotic transcriptomes (Wang et al., 2009). The potential of RNA sequencing for microbiome research is almost limitless, as it allows researchers to gauge bacterial gene expression under different experimental conditions, helping them understand the role of the microbiome in cancers, in immune response, and in general health. Advancements in RNA-Seq in the 2010s have allowed researchers to find genes actively expressed in complex bacterial communities, helping them better understand the transition from healthy microbiome environments to diseased microbiome environments (Bashiardes et al., 2016).
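A bare-bones sketch of the association step at the heart of such a pipeline appears below, using synthetic data; the actual method of Cao and colleagues builds a full metabolome- and metagenome-wide network with multiple-testing control, which this does not attempt.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic paired measurements across 30 samples: the 16S-derived
# abundance of one taxon and the mass-spec intensity of one metabolite.
taxon_abundance = rng.lognormal(mean=0.0, sigma=1.0, size=30)
metabolite = 2.0 * taxon_abundance + rng.normal(scale=1.0, size=30)

rho, p = spearmanr(taxon_abundance, metabolite)
print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
# A strong rank correlation flags the taxon as a candidate producer (or
# consumer) of the metabolite -- a hypothesis to test, not a proof.
```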
"In the last few years, the longused classical cultural techniques for isolating and identifying bacterial species have been supplemented, if not yet supplanted, by modern molecular identification techniques."
Perhaps the biggest step forward in recent years was the use of high-throughput transfer RNA sequencing in 2018.
Figure 7: An electron microscopy image of Enterobacteriaceae bacteria, a family of potentially infectious bacteria that includes E. coli and Salmonella. The Enterobacteriaceae family was the type of bacteria identified by Salosensaari and his research team as a potential predictor of lifespan. Source: Janice Haney Carr, CDC, Wikimedia Commons
"As scientists become more capable of identifying what microbes are present in human microbiomes, they have begun putting more emphasis on epidemiological analyses."
A team of researchers studied transfer RNAs (tRNAs) using a combination of direct sequencing approaches (which read RNA directly rather than first converting it to DNA for sequencing), tRNA-seq, and tRNA-seq-tools (a software package). Direct sequencing approaches were arguably the biggest step forward; prior to their development, researchers relied on indirect approaches, which consumed significantly more genetic material and required researchers to first sequence DNA, resulting in a higher possibility of error. The team tested cecal samples of mice with different diets and uncovered information on taxon- and diet-dependent variations in tRNA. The use of these technologies allowed the team to obtain sequence information, abundance profiles, and post-transcriptional modification data for the tRNAs; these applications will unlock information about the microbiome in years to come (Schwartz et al., 2018). As scientists become more capable of identifying what microbes are present in human microbiomes, they have begun putting more emphasis on epidemiological analyses. The goal of these epidemiological studies is to understand which microbial community structures are most commonly associated with different diseases. Eventually, scientists hope to have a deep enough understanding of the human microbiome to be able to predict what diseases a patient may be susceptible to based on the composition of their microbiome; alternatively, this understanding of a patient’s microbiome could be used to personalize their treatment (Foxman and Martin, 2015). There are a number of challenges to epidemiological studies of the microbiome, including the variability of the microbiome and of the techniques used to study it. Regarding the former, microbiomes, “like the human genome—can be uniquely identifying” (Foxman and Martin, 2015). Microbiomes not only vary widely between individuals but also within an individual over time. Most microbes reproduce quickly and do so in response to the nutrients available to them, the other microbes in their environment, and the outside organisms they are exposed to. This means that a human’s microbiome is shifting constantly in response to events as simple as eating. These short-term changes shift the amount of each microbe present, not necessarily which species are present. On the other hand, broad-spectrum antibiotics cause changes to the microbiome that can be detected weeks or
even months after treatment. If an individual’s microbial community structure is identified once, their microbiome will almost certainly be different the next time that procedure is carried out, albeit to varying extents based on innumerable factors (Foxman and Martin, 2015). Another challenge to epidemiological studies is the variation of results based on the techniques used to gather and analyze microbial community data. Factors such as the number of freeze-thaw cycles a sample undergoes, the length of sample storage before testing, and the brand of kit used for DNA extraction can influence the outcome of microbial sequencing. These factors are difficult to standardize across labs, or even within the same study; many samples are taken from study participants’ stool, which is often collected at home and returned to the lab after varied amounts of time. While the ability of 16S rRNA sequencing to detect low levels of bacterial DNA can be beneficial, the technique also amplifies the DNA of bacterial contaminants, skewing study results (Goodrich et al., 2016). A study in Cell “found that all commercial reagents, from extraction kits to primers and polymerases, may be contaminated with microbial DNA, and levels can differ between batches from the same vendor” (Goodrich et al., 2016). The final challenge to epidemiological studies of the microbiome is the lack of consensus on how best to analyze and synthesize the data obtained. Additionally, “because the measures are highly discriminatory, it is possible to obtain statistically significant results from relatively small samples,” making it difficult for scientists to determine when a statistical difference translates to a true clinical difference (Foxman and Martin, 2015). Beyond these statistical issues, if a patient’s microbial community was
only identified after they presented with a disease, it is impossible to distinguish whether their microbiome factored into the onset of their disease or whether their microbiome was changed as a result of the disease. To resolve this question of cause versus effect, longitudinal studies of healthy and diseased individuals must be conducted (Foxman and Martin, 2015). The study of the microbiome is still relatively new, and its techniques are still being refined. As the field grows, these concerns will be studied, addressed, and likely standardized, allowing microbiome studies to be more broadly applied in medical practice.
Mastering the Microbiome - A Vision for the Future
Rational Drug Design: Rational drug design is a fairly new method for developing new drugs, including antibiotics. The goal of rational drug design is to design drugs based on knowledge of their target, especially its structural and chemical characteristics, rather than basing drug discovery on trial-and-error. As such, new methods in rational drug design are allowing researchers to develop powerful new antibiotics and repurpose known antibiotics in an efficient manner. The process of rational drug design begins with a few cycles of research and development, which culminate in bringing a potentially therapeutic compound to clinical trials and then to the market (Anderson, 2003). In the first cycle, researchers must clone, purify, and determine the structure of the drug target, which is typically a protein or nucleic acid, as these have direct effects on bodily functions (Anderson, 2003). X-ray crystallography, nuclear magnetic resonance spectroscopy, and homology modeling are some of the methods used to determine the target structure. In developing antimicrobial drugs, the goal is total inhibition to kill the pathogen (Anderson, 2003). This is an important distinction from the goal of drugs that target human proteins: these drugs should generally regulate a function rather than completely disable it, as total inhibition might harm the patient (Anderson, 2003). As such, when developing drugs to attack a pathogen in a human host rather than regulate one of the human host’s own proteins, the target should ideally be unique to the pathogen (and not present in the healthy human host) and play a crucial role in the cell (Anderson, 2003). Thus, when inhibited, its loss is so severe that
it induces pathogenic cell death while leaving the human host unharmed (Anderson, 2003). After establishing a clear structure, drug developers may use computer algorithms to screen different molecules for their binding affinities to the target (Anderson, 2003). One useful database of molecules is ZINC15, and some programs used to study theoretical binding affinities include UCSF Chimera, DOCK, SLIDE, FlexX, and others (Anderson, 2003). After selecting the most promising compounds based on the computer models, researchers can determine the structure of the drug target in complex with, or bound to, the prospective drug (Anderson, 2003). Studying the drug target in complex with a compound of interest using these programs allows researchers to identify potential weak spots or even more promising areas to aim for. As such, drug developers may determine ideal target sites for inhibition and also use this information to identify exactly where and how to adjust the drug’s structure to maximize drug potency and efficacy (Anderson, 2003). Following many cycles, the result is a lead compound that shows specificity for its target, which is then further studied and may enter animal and eventually human trials to obtain certification to be sold commercially (Anderson, 2003).
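The screen-then-rank step can be caricatured in a few lines. The scores below are invented placeholders; in practice they would come from physics-based docking programs like the DOCK or FlexX tools named above.

```python
# Hypothetical compounds with mock predicted binding energies (kcal/mol);
# more negative means tighter predicted binding to the target.
candidate_scores = {
    "compound_001": -9.2,
    "compound_002": -6.1,
    "compound_003": -10.4,
    "compound_004": -7.8,
}

TOP_N = 2  # how many leads advance to target-ligand structural studies
ranked = sorted(candidate_scores.items(), key=lambda item: item[1])
for name, score in ranked[:TOP_N]:
    print(f"{name}: predicted binding energy {score} kcal/mol")
```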
"The goal of rational drug design is to design drugs based on knowledge of their target, especially structural and chemical characteristics, rather than basing drug discovery on trialand-error."
Computational methods, including artificial intelligence (AI) and machine learning, are critical in the rational development of antibiotics. In one study, a team of researchers at MIT used AI to develop an antibiotic called “halicin,” which has shown promising results against tuberculosis and other bacteria (Marchant, 2020). They created a neural network (a computer algorithm that operates similarly to neurons in the brain) that can recognize patterns in drug activity and identify potentially potent compounds that humans cannot see (Marchant, 2020). Even though neural networks are trained on past data, the sheer amount of data they can analyze allows the computers to make more frequent connections and recognize patterns much faster than any researcher could. Whereas drug developers have historically based new antibiotic designs on previously successful structures, as evident in the numerous penicillin derivatives used today to fight bacterial infections, a neural network can identify novel structures that may be effective antibiotics. Researchers have also been exploring machine learning for the rational drug design of antibiotics, as discussed below.
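As a shape-of-the-idea sketch only: the snippet below trains a small neural network on synthetic molecular "fingerprints" (bit vectors marking substructures) with an invented activity rule. The halicin work trained a far larger model on real screening data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# 500 synthetic molecules, each a 64-bit substructure fingerprint.
X = rng.integers(0, 2, size=(500, 64))
# Hypothetical rule: molecules with both substructures 3 and 17 inhibit growth.
y = (X[:, 3] & X[:, 17]).astype(int)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X[:400], y[:400])
print("held-out accuracy:", model.score(X[400:], y[400:]))

# Scoring an unseen "molecule" mimics screening a chemical library.
new_molecule = rng.integers(0, 2, size=(1, 64))
print("predicted inhibitor?", bool(model.predict(new_molecule)[0]))
```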
Figure 8: Probiotics help your body in several ways, four of which are illustrated in this image. Box A depicts bacteria (pictured in blue) in direct competition with pathogens (pictured in brown) for nutrients. Probiotics make beneficial bacteria stronger and therefore more easily able to get nutrients (Maldonado et al., 2019). Box B depicts probiotic adhesion to pathogen binding sites, which keeps pathogens from adhering to the host’s cells (Monteagudo et al., 2019). Box C depicts the enhancement of the immune system by probiotics, which helps the body fight pathogens (Markowiak & Śliżewska, 2017). Box D depicts bacteria directly attacking pathogens, as probiotics induce bacteria to kill pathogens (Maldonado et al., 2019). With the use of the microbiome, doctors may be able to give patients highly specific probiotics that are tailored to help with their specific issues, greatly increasing the efficacy of treatment. Source: Wikimedia Commons
"There are six main antibiotics that treat UTIs; however, since many of the bacteria that cause UTIs exist asymptomatically in humans, they are regularly exposed to antibiotics that do not completely eradicate the infection, leaving those bacterial left to develop resistance and grow."
308
While government funding for such research is low, the need for new antibiotics is high, as many have become useless against bacteria that have grown resistant through natural selection (Didelot and Pouwels, 2019). As a result, developing new antibiotics is not profitable for pharmaceutical companies, and many have abandoned such research or tried desperately to find new ways of cutting costs and time (Didelot and Pouwels, 2019). These harmful phenomena are exacerbated when considered alongside Eroom’s Law (the reverse of Moore’s Law), in which research and development (R&D) efficiency, “measured simply in terms of the number of new drugs brought to market by the global biotechnology and pharmaceutical industries per billion US dollars of R&D spending,” has declined in past years despite numerous technological advances (Scannell et al., 2012). The lack of government funding for producing new drugs, due to complex economic forces, creates a bleak outlook for antimicrobial drug development despite a growing need for new drugs. To combat this, some researchers have explored using machine learning to assist appropriate selection of antibiotic prescriptions
so that existing drugs may be used efficiently, especially for urinary tract infections (UTIs) (Didelot and Pouwels, 2019). There are six main antibiotics that treat UTIs; however, since many of the bacteria that cause UTIs exist asymptomatically in humans, they are regularly exposed to antibiotics that do not completely eradicate the infection, leaving the surviving bacteria to develop resistance and grow (Didelot and Pouwels, 2019). Thus, doctors must try to determine which antibiotic will best treat each patient, as exposing the bacteria to an insufficient drug will exacerbate resistance. In one study, researchers used gradient-boosted decision trees (a form of machine learning) to train an algorithm that takes into account a patient’s demographics, infection history, and previous antibiotic use to determine the most effective type and quantity of UTI antibiotic (Didelot and Pouwels, 2019). The results were promising: doctors prescribed an inappropriate antibiotic 8.5% of the time, whereas the algorithm selected an inappropriate antibiotic only 5% of the time (Didelot and Pouwels, 2019).
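A toy version of that workflow, with synthetic patients and an invented failure rule standing in for the predictors named in the study, might look like the following; it is not the published model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(7)
n = 1000

# Synthetic stand-ins for patient features: age, a prior resistant
# infection, and recent exposure to the same antibiotic.
age = rng.integers(18, 90, n)
prior_resistant = rng.integers(0, 2, n)
recent_same_drug = rng.integers(0, 2, n)
X = np.column_stack([age, prior_resistant, recent_same_drug])

# Hypothetical outcome: failure is likelier after prior resistance or
# recent exposure to the same drug.
p_fail = 0.05 + 0.35 * prior_resistant + 0.25 * recent_same_drug
failed = rng.random(n) < p_fail

model = GradientBoostingClassifier(random_state=0)
model.fit(X[:800], failed[:800])

# Predicted failure risk would steer the choice of antibiotic.
risk = model.predict_proba(X[800:])[:, 1]
print("mean predicted failure risk, held-out patients:", round(risk.mean(), 3))
```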
These results suggest that AI and machine learning can be used not only to develop novel antibiotics but also to personalize patient prescriptions and minimize the development of antibiotic-resistant bacteria. Disease Diagnostics: For many disorders and diseases, the microbiome of the diseased individual is significantly different from their healthy microbiome (Durack and Lynch, 2019). As discussed previously, this relationship suggests that the microbiome can and does interact with surrounding bodily systems. Additionally, this relationship raises a question: if the diseased microbiome differs clearly and consistently enough from the healthy microbiome, might our microbiota eventually serve as diagnostic tools? Could the abundance of certain microbial molecules relative to the baseline signal the need for medical intervention? Recent findings are promising. A pair of investigations published in early 2020 found that the microbiome might be able to reveal the presence of many diseases even better than our own genes (Tierney et al., 2020; Salosensaari et al., 2020). In the first of these studies, Tierney and colleagues compared the genomic sequencing information from microbiomes in diseased individuals to sequencing information from 24 genome-wide association studies (which correlate specific genetic variants with diseases). Remarkably, the genetic signature of gut microbes was 20% better at discriminating between a healthy and an ill person than a person’s own genes (Tierney et al., 2020). In the second study, researchers retrospectively analyzed stool samples from a longitudinal Finnish project to examine the link between microbiome and lifespan. They found that individuals with particularly high levels of certain bacteria were 15% more likely to die within the next fifteen years than those without that abundance (Salosensaari et al., 2020). Both of these microbiome-diagnosis studies remain in a pre-publication stage, and the lead researchers acknowledge that their analyses are preliminary (Ortega, 2020). Other researchers, like Jeroen Raes of the VIB-KU Leuven Center for Microbiology, have weighed in with concerns; Raes cautions that despite these recent findings, scientists still don’t understand the microbiome nearly as well as they understand our genes, which can make comparisons between the two diagnostic methods “risky” (Ortega, 2020).
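The comparison of discriminative power can be illustrated on synthetic data: build one classifier on "microbiome" features and one on "host genetic" features, then compare held-out AUC. The effect sizes below are invented; this shows the logic of the comparison, not the Tierney analysis itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 600
disease = rng.integers(0, 2, n)

# Plant a stronger disease signal in the microbiome features (an
# assumption made purely to mirror the reported result).
microbiome = rng.normal(size=(n, 20))
microbiome[:, :3] += 0.9 * disease[:, None]
genetics = rng.normal(size=(n, 20))
genetics[:, :3] += 0.3 * disease[:, None]

for name, X in (("microbiome", microbiome), ("host genetics", genetics)):
    clf = LogisticRegression(max_iter=1000).fit(X[:400], disease[:400])
    auc = roc_auc_score(disease[400:], clf.predict_proba(X[400:])[:, 1])
    print(f"{name:>13}: held-out AUC = {auc:.2f}")
```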
However, as new findings continue to point to the diagnostic power of the microbiome, there is still optimism that a microbiome-driven diagnostic tool has the potential to greatly improve our detection of diseases with complicated pathologies. Both type 2 diabetes and schizophrenia, for example, have large environmental components. While this tends to limit the diagnostic and predictive power of genetic sequencing, the microbiome, given its acute sensitivity to factors like diet and exercise, might be optimally suited to raise diagnostic flags for these hard-to-diagnose conditions (Tierney et al., 2020). Precision Medicine: Precision medicine is on the next frontier of healthcare. It allows doctors to personalize care for patients by taking into account their genetic profile, lifestyle choices, and physical environment. This saves enormous amounts of time and money while giving patients better care from the start (Ginsburg and Phillips, 2018). The microbiome may be the key to popularizing the precision medicine model. In general, the health of the microbiome is an indicator of a person’s overall health. Given that the ecological strength of bacterial colonies is correlated with their chemical surroundings, a better understanding of the microbiome can give physicians an excellent overview of their patients’ health (Kashyap et al., 2017).
"Given that ecological strength is correlated with the chemical surroundings of the bacterial colonies, better understanding the microbiome will be able to give physicians an excellent overview of their patients’ health."
There are a few specific areas in which the microbiome will play a role during the adoption of precision medicine. The first is microbiome-xenobiotic interactions, or the way that the microbiome interacts with drugs. By better understanding the impact that drugs have on an individual’s gut bacterial colonies, physicians will be able to better predict how patients will react to different treatments and which drugs will be most efficacious with the fewest negative side effects. Second, physicians will be able to put an emphasis on preventative, rather than reactionary, medicine by giving patients prebiotics, or plant fibers that naturally boost the levels of beneficial bacteria in the gut. Finally, the door is open to create precision probiotics, which are substances containing live organisms that inoculate the gut with beneficial bacteria. Doctors could give patients highly specified probiotics that include the specific bacteria they need to fight off disease (Kuntz and Gilbert, 2017).
Fighting Antimicrobial Resistance:
"Resistance to antibiotics is characterized by bacterial chromosomal mutations or the acquisition of resistant genes through horizontal gene transfer (HGT)."
Antimicrobial resistance is one of the most studied phenomena enabling bacteria to survive exposure to antibiotics; the mechanisms by which bacteria withstand antibiotics include resistance, heteroresistance, tolerance, and persistence (Bakkeren et al., 2020). Understanding how these phenomena are related is crucial to developing methods to fight antimicrobial resistance from all sides and arrive at more effective treatments. Resistance to antibiotics is characterized by bacterial chromosomal mutations or the acquisition of resistance genes through horizontal gene transfer (HGT) (Bakkeren et al., 2020). Such mutations increase the minimum inhibitory concentration (MIC) of the antibiotic needed to kill the pathogens, and they are generally observed on a population-wide scale (Bakkeren et al., 2020). Heteroresistance, by contrast, occurs when only some members of a clonal bacterial population exhibit resistance (Bakkeren et al., 2020). Heteroresistance may be further divided into “stable” and “unstable” categories, in which a subpopulation’s heteroresistance remains unchanged over time or its levels of susceptibility vary depending on the presence of antibiotics, respectively (Andersson et al., 2019). Tolerance occurs when genetically susceptible bacteria can survive antibiotic exposure above the MIC but, unlike resistant bacteria, cannot replicate in such conditions (Bakkeren et al., 2020). Finally, persistence resembles tolerance, except that only a portion of the clonal population exhibits the tolerant trait (Bakkeren et al., 2020). Research has suggested that antibiotic persistence is not heritable, meaning that clones isolated from persistent or tolerant bacterial populations will replicate to produce a mixture of persistent and susceptible bacteria (Bakkeren et al., 2020). There are also two main types of persistence: spontaneous and triggered. The former occurs without exposure to a stimulus, while the latter is a response to a stressful stimulus; thus, most bacterial populations naturally contain some persistent bacteria (Bakkeren et al., 2020). The exact mechanisms through which genotypic alterations influence phenotypes and the evolutionary mechanisms encouraging persistence remain largely unknown (Bakkeren et al., 2020).
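Persistence has a characteristic experimental signature: a biphasic "kill curve," in which most of a clonal population dies quickly under drug while a small persister fraction dies far more slowly. The sketch below reproduces that shape with invented rates; note that the persisters are not resistant, since the MIC is unchanged.

```python
import math

N0 = 1e8                   # starting population (CFU/mL)
PERSISTER_FRACTION = 1e-4  # small subpopulation in a slow-dying state
K_SUSCEPTIBLE = 3.0        # death rate of normal cells under drug (per hour)
K_PERSISTER = 0.05         # persisters die far more slowly

for t in range(0, 11):
    susceptible = N0 * (1 - PERSISTER_FRACTION) * math.exp(-K_SUSCEPTIBLE * t)
    persisters = N0 * PERSISTER_FRACTION * math.exp(-K_PERSISTER * t)
    # Fast initial decline, then a plateau carried by the persisters.
    print(f"t={t:2d} h  survivors={susceptible + persisters:10.2e} CFU/mL")
```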
However, researchers have established that antibiotic persistence can promote both mutation and the acquisition of genes by HGT, the two mechanisms which give rise to antibiotic resistance (Bakkeren et al., 2020). Researchers have also recently made some interesting discoveries regarding one of the key mechanisms that lead to antibiotic resistance: target protection. In target protection, a resistance protein becomes physically associated with an antibiotic’s target, effectively blocking the antibiotic from reaching its active site (Wilson et al., 2020). Target protection affects numerous classes of antibiotics, such as β-lactams, which include penicillin-based drugs, and tetracyclines, which are used to fight more serious bacterial infections; however, advancements in rational drug design have allowed researchers to devise ways to combat this mechanism (Wilson et al., 2020). For instance, new methods involving small-molecule inhibitors of the resistance machinery are being studied to block target protection and fight antibiotic resistance (Wilson et al., 2020).
Conclusion
When the whole of human history is taken into account, our understanding of the human microbiome has expanded at a tremendous rate in the past two centuries, matching the accelerated pace of progress in the field of biology at large and in other realms of scientific knowledge. The antiquated miasma theory of infectious disease (the idea that it spread through ‘bad air’), dominant from the age of Rome to the mid-19th century, has given way to a more accurate and productive view of infectious disease as caused by microscopic organisms and the even smaller molecules that they produce. The world has seen a great constellation of achievements, from the pioneering work of Lister and Pasteur, to the antibiotic discoveries of Ehrlich and Fleming, to the modern Human Microbiome Project. And these scientific discoveries have brought about not only a greater understanding of the vast community of microscopic organisms that occupy every corner of the environment but also a better conception of ourselves. Microbes are not just agents of human disease; in fact, they are more often passive occupants or even helpful partners in the maintenance of human life. With the new technologies for molecular detection and genomic sequencing mentioned in this report, the ways in which microbes alter the physiology of the human host, for good or ill, are becoming ever better understood. There is great hope that new insights into the microbiome will fuel further progress towards tackling the grave
problem of antibiotic resistance and produce a far more holistic picture of human physiology to support disease diagnosis and personalized medical care.
References
10 Intriguing Facts About Joseph Lister. (2017, September 12). Mental Floss. https://www.mentalfloss.com/article/503311/10-intriguing-facts-about-joseph-lister
1832—The deadly epidemic that helped shape today’s Paris. (2010, November 18). RFI. https://www.rfi.fr/en/visiting-france/20101118-1832-epidemic-helped-shape-todays-paris
Aminov, R. I. (2010). A Brief History of the Antibiotic Era: Lessons Learned and Challenges for the Future. Frontiers in Microbiology, 1. https://doi.org/10.3389/fmicb.2010.00134
Anderson, A. C. (2003). The process of structure-based drug design. Chemistry & Biology, 10(9), 787–797. https://doi.org/10.1016/j.chembiol.2003.09.002
Antibiotics: List of Common Antibiotics & Types. (n.d.). Drugs.com. Retrieved July 26, 2020, from https://www.drugs.com/article/antibiotics.html
Bakkeren, E., Diard, M., & Hardt, W.-D. (2020). Evolutionary causes and consequences of bacterial antibiotic persistence. Nature Reviews Microbiology, 1–12. https://doi.org/10.1038/s41579-020-0378-z
Barr, J. R., Moura, H., Boyer, A. E., Woolfitt, A. R., Kalb, S. R., Pavlopoulos, A., McWilliams, L. G., Schmidt, J. G., Martinez, R. A., & Ashley, D. L. (2005). Botulinum Neurotoxin Detection and Differentiation by Mass Spectrometry. Emerging Infectious Diseases, 11(10), 1578–1583. https://doi.org/10.3201/eid1110.041279
Bashiardes, S., Zilberman-Schapira, G., & Elinav, E. (2016). Use of Metatranscriptomics in Microbiome Research. Bioinformatics and Biology Insights, 10, BBI.S34610. https://doi.org/10.4137/BBI.S34610
Bonnet, M., Lagier, J. C., Raoult, D., & Khelaifia, S. (2019). Bacterial culture through selective and non-selective conditions: The evolution of culture media in clinical microbiology. New Microbes and New Infections, 34. https://doi.org/10.1016/j.nmni.2019.100622
Bosch, F., & Rosich, L. (2008). The Contributions of Paul Ehrlich to Pharmacology: A Tribute on the Occasion of the Centenary of His Nobel Prize. Pharmacology, 82(3), 171–179. https://doi.org/10.1159/000149583
Breyner, N. M., Michon, C., de Sousa, C. S., Vilas Boas, P. B., Chain, F., Azevedo, V. A., Langella, P., & Chatel, J. M. (2017). Microbial Anti-Inflammatory Molecule (MAM) from Faecalibacterium prausnitzii Shows a Protective Effect on DNBS and DSS-Induced Colitis Model in Mice through Inhibition of NF-κB Pathway. Frontiers in Microbiology, 8, 114. https://doi.org/10.3389/fmicb.2017.00114
Cao, L., Shcherbin, E., & Mohimani, H. (2019). A Metabolome- and Metagenome-Wide Association Network Reveals Microbial Natural Products and Microbial Biotransformation Products from the Human Microbiota. MSystems, 4(4). https://doi.org/10.1128/mSystems.00387-19
Carr, J. H. (2016). Escherichia coli (strain O157:H7), electron micrograph at 6,836× magnification [Image]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Escherichia_coli_electron_microscopy.jpg
Cryan, J. F., O’Riordan, K. J., Cowan, C. S. M., Sandhu, K. V., Bastiaanssen, T. F. S., Boehme, M., Codagnone, M. G., Cussotto, S., Fulling, C., Golubeva, A. V., Guzzetta, K. E., Jaggar, M., Long-Smith, C. M., Lyte, J. M., Martin, J. A., Molinero-Perez, A., Moloney, G., Morelli, E., Morillas, E., … Dinan, T. G. (2019). The Microbiota-Gut-Brain Axis. Physiological Reviews, 99(4), 1877–2013. https://doi.org/10.1152/physrev.00018.2018
Didelot, X., & Pouwels, K. B. (2019). Machine-learning-assisted selection of antibiotic prescription. Nature Medicine, 25(7), 1033–1034. https://doi.org/10.1038/s41591-019-0517-0
Durack, J., & Lynch, S. V. (2019). The gut microbiome: Relationships with disease and opportunities for therapy. The Journal of Experimental Medicine, 216(1), 20–40. https://doi.org/10.1084/jem.20180448
Ellermann, M., Carr, J. S., Fodor, A. A., Arthur, J. C., & Carroll, I. M. (2017). Chapter 2 - Characterizing and Functionally Defining the Gut Microbiota: Methodology and Implications. In M. H. Floch, Y. Ringel, & W. Allan Walker (Eds.), The Microbiota in Gastrointestinal Pathophysiology (pp. 15–25). Academic Press. https://doi.org/10.1016/B978-0-12-804024-9.00002-1
Fast Facts About the Human Microbiome. (n.d.). Retrieved August 15, 2020, from https://depts.washington.edu/ceeh/downloads/FF_Microbiome.pdf
Fitzharris, L. (2017). The butchering art: Joseph Lister’s quest to transform the grisly world of Victorian medicine (First edition). Scientific American/Farrar, Straus and Giroux.
Foster, W., & Raoult, A. (1974). Early descriptions of antibiosis. The Journal of the Royal College of General Practitioners, 24(149), 889–894.
Foxman, B., & Martin, E. T. (2015). Use of the Microbiome in the Practice of Epidemiology: A Primer on -Omic Technologies. American Journal of Epidemiology, 182(1), 1–8. https://doi.org/10.1093/aje/kwv102
Ginsburg, G. S., & Phillips, K. A. (2018). Precision Medicine: From Science To Value. Health Affairs, 37(5), 694–701. https://doi.org/10.1377/hlthaff.2017.1624
Goodrich, J. K., Di Rienzi, S. C., Poole, A. C., Koren, O., Walters, W. A., Caporaso, J. G., Knight, R., & Ley, R. E. (2014). Conducting a Microbiome Study. Cell, 158(2), 250–262. https://doi.org/10.1016/j.cell.2014.06.037
Gould, K. (2016). Antibiotics: From prehistory to the present day. Journal of Antimicrobial Chemotherapy, 71(3), 572–575. https://doi.org/10.1093/jac/dkv484
Gram Positive vs Gram Negative. (n.d.). Technology Networks. Retrieved July 26, 2020, from https://www.technologynetworks.com/immunology/articles/gram-positive-vs-gram-negative-323007
Hibbing, M. E., Fuqua, C., Parsek, M. R., & Peterson, S. B. (2010). Bacterial competition: Surviving and thriving in the microbial jungle. Nature Reviews Microbiology, 8(1), 15–25. https://doi.org/10.1038/nrmicro2259
High Throughput Sequencing—An overview | ScienceDirect Topics. (n.d.). Retrieved July 23, 2020, from https://www-sciencedirect-com.dartmouth.idm.oclc.org/topics/immunology-and-microbiology/high-throughput-sequencing
Human Microbiome Project—Overview. (n.d.). Retrieved July 23, 2020, from https://commonfund.nih.gov/hmp/overview
Ignaz Semmelweis | Biography & Facts. (n.d.). Encyclopedia Britannica. Retrieved July 26, 2020, from https://www.britannica.com/biography/Ignaz-Semmelweis
Janda, J. M., & Abbott, S. L. (2007). 16S rRNA Gene Sequencing for Bacterial Identification in the Diagnostic Laboratory: Pluses, Perils, and Pitfalls. Journal of Clinical Microbiology, 45(9), 2761–2764. https://doi.org/10.1128/JCM.01228-07
Kashyap, P. C., Chia, N., Nelson, H., Segal, E., & Elinav, E. (2017). Microbiome at the Frontier of Personalized Medicine. Mayo Clinic Proceedings, 92(12), 1855–1864. https://doi.org/10.1016/j.mayocp.2017.10.004
Koch’s Postulates. (n.d.). Retrieved July 26, 2020, from https://science.umd.edu/classroom/bsci424/BSCI223WebSiteFiles/KochsPostulates.htm
Kohanski, M. A., Dwyer, D. J., & Collins, J. J. (2010). How antibiotics kill bacteria: From targets to networks. Nature Reviews Microbiology, 8(6), 423–435. https://doi.org/10.1038/nrmicro2333
Konstantinidi, E. M., Lappas, A. S., Tzortzi, A. S., & Behrakis, P. K. (2015, May 27). Exhaled Breath Condensate: Technical and Diagnostic Aspects [Review Article]. The Scientific World Journal; Hindawi. https://doi.org/10.1155/2015/435160
Kuntz, T. M., & Gilbert, J. A. (2017). Introducing the Microbiome into Precision Medicine. Trends in Pharmacological Sciences, 38(1), 81–91. https://doi.org/10.1016/j.tips.2016.10.001
Louis Pasteur | Biography, Inventions, Achievements, & Facts. (n.d.). Encyclopedia Britannica. Retrieved August 2, 2020, from https://www.britannica.com/biography/Louis-Pasteur
Maldonado Galdeano, C., Cazorla, S. I., Lemme Dumit, J. M., Vélez, E., & Perdigón, G. (2019). Beneficial Effects of Probiotic Consumption on the Immune System. Annals of Nutrition and Metabolism, 74(2), 115–124. https://doi.org/10.1159/000496426
Marchant, J. (2020). Powerful antibiotics discovered using AI. Nature. https://doi.org/10.1038/d41586-020-00018-3
Markowiak, P., & Śliżewska, K. (2017). Effects of Probiotics, Prebiotics, and Synbiotics on Human Health. Nutrients, 9(9). https://doi.org/10.3390/nu9091021
Monteagudo-Mera, A., Rastall, R. A., Gibson, G. R., Charalampopoulos, D., & Chatzifragkou, A. (2019). Adhesion mechanisms mediated by probiotics and prebiotics and their potential impact on human health. Applied Microbiology and Biotechnology, 103(16), 6463–6472. https://doi.org/10.1007/s00253-019-09978-7
Mystery of superior Leeuwenhoek microscope solved after 350 years. (n.d.). Retrieved August 2, 2020, from https://phys.org/news/2018-03-mystery-superior-leeuwenhoek-microscope-years.html
Nayfach, S., Shi, Z. J., Seshadri, R., Pollard, K. S., & Kyrpides, N. C. (2019). New insights from uncultivated genomes of the global human gut microbiome. Nature, 568(7753), 505–510. https://doi.org/10.1038/s41586-019-1058-x
Ortega, R. (2020, January 22). The microbes in your gut could predict whether you’re likely to die in the next 15 years. Science | AAAS. https://www.sciencemag.org/news/2020/01/microbes-your-gut-could-predict-whether-you-re-likely-die-next-15-years
Osman, M.-A., Neoh, H., Ab Mutalib, N.-S., Chin, S.-F., & Jamal, R. (2018). 16S rRNA Gene Sequencing for Deciphering the Colorectal Cancer Gut Microbiome: Current Protocols and Workflows. Frontiers in Microbiology, 9. https://doi.org/10.3389/fmicb.2018.00767
Petrova, O. E., & Sauer, K. (2012). Sticky Situations: Key Components That Control Bacterial Surface Attachment. Journal of Bacteriology, 194(10), 2413–2425. https://doi.org/10.1128/JB.00003-12
Prescott, S. L. (2017). History of medicine: Origin of the term microbiome and why it matters. Human Microbiome Journal, 4, 24–25. https://doi.org/10.1016/j.humic.2017.05.004
Quévrain, E., Maubert, M. A., Michon, C., Chain, F., Marquant, R., Tailhades, J., Miquel, S., Carlier, L., Bermúdez-Humarán, L. G., Pigneur, B., Lequin, O., Kharrat, P., Thomas, G., Rainteau, D., Aubry, C., Breyner, N., Afonso, C., Lavielle, S., Grill, J.-P., … Seksik, P. (2016). Identification of an anti-inflammatory protein from Faecalibacterium prausnitzii, a commensal bacterium deficient in Crohn’s disease. Gut, 65(3), 415–425. https://doi.org/10.1136/gutjnl-2014-307649
Ribeiro da Cunha, Fonseca, & Calado. (2019). Antibiotic Discovery: Where Have We Come from, Where Do We Go? Antibiotics, 8(2), 45. https://doi.org/10.3390/antibiotics8020045
Rowan-Nash, A. D., Korry, B. J., Mylonakis, E., & Belenky, P. (2019). Cross-Domain and Viral Interactions in the Microbiome. Microbiology and Molecular Biology Reviews, 83(1). https://doi.org/10.1128/MMBR.00044-18
Sams, E. R., Whiteley, M., & Turner, K. H. (2014). ‘The battle for life’: Pasteur, anthrax, and the first probiotics. Journal of Medical Microbiology, 63(11), 1573–1574. https://doi.org/10.1099/jmm.0.081844-0
Schwartz, M. H., Wang, H., Pan, J. N., Clark, W. C., Cui, S., Eckwahl, M. J., Pan, D. W., Parisien, M., Owens, S. M., Cheng, B. L., Martinez, K., Xu, J., Chang, E. B., Pan, T., & Eren, A. M. (2018). Microbiome characterization by high-throughput transfer RNA sequencing and modification analysis. Nature Communications, 9(1), 5353. https://doi.org/10.1038/s41467-018-07675-z
Sharma, D., Misba, L., & Khan, A. U. (2019). Antibiotics versus biofilm: An emerging battleground in microbial communities. Antimicrobial Resistance & Infection Control, 8(1), 76. https://doi.org/10.1186/s13756-019-0533-3
Smith, K. (n.d.). The Origin of MacConkey Agar. ASM.org. Retrieved August 1, 2020, from https://asm.org/Articles/2019/October/The-Origin-of-MacConkey-Agar
Sokol, H., Pigneur, B., Watterlot, L., Lakhdari, O., Bermúdez-Humarán, L. G., Gratadoux, J.-J., Blugeon, S., Bridonneau, C., Furet, J.-P., Corthier, G., Grangette, C., Vasquez, N., Pochart, P., Trugnan, G., Thomas, G., Blottière, H. M., Doré, J., Marteau, P., Seksik, P., & Langella, P. (2008). Faecalibacterium prausnitzii is an anti-inflammatory commensal bacterium identified by gut microbiota analysis of Crohn disease patients. Proceedings of the National Academy of Sciences of the United States of America, 105(43), 16731–16736. https://doi.org/10.1073/pnas.0804812105
Tan, S., & Tatsumura, Y. (2015). Alexander Fleming (1881–1955): Discoverer of penicillin. Singapore Medical Journal, 56(07), 366–367. https://doi.org/10.11622/smedj.2015105
Taxonomic Signatures of Long-Term Mortality Risk in Human Gut Microbiota | medRxiv. (n.d.). Retrieved August 16, 2020, from https://www.medrxiv.org/content/10.1101/2019.12.30.19015842v2
The gut microbiome in neurological disorders—The Lancet Neurology. (n.d.). Retrieved July 23, 2020, from https://www.thelancet.com/article/S1474-4422(19)30356-4/fulltext
The predictive power of the microbiome exceeds that of genome-wide association studies in the discrimination of complex human disease | bioRxiv. (n.d.). Retrieved August 16, 2020, from https://www.biorxiv.org/content/10.1101/2019.12.31.891978v1
Tierney, B. T., He, Y., Church, G. M., Segal, E., Kostic, A. D., & Patel, C. J. (2020). The predictive power of the microbiome exceeds that of genome-wide association studies in the discrimination of complex human disease. BioRxiv, 2019.12.31.891978. https://doi.org/10.1101/2019.12.31.891978
Urban, P. L. (2016). Quantitative mass spectrometry: An overview. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences, 374(2079). https://doi.org/10.1098/rsta.2015.0382
Wang, B., Yao, M., Lv, L., Ling, Z., & Li, L. (2017). The Human Microbiota in Health and Disease. Engineering, 3(1), 71–82. https://doi.org/10.1016/J.ENG.2017.01.008
Wang, Z., Gerstein, M., & Snyder, M. (2009). RNA-Seq: A revolutionary tool for transcriptomics. Nature Reviews Genetics, 10(1), 57–63. https://doi.org/10.1038/nrg2484
Wilson, D. N., Hauryliuk, V., Atkinson, G. C., & O’Neill, A. J. (2020). Target protection as a key antibiotic resistance mechanism. Nature Reviews Microbiology, 1–12. https://doi.org/10.1038/s41579-020-0386-z
The Psychology of the Pandemic STAFF WRITERS: DANIEL CHO '22, TIMMY DAVENPORT (UNIVERSITY OF WISCONSIN JUNIOR), NINA KLEE '23, MICHELE ZHENG (SAGE HILL SCHOOL SENIOR), ROBERTO RODRIGUEZ '23, JENNY SONG '23 BOARD WRITERS: SAM NEFF '21 AND MEGAN ZHOU '21 Cover Image: A lonely Paris street - with shops closed and schools shuttered, this picture captures the spirit of living in the time of the coronavirus pandemic. The world has just begun to return to a new normal state of affairs, but there is a long road ahead. Source: Wikimedia Commons
Introduction
The year 2020 has brought not just the beginning of a new decade, but a seismic shift in the modern way of life. The changes wrought by the COVID-19 pandemic – new ways of learning, working, and living in communities – are global in scale, and in many ways are irrevocable. The constant hum of office printers has been replaced by a quiet and ceaseless quest to complete work projects at the kitchen table or the empty bedroom; the chaos of the classroom has been swapped for daily video calls, with students arranged in neatly ordered boxes on the screen and prone to muting at the teacher’s discretion. No more disruptive students or noisy colleagues: it may have seemed a welcome prospect from the vantage of the past, but this way of life (for many) has lost its luster. Now more than ever, it seems to be a universally understood truth that social isolation is a dangerous state of living. Indeed, there is much
scientific evidence to back this statement. Perceived social isolation is associated with a host of negative health effects, from immune dysfunction to heart disease (Cacioppo et al., 2014). Additionally, mortality rates for those who are socially isolated are startlingly high (Holt-Lunstad et al., 2015). This report will explore the psychological effects of social isolation brought about by the ongoing COVID-19 pandemic, as well as the factors that make an individual more or less likely to abide by social distancing recommendations. It will also explore a number of circumstances that may affect one’s experience of the current crisis, including stress and anxiety surrounding personal risk factors for infection, experiences with the illness itself, and coping with the loss of loved ones. The report will end with an exploration of what the future may hold, with an emphasis on remote work, the virtual classroom, and the turn to cybercare.
Figure 1: Anxiety and depressive disorders show the highest prevalence among all mental illnesses in the United States. They are also likely to co-occur due to shared triggers and overlapping susceptibilities Source: Wikimedia Commons
Psychology of Social Distancing
A Rise in Depression and Anxiety Surrounding the Pandemic
The COVID-19 pandemic and the lifestyle changes it has caused have resulted in a rise in depression and anxiety among the general public. Not only has the pandemic induced fear of contracting the virus, but social isolation, unemployment, lack of familial support, and limited access to services have also contributed to the worsening of mental health conditions (Moreno et al., 2020). While trends show an increase in anxiety and depression symptoms among the general population, racial and ethnic minorities are presumed to be at a greater risk due to disparities in access to mental health care (Cook et al., 2017). Despite the lack of longitudinal studies, numerous surveys have revealed increased symptoms of depression and anxiety during the pandemic. Many of these studies were performed in China between January and February 2020, when the virus was present in all 31 provinces; total cases in China were reported to be 79,824 as of February 29th. One such study surveyed 5,033 Chinese residents older than 18, randomly distributed across all of the Chinese provinces and municipalities, between February 9th and February 16th, the peak of the COVID-19 pandemic in China. This study revealed that 20.4% of those surveyed showed symptoms of anxiety or depression, a sharp increase from the estimated 4% in 2019 (Li et al., 2020). It was also shown that younger
people, those who spend more time thinking about COVID-19, and healthcare workers reported higher rates of pandemic-related anxiety (Huang & Zhao, 2020). The reported stress factors contributing to anxiety and depression include fear of illness, economic uncertainties, and inconveniences caused by quarantines (Li et al., 2020). First, fear of illness can exacerbate health-related anxiety, which involves misinterpreting bodily sensations and subsequently developing irrational concerns about medical conditions (Abramowitz, 2007). A survey focused on the German populace revealed a sharp increase in virus anxiety during the months leading up to March 2020. Individuals with preexisting anxiety were at a higher risk, as the pandemic served as a trigger for their existing mental illness (Jungmann & Witthoft, 2020).
"The COVID-19 pandemic and the lifestyle changes it has caused have resulted in a rise in depression and anxiety among the general public."
Second, economic uncertainties and unemployment have been shown to provoke severe depression and anxiety, even leading to suicidal tendencies. Job uncertainties cause economic difficulties and interfere with one’s self-esteem (Godinic et al., 2020). Moreover, a study directed at Italian dentists revealed a positive relationship between perceived job insecurity and depressive symptoms (Gasparro et al., 2020). Considering dentists’ relative job security compared to other occupations, it can be speculated that perceived job insecurity caused by COVID-19 would cause depressive symptoms to a degree similar to, if not greater than, that seen in this study.
Figure 2: Social isolation is not just an unhappy circumstance. Rather, it is a state of affairs that is detrimental to overall health and well-being. It raises the incidence and severity of heart disease, poisons sleep quality, and heightens the overall risk of mortality Source: Wikimedia Commons
Lastly, inconveniences in daily life caused by quarantines have been shown to foster the development of symptoms associated with anxiety and depression. In early February 2020, a study surveyed 1,593 individuals from Southwest China and compared a group of quarantined individuals to unquarantined controls. Roughly twice as many individuals in quarantine had symptoms of anxiety and depression as controls: 12.9% versus 6.7% for anxiety, and 22.4% versus 11.9% for depression (Lei et al., 2020). The result shows a compelling relationship between quarantines and the virus’s impact on mental health.
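For a sense of how compelling those proportions are, the sketch below runs a standard two-proportion z-test on them. The study's quarantined/unquarantined group sizes are not given in the text, so an even split of the 1,593 respondents is assumed purely for illustration.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test; returns z and a two-sided p-value."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Proportions reported by Lei et al. (2020); equal group sizes assumed.
n_quarantined = n_control = 1593 // 2
for outcome, p_q, p_c in (("anxiety", 0.129, 0.067), ("depression", 0.224, 0.119)):
    z, p = two_proportion_z(p_q, n_quarantined, p_c, n_control)
    print(f"{outcome}: risk ratio = {p_q / p_c:.2f}, z = {z:.1f}, p = {p:.1e}")
```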
"Controlling for all other confounding variables, individuals who report loneliness are at a 26% higher risk of mortality, and individuals who live alone are at a 32% higher risk."
While many surveys require follow-up studies for a more comprehensive interpretation of the results, it is essential that mental health conditions be closely monitored during the pandemic. Previous studies also suggest that governments should aim to provide an integrated mental health care system that takes an intersectional approach to combat existing disparities for minorities. Such measures may include reducing treatment cost, hiring mental health professionals from diverse backgrounds, and making the system available in historically underrepresented areas.
The Impact of Social Isolation
The psychology and neuroscience literature has repeatedly demonstrated the negative
consequences of social isolation. Controlling for all other confounding variables, individuals who report loneliness are at a 26% higher risk of mortality, and individuals who live alone are at a 32% higher risk (Holt-Lunstad et al., 2015). Social isolation increases the incidence and worsens the prognosis of heart disease (Cacioppo et al., 2014; Brummett et al., 2001). It diminishes sleep quality, which is itself associated with heightened mortality and poor quality of life (Hawkley and Cacioppo, 2013). Additionally, it tends to raise the expression of pro-inflammatory genes, which is associated with personality disorders and cognitive decline (Cacioppo et al., 2014). Lonely people have an increased risk of Alzheimer's, and perhaps not surprisingly, loneliness is strongly associated with depression and suicide (Hawkley and Cacioppo, 2013). Importantly, perceived social isolation (not objective social isolation) is the better predictor of the negative consequences just described. People can live alone but not feel lonely; people can live in close proximity to others and still feel alone. Studies have shown that the most common factors that predispose one to loneliness are underlying physical health symptoms, chronic stress due to a demanding work schedule, a small social network, and the lack of a spouse (Cacioppo et al., 2014).
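These relative figures can be made concrete with a little arithmetic. The sketch below converts the reported relative risks into absolute terms under an assumed baseline mortality rate; the baseline is purely illustrative and is not a figure from the meta-analysis.

```python
# Converting the meta-analytic relative risks (Holt-Lunstad et al., 2015)
# into absolute terms. The baseline rate is an assumed illustrative value,
# not a number from the study.

baseline_rate = 0.010  # assumed 1.0% mortality over the follow-up period
relative_risks = {
    "reported loneliness": 1.26,  # 26% higher risk
    "living alone": 1.32,         # 32% higher risk
}

for factor, rr in relative_risks.items():
    absolute = baseline_rate * rr
    print(f"{factor}: {baseline_rate:.1%} baseline -> {absolute:.2%} "
          f"(+{(rr - 1) * baseline_rate:.2%} in absolute terms)")
```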
Scientists often describe the effects of social isolation in evolutionary terms; there is a positive feedback loop in which isolated individuals adapt to see the social activities of others as more threatening and are therefore increasingly less likely to seek social connections (Hawkley and Cacioppo, 2013). In other words, once the isolated lifestyle has been adopted, it is very difficult to stop. Even prior to the pandemic, loneliness garnered attention as a health concern. A 2019 article on the website of the Health Resources & Services Administration observed that "two in five Americans report that they sometimes or always feel their social relationships are not meaningful, and one in five say they feel lonely or socially isolated." A study by the National Institute for Health Care Management estimated that $6.7 billion in federal spending goes towards treating the side effects of social isolation in the elderly population (HRSA, 2019). As the next section of this report will show, the COVID-19 pandemic has only exacerbated this unhappy situation.
Differential Success in Social Distancing

While social distancing has put everyone's resilience and creativity to the test, it has affected certain groups of people more than others. How do different personality traits affect one's experience with social distancing? Scientists are particularly interested in assessing differences in coping with social isolation along the introvert-extrovert spectrum. Swiss psychiatrist Carl Jung described introverts as reserved and reflective, happier and re-energized after spending time in solitude; extroverts, by contrast, are re-energized by interactions with others and tend to be more talkative, enthusiastic, and social. Most individuals fall somewhere along the spectrum between introversion and extroversion, with one trait more dominant than the other. Society has grown increasingly supportive of extroversion; according to Vivian Zayas, associate professor of psychology at Cornell University, many introverts fake being more extroverted because extroversion is more valued by society. Now the roles are reversed, and introverted behavior has become the expectation during the pandemic (Rogers, 2020). In a recent study, both extroverts and introverts were asked to spend one week expressing more extrovert-typical behavior, acting more talkative and sociable. While extroverts benefitted from this change in behavior and felt happier and more authentic, introverts did not benefit and tended to feel tired and irritable. Social distancing rules, however, require that extroverts adjust to more introverted behavior (Smillie and Haslam, 2020). Although introversion has become the norm, both groups have unique advantages for coping with social distancing. Introverts tend to crave less social engagement and feel less need for excitement and pleasure than extroverts, which might benefit them during social distancing due to their lower susceptibility to boredom in isolation (Smillie and Haslam, 2020). Extroverts, on the other hand, may be more open to social interactions through other means, such as reaching out to peers by phone or social media. Research has shown a correlation between extroversion and the amount of social support received, highlighting the extroverted behavior of reaching out to people for help (Blue, 2020).
However, according to Maricar Jenkins, a licensed clinical social worker at Sharp Mesa Vista Hospital, extroverts may be more challenged by social distancing guidelines. Extroverts usually thrive working in collaboration with a team, while introverts usually thrive working in quiet, independent settings. Furthermore, it may be difficult to rely on phone- and videoconference platforms as the only means of communication and social contact (Health News Team, 2020). Minimal opportunities for extroverts to engage socially could easily lead to feelings of loneliness and isolation. Introverts, on the other hand, may be relieved to have more time for introspection and reflection without the spontaneous interruptions and interactions of the usual day-to-day life at school or at work. It is also a concern, however, that some introverts may completely avoid social contact (Health News Team, 2020). Despite predictions that introverts may be better at coping with isolation, a Virginia-based research consultancy, Greater Divide, surveyed 1,000 American adults and found that individuals who scored higher on a measure of extroversion (evaluated by a series of personality tests) were less likely to report suffering from mental health issues due to quarantine. These counterintuitive results may be explained by the enhanced psychological resiliency that is associated with extroversion.
"Despite predictions that introverts may be better at coping with isolation, a Virginia-based research consultancy, Greater Divide, surveyed 1,000 American adults and found that individuals who scored higher on a measure of extroversion (evaluated by a series of personality tests) were less likely to report suffering from mental health issues due to quarantine."
"Individuals high in neuroticism, who are more easily stressed and prone to negative emotions, may be more at risk of anxiety and depression as a result of social distancing."
Researchers found that extroverts were more likely than introverts to agree with statements like “I am calm in the face of danger” or “I believe things generally work out for the best.” The degree of introversion, on the other hand, was correlated with greater pandemic-related anxiety and fear. According to Christopher Soto, professor of personality psychology at Colby College, extroverts tend to experience positive emotions more frequently and intensely, likely making it easier for them to stay optimistic amidst difficult circumstances. Furthermore, extroverts may benefit from large virtual networks, as they tend to use social media platforms to a greater extent (Travers, 2020). Besides evaluating extroversion and introversion, one can analyze the other four personality traits of the “Big Five” personality model in the context of social distancing: openness to experience, conscientiousness, agreeableness, and neuroticism. Various tests have been developed to measure each of these five traits, and many correlative studies have been performed linking these personality traits to behaviors (“Big 5 Personality Traits,” n.d.). People with high openness to experience, being curious and imaginative, may easily become engrossed in books and music and find creative ways to spend their time in lockdown. Individuals with high conscientiousness, being more organized and productive, may find it easier to maintain a new structured daily routine during quarantine, as recommended by experts. People high in agreeableness, being polite, compassionate, and cooperative, may be better equipped to live well with family or friends during quarantine. Individuals high in neuroticism, who are more easily stressed and prone to negative emotions, may be more at risk of anxiety and depression as a result of social distancing. Although these characterizations are generalizations that depend on many other factors such as environment and support system, there may be a correlation between certain character traits and associated advantages or disadvantages for coping with social isolation (Smillie and Haslam, 2020). Personality research in extreme environments, such as Antarctic research facilities, suggests that people who are “emotionally stable, self-reliant and autonomous, goal-oriented, friendly, patient and open” are better suited for extreme isolation (Smillie and Haslam, 2020). Selected Arctic and Antarctic workers score higher than average on all factors of the “Big
Five” model except neuroticism. Furthermore, ‘sociable introverts’ who enjoy, yet do not require, social interaction seem optimal for capsule living (Suedfeld and Steel, 2000). In another study of isolated groups at the Naval Medical Research Institute in Maryland, researchers placed pairs of volunteers in a 12-by-12-foot room with no connection to the outside world. Some groups insisted on being released after a few days, while others thrived. Those who thrived had formed strong relationships and became increasingly satisfied with their circumstances over the course of the experiment. They shared their concerns about coping with isolation with each other and adjusted their behaviors whenever conflicts arose. Moreover, successful groups planned out schedules for meals, exercise, and recreation. In contrast, group members who did not cope well with isolation stopped interacting with each other and thereby isolated themselves even more instead of communicating and collaborating. Hence, isolated groups function better when shifting from an individualistic to a collectivistic approach in which group needs are prioritized over individual needs (Forsyth, 2020). Regardless of personality type, human interaction is important for people's health and well-being. Matthias Mehl, a psychology professor at the University of Arizona, claims that the more conversations people have, the happier they are. Moreover, his research on 9/11 showed that humans have an especially strong need for social connection in stressful situations. According to Harvard Medical School, connecting with people has been found to reduce stress, increase happiness, and help individuals live longer. It is therefore crucial that people find creative ways to stay in touch during the pandemic, through video calls, voice messages, or even face-to-face while maintaining a safe distance (Blue, 2020).

Perceptions of Crisis

Perception of risk during the pandemic varies depending on socioeconomic backgrounds, which influences public adherence to preventive procedures. Despite many recommendations from the CDC and the NIH that mask-wearing and social distancing decrease coronavirus transmission, many people continue to engage in unsafe social activities, which ultimately increases the chance
Figure 3: Politics and partisanship have had a huge impact on both the response to and the perception of COVID-19 by policymakers and constituents. Source: Wikimedia Commons
of infection and its spread (Clements, 2020). It is important to understand why perception of risk has varied between groups in order to improve adherence to preventive protocols. Throughout the United States, adherence to social distancing and mask-wearing varies greatly. One way to explain this discrepancy is by analyzing a population's understanding of and trust in science. A study by Michigan State University sought to gauge knowledge of the pandemic across the U.S. through a cross-sectional survey; this survey consisted of 12 questions that measured knowledge about the pandemic and three additional questions regarding participation in recommendations by the CDC and NIH. Scores varied greatly depending on sex, age, education, ethnicity, income, and political affiliation. The study indicated that, compared to baby boomers, the scores of Gen X, millennials, and Gen Z were 42%, 53%, and 73% lower, respectively. In addition, African American participants scored 70% lower than white participants on average, and higher incomes were positively associated with scores. Political party identification proved to have a large influence on the odds of performing preventive behaviors and on knowledge about COVID-19. Those who identified as Republican were more likely to attend large gatherings than those who identified as Democrat or independent, and Democrats' mean scores were 113% higher than Republicans', while independents' were 76% higher (Clements, 2020).
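Group differences of this kind are typically reported as odds ratios. As a concrete illustration of how such a figure is computed, the sketch below builds a 2×2 table of gathering attendance by party identification; the counts are hypothetical, invented purely to demonstrate the calculation, and do not come from the Clements study.

```python
# Hypothetical 2x2 table illustrating how an odds ratio for attending large
# gatherings by party identification would be computed (cf. Clements, 2020).
# All counts below are invented for demonstration purposes.

attended = {"republican": 120, "democrat_or_independent": 60}
did_not_attend = {"republican": 280, "democrat_or_independent": 540}

odds_rep = attended["republican"] / did_not_attend["republican"]
odds_other = (attended["democrat_or_independent"]
              / did_not_attend["democrat_or_independent"])

print(f"Odds (Republican):       {odds_rep:.3f}")
print(f"Odds (Dem./independent): {odds_other:.3f}")
print(f"Odds ratio:              {odds_rep / odds_other:.2f}")  # >1: more likely to attend
```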
These responses may be due in part to the shift in advice provided by experts who, early on, had recommended against the wearing of masks in order to secure a steady supply for essential healthcare workers. The rapid shift in recommendations, while providing some with a sense of personal control in wearing a mask, elicited sentiments of stress, anger, and skepticism in others. Risk mitigation tactics were perceived as invasive government interference and a burden upon economic stability and growth. Moreover, the dissemination of misinformation across social media platforms has also created feelings of confusion and suspicion among the public. Lack of familiarity with COVID-19 not only induced extreme anxiety but also led some to downplay the risks associated with the virus, with some equating it to influenza (Malecki et al., 2020).
"Perception of risk during the pandemic varies depending on socioeconomic backgrounds, which influences public adherence to preventive procedures."
Politics of the Pandemic

News outlets have produced a slew of stories and segments relating to the pandemic, and legislators and political figures have an immense influence on the general population's perception of it. There appears to be an important link between partisan identity and the perception of both the threat and the severity of the pandemic. A plethora of research has found that politics is an important factor in evaluating judgment and behavior. For instance, Van Bavel and Pereira found that “partisan identities
Figure 4: While grieving after a loved one's death is natural, COVID-19 makes it harder for bereaved individuals to receive adequate support that will help them process the loss. Source: Wikimedia Commons
"Researchers predict that a rise of prolonged grief disorder cases will follow the pandemic outbreak, as many aspects of the pandemic resemble natural disasters that have previously preceded a sharp increase in PGD cases."
influence reasoning on political cognition, including beliefs about political figures, political facts, support for policies, scientific issues, social issues, and beliefs in the expertise of scientists” (Van Bavel and Pereira, 2018). Pertinent to the ongoing pandemic is the power politics has to shape attitudes towards scientific issues. During the pandemic, political affiliation has been the single biggest dividing factor regarding people's comfort with activities: Republicans and Republican-leaning individuals generally express more comfort engaging in activities that were routine prior to the pandemic and are also less likely to wear masks in public areas (Pew Research Center, 2020). Of course, it is important to acknowledge that this relationship is not universal; leaning left does not always equate to appraising the pandemic as a more serious issue. However, this analysis is still important to note, as politics has greatly influenced how people are keeping up with and perceiving the progression of the pandemic in the U.S.

The Psychology of Grief

Grief occurs when something or someone meaningful has been lost (Zisook and Shear, 2009). With the U.S. and global COVID-19 death tolls reaching 157K and 685K, respectively, the virus has become a leading cause of death worldwide (Zhai and Du, 2020). Grief is an essential element in the discussion of the psychological impacts of the pandemic, as many have lost loved ones to this virus. Grief often entails complex affective, cognitive,
behavioral, and physical responses (Zhai & Du, 2020). While grieving after a loved one's death is normal and not considered a condition to be treated professionally, bereaved individuals might experience complicated or prolonged grief, also known as prolonged grief disorder (PGD) (Zisook & Shear, 2009). PGD is considered a mental illness, with symptoms including an intense yearning for the deceased, sharp emotional pain, inability to partake in social activities, and difficulty accepting the death (Lundorff et al., 2017). Patients with PGD need to be monitored by a mental health professional, as their condition may result in problematic outcomes including chronic depression and suicidal tendencies. The COVID-19 pandemic poses a unique risk for PGD; people are likely to experience anticipatory grief, meaning they expect an individual's death and thus grieve in advance (Zhai and Du, 2020). Researchers predict that a rise of prolonged grief disorder cases will follow the pandemic outbreak, as many aspects of the pandemic resemble natural disasters that have previously preceded a sharp increase in PGD cases (Eisma et al., 2020). For an individual, the risk of PGD intensifies when the death is unexpected, when they are unable to receive physical and social support, or when they are unable to observe traditional grieving rituals (Eisma et al., 2020). Normally, grief shifts from acute grief to integrated (resolved) grief after a certain amount of time as an individual processes the loss. However, today's unusual circumstances can lead to disenfranchised grief, or grief that
cannot be publicly mourned, making it harder for afflicted individuals to transition from acute grief to integrated grief (Zhai & Du, 2020). Moreover, during a pandemic, not only are the deceased's family members at risk of developing PGD, but healthcare providers are as well. One of the risk factors for PGD is the volume of losses one experiences. Healthcare providers are witnessing many deaths from COVID-19 and lack time to process the loss because of their duty to care for other patients. The unprocessed grief is likely to surface later as prolonged, complicated grief (Wallace et al., 2020).
COVID-19 Case Studies

Prevalence of Social Isolation and Mental Illness

As the COVID-19 pandemic has unfolded, civilian populations across the globe have been placed into a state of social hibernation – shuttering businesses, canceling school, and limiting social gatherings in an attempt to stop the spread of the virus. These measures are designed to prevent face-to-face transmission of the virus and curtail its rapid growth. However, they are not without a cost, particularly to those less able to access psychological care. For those with psychological illnesses, the requisitioning of hospital beds for COVID patients and the lack of in-person access to psychological health professionals has produced a dangerous situation – one that may contribute, as one paper has put it, to a “second pandemic” of mental health crises (Choi et al., 2020). This has provided new impetus for initiatives to remotely monitor patients with psychiatric illness, one of the topics in the next section of this paper (Zhou et al., 2020). However, this crisis of mental health extends far beyond those with existing psychological afflictions – it is a problem for the whole population. In China, during the early stages of the pandemic, scientists conducted an online survey and collected responses from 1,210 respondents in 194 cities. 53.8% reported that the psychological impact of the pandemic was moderate to severe. Furthermore, 16.5% reported moderate to severe symptoms of depression and 28.8% reported moderate to severe symptoms of anxiety. These statistics are troubling, even if not fully conclusive: to truly say that these results were caused by the pandemic, polling these same participants prior to its onset would have been necessary (Wang et al., 2020).
The psychological effects of the pandemic are a particular problem for individuals in high-risk groups. The Chinese study did indeed find that poor self-rated health status and the presence of specific physical symptoms (potentially due to underlying conditions) were associated with a more serious psychological impact (Wang et al., 2020). One study found that pregnant women are at a higher risk of developing anxiety and depression during the ongoing pandemic – whether due to worry of becoming infected, feeling lonely in social isolation, or some other psychological, biological, or social trigger. This is a dangerous scenario, as depression during pregnancy is associated with low birth weight, fetal growth restriction, and other complications for a child after birth (Durankuş and Aksu, 2020). For those who had already reported feelings of loneliness, the experience of social isolation – and all of its harmful physiological side effects – may be exacerbated (Cacioppo et al., 2014). Even those who have not yet experienced psychological health issues or reported feeling lonely may come away from this pandemic with long-lasting psychological changes. Having adapted to social isolation, some people may be less inclined to carry out future social interactions as they had before the pandemic. This follows from the theory that social isolation leads people to interpret the social actions of others as more threatening (Hawkley and Cacioppo, 2013). However, until measures mandating social distancing are significantly relaxed, any lingering mistrust created by this pandemic remains speculative. Additionally, the negative psychological impact of caring for COVID patients or losing a family member to the virus may also affect an individual's level of social engagement moving forward.
"One study found that pregnant women are at a higher risk of developing anxiety and depression during the ongoing pandemic – either due to worry of becoming infected, feeling lonely in social isolation, or through some other psychological, biological, or social trigger."
Until the pandemic has been contained at least to the point where businesses, schools, and hospitals can operate at a capacity comparable to pre-pandemic conditions, the heated debate will continue as to what extent, and for how long, strict social distancing should be the chosen course of action. Policymakers are forced to strike a difficult balance between halting the spread of the virus and avoiding deaths, particularly in vulnerable populations, and allowing businesses, schools, and social gatherings to resume with certain precautions so as to limit the social, psychological, and economic consequences of physical distancing.

Psychology of the Pandemic for At-Risk Groups
The COVID pandemic has created mental health challenges for people all over the world, but these problems are exacerbated for at-risk groups. These groups include (but are not limited to) the geriatric population, children, pregnant women, African American and Latinx communities, and homeless persons. With limited access to specialist services and reduced contact with family members, cases and severity of dementia among older adults have increased (Chong et al., 2020). Additionally, fewer face-to-face interactions increase the chances of developing anxiety, depression, and loneliness, and further cognitive decline (Goodman-Casanova et al., 2020). A diminished ability to comprehend the clinical knowledge and preventive procedures surrounding coronavirus also heightens the risk of further spread and infection. Thus, implementing a support system that mitigates these risks is essential for older adults, one of the groups most vulnerable to this outbreak.
"Even those who as of yet have not experienced psychological health issues or reported feeling lonely may come away from this pandemic with longlasting psychological changes. Having adapted to social isolation, some people may be less inclined to carry out future social interactions as they had before the pandemic."
The unprecedented nature of the coronavirus pandemic also places young children and adolescents at risk of developing mental health disorders. The sudden loss of peer interaction combined with their young age can induce feelings of anxiety and hinder normal psychological development. For those in abusive families, the pandemic also creates increased opportunity for maltreatment and domestic violence at home. According to UN Secretary-General Antonio Guterres, there has been a “horrifying global surge in domestic violence” since the start of the pandemic, which can lead to long-term health consequences (Fegert et al., 2020). Another major concern surrounds children with special needs, who may experience extreme difficulty in receiving psychiatric care during lockdown. Moreover, internal family stressors such as distress surrounding losing a family member and low socioeconomic status can also deteriorate psychological well-being. Having disproportionately affected socially disadvantaged groups, the COVID-19 pandemic not only worsens financial situations but also threatens the mental health of members of these communities. The economic effects of the pandemic have struck African American and Latinx communities especially hard. In a survey conducted from March 2019 to March 2020 by the Pew Research Center American Trends Panel, the share of respondents experiencing high psychological distress was substantially higher among Hispanics (28%) and African Americans
(26%) than among whites (22%). In addition, medical racism against black people seeking health care only exacerbates feelings of distress, loss, grief, and hopelessness. All these factors contribute to heightened anxiety and depression in these communities (Pew Research Center, 2020).

The Psychological Impact of the Pandemic on Essential Healthcare Workers

For the layperson, perception of pandemic severity often lies with the numbers. There is an obsession, albeit an important one, with the statistics of cases, deaths, and recoveries. However, amidst all this chaos, healthcare workers (HCWs) are often overlooked for their instrumental work in assisting and caring for affected individuals while also continuing their practice in the midst of the pandemic. For HCWs in direct contact with COVID-19 patients, there is a rise in psychological distress; a survey in China found that 41.5% of respondents showed significant levels of increased depression, anxiety, insomnia, and distress (Wu et al., 2020). Additionally, as a result of working on this front line of care, these HCWs experience enormous amounts of pressure from a multitude of factors: inadequate protection from contamination, overwork, frustration, discrimination, isolation, a lack of contact with their families, exhaustion, social isolation, and quarantine (Kang et al., 2020). These findings have been corroborated in the past, as HCWs have consistently shown post-traumatic stress symptoms, depressive symptoms, insomnia, severe anxiety symptoms, general psychiatric symptoms, and high levels of work-related stress following outbreaks at the global level (Preti et al., 2020; Tan et al., 2020; Temsah et al., 2020). A different side to this story involves job displacement. It is noteworthy that despite the resilience of health care jobs in times of economic and/or physical crises, job loss in the health care sector is growing at a rapid rate (Sanger-Katz, 2020). Beyond straining the functioning of hospitals and the broader healthcare industry, the loss of jobs that might seem indispensable during a global health crisis only adds to the psychological burden placed upon HCWs. Without an end to the pandemic in sight, these HCWs will continue to be affected and pushed to the extreme – physically, but more importantly, mentally.
Figure 5: Healthcare workers are placed in a precarious situation where they have to treat COVID-19 patients yet deal with the stress of staying safe like the rest of us. These are two significant stressors that take a toll on their mental health. Source: Wikimedia Commons
The Psychological Impact of Getting COVID or Losing Loved Ones

Individuals who have contracted COVID-19 are subject to many psychological effects in addition to their potentially dangerous physical symptoms. One overlooked element is the effect large amounts of personal protective equipment (PPE) may have on patient-physician relationships. Healthcare providers often wear several layers of full-body protection, leaving only their eyes visible to patients. This amount of PPE hinders the development of a strong relationship between a patient and their provider, leading to increased stress on both ends (Wang et al., 2020). Furthermore, those infected with COVID-19 are isolated from others, including loved ones, for extended periods of time to prevent further spreading of the virus, contributing to increased loneliness (Wang et al., 2020). This damage can be particularly devastating for patients for whom the disease proves fatal, as they often have little to no interaction with family and friends in their final moments. Additionally, those who have lost loved ones to COVID-19 are often deprived of any last interactions due to restrictions, making the grieving process more difficult.

Job Displacement and Economic Uncertainty in the Pandemic

The multifaceted nature of a pandemic places a particular burden on economies. The emergence of COVID-19 in the U.S. in March
2020 was met with a surge in unemployment claims as companies and institutions closed their doors for the safety of others (Figure 6). Weekly unemployment claims have totaled over 3 million every week since late March, and unemployment rates greater than 14% were observed in April of 2020 (over 24 million filings) (U.S. Bureau of Labor Statistics, 2020). The service sector, including tourism, fell on difficult times after being deemed ‘nonessential,’ which led to businesses foregoing expected revenue and having to let go or furlough workers as fixed expenses became an overwhelming burden (UNCTAD, 2020). This unprecedented loss in economic activity resulted in the U.S. entering a recession in March 2020 (NBER, 2020). What is critical to recognize is that many people, families, and communities are now facing a fiscal crisis on top of a pandemic. Communication of economic uncertainty within families has been shown to have a negative impact on mental health, an effect now amplified by heightened unemployment and the current recession (Afifi et al., 2015). Stress from financial uncertainty arising from job displacement has been felt on a national level as families must pay for rent, food, and other living expenses while unemployed. This implies that healthcare safety is not the only crisis that must be acknowledged during a pandemic, as attending to fiscal security can be pivotal in supporting the mental health of families.
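For readers unfamiliar with how the headline figure is derived, the unemployment rate is simply the unemployed share of the labor force. The sketch below computes an April-2020-scale rate; the inputs are rounded, illustrative magnitudes rather than official statistics.

```python
# Back-of-the-envelope unemployment rate calculation. Both inputs are
# rounded, illustrative magnitudes (not official BLS figures).

labor_force_millions = 156.0  # assumed size of the U.S. civilian labor force
unemployed_millions = 23.1    # assumed number of unemployed persons

rate = unemployed_millions / labor_force_millions
print(f"Unemployment rate: {rate:.1%}")  # ~14.8%, consistent with "greater than 14%"
```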
"For HCWs in direct contact with COVID-19 patients, there is a rise in psychological distress; a survey in China found that 41.5% of respondents showed significant levels of increased depression, anxiety, insomnia, and distress."
Figure 6: US unemployment rate (%), measured semi-annually from December 2007 to June 2020. Source: U.S. Bureau of Labor Statistics, 2020
The Successes and Failures of the Virtual Classroom
"What is critical to recognize is that many people, families, and communities are now facing a fiscal crisis on top of a pandemic."
One of the unprecedented public health responses to the COVID-19 pandemic has been the extended closure of educational institutions. According to the United Nations Educational, Scientific and Cultural Organization Report 2020, a total of 1,190,287,189 learners have been affected by temporary or indefinite countrywide school closures (Surkhali and Kumari, 2020). In an attempt to continue the learning process at all educational levels, institutions have embraced online systems through virtual classes. Public schools have gained support from companies such as Microsoft, Google, Zoom, and Slack, which have offered many features of their products for free (Basilaia and Kvavadze, 2020). With access to technology, online learning can be effective and even advantageous because it can cover a significant amount of content and may be even more flexible in terms of scheduling (Surkhali and Kumari, 2020). Despite the numerous societal difficulties posed by the pandemic, virtual learning platforms have still been able to provide a sustainable, high-quality educational infrastructure that fosters participation and collaboration overall (Almarzooq et al., 2020). Webinars and virtual collaboration platforms, for example, have allowed participants to take advantage of face-to-face learning in real time (Sleiwah et al., 2020).
However, there are clear disadvantages to online learning. Most significantly, some students suffer in the virtual classroom due to low internet bandwidth and technical difficulties. Some students may not have a safe home environment in which they can focus on their studies, and others may simply not learn well in this format (Surkhali and Kumari, 2020). Additional barriers include the complexity of cultural and social contexts around the world, the lack of teachers' preparedness for virtual teaching, the impossibility of using mobile-based training for all age groups, and the general inability to virtualize all courses (Ahmady et al., 2020). Even setting these factors aside, online classrooms do not facilitate much of the peer-based learning, two-way communication, and group discussion found in traditional classrooms. The virtual classroom requires people to stare closely at a screen for a large part of their day, and this monotonous experience can lead to a range of health problems including visual discomfort, exhaustion, and muscle or joint aches (Surkhali and Kumari, 2020). It is paramount to continue implementing changes to improve learning outcomes and overall satisfaction with virtual classroom experiences looking forward into the 2020-2021 school year and beyond.

U.S. Geographic Spread of COVID-19

The reopening of countries around the
world has served as a model for how states throughout the U.S. could potentially lift stay-at-home orders, reopen businesses, and relax social distancing measures. However, federal guidelines advise that states wait until there is a solid downward trajectory of cases within a two-week period before moving into the next phase of opening, and many states are struggling to meet this criterion. Data on cases and deaths by state, sorted by the most cases per 100,000 residents in the last seven days, reveal that Florida, Louisiana, Mississippi, Alabama, and Arizona are the worst off, and public-health experts have attributed the South's escalating outbreak to states relaxing their lockdown restrictions (Johns Hopkins University, 2020). Researchers have even noted that the most pronounced difference between the early high-prevalence states and those reaching high prevalence now is their political orientation, as evaluated between May 25 and June 28 (Frey, 2020). The early politicization of the pandemic by President Trump and other public officials has undoubtedly impacted the geographic spread of the coronavirus. Researchers have turned to spatial modeling to evaluate COVID-19 incidence rates in the continental United States (Mollalo et al., April 2020). Artificial neural network modeling indicates the importance of similar factors after accounting for age-adjusted mortality rates of ischemic heart disease, pancreatic cancer, leukemia, Hodgkin's disease, mesothelioma, and cardiovascular diseases (Mollalo et al., May 2020). Ultimately, as visualized by the Johns Hopkins tracker, the U.S. states are not all at the same stage of combating COVID-19, and researchers continue to evaluate trend analyses to anticipate future developments and the impact this entire pandemic will have on everyone's mental health.

COVID-19 and Xenophobia

The international response to the COVID-19 pandemic, by nature of its origin, included a focus on China, but in the global effort to contain the disease, a damaging consequence has been the fueling of xenophobia and racism against Asian people. Social media platforms have bred echo chambers that reiterate the stereotype that Chinese people are unclean and disease-ridden, and even news platforms in the U.S., Australia, and France have used racist terms like the “China virus” and “yellow alert” (Li and Galea, 2020). This prejudice
against Asian people has resulted in a myriad of adverse health effects. One of the most glaring consequences is the increase in depression and anxiety among those targeted by racism (Li and Galea, 2020). In general, the most commonly reported COVID-19-related mental health problems for Chinese people are anxiety, depression, loneliness, stress, and fear, or a combination of these (Wang et al., 2020). An uptick in these mental health problems is expected as people of Asian descent are discriminated against and isolated everywhere—at school, at work, and even in general public settings. Furthermore, this racism may hinder Chinese and other Asian people from seeking and/or accessing medical attention during this pandemic (Li and Galea, 2020). Those who have been exposed to bullying and violence due to the COVID-19 pandemic may not want to go to the hospital for fear of judgment and retribution, which makes it harder to contain the virus.
A Vision for the Future – How to Thrive in an Age of Isolation

Utilizing Virtual Mental Health Appointments

With the onslaught of bleak news and novel stressors created by the COVID-19 pandemic, reconstruction of mental healthcare becomes increasingly imperative in an unpredictable and alienated world. From the WHO's first announcement of the outbreak through its official naming as a pandemic and to this day, healthcare workers have endured extreme stress, experienced anxieties surrounding COVID-19 transmission and shortages of essential equipment, and taken on unfamiliar clinical roles while adjusting to the challenges of becoming increasingly distant from family. Many have also experienced feelings of fear, frustration, estrangement, boredom, and hopelessness, which can have long-lasting effects following the de-escalation of the coronavirus. It is becoming increasingly important to implement accessible support mechanisms as well as extensive networks of communication and outreach to those in need of such resources. In today's age of major technological advancement and connectivity, telehealth may answer the question of how to mediate interpersonal exchange between mental healthcare workers and patients and disseminate fact-based information at a time when mental stability is at risk.
"The international response to the COVID-19 pandemic, by nature of its origin, included a focus on China, but in the global effort to contain the disease, a damaging consequence has been the fueling of xenophobia and racism against Asian people."
For older members of society with mild cognitive impairments such as dementia, the absence of face-to-face interactions in this ongoing pandemic threatens to worsen
cognitive functioning and mental health. In a clinical trial administered to 93 participants in Spain, a new telehealth support program called TV-AssistDEM was used to explore the effects of television-based assistive technology on cognitive impairment (Goodman-Casanova et al., 2020). Though the population surveyed appeared to be in optimal health at the time of this evaluation, the study showed that living alone was a risk factor for greater psychological stress and irregular sleeping habits. Respondents using TV-AssistDEM were found to perform more memory exercises than the control group, who did not use the program. It appears that engaging in TV-mediated activities may provide sufficient cognitive stimulation to slow the negative consequences of social isolation. Offering video calls and “promoting active aging of the elderly in their own homes,” TV-AssistDEM and other similar programs help alleviate caregiver burden and help preserve the mental well-being of patients affected by the pandemic (Goodman-Casanova et al., 2020).
"For older members of society with mild cognitive impairments such as dementia, the absence of face-toface interactions in this ongoing pandemic threatens to worsen cognitive functioning and mental health."
As knowledge surrounding COVID-19 and the current state of public health may be limited, telehealth may play a major role in facilitating updates on the changing situation and easing extreme worry and stress. Consequently, many countries around the world have implemented telehealth care services in order to provide equitable access to various mental health resources and to develop methods and procedures to be used in future crises. However, this is not the first time an epidemic has encouraged a shift to video-based therapy; during the 2015 MERS epidemic, the government of South Korea's Gyeonggi Province “developed and distributed video materials and online-based psychological stabilization programs” (Yoon et al., 2016). The videos served those in quarantine at public health centers as well as recovered patients and those who had lost family members, promoting mental well-being. Moreover, the Australian government allocated $50 million USD to fund telehealth consultations, the development of mental health portals, and a “coronavirus hotline” for those in need of mental health support (Fisk et al., 2020). Ultimately, greater emphasis on implementing telehealth care in nations all over the world will help ease stress and anxiety and promote psychological well-being among the general public and health workers.

Thriving with Remote Work

Results of a nationally representative sample
of the US population found that, of those employed before COVID-19, around half now work from home, and 10.1% reported being laid off or on a leave of absence by June 2020. The greater the spread of COVID-19, the higher the proportion of people switching to remote work, and younger individuals appeared more likely to make the switch (Brynjolfsson et al., 2020). A study from Global Workplace Analytics recently reported that 77% of the workforce would prefer to continue working from home after the pandemic. It is therefore highly likely that remote work will remain at least a fraction of work life in the long run (Burns, 2020). So how can one thrive working remotely, away from all the hustle and bustle of the workplace? One problem in moving from in-person to remote meetings is that it is difficult to clearly communicate emotions. Consequently, disagreements and miscommunication may occur between co-workers who fail to interpret each other's body language and facial cues (Shirreffs, 2020). Moreover, the lack of face-to-face supervision raises the concern that employees will not work as efficiently. Many employees struggle with reduced support and communication from managers, which can lead to feelings of social isolation and loneliness. Employees may also face various distractions at home, such as children at home or suboptimal workspaces like a living room shared with family. Therefore, it is important to consider that family and home demands can disturb remote work (Larson et al., 2020). Managers should understand the common challenges of remote work and support their employees accordingly. They can do so through structured daily check-ins – either a series of one-on-one calls or a team call if the work is collaborative. Calls should be regular and set in an environment in which employees can voice their concerns and ask questions freely. Furthermore, virtual communication options beyond email should be provided and encouraged, with an emphasis on video conferences (e.g., Zoom), to maximize the visual cues that would be available in face-to-face communication (Larson et al., 2020). Recent research shows that, while video communication does not necessarily increase the quality of the work except for certain negotiation tasks, it increases employees' satisfaction with their work. For negotiation tasks, people benefit from reading important facial cues and thereby adjusting their
Figure 7: Video conferencing is highly recommended to maintain social contact during remote work. Source: Wikimedia Commons
strategies (Veinott et al., 2020). Otherwise, video conferencing has many advantages, such as increased familiarity with co-workers and a reduced sense of isolation (Larson et al., 2020). It allows communication to be supplemented by gestures, facial expressions, and head nods. Non-native English speakers seemed to benefit especially from video communication by being able to seek clarification (Veinott et al., 2020). Hence, video is useful for more complex or sensitive conversations because of its visual cues and a lower likelihood of misunderstanding than with written or audio communication. Platforms like Slack and Microsoft Teams may be useful for efficient communication when visual detail is non-essential. When the manager sets clear expectations for the frequency and means of communication among teams, such as daily check-in meetings through video conferences and messages for urgent matters, all employees share the same expectations, allowing for more efficient collaboration (Larson et al., 2020). Opportunities for virtual social interaction should also be provided so that employees can have informal, non-work-related conversations (Larson et al., 2020). Each meeting could start on a more personalized note, such as by going around and having each person share an aspect of their personal life (Adams, n.d.). Someone could organize virtual pizza or cocktail parties, where pizza or party packages are delivered to employees' homes. Experienced managers and their remote workers have reported that such virtual events help with feelings of isolation (Larson et al., 2020). For employees to engage in social interactions on a daily basis, it may be useful to leave the room to talk to family members
during short breaks, to call a friend during lunch, or to ask work-related questions over the phone or by video call instead of email (Rogers, 2020). Additionally, managers should offer encouragement and emotional support to their employees by acknowledging their stress and concerns and empathizing with their struggles. Asking how employees are coping with remote work and listening carefully to their responses can make them feel heard and supported in a time of crisis. Furthermore, research on emotional intelligence shows that employees observe and mimic how their manager reacts to sudden changes or crises. Hence, if a manager communicates stress and concern, it will negatively affect employees. Effective leaders will acknowledge employees' stress and concerns while providing affirmation and confidence, using phrases like “we can do this.” Such support from managers encourages employees to tackle upcoming challenges with determination and focus (Larson et al., 2020).
"Managers should understand the common challenges of remote work and support their employees accordingly. They can do so through structured daily check-ins – either a series of one-onone calls or a team call if the work is collaborative."
Maximizing the Success of a Virtual Classroom

Online learning appears to be the new normal for schools across all levels of education, with kindergarteners and college students alike at risk of coronavirus exposure. With in-person instruction deemed unsafe, the popularity of platforms such as Zoom has exploded. Student burnout, a state of extreme stress brought about by demanding circumstances, can be exacerbated by these already extraordinary conditions (Almarzooq et al., 2020). In the coming months of online education, school administrators must focus on
maintaining a sense of community amongst those sharing classes and similar situations (Almarzooq et al., 2020). Video chat programs such as Zoom and text chat programs such as WhatsApp are very useful in developing and maintaining these communities amongst students who are unable to see each other physically. Increased support and attention should also be given to parents of younger elementary school students, who often need increased assistance in setting up their virtual classrooms and completing their activities. There may be a silver lining in the switch to an online classroom: there are many potential benefits to online learning not seen in traditional schooling environments, such as the ability to connect from virtually anywhere, asynchronous discussions, and faster feedback (Ahmady et al., 2020).

Potential Solutions for Alleviating Economic Uncertainty
"...firms have adopted the use of hazard pay, a protocol in which employees are compensated in addition to their wage for putting their health on the line for their job."
In times of crisis, citizens rely on their government to navigate the uncertainty. There is no doubt that the economic impacts of COVID-19 have been shared by families across the nation. As a result, the U.S. Congress passed the Coronavirus Aid, Relief, and Economic Security (CARES) Act in late March 2020 to alleviate some of the financial burdens associated with business closures and worker displacement (116th Cong., 2020). Provisions of this policy included an income-based stimulus check that provided eligible U.S. citizens a $1,200 check if their 2019 annual income was less than $75,000 (116th Cong., 2020). This was intended to help spur consumer spending and relieve living expenses. A $600 surplus was also added to state-level unemployment benefits, allowing the unemployed to receive $600 on top of the benefits normally allocated by their state for every week of unemployment (116th Cong., 2020). This has arguably made some individuals better off than they were prior to the pandemic, as the unemployment benefit in some instances exceeds the expected wage from working. However, those still working in person must frequently weigh their pay against the health risks to themselves and their families. To compensate, firms have adopted the use of hazard pay, a protocol in which employees are compensated in addition to their wage for putting their health on the line for their job (Ruhnke, 2020). While this benefited workers at the end of March 2020 as states started to reopen, policies have since shifted to favor the firms, and hazard pay is starting to diminish in
some instances, leaving families at a financial loss despite stimulus efforts.

Accepting Virtual Social Interaction

Undoubtedly, the way in which we interact with others has changed drastically due to the pandemic. With major institutions such as universities and various workplaces continuing to operate remotely and many states advocating restrictions on social gatherings, there is an increasing need to find ways to stay connected with the people around us. Since the advent of the internet over the past two decades, there has been growing concern regarding whether virtual platforms can mimic the relationships established face-to-face. While not perfect, social media platforms are an important substitute in this era, as they can sufficiently convey nonverbal cues (an important factor in engagement) and are easy to use (Hwang et al., 2020). Though not a perfect system, virtual interaction is a necessary step in transitioning into this period of isolation. The most important thing to do in these trying times is to embrace the situation. Interacting with others in a virtual format is going to be the new “normal” for the time being. Therefore, researchers suggest that isolation must be broken through virtual means, as these are still found to help reduce loneliness and mental illness. Moreover, in the goal of establishing some regularity in our lives, we must all work to maintain some sort of routine – virtual social gatherings can help with this (Fiorillo and Gorwood, 2020). As mentioned throughout this paper, social interactions are vital to our well-being. While many have turned to virtual programs such as Zoom and FaceTime to maintain their social connections remotely, there are still many who either do not know how to navigate or lack access to these programs. Older adults, especially, are struggling with this transition, widening the technological divide between “haves” and “have-nots” (Graham, 2020). It will be imperative to find workarounds for these groups, whether that means helping them use technology to connect with their loved ones or simply giving them more care in general (Hwang et al., 2020). Ultimately, the pandemic is an opportunity for us to grow. Globally, it is offering awareness and knowledge regarding both the preparedness
and response of many countries around the world to pandemic-like issues. But, on a more personal level, it is teaching us ways to stay connected and to address loneliness through new means (Berg-Weger and Morley, 2020).
References

A Second Pandemic: Mental Health Spillover From the Novel Coronavirus (COVID-19)—Kristen R. Choi, MarySue V. Heilemann, Alex Fauer, Meredith Mead, 2020. (n.d.). Retrieved August 2, 2020, from https://journals.sagepub.com/doi/10.1177/1078390320919803

Abramowitz, J. S., Olatunji, B. O., & Deacon, B. J. (2007). Health Anxiety, Hypochondriasis, and the Anxiety Disorders. Behavior Therapy, 38(1), 86–94. https://doi.org/10.1016/j.beth.2006.05.001

Adams, M. (n.d.). 5 Ways to Enable a Thriving Remote Work Environment. Firm of the Future. Retrieved August 16, 2020, from https://www.firmofthefuture.com/content/5-ways-to-enable-a-thriving-remote-work-environment/

Afifi, T., Davis, S., Merrill, A. F., Coveleski, S., Denes, A., & Afifi, W. (2015). In the Wake of the Great Recession: Economic Uncertainty, Communication, and Biological Stress Responses in Families. Human Communication Research, 41(2), 268–302. https://doi.org/10.1111/hcre.12048

Ahmady, S., Shahbazi, S., & Heidari, M. (2020). Transition to Virtual Learning During the Coronavirus Disease–2019 Crisis in Iran: Opportunity or Challenge? Disaster Medicine and Public Health Preparedness, 1–2. https://doi.org/10.1017/dmp.2020.142

Almarzooq, Z. I., Lopes, M., & Kochar, A. (2020). Virtual Learning During the COVID-19 Pandemic: A Disruptive Technology in Graduate Medical Education. Journal of the American College of Cardiology, 75(20), 2635–2638. https://doi.org/10.1016/j.jacc.2020.04.015

America Is Reopening. But have we flattened the curve? (n.d.). Johns Hopkins Coronavirus Resource Center. Retrieved July 26, 2020, from https://coronavirus.jhu.edu/data/new-cases-50-states

Basilaia, G., & Kvavadze, D. (2020). Transition to Online Education in Schools during a SARS-CoV-2 Coronavirus (COVID-19) Pandemic in Georgia. Pedagogical Research, 5(4). https://doi.org/10.29333/pr/7937

Berg-Weger, M., & Morley, J. E. (2020). Loneliness and Social Isolation in Older Adults during the COVID-19 Pandemic: Implications for Gerontological Social Work. The Journal of Nutrition, Health & Aging, 24(5), 456–458. https://doi.org/10.1007/s12603-020-1366-8

Big 5 Personality Traits. (n.d.). Psychology Today. Retrieved August 16, 2020, from https://www.psychologytoday.com/au/basics/big-5-personality-traits

Blue, A. (2020, March 30). Introverts aren't actually better at social distancing. Futurity. https://www.futurity.org/introverts-and-social-distancing-2319522/

Brummett, B., Barefoot, J., Siegler, I., Clapp-Channing, N., Lytle, B., Bosworth, H., Williams, R., & Mark, D. (2001). Characteristics of Socially Isolated Patients With Coronary Artery Disease Who Are at Elevated Risk for Mortality. Psychosomatic Medicine, 63, 267–272. https://doi.org/10.1097/00006842-200103000-00010

Brynjolfsson, E., Horton, J., Ozimek, A., Rock, D., Sharma, G., & TuYe, H.-Y. (2020). COVID-19 and Remote Work: An Early Look at US Data (No. w27344). National Bureau of Economic Research. https://doi.org/10.3386/w27344
Burns, S. (2020, May 10). 3 Strategies To Help Your Remote Work Team Thrive. Forbes. https://www.forbes.com/sites/stephanieburns/2020/05/10/3-strategies-to-help-your-remote-work-team-thrive/
Cacioppo, J. T., Cacioppo, S., Capitanio, J. P., & Cole, S. W. (2015). The neuroendocrinology of social isolation. Annual Review of Psychology, 66, 733–767. https://doi.org/10.1146/annurev-psych-010814-015240
Cao, W., Fang, Z., Hou, G., Han, M., Xu, X., Dong, J., & Zheng, J. (2020). The psychological impact of the COVID-19 epidemic on college students in China. Psychiatry Research, 287, 112934. https://doi.org/10.1016/j.psychres.2020.112934
Chong, T. W. H., Curran, E., Ames, D., Lautenschlager, N. T., & Castle, D. J. (n.d.). Mental health of older adults during the COVID-19 pandemic: Lessons from history to guide our future. International Psychogeriatrics, 1–2. https://doi.org/10.1017/S1041610220001003
Clements, J. M. (2020). Knowledge and behaviors toward COVID-19 among U.S. residents during the early days of the pandemic. MedRxiv, 2020.03.31.20048967. https://doi.org/10.1101/2020.03.31.20048967
Coates, M. (2020). Covid-19 and the rise of racism. BMJ, m1384. https://doi.org/10.1136/bmj.m1384
Cook, B. L., Trinh, N.-H., Li, Z., Hou, S. S.-Y., & Progovac, A. M. (2017). Trends in Racial-Ethnic Disparities in Access to Mental Health Care, 2004–2012. Psychiatric Services, 68(1), 9–16. https://doi.org/10.1176/appi.ps.201500453
Durankus, F., & Aksu, E. (n.d.). Effects of the COVID-19 pandemic on anxiety and depressive symptoms in pregnant women: A preliminary study. Journal of Maternal-Fetal & Neonatal Medicine. https://doi.org/10.1080/14767058.2020.1763946
Eisma, M. C., Boelen, P. A., & Lenferink, L. I. M. (2020). Prolonged grief disorder following the Coronavirus (COVID-19) pandemic. Psychiatry Research, 288, 113031. https://doi.org/10.1016/j.psychres.2020.113031
Fegert, J. M., Vitiello, B., Plener, P. L., & Clemens, V. (2020). Challenges and burden of the Coronavirus 2019 (COVID-19) pandemic for child and adolescent mental health: A narrative review to highlight clinical and research needs in the acute phase and the long return to normality. Child and Adolescent Psychiatry and Mental Health, 14. https://doi.org/10.1186/s13034-020-00329-3
Fiorillo, A., & Gorwood, P. (2020). The consequences of the COVID-19 pandemic on mental health and implications for clinical practice. European Psychiatry, 63(1), e32. https://doi.org/10.1192/j.eurpsy.2020.35
Fisk, M., Livingstone, A., & Pit, S. W. (2020). Telehealth in the Context of COVID-19: Changing Perspectives in Australia, the United Kingdom, and the United States. Journal of Medical Internet Research, 22(6), e19264. https://doi.org/10.2196/19264
Forsyth, D. (2020, April 3). What Research on Isolated Groups Tells Us about Dealing with Social Isolation in the Face of COVID-19. Society for Personality and Social Psychology. http://www.spsp.org/news-center/blog/forsyth-social-isolation-covid-19#gsc.tab=0
Frey, W. H. (2020, June 19). A roaring Sun Belt surge has inverted the demographics and politics of COVID-19. Brookings. https://www.brookings.edu/blog/the-avenue/2020/06/19/covid-19s-sun-belt-surge-has-recast-the-pandemics-impact/
Gasparro, R., Scandurra, C., Maldonato, N., Dolce, P., Bochicchio, V., Valletta, A., Sammartino, G., Sammartino, P., Mariniello, M., Lauro, A., & Marenzi, G. (2020). Perceived Job Insecurity and Depressive Symptoms Among Italian Dentists: The Moderating Role of Fear of COVID-19. International Journal of Environmental Research and Public Health, 17, 5338. https://doi.org/10.3390/ijerph17155338
Godinic, D., Obrenovic, B., & Khudaykulov, A. (2020). Effects of Economic Uncertainty on Mental Health in the COVID-19 Pandemic Context: Social Identity Disturbance, Job Uncertainty and Psychological Well-Being Model. International Journal of Innovation and Economic Development, 6(1), 61–74. https://doi.org/10.18775/ijied.1849-7551-7020.2015.61.2005
Goodman-Casanova, J. M., Dura-Perez, E., Guzman-Parra, J., Cuesta-Vargas, A., & Mayoral-Cleries, F. (2020). Telehealth Home Support During COVID-19 Confinement for Community-Dwelling Older Adults With Mild Cognitive Impairment or Mild Dementia: Survey Study. Journal of Medical Internet Research, 22(5), e19434. https://doi.org/10.2196/19434
Graham, J. (n.d.). In the pandemic, technology has been a lifesaver, connecting them to the outside world. But others don’t have this access. Washington Post. Retrieved August 3, 2020, from https://www.washingtonpost.com/health/in-the-pandemic-technology-has-been-a-lifesaver-connecting-them-to-the-outside-world-but-others-dont-have-this-access/2020/07/31/8d46ddf2-d1ca-11ea-8d32-1ebf4e9d8e0d_story.html
Hawkley, L. C., & Cacioppo, J. T. (2010). Loneliness Matters: A Theoretical and Empirical Review of Consequences and Mechanisms. Annals of Behavioral Medicine: A Publication of the Society of Behavioral Medicine, 40(2). https://doi.org/10.1007/s12160-010-9210-8
Health News Team. (n.d.). Is social isolation easier for introverts? Retrieved August 16, 2020, from https://www.sharp.com/health-news/is-social-isolation-easier-for-introverts.cfm
Holt-Lunstad, J., Smith, T. B., Baker, M., Harris, T., & Stephenson, D. (2015). Loneliness and social isolation as risk factors for mortality: A meta-analytic review. Perspectives on Psychological Science, 10(2), 227–237. https://doi.org/10.1177/1745691614568352
Huang, Y., & Zhao, N. (2020). Generalized anxiety disorder, depressive symptoms and sleep quality during COVID-19 outbreak in China: A web-based cross-sectional survey. Psychiatry Research, 288, 112954. https://doi.org/10.1016/j.psychres.2020.112954
Hwang, T.-J., Rabheru, K., Peisah, C., Reichman, W., & Ikeda, M. (2020). Loneliness and social isolation during the COVID-19 pandemic. International Psychogeriatrics, 1–4. https://doi.org/10.1017/S1041610220000988
Jungmann, S. M., & Witthöft, M. (2020). Health anxiety, cyberchondria, and coping in the current COVID-19 pandemic: Which factors are related to coronavirus anxiety? Journal of Anxiety Disorders, 73, 102239. https://doi.org/10.1016/j.janxdis.2020.102239
Kang, L., Li, Y., Hu, S., Chen, M., Yang, C., Yang, B. X., Wang, Y., Hu, J., Lai, J., Ma, X., Chen, J., Guan, L., Wang, G., Ma, H., & Liu, Z. (2020). The mental health of medical workers in Wuhan, China dealing with the 2019 novel coronavirus. The Lancet Psychiatry, 7(3), e14. https://doi.org/10.1016/S2215-0366(20)30047-X
Larson, B. Z., Vroman, S. R., & Makarius, E. E. (2020, March 18). A Guide to Managing Your (Newly) Remote Workers. Harvard Business Review. https://hbr.org/2020/03/a-guide-to-managing-your-newly-remote-workers
Lei, L., Huang, X., Zhang, S., Yang, J., Yang, L., & Xu, M. (2020). Comparison of Prevalence and Associated Factors of Anxiety and Depression Among People Affected by versus People Unaffected by Quarantine During the COVID-19 Epidemic in Southwestern China. Medical Science Monitor, 26, e924609-1-e924609-12. https://doi.org/10.12659/MSM.924609
Li, J., Yang, Z., Qiu, H., Wang, Y., Jian, L., Ji, J., & Li, K. (2020). Anxiety and depression among general population in China at the peak of the COVID-19 epidemic. World Psychiatry, 19(2), 249–250. https://doi.org/10.1002/wps.20758
Li, Y., & Galea, S. (2020). Racism and the COVID-19 Epidemic: Recommendations for Health Care Workers. American Journal of Public Health, 110(7), 956–957. https://doi.org/10.2105/AJPH.2020.305698
Lundorff, M., Holmgren, H., Zachariae, R., Farver-Vestergaard, I., & O’Connor, M. (2017). Prevalence of prolonged grief disorder in adult bereavement: A systematic review and meta-analysis. Journal of Affective Disorders, 212, 138–149. https://doi.org/10.1016/j.jad.2017.01.030
Malecki, K. M. C., Keating, J. A., & Safdar, N. (n.d.). Crisis Communication and Public Perception of COVID-19 Risk in the Era of Social Media. Clinical Infectious Diseases. https://doi.org/10.1093/cid/ciaa758
Mollalo, A., Rivera, K. M., & Vahedi, B. (2020). Artificial Neural Network Modeling of Novel Coronavirus (COVID-19) Incidence Rates across the Continental United States. International Journal of Environmental Research and Public Health, 17(12), 4204. https://doi.org/10.3390/ijerph17124204
Moreno, C., Wykes, T., Galderisi, S., Nordentoft, M., Crossley, N., Jones, N., Cannon, M., Correll, C. U., Byrne, L., Carr, S., Chen, E. Y. H., Gorwood, P., Johnson, S., Kärkkäinen, H., Krystal, J. H., Lee, J., Lieberman, J., López-Jaramillo, C., Männikkö, M., … Arango, C. (2020). How mental health care should change as a consequence of the COVID-19 pandemic. The Lancet Psychiatry. https://doi.org/10.1016/S2215-0366(20)30307-2
Pew Research Center. (2020, June 25). Republicans, Democrats Move Even Further Apart in Coronavirus Concerns. https://www.pewresearch.org/politics/2020/06/25/republicans-democrats-move-even-further-apart-in-coronavirus-concerns/
Preti, E., Di Mattei, V., Perego, G., Ferrari, F., Mazzetti, M., Taranto, P., Di Pierro, R., Madeddu, F., & Calati, R. (2020). The Psychological Impact of Epidemic and Pandemic Outbreaks on Healthcare Workers: Rapid Review of the Evidence. Current Psychiatry Reports, 22(8). https://doi.org/10.1007/s11920-020-01166-z
Rogers, K. (2020, March 23). The introvert’s guide to social distancing. CNN. https://edition.cnn.com/2020/03/23/health/introvert-social-distancing-coronavirus-wellness/index.html
Ruhnke, G. W. (2020). Physician Supply During the Coronavirus Disease 2019 (COVID-19) Crisis: The Role of Hazard Pay. Journal of General Internal Medicine. https://doi.org/10.1007/s11606-020-05931-x
Sanger-Katz, M. (2020, May 8). Why 1.4 Million Health Jobs Have Been Lost During a Huge Health Crisis. The New York Times. https://www.nytimes.com/2020/05/08/upshot/health-jobs-plummeting-virus.html
Shirreffs, A. (2020, April 1). Surviving and thriving as remote work goes viral. EmoryBusiness.Com. https://www.emorybusiness.com/2020/04/01/surviving-and-thriving-as-remote-work-goes-viral/
Sleiwah, A., Mughal, M., Hachach-Haram, N., & Roblin, P. (2020). COVID-19 lockdown learning: The uprising of virtual teaching. Journal of Plastic, Reconstructive & Aesthetic Surgery, 73(8), 1575–1592. https://doi.org/10.1016/j.bjps.2020.05.032
Smillie, L., & Haslam, N. (n.d.). Personalities that thrive in isolation and what we can all learn from time alone. The Conversation. Retrieved August 16, 2020, from http://theconversation.com/personalities-that-thrive-in-isolation-and-what-we-can-all-learn-from-time-alone-135307
Suedfeld, P., & Steel, G. D. (2000). The Environmental Psychology of Capsule Habitats. Annual Review of Psychology, 51(1), 227–253. https://doi.org/10.1146/annurev.psych.51.1.227
Surkhali, B., & Garbuja, C. K. (2020). Virtual Learning during COVID-19 Pandemic: Pros and Cons. Journal of Lumbini Medical College, 8(1). https://doi.org/10.22502/jlmc.v8i1.363
Tan, B. Y. Q., Chew, N. W. S., Lee, G. K. H., Jing, M., Goh, Y., Yeo, L. L. L., Zhang, K., Chin, H.-K., Ahmad, A., Khan, F. A., Shanmugam, G. N., Chan, B. P. L., Sunny, S., Chandra, B., Ong, J. J. Y., Paliwal, P. R., Wong, L. Y. H., Sagayanathan, R., Chen, J. T., … Sharma, V. K. (2020). Psychological Impact of the COVID-19 Pandemic on Health Care Workers in Singapore. Annals of Internal Medicine. https://doi.org/10.7326/M20-1083
Temsah, M.-H., Al-Sohime, F., Alamro, N., Al-Eyadhy, A., Al-Hasan, K., Jamal, A., Al-Maglouth, I., Aljamaan, F., Al Amri, M., Barry, M., Al-Subaie, S., & Somily, A. M. (2020). The psychological impact of COVID-19 pandemic on health care workers in a MERS-CoV endemic country. Journal of Infection and Public Health, 13(6), 877–882. https://doi.org/10.1016/j.jiph.2020.05.021
The “Loneliness Epidemic.” (2019, January 10). Official Web Site of the U.S. Health Resources & Services Administration. https://www.hrsa.gov/enews/past-issues/2019/january-17/loneliness-epidemic
Travers, M. (n.d.). Are Extroverts Suffering More From The Quarantine? Not So Fast, Says New Research. Forbes. Retrieved August 16, 2020, from https://www.forbes.com/sites/traversmark/2020/04/30/are-extroverts-suffering-more-from-the-quarantine-not-so-fast-says-new-research/
UNCTAD. (n.d.). Coronavirus deals severe blow to services sectors. Retrieved August 2, 2020, from https://unctad.org/en/pages/newsdetails.aspx?OriginalVersionID=2327
U.S. Bureau of Labor Statistics. (n.d.). Retrieved August 2, 2020, from https://www.bls.gov/
Van Bavel, J. J., & Pereira, A. (2018). The Partisan Brain: An Identity-Based Model of Political Belief. Trends in Cognitive Sciences, 22(3), 213–224. https://doi.org/10.1016/j.tics.2018.01.004
Veinott, E. S., Olson, J., Olson, G. M., & Fu, X. (1999). Video helps remote work: Speakers who need to negotiate common ground benefit from seeing each other. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 302–309. https://doi.org/10.1145/302979.303067
Wallace, C. L., Wladkowski, S. P., Gibson, A., & White, P. (2020). Grief During the COVID-19 Pandemic: Considerations for Palliative Care Providers. Journal of Pain and Symptom Management, 60(1), e70–e76. https://doi.org/10.1016/j.jpainsymman.2020.04.012
Wang, C., Pan, R., Wan, X., Tan, Y., Xu, L., Ho, C. S., & Ho, R. C. (2020). Immediate Psychological Responses and Associated Factors during the Initial Stage of the 2019 Coronavirus Disease (COVID-19) Epidemic among the General Population in China. International Journal of Environmental Research and Public Health, 17(5), 1729. https://doi.org/10.3390/ijerph17051729
Wu, P. E., Styra, R., & Gold, W. L. (2020). Mitigating the psychological effects of COVID-19 on health care workers. Canadian Medical Association Journal, 192(17), E459–E460. https://doi.org/10.1503/cmaj.200519
Yoon, M.-K., Kim, S.-Y., Ko, H.-S., & Lee, M.-S. (2016). System effectiveness of detection, brief intervention and refer to treatment for the people with post-traumatic emotional distress by MERS: A case report of community-based proactive intervention in South Korea. International Journal of Mental Health Systems, 10(1), 51. https://doi.org/10.1186/s13033-016-0083-5
Zhai, Y., & Du, X. (2020). Loss and grief amidst COVID-19: A path to adaptation and resilience. Brain, Behavior, and Immunity, 87, 80–81. https://doi.org/10.1016/j.bbi.2020.04.053
Zhou, X., Snoswell, C. L., Harding, L. E., Bambling, M., Edirippulige, S., Bai, X., & Smith, A. C. (2020). The Role of Telehealth in Reducing the Mental Health Burden from COVID-19. Telemedicine and E-Health, 26(4), 377–379. https://doi.org/10.1089/tmj.2020.0068
Zisook, S., & Shear, K. (2009). Grief and bereavement: What psychiatrists need to know. World Psychiatry, 8(2), 67–74.
Zuo, L., Dillman, D., & Juvé, A. M. (2020). Learning at home during COVID-19: A multi-institutional virtual learning collaboration. Medical Education, 54(7), 664–665. https://doi.org/10.1111/medu.14194
Rational Drug Design: Using Biology, Chemistry, and Physics to Develop New Drug Therapies STAFF WRITERS: GEORGE SHAN '23, CAROLINA GUERRERO '23, SAMANTHA HEDLEY '23, ANNA KOLLN '22, DEV KAPADIA '23, MICHAEL MOYO '22, SOPHIA ARANA '22 BOARD WRITER: LIAM LOCKE '21 Cover Image: Virtual docking is a computational technique that greatly speeds up the drug development process. This figure shows the binding pose of a drug (ball and stick model) on the surface of the acetyltransferase enzyme (grey - carbon, blue - nitrogen, red - oxygen, yellow - sulfur, pink - iodine). Source: Wikimedia Commons
Introduction In 1928, Alexander Fleming examined a bacteria-free circle on a culture of staphylococcus. An inattentive microbiologist might have dismissed this result as simple contamination - plenty of organisms have evolved antibacterial properties - but Fleming saw value in identifying the source of this bacteria-free circle. By identifying the compound responsible, Fleming and his colleagues serendipitously discovered penicillin, the world's first antibiotic. In 1932, he published an article in the Journal of Pathology and Bacteriology on the antibacterial properties of the fungus Penicillium notatum, but it was not until ten years later that penicillin was first used as an antibiotic (Fleming, 1932). In the late 1930s, Howard Florey and Ernst Chain began their research on Penicillium notatum, culturing massive amounts of the fungus in order to extract and purify penicillin in biologically active quantities. Animal studies were extremely promising; in several trials, the
only mice to survive a bacterial infection were those given penicillin (Lobanovska & Pilla, 2017). Penicillin was administered to the first patient in 1941, and since its discovery, penicillin and other related β-lactam antibiotics have saved hundreds of millions of lives (Kardos & Demain, 2011). Fleming, Florey, and Chain shared the 1945 Nobel Prize in Medicine for their discovery (Ban, 2006; Alharbi et al., 2014). The field of pharmacology has undergone many iterations of the drug discovery process. Before the advent of modern synthetic chemistry, it was common to extract and separate the compounds of a medicinal plant (or other naturally occurring medicines) to identify therapeutic compounds; purifying these compounds would then greatly improve the potency of the drug and eliminate side effects from plant toxins. With advances in organic synthesis and automated experimentation, it became possible to make and test hundreds
of molecules at a time, greatly reducing the labor required to separate plant compounds. Classical (or forward) pharmacology refers to the testing of a therapeutic compound without an understanding of the underlying disease mechanism (Takenaka, 2008). This generally involves screening the compound against a cell line or organism that models the disease and assessing changes in the disease state - a process otherwise known as phenotypic screening. On the other hand, reverse pharmacology techniques are used to identify a drug when the molecular basis of the disease is known. The drug target (e.g., a protein) is purified and drug binding is examined either in the laboratory or through computational methods. High throughput screening (HTS) is an automated process by which many medicinal compounds are tested on a drug target in the laboratory to assess their binding potential. Rational drug design is another example of reverse pharmacology. In rational drug design, information about the structure of a drug target or its endogenous ligand (the molecule which normally binds the receptor in a cell) is used to develop a drug using computational, chemical, and physical approaches. The process of rational drug design is the topic of this paper. Rational drug design occurs in four phases. Phase 1 is the identification of a drug target and is achieved through a review of clinical studies, preclinical research in disease models, or gene and protein expression
SUMMER 2020
databases like the Human Protein Atlas. Phase 2 involves expression and purification of the drug target followed by structure determination through biophysical techniques (X-ray crystallography, Cryo-EM, NMR, homology modeling). Once a structure is obtained (which may require multiple trials over several years), the drug design process enters Phase 3, in which computer simulations are used to identify possible drug candidates. Many techniques have been developed for computer-aided drug design, including virtual HTS, molecular dynamics to resolve conformational changes of the target, and machine learning algorithms that construct a drug based on known binding affinities of other drugs. The result of Phase 3 is the identification of a molecular fragment, chemical motif, pharmacophore, or lead compound that binds strongly to the binding pocket of the target molecule. Phase 4 requires the synthesis of a lead compound or library of compounds, biochemical assays to determine potency, structure determination of the compound in complex with the target, and drug optimization, which may involve additional computer simulations (Anderson, 2003; Batool et al., 2019).
Figure 1: Alexander Fleming in his laboratory at St. Mary's, London
Drug development is an extremely expensive and time-consuming process; taking a new drug to market takes an average of 14 years and an estimated $800 million (Lavecchia et al., 2016). Although most expenses are encountered in clinical trials, the drug discovery phase can cost upwards of $20 million (Moore et al., 2016). Rational drug design is becoming an increasingly popular technique in both industry and academia, highlighting the efficiency, efficacy, and cost-effectiveness of this approach to drug development.
"Drug development is an extremely expensive and timeconsuming process; taking a new drug to market takes an average of 14 years and an estimated $800 million."
Source: Wikimedia Commons
Target Identification What makes a 'good' target? Identifying a good drug target is crucial for rational drug design because it helps researchers better predict drug efficacy (Gashaw et al., 2011). The most common types of drug targets include G-protein-coupled receptors (GPCRs) (12%), ion channels (19%), kinases (10%), and nuclear receptors (3%) (Santos et al., 2017). The remaining 56% consists of other drug targets of smaller individual percentages. The fractions of FDA-approved drugs targeting each of these families are 33%, 18%, 3%, and 16%, respectively (Santos et al., 2017). Many factors influence whether something is a good drug target, and having a better
Figure 2: Forward (classical) vs. reverse pharmacology techniques. Rational drug design is a type of reverse pharmacology. Source: Wikimedia Commons
"In order to design a drug that alters the function of its target, the binding sites of the target must be identified. Therefore, one of the primary aims of rational drug design is to identify the molecular structure of the target."
understanding of the disease mechanism allows for more robust predictive models of drug efficacy. As such, the role of the molecule in disease pathogenesis should be thoroughly studied when searching for a drug target (Gashaw et al., 2011). Once a possible target has been identified, drug developers may experimentally validate the drug target with in-vitro cell-based studies, and then conduct in-vivo animal studies to solidify their findings (Gashaw et al., 2011). The in-vitro studies allow researchers to better understand the signaling and regulatory characteristics of drug targets, whereas the in-vivo studies help validate these results. In addition to studying the role of the drug target in disease pathogenesis, drug developers should also verify that the target expression is localized to the tissue of interest to the greatest extent possible; otherwise, there is a higher probability of negative side reactions. In microbial drug design, the target should be present only in the pathogen and be essential to its functioning so that target inhibition will kill the pathogen without harming the host (Anderson, 2003). In drug design for human proteins, the goal is typically to modulate rather than totally inhibit the target's function, as total
inhibition would kill the human cells (Anderson, 2003). Furthermore, the drug should have high specificity for its target to avoid unwanted side reactions; however, this specificity is not always easy to achieve. For instance, the drug sorafenib, marketed as Nexavar® by Bayer, has multi-kinase inhibitory properties (Gashaw et al., 2011). While sorafenib is an effective anticancer drug, it also plays a role in causing cardiotoxicity, since many kinases share similar structures - while blocking one pathway helps fight cancer, inhibiting another causes harmful side effects (Gashaw et al., 2011). Due to the difficulties in establishing target specificity within the kinome, developing drugs to target kinases is especially challenging. As such, it is vital for drug developers to design highly specific drugs that target a specific tissue of interest to the greatest extent possible. In order to design a drug that alters the function of its target, the binding sites of the target must be identified. Therefore, one of the primary aims of rational drug design is to identify the molecular structure of the target. The most detailed and complete protein structures for drug design are often obtained through X-rays of purified target crystals. To be considered as a drug target, a published structure must
meet accuracy guidelines that ensure the drug molecule designed to interact with the target will have the highest possible level of efficacy. The Protein Data Bank provides resolution and R-values for the structures in the database, indicating the importance of those parameters in the validity of a structure. Resolution is the most important parameter and is based on the quality of the experimentally generated crystal. Resolution values of 2.5 Angstroms (0.25 nm) and below are typically required for the structure to be used in a drug project (Singh et al., 2006). Resolution values vary throughout the crystal, with the stated value for a structure being the lowest value determined in any part of the sample. When the crystal is bombarded with X-rays, the beams reflect off of parallel atomic planes, and the distance between these planes gives the resolution of that section (more on the theory of crystallography is found under Structure Determination). A smaller distance confers a lower resolution value, meaning that smaller features of the structure will be distinguishable. Each structure is also given an R-value, which evaluates how well the experimental data of the crystal fit the simulated lattice it was determined to resemble most. Typical values in the database are around 0.20, whereas a perfect alignment would have an R-value of 0. R-values can be biased through refinement of the structure, where only 90% of the data is used to better align the experimental and simulated lattices. However, an R-free value is then generated by evaluating how well the simulated model matches the 10% of data that were not used in refinement. This value is generally a little higher but should not exceed 0.25 for a good structure (PDB).
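For a concrete picture of what the R-value measures, the minimal Python sketch below computes an R-factor from hypothetical arrays of observed and calculated structure-factor amplitudes; real refinement packages report this value (and R-free) automatically.

    import numpy as np

    def r_factor(f_obs, f_calc):
        # R = sum(| |Fobs| - k*|Fcalc| |) / sum(|Fobs|), with k a simple scale factor
        f_obs, f_calc = np.abs(f_obs), np.abs(f_calc)
        k = f_obs.sum() / f_calc.sum()
        return np.abs(f_obs - k * f_calc).sum() / f_obs.sum()

    # Hypothetical amplitudes for a handful of reflections
    f_obs = np.array([120.0, 85.5, 43.2, 210.7])
    f_calc = np.array([115.2, 90.1, 40.8, 205.3])
    print(round(r_factor(f_obs, f_calc), 3))  # ~0.03 for these toy values; real structures aim for ~0.20 or below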
One significant consideration for drug design is the level of toxicity a compound induces. Toxicity values indicate how much damage is caused by a dose of a drug. This stage of testing eliminates roughly ⅓ of drug candidates for a target, raising the expense in both time and money for drug design projects (Guengerich, 2016). Drugs should be highly specific to the target of interest to reduce the chances of off-target interactions and avoid alternative side effects. Once alternative binding for the drug molecule is minimized, the concept of ADME is used to further evaluate and adjust how a compound affects a living system (Doogue & Polasek, 2013). Absorption (A) is largely dependent on the solubility of the compound. For drugs with narrow therapeutic indexes, where just small differences in the dose of the drug produce adverse effects, the level of absorption is vital to determining the proper concentration and the level of toxicity. Distribution (D) is the movement of the compound from the site of entry to other tissues via the bloodstream or cell-to-cell interactions. Immune response-related toxicity is present at this stage of drug movement, as drug molecules can covalently bind to proteins in the blood, generating antibodies and initiating an immune response (Guengerich, 2016). Metabolism (M) involves the decomposition of the drug molecule by cytochrome P450 enzymes in the liver. Toxicity in this stage is the result of electron transfer via redox chemical reactions (Doogue & Polasek, 2013). Through these chemical reactions, the drug can be converted into a reactive 'metabolite' that modifies the proteins it interacts with, causing toxicity and adverse effects (Guengerich, 2016). Excretion (E) of the drug molecule is based on its size, charge, and solubility. For instance, net negative charges in areas of the renal pathway promote higher levels of reabsorption of positively charged proteins (Tibbitts et al., 2016). Incomplete excretion processes or lipid solubility can cause accumulation of the drug molecule in the body, leading to increased levels of toxicity (Doogue & Polasek, 2013). Clinical screening Target identification is a crucial step in the drug development process. The druggable target and the ways that target interacts with other cells to produce a disease phenotype must be well understood. Early validation of a target can save an immense amount of time and money, and the rise of genomics and the availability of patient data give scientists the ability to identify and screen novel druggable targets (Lindsay, 2003). Genome-wide association studies (GWAS) involve scanning the complete DNA of those with a certain disease to identify single nucleotide polymorphisms (SNPs). If these SNPs are common among a certain disease population, they may mark a potential therapeutic target. A GWAS of 100,000 rheumatoid arthritis cases identified 101 SNPs, and further testing of those led to 98 novel target gene candidates (Plenge et al., 2014).
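As a hedged illustration of the statistical test underlying a GWAS, the Python sketch below checks a single hypothetical SNP for association with disease status using a chi-square test on allele counts; a real study repeats this across millions of SNPs and corrects for multiple testing.

    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical allele counts at one SNP (rows: cases, controls;
    # columns: risk allele, alternative allele)
    counts = np.array([[820, 1180],
                       [640, 1360]])
    chi2, p_value, dof, expected = chi2_contingency(counts)
    print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")  # a small p-value suggests association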
"Once alternative binding for the drug molecule is minimized, the concept of ADME is used to further evaluate and adjust how a compound affects a living system."
Figure 3: The degree of detail in protein structures of varying resolutions. Source: RCSB PDB
Information can also be extracted by combining data from doctors' offices and hospitals. Ethics must be considered when doing this, and if the data cannot be made anonymous, the patient must provide consent, which can be difficult to acquire (Watt, 2016). Electronic health records (EHRs) are already common, making it easy to upload vast amounts of patient data. Aspirin was originally used only as a pain medication, but analysis of patient EHRs, along with post-marketing surveillance data, revealed its potential to treat colorectal cancer (Qian et al., 2019). Although only 16 drugs were licensed for novel targets in 2019, this is an increase from the 2-3 drugs for novel targets being produced before 2003 (Avram et al., 2020). Emphasis on front-loading target validation research and increased access to large databases may contribute to the rising trend of novel drug targets (Lindsay, 2003). Genomics and proteomics databases
"Eroomâ&#x20AC;&#x2122;s law states that every nine years, the cost of developing a new drug doubles due to increasing regulation, more complex interactions and procedures, and antimicrobial resistance, which limits applicability of past research."
The burden placed on pharmaceutical and biotech firms in drug design is not only exorbitant but also growing at a rapid rate. Eroom's law states that every nine years, the cost of developing a new drug doubles due to increasing regulation, more complex interactions and procedures, and antimicrobial resistance, which limits the applicability of past research. If at first glance nine years seems like a reasonable amount of time for the cost of developing a drug to double, keep in mind that the median cost of bringing a drug to market between 2009 and 2018 was $985 million; even a doubling of this number could put significant financial burdens on many of the companies that produce these drugs (Gardner, 2020). Furthermore, it is estimated that for every billion dollars spent on research and development, pharmaceutical companies only make an average return of $75 million in sales (Institute of Medicine (US), 2012). Therefore, companies and research institutions look for any way to not only cut costs but also expedite the research and development process and gather additional data supporting the efficacy of their drug in order to get faster approval. One method that achieves all three of these goals is the use of genomic and proteomic databases. As previously stated, the cost of drug development has been increasing at a significant rate and is expected to continue to do so. However, the cost of DNA and protein sequencing continues to fall as methods become more advanced and widely available (Institute of Medicine (US), 2012). Genomic and proteomic data derived from these sequencing procedures are not just helpful to have in the drug design process; they are essential if researchers hope to produce their drug within a reasonable timeline and budget. Researchers compare the collected data between symptom carriers and normal controls
to identify target genes that have different expression levels. The corresponding gene products, such as enzymes, receptors, or even RNA, then become the potential drug targets for the disease. For example, when a deleterious gene mutation induces the loss of a certain function and causes disease, scientists can remedy the mutation by activating an alternative cellular pathway, if one exists, that restores the normal phenotype, usually by means of drugs targeting cellular signal receptors (Xia, 2017). Genomics databases make it possible to screen cellular pathways exhaustively while greatly speeding up the search. Genomics and proteomics databases also contribute to drug development against diseases caused by pathogens in the same manner. Bioinformatic analyses reduce the possibility of drug resistance by revealing alternative cell activities in pathogens, such as the glucose-lactose genetic switch in some bacterial species (Jacob & Monod, 1961). In principle, scientists can then target all sites of alternative cellular activity at once with a single drug regimen, blocking every coping mechanism the pathogen could use to survive. Just as the genomic, proteomic, and other types of data relevant to drug design are plentiful, databases containing these types of data are also very common, and the rate of data production is increasing. In 2012, there were about twenty million protein sequences in the UniProtKB database, one of the most comprehensive protein databases in the world. By 2015, this number had increased to almost ninety million, over a four-fold increase (Chen et al., 2017). This abundance of data is a boon for researchers studying drug development, but there is much more data to store than
just ninety million protein sequences. For this reason, databases often focus on distinct types of data regarding a specific application. While all of these databases cannot possibly be covered in this article, it is helpful to be aware of a few key databases and the information they contain. UniProt is one of the leading proteomic databases and holds sequence information as well as the annotations for these sequences. ChEMBL, a widely used database in the field of drug discovery, provides information on bioactivity for distribution, metabolism, toxicity, and many other processes in the body. Other databases, like Ensembl, focus exclusively on providing the most up-to-date annotation data for sequences. Yet others, like InterPro, focus on storing the protein models, called "signatures," that characterize protein domains, regions, and families (Chen et al., 2017). However, one of the major concerns with the use of these databases is, ironically, the overabundance of data. The wide variety of data available can overwhelm researchers and make analyzing it for actionable insight into drug discovery overly burdensome. One major development that mitigates this problem is the introduction of machine learning algorithms into the field. Machine learning can be applied to a variety of analytical frameworks in drug development and can make early predictions on effective methods, rapidly speeding up the process (Qian et al., 2019). Using machine learning, researchers input data that has already been categorized. For instance, researchers could feed data documenting the interactions between protein sequences and a cell receptor into what is called a "training set" for the machine learning algorithm. By analyzing the patterns found in the training set, the algorithm can then make predictions about the interactions of other sequences and receptors that researchers want to test. There are a variety of algorithms that could be used for machine learning, each with its own benefits and drawbacks. While the implementation of and differences between these algorithms are beyond the scope of this article, it is useful to highlight the common algorithms to build upon the idea of optimizing proteomic and genomic analysis. For the analytical frameworks used in genomic and proteomic research, linear discriminant analysis, k-Nearest Neighbors, Decision Trees, and Support Vector Machines are all common algorithms. However, the most commonly used algorithm is the Random Forest algorithm, in which several different decision trees are trained and their outputs are aggregated into a final prediction. This method is particularly effective when there are many outliers in the dataset and prevents what is known as overfitting, when the algorithm becomes so complex that it relies too heavily on the patterns of the training set (Qian et al., 2019).
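To make the workflow concrete, the Python sketch below trains a Random Forest classifier with scikit-learn on a hypothetical, randomly generated training set standing in for compound-target feature vectors; swapping in real descriptors from the databases above would require only changing the input arrays.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical data: each row encodes a compound-target pair as 40 numeric
    # features; the label records whether an interaction was observed.
    rng = np.random.default_rng(0)
    X = rng.random((500, 40))
    y = rng.integers(0, 2, size=500)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    forest = RandomForestClassifier(n_estimators=200, random_state=0)  # aggregate of 200 decision trees
    forest.fit(X_train, y_train)
    print("held-out accuracy:", forest.score(X_test, y_test))  # ~0.5 here, since the toy labels are random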
While these databases enable a great deal of data analysis, there is much room for improvement in their design and usage; the most important area for improvement is storage. As researchers want a single source for all of their continuously accumulating data, effectively storing that data is an important challenge for data scientists. This challenge could be addressed with NoSQL databases, in which relations between data are stored within the data storage structures themselves. More effective compression techniques, which in turn require more sophisticated organizational methods, can increase the amount of data that can be stored in a database. Another challenge is effectively analyzing the large amounts of data stored in these databases, which can be met with advancements in cloud computing along with the machine learning algorithms discussed previously. Data integration and input pose a further challenge: many databases use text mining algorithms that extract information from journals using human-inputted word-relationship rules, and these can always be improved in order to extract the most accurate data in the most efficient manner. Lastly, the user interface design of these databases is a surprisingly significant challenge. While many focus on the storage and internal organization of the data, the process by which users organize and retrieve data is also extremely important. Without a well-designed user interface, users could easily become lost or frustrated when trying to find relevant information. Therefore, increasing the ease of use, aesthetics, and responsiveness of these databases to better suit the needs of researchers is another important challenge (Chen et al., 2017).
"Machine learning can be applied to a variety of analytical frameworks in drug development that can make early predictions on effective methods, rapidly speeding up the process."
Structure Determination After a drug target has been selected, the next step is to resolve its three-dimensional structure, or the relative coordinates of each atom in the protein. A target structure is
Figure 4: Bragg’s law of x-ray diffraction and Bragg's Equation. Created by author
required to develop drugs through the rational drug-design method, as the next step, the calculation of drug binding energies, is based on this structure. Four common techniques used to determine peptide structure will be introduced: x-ray crystallography, cryogenic electron microscopy, nuclear magnetic resonance, and homology modeling. Each of these techniques is the subject of thousands of papers and deserves a complete review of its own. The purpose of the following sections is not to provide a comprehensive review of these techniques, but to describe their theoretical basis as well as the advantages and disadvantages of each technique. X-ray crystallography
"Protein x-ray crystallography reveals the threedimensional structures of proteins at near-atomic resolution."
Protein x-ray crystallography reveals the three-dimensional structures of proteins at near-atomic resolution. Over the last few decades, this method has provided tremendous insight into the workings of numerous biological processes and a greater understanding of protein function and activity. The three-dimensional atomic structures of proteins and macromolecules obtained from x-ray crystallography have also paved the way for the design of numerous drugs and improvement of existing drugs. Because of its usefulness, x-ray crystallography has become incredibly widespread around the world, from academic laboratories to the pharmaceutical industry (Parker, 2003). X-ray crystallography relies on Bragg's law of X-ray diffraction by crystals. When bombarded
by a beam of x-ray light, crystals diffract light at various angles, as shown in Figure 4. The arrows indicate the x-rays that are reflected from a pair of parallel planes separated by distance d. The dashed line with one end at B is a normal to these planes. The bottom wave travels an extra distance - a path difference of AB + BC - and if it is to interfere constructively with the top wave, AB + BC needs to be an integer multiple of the wavelength λ of the wave. Since the waves are parallel, OA and OC are perpendicular to AB and BC respectively, and by simple geometry, AB = BC = dsinθ. Hence, for the constructive addition of the waves, AB + BC = 2dsinθ = nλ, which is Bragg's equation (Parker, 2003). The diffracted x-rays have different intensities, which are recorded on the diffraction pattern as reflection spots. These spots reveal the structural arrangement of atoms within the crystal and can therefore be used to deduce the original structure of the crystal using Bragg's law (Wang and Wang, 2016). Solving the desired molecular structure requires additional information called phases, which are absent from the diffraction pattern - a phenomenon known as the "phase problem." Further experimental and computational techniques are normally required to obtain the phase information (Wang and Wang, 2016).
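Because Bragg's equation is a one-line relation, the resolution associated with a reflection can be computed directly. The Python sketch below assumes a first-order reflection (n = 1) and the copper K-alpha wavelength of roughly 1.54 Angstroms common in laboratory sources; both values are illustrative.

    import numpy as np

    def d_spacing(theta_deg, wavelength=1.54, n=1):
        # Bragg's law: n * lambda = 2 * d * sin(theta)  =>  d = n * lambda / (2 * sin(theta))
        return n * wavelength / (2 * np.sin(np.radians(theta_deg)))

    print(round(d_spacing(22.0), 2))  # ~2.06 Angstroms for a reflection at theta = 22 degrees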
Figure 5: University of Ottawa's 900MHz NMR facility. Setting up and maintaining the equipment needed for NMR can be very expensive. Source: Wikimedia Commons
The intensity and phase information of multiple diffraction patterns of the crystal can then be integrated and Fourier transformed using advanced computational techniques to generate an electron density map, from which the molecule's three-dimensional structure can be obtained after further analysis. The quality and accuracy of the three-dimensional structure depend heavily on the sharpness of the diffraction spots, which in turn is determined by the degree of order of the crystal. Therefore, it is essential to use highly ordered crystals, which can be grown by obtaining a large amount of highly purified macromolecule under optimal crystallization conditions (Wang and Wang, 2016). X-ray crystallography is reliable because its two-dimensional diffraction patterns give a strong indication of what the three-dimensional structure of a protein looks like. It is also relatively cheap and simple to use, is not limited by the size or atomic weight of the protein sample, and yields higher atomic resolution than competing structure determination techniques such as cryo-EM. Despite these advantages, however, crystallization is a very difficult procedure that requires a sophisticated understanding of optimal crystallization conditions, making x-ray crystallography a challenging structure
determination technique. The types of samples that can be analyzed are also limited because, due to their large molecular weight and relatively poor solubility, membrane proteins and large molecules are difficult to crystallize. Finally, the crystallized samples provide static structures, which do not represent the native state of the protein under consideration (Davey, 2019). Cryogenic electron microscopy (Cryo-EM) Cryogenic electron microscopy is a structure determination technique that uses an electron microscope to image frozen specimens in aqueous solution. This method has contributed less to the volume of structures in databases such as the Protein Data Bank compared to methods like X-ray crystallography, due to a tendency to generate lower-resolution structures. However, as a result of the development of sharper electron microscopes and improved image-processing programs in recent years, Cryo-EM has evolved into a high-resolution technique (Callaway, 2020). Structure determination with Cryo-EM involves a diffraction image that is then evaluated by
"Cryogenic electron microscopy is a structure determination technique that uses an electron microscope to image frozen specimens in aqueous solution."
Figure 6: Example NOESY spectrum. Off-diagonal elements give distances between nuclei close together in space. Source: Wikimedia Commons
"...the goal of protein NMR is to determine the proteinâ&#x20AC;&#x2122;s threedimensional structure based on the spin couplings of adjacent nuclei."
a program that then generates a model of the sample, much like in X-ray crystallography. Instead of using X-ray beams, however, Cryo-EM uses an electron microscope. The electron microscope focuses a concentrated beam of electrons onto the sample using a magnetic lens, and the electrons are scattered by interactions within the sample, creating a magnified image of the sample on the detector. The atoms within the sample interact with the electrons, and the differences in their Coulomb potentials allow structural information about the molecule to be determined via the Coulomb electron density map (Milne et al., 2012). Imaging with the electron microscope generates 2D micrographs of the sample. The micrographs are then aligned with each other, allowing averages of the position and electron density of each particle on the sample to be obtained and classified. A 3D structure is then modeled based on the densities of these averaged particles (Passmore & Russo, 2016). The first step in Cryo-EM is to purify the target; homogeneity of the target sample can be assured through gel electrophoresis tests or mass spectrometry. Additionally, functional assays should be run to ensure the target is acting normally and in its natural state. To further test the homogeneity of the sample, a negative stain is run, where the sample is embedded in heavy metals and imaged by an electron microscope to show the target particles within the sample. If the particles are of uniform size, the sample is considered homogeneous and cryo-EM preparations commence. The aqueous sample of the purified target is applied to a support structure with a thin dimension of only 10-80 nm. The sample is then rapidly frozen by a cooling agent such as ethane or nitrogen to prevent the water from crystallizing (Passmore & Russo, 2016). If ice were to form on the sample, the crystals would obscure the electron image. By rapidly freezing the sample, its water molecules can be kept in a liquid-like structural arrangement, allowing scientists to obtain a high-resolution structure of the protein in its natural form (Cheng et al., 2017).
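The alignment-and-averaging step can be pictured with a minimal Python sketch: a stack of already-aligned, noisy particle images is averaged, and the random noise imposed by the low electron dose shrinks roughly as the square root of the number of particles. All arrays here are synthetic stand-ins for real micrograph cut-outs.

    import numpy as np

    rng = np.random.default_rng(0)
    true_particle = rng.random((64, 64))                              # the underlying structure
    noisy_stack = true_particle + rng.normal(0, 2.0, (1000, 64, 64))  # 1000 low-dose images

    class_average = noisy_stack.mean(axis=0)  # noise drops ~1/sqrt(1000)
    print(np.corrcoef(class_average.ravel(), true_particle.ravel())[0, 1])  # close to 1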
With the recent advancements in cryo-EM imaging and modeling, this technique has become much more attractive in the field of structure determination. Where X-ray crystallography requires large amounts of sample and perfect crystals, cryo-EM works well with small sample sizes, and the limiting crystallization problem of x-ray crystallography is not a factor. Additionally, freezing the sample instead of crystallizing it allows the image to show the sample as it exists in its natural form in solution. However, imaging the sample in solution prevents the possibility of determining alternate conformations of the sample as it interacts with other substances. Finally, despite advances in cryo-EM machinery, the technique has a high sensitivity to electron damage. When electron beams randomly hit particles in their path, energy is transferred from the electron to the particle, breaking covalent bonds between molecules (Kempner, 2012). Therefore, samples must be imaged under low electron dose conditions. This results in a low signal-to-noise ratio, making it difficult to see visible contrast between particles (Carroni & Saibil, 2016). Nuclear magnetic resonance (NMR) While the goal of NMR in organic chemistry is often to resolve the organization and stereochemistry of chemical bonds, this information is already known from the protein's primary sequence. Instead, the goal of protein NMR is to determine the protein's three-dimensional structure based on the spin couplings of adjacent nuclei. Briefly, NMR occurs when a spin-active nucleus (one with a non-zero nuclear spin quantum number) in a strong magnetic field is bombarded with radiofrequency electromagnetic radiation (the resonance frequency of a nucleus). This causes a shift in many nuclei from a low-energy state to a high-energy state. Radiofrequency receivers collect the radiation released when the nuclei return to their low-energy state; this data is then converted to discrete frequencies using Fourier transforms (Marion, 2013). Understanding how this process could possibly produce a three-dimensional structure of a protein requires a brief introduction to
two-dimensional NMR experiments and relaxation mechanisms (Reddy & Rainey, 2010). Two-dimensional NMR requires two spin-active nuclei; however, the only atom found in proteins that is strongly spin-active is hydrogen. 14N is the most abundant form of nitrogen, but only 15N is magnetically active. The solution is to grow the protein in a limited medium that contains only 15N (Englander et al., 2006). Having spin-active hydrogen and nitrogen is desirable because these nuclei can resolve the spatial organization of the protein backbone, which defines the protein's overall shape. One of the central advantages of this technique is that NMR does not require any additional manipulation of the sample before a spectrum is recorded (e.g., crystallization or freezing). The bottleneck in NMR comes when analyzing the spectrum, as this requires a great amount of focus and skill from the researcher. Two 2D NMR techniques are required to resolve a protein's structure: TOCSY and NOESY. TOCSY relies on through-bond (spin-spin) relaxation. This occurs when an excited nucleus releases its excess energy to a nearby nucleus through the bonds that connect them. This type of relaxation can occur only within a single amino acid residue, as it is blocked on either side by the carbonyl in the protein backbone. This technique gives information on the identity of every amino acid in the protein; however, this information was already known from the protein sequence, and is only required to help resolve peaks in the NOESY spectrum. NOESY uses through-space (spin-lattice) relaxation to show which atoms are close together in three-dimensional space. This includes (1) atoms close together because they belong to the same amino acid, (2) atoms close together because they belong to adjacent amino acids, and, most importantly, (3) atoms close together because of the protein's tertiary structure. Determining which peaks are due to relaxation method (3) allows the structure of the protein to be resolved. Resonances due to (1) are those in the TOCSY spectrum, so these can simply be subtracted from the NOESY spectrum. Resonances due to (2) can be determined and assigned by the researcher based on their knowledge of the protein's primary sequence. What is left is the distance between atoms that are not close together in the primary sequence. A computer process called simulated annealing can then fold the protein into the correct structure based on this distance information, and the result is a three-dimensional structure that can be verified by a Ramachandran plot (Wuthrich, 1989).
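The peak bookkeeping just described (NOESY minus TOCSY, minus sequential contacts) amounts to set arithmetic on assigned peak lists. Below is a toy Python sketch with invented peak names; real assignment software handles thousands of peaks and ambiguous overlaps.

    # Hypothetical assigned cross peaks; each proton is "residue.atom" (e.g., "A12.HA").
    noesy = {("A12.HA", "V45.HN"), ("A12.HA", "A12.HB"), ("A12.HN", "G13.HN")}
    tocsy = {("A12.HA", "A12.HB")}  # through-bond, intra-residue peaks

    def residue_number(atom):
        return int(atom.split(".")[0][1:])  # "A12.HA" -> 12

    # Drop intra-residue (TOCSY) peaks, then peaks between adjacent residues;
    # the remainder reflects tertiary structure and becomes distance restraints.
    long_range = {(a, b) for (a, b) in noesy - tocsy
                  if abs(residue_number(a) - residue_number(b)) > 1}
    print(long_range)  # {('A12.HA', 'V45.HN')}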
The slowest part of an NMR experiment is the peak assignment after a spectrum has been recorded, but once this is done for a particular protein, many experiments can be performed without repeating peak assignments. This technique also allows proteins to be studied in their native (aqueous) environments and can show protein dynamics in a single recording session. However, NMR facilities are extremely expensive to set up and maintain. The superconducting magnets require cooling by liquid helium (which can be very expensive) as well as an outer layer of liquid nitrogen. The rooms in which these magnets are located must also be relatively insulated from radiofrequency radiation, which can be accomplished by creating a Faraday cage with copper wiring. As a result, protein NMR is optimal for studying small, solvated proteins. Homology Modeling Used alongside experimental methods, computational methods offer a quicker and less resource-intensive way to determine the structure of target proteins for rational drug design. Currently there are 155,000 structurally determined proteins in the Protein Data Bank but 186,000,000 amino acid sequences in the UniProt Knowledgebase, amounting to a massive knowledge gap. Homology modeling is a computational method that utilizes the amino acid sequences of analogous, structurally established proteins (also known as templates) to predict a target protein structure. The process of homology modeling involves four main steps: (1) the identification of template proteins from structure databases, (2) the alignment of template and target sequences, (3) the creation of a 3D model, and (4) an assessment of the quality of the generated protein structures (Bordoli et al., 2009).
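Steps (1) and (2) hinge on sequence similarity between target and template. The toy Python sketch below scores percent identity over an ungapped alignment of two invented fragments; real pipelines use BLAST or profile searches against structure databases instead.

    def percent_identity(target, template):
        # Naive ungapped identity; real alignments handle insertions and deletions.
        matches = sum(a == b for a, b in zip(target, template))
        return 100.0 * matches / min(len(target), len(template))

    target = "MKVLITGAGSGIG"    # invented target fragment
    template = "MKVLVTGAGSGLG"  # invented template fragment
    print(f"{percent_identity(target, template):.0f}% identity")  # 85% here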
"Currently there are 155,000 structurally determined proteins in the Protein Data Bank but 186,000,000 amino acid sequences in the UniProt Knowledgebase, amounting to a massive knowledge gap."
The third step can be approached using three main methods. (1) The rigid-body or fragment assembly method identifies similar core sections between the template and target sequences, generates a model by overlaying these core sections, then fills in structural gaps through further database searches. (2) The segment matching method splits the target protein sequence into shorter segments and assesses database matches for each segment. (3) The spatial restraint method generates a series of restrictions for the target sequence, such as bond lengths or angles, which are
Equation 2: Example scoring function for virtual docking, of the form ΔGbind = Wcoul·Ecoul + WvdW·EvdW + Whbond·Ehbond + Wsolv·Esolv + Wtor·Ntor. Wf is a scalar that defines the contribution of each force to the calculated binding energy, and the five forces are Coulomb's law, van der Waals, hydrogen bonding, solvation-desolvation energy, and torsional penalties introduced by rotatable bonds.
used to optimize the model. In addition, further computation is carried out to model protein loops and side chains (Martí-Renom et al., 2000). To validate the quality of the models, Ramachandran plots are useful in identifying stereochemical errors (Muhammed & Aki-Yalcin, 2019).
"One advantage of HTS and phenotypic screening over rational drug discovery is that the structure of the target compound is not necessary to discover a lead compound, so the structure determination phase is bypassed, allowing more time and energy to be spent screening compounds."
Many programs are available for homology modeling in both software and web-based forms. Each program utilizes one of the three methods of model-building, or a hybrid thereof. Generally, there is no consensus as to which program is best, as most produce similar results under similar conditions; a review by Dolan et al. found I-Tasser to have the best usability, while Muhammed & Aki-Yalcin found MODELLER to be the best software tool. Although it is faster and more efficient than experimental structural determination, homology modeling has several drawbacks. As it relies on previously determined protein structures, the quality of protein models decreases when there are no closely analogous templates. Additionally, it cannot account for complex quaternary structures (Bordoli et al., 2009). Therefore, homology modeling is best used either in close conjunction with experimental methods or as a first step in the process of rational drug design.
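The Ramachandran check mentioned above for validating NMR and homology models can be scripted with Biopython. The sketch below assumes a hypothetical coordinate file named model.pdb; phi/psi pairs falling far outside the allowed regions flag stereochemical errors.

    import math
    from Bio.PDB import PDBParser, PPBuilder

    structure = PDBParser(QUIET=True).get_structure("model", "model.pdb")  # hypothetical file
    angles = []
    for peptide in PPBuilder().build_peptides(structure):
        for phi, psi in peptide.get_phi_psi_list():  # radians; None at chain termini
            if phi is not None and psi is not None:
                angles.append((math.degrees(phi), math.degrees(psi)))
    # Scatter-plotting `angles` (phi on x, psi on y) produces the Ramachandran plot.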
Computer Simulations

The development of a compound that binds to and elicits a functional change in a drug target is a time-consuming and labor-intensive process. In the case of laboratory-based drug discovery paradigms like high-throughput and phenotypic screening, hit identification (identifying compounds that bind to the region of interest)
and lead optimization (the process required to make a molecule that binds strongly to the region of interest) require a lot of purified protein and the purchase of a large number of compounds (Takenaka, 2001). One advantage of HTS and phenotypic screening over rational drug discovery is that the structure of the drug target is not necessary to discover a lead compound, so the structure determination phase is bypassed, allowing more time and energy to be spent screening compounds (Lage et al., 2018). However, most researchers agree that the process of structure-based drug design decreases the time, money, and resources necessary to develop a drug (Batool et al., 2019; Macalino et al., 2015; Patwardhan et al., 2008). This section will explore the most common computer simulations used to discover lead compounds -- virtual docking, molecular dynamics, and techniques to combine docked fragments into a lead compound. The desired result is the identification of a promising, synthesizable compound that can be tested against the target. If the synthesized molecule does not have the desired outcome, these simulations may need to be repeated, or alternate methods may need to be employed.

Virtual Docking

Virtual docking is a technique that takes the place of high-throughput screening in the process of rational drug design. Once a structure of the drug target has been resolved, it is converted into a Protein Data Bank (PDB) file that specifies the coordinates of every atom in the molecule. The basic principle behind virtual docking is that the three-dimensional structures of a drug target and a library of drug-like or fragment-like molecules can be used to calculate the free energy change
between unbound and bound states. The lower the change in free energy between these states, the stronger the association between the target and molecule (Kontoyianni, 2017). One of the first steps in running a virtual docking experiment is deciding what areas of the target to search. Screening the entire protein is computationally demanding and slow, so refining the search to a particular area of functional importance is necessary to speed up the process and reduce non-specific binding. One approach to identifying potential binding pockets is to search for indentations in the protein’s surface. The program CAVITY can search for these binding pockets and produces two quantitative metrics -- ‘ligandability’ and ‘druggability’ -- predicting the efficacy of a virtual screen (Xu et al., 2018). Ligandability refers to the probability of designing a small molecule with high affinity for the binding pocket, and druggability refers to the potential of the pocket to bind drug-like molecules (those with favorable ADME profiles). Another approach to detecting binding pockets is the identification of ‘hot-spot’ residues (Rosell & Fernández-Recio, 2018). Hot-spot residues are the particular amino acids of a protein required to bind its endogenous ligand. These can be identified either experimentally (via X-ray crystallography or NMR) or using molecular dynamics simulations. Once a particular region of the target is identified as a promising candidate for virtual screening, libraries of drugs (or drug fragments) are bound to the target using a scoring function. For virtual screening software, the scoring function is a weighted sum of all contributions to drug binding; an example is given in Equation 2. This function shows that the binding potential of a ligand is the weighted sum of five forces acting between the receptor and ligand: Coulomb (electrostatic) potential, van der Waals potential, hydrogen bonding, solvation energy (the energy of desolvating the binding pocket), and torsions (the number of freely rotatable bonds). Lower scores are preferable because low energy indicates a favorable interaction between the drug and receptor (Hill & Reilly, 2015). One major drawback to many virtual docking programs is that they assume the three-dimensional structure of the target is rigid. However, this is not the case in vivo; the protein samples many different conformations, and
these dynamics are important to the function of the protein and the binding of drugs. Additionally, molecular docking programs use a grid-based approach in which the energy of each atom is pre-calculated at each grid point. This discrete search greatly increases the speed of these programs, allowing many more molecules to be searched, but introduces additional error into the calculation of the binding energy (Kontoyianni, 2017).

Molecular Dynamics

While virtual docking assumes a rigid protein and a discrete number of possible configurations of the system, molecular dynamics calculates the trajectory of every atom, so an infinite number of conformations are possible and everything is flexible. This is much more representative of what actually occurs in a cell, but it comes at the cost of increased computational demand and a much slower simulation. While many virtual docking experiments can be completed in days or perhaps a week depending on the size of the library, molecular dynamics simulations can take weeks to months (Goga et al., 2015). The basic idea in a molecular dynamics simulation is to calculate the force on every atom at each time step and use it to update the atoms' velocities and trajectories. Under an applied force, the system tends toward its lowest-energy configuration. This is made explicit by rewriting Newton's second law in terms of the potential energy (Equation 3).
"While virtual docking assumes a rigid protein and a discrete number of possible configurations of the system, molecular dynamics calculates the trajectory of every atom, so an infinite number of conformations are possible and everything is flexible."
Because the force on each atom is the negative gradient of the potential energy, atoms are pulled toward, and spend more time in, regions of low potential energy. Over time, the system will find its global minimum and will sample a set of conformations near this minimum. Examining these conformations gives important information about the dynamics of the system, and these can be used to identify hot-spot residues or generate different starting structures for virtual screening. Running a molecular dynamics simulation of the apo (empty) protein can give important information about the target’s solution behavior when nothing is bound, but these simulations can also be run to examine protein-drug, protein-protein, protein-DNA, and many other interesting interactions (Pan et al., 2019). Running molecular dynamics simulations with a protein and drug can give a more comprehensive view of the binding pose and show conformational changes in the protein when the drug is bound. This technique
Equation 3: Newton’s second law rewritten in terms of the gradient of the potential energy. This is the primary relationship that allows for energy minimization in molecular dynamics.
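The equation image is not reproduced in this text; the standard form of this relationship, consistent with the caption, is

$$F_i = m_i \frac{d^2 \mathbf{r}_i}{dt^2} = -\nabla_i\, U(\mathbf{r}_1, \ldots, \mathbf{r}_N)$$

where U is the potential energy of the system and r_i is the position of atom i.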
"...it is often desirable to analyze the entirety of a molecular dynamics simulation, and there are several techniques for interpreting the dynamics of the system over the course of the simulation."
is also very important for identifying allosteric interactions, in which an interaction outside the functional domain of the target molecule can change the shape of this functional domain (Vettoretti et al., 2016).

The potential energy function minimized in molecular dynamics (Equation 4) sums many of the forces acting on the receptor and/or ligand. The first two terms are the Coulomb (electrostatic) potential and the Lennard-Jones potential, forces which generally facilitate drug binding (as opposed to hindering it). Coulomb’s Law describes the attraction of unlike charges and the repulsion of like charges. The Lennard-Jones potential describes the interactions between an atom’s nucleus and the electrons of an adjacent atom, which creates an equilibrium (energy well). The next three terms can be thought of as penalty functions due to the distortion of bond lengths, bond angles, and dihedral angles. Bond lengths and angles are modeled as harmonic oscillators (Hooke’s Law) and the dihedral angles by a cosine function. The summation of all of these values gives the potential energy of the system, which is minimized over the course of a molecular dynamics simulation (González, 2011).

Once the trajectory file of a molecular dynamics simulation has been generated, there are many analyses that can be performed to interpret the results. Often, it is only the last frame (or last few frames) that are of interest, as is the case with receptor-ligand docking. These frames can easily be extracted and viewed in a molecular graphics program (PyMOL, VMD, UCSF Chimera) that allows for a careful analysis of the final binding pose, which should be near the global minimum if the simulation is run long enough (Salmaso & Moro, 2018). However, it is often desirable to analyze the entirety of a molecular dynamics simulation, and there are several techniques for interpreting the dynamics of the system over its course. Root-mean-square deviation (RMSD) gives a quantitative metric for how much the system moves over the simulation and can be used to interpret the stability of a protein as well as the completeness of a simulation. Another useful metric is root-mean-square fluctuation (RMSF), which measures how much movement is present in each residue over the course of the simulation; this can be useful when trying to identify hot-spot residues and druggable pockets (Zhang et al., 2003). Finally, clustering algorithms can be used to visualize the conformational states sampled over a simulation (Peng et al., 2018).
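To make these two metrics concrete, here is a minimal NumPy sketch (not from the original article) that computes RMSD between two frames and per-atom RMSF over a trajectory; it assumes all frames are already superimposed on a common reference:

```python
import numpy as np

def rmsd(frame_a, frame_b):
    """Root-mean-square deviation between two (n_atoms, 3) coordinate
    arrays; assumes the structures are already aligned."""
    diff = frame_a - frame_b
    return np.sqrt((diff ** 2).sum() / len(frame_a))

def rmsf(trajectory):
    """Per-atom root-mean-square fluctuation for a trajectory array of
    shape (n_frames, n_atoms, 3): the deviation of each atom from its
    mean position, averaged over all frames."""
    mean_positions = trajectory.mean(axis=0)
    squared_dev = ((trajectory - mean_positions) ** 2).sum(axis=2)
    return np.sqrt(squared_dev.mean(axis=0))

# Random data standing in for a real trajectory: 100 frames, 50 atoms.
traj = np.random.rand(100, 50, 3)
print(rmsd(traj[0], traj[-1]))  # drift between first and last frame
print(rmsf(traj)[:5])           # fluctuation of the first five atoms
```

High-RMSF residues flag flexible regions, which is one practical way the hot-spot and druggable-pocket analyses mentioned above are carried out.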
Running a molecular dynamics simulation requires several steps to set up the experimental environment. First, researchers must choose the computing hardware on which the simulation will run. Traditional central processing units (CPUs) were long the main choice and are still the most readily available to researchers; for teams with an adequate budget, supercomputers provide the fastest simulations at great expense; and recently developed graphics processing units (GPUs) have become more attractive due to their fast simulation speed and modest cost (Hollingsworth & Dror, 2018). The second step is the selection of a force field. Classical force fields such as AMBER, CHARMM, and OPLS share most core properties, yet each specializes in certain aspects, such as the modeling of proteins or lipids. Despite the variety of these classical force fields, none of them can be applied to simulations involving changes to covalent bonds; this gap is filled by quantum mechanics/molecular mechanics (QM/MM) simulations and their distinct construction of the force field (Senn & Thiel, 2009). Complementing the force fields are software packages that map the computation onto the hardware; notably, GPU acceleration is only possible with software packages updated to support it. Afterwards, all that remains is to run the simulation and analyze the results.
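To make these setup steps concrete, below is a minimal sketch of an MD run using the open-source OpenMM toolkit, one package of the kind described above (OpenMM itself is not named in the article). The input file name is a placeholder, and the structure is assumed to be already prepared (protonated and solvated):

```python
from openmm.app import PDBFile, ForceField, Simulation, StateDataReporter, PME, HBonds
from openmm import LangevinIntegrator
from openmm.unit import kelvin, picosecond, picoseconds, nanometer

# Load the prepared structure and choose a classical force field.
pdb = PDBFile('target_solvated.pdb')  # placeholder file name
forcefield = ForceField('amber14-all.xml', 'amber14/tip3pfb.xml')

# Build the system: particle-mesh Ewald electrostatics, constrained H bonds.
system = forcefield.createSystem(pdb.topology, nonbondedMethod=PME,
                                 nonbondedCutoff=1*nanometer, constraints=HBonds)

# Langevin dynamics at 300 K with a 2 fs time step.
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
simulation = Simulation(pdb.topology, system, integrator)
simulation.context.setPositions(pdb.positions)

# Energy minimization, then a short production run with periodic logging.
simulation.minimizeEnergy()
simulation.reporters.append(StateDataReporter('log.csv', 1000, step=True,
                                              potentialEnergy=True,
                                              temperature=True))
simulation.step(50000)  # 100 ps at 2 fs per step
```

The same script runs on CPUs or GPUs depending on the platform OpenMM detects, which illustrates the hardware/software pairing discussed above.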
Equation 4: Example potential energy function for molecular dynamics showing Coulomb’s Law, van der Waals interactions, and distortions of bond lengths, bond angles, and dihedral angles. This is the function that is minimized during an MD simulation.
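The equation image is not reproduced in this text; a representative classical force-field potential of the kind the caption describes (exact functional forms vary between force fields) is

$$U = \sum_{\text{bonds}} k_b (r - r_0)^2 + \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2 + \sum_{\text{dihedrals}} \frac{V_n}{2}\left[1 + \cos(n\phi - \gamma)\right] + \sum_{i<j} \left[ \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} + 4\epsilon_{ij}\left( \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{12} - \left(\frac{\sigma_{ij}}{r_{ij}}\right)^{6} \right) \right]$$

with harmonic bond and angle terms, a cosine dihedral term, and Coulomb plus Lennard-Jones nonbonded terms.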
Molecular dynamics simulations are a valuable tool not only in drug development, but also in biochemistry, structural biology, and systems biology, where researchers use them to understand the interactions between molecules within a cell. These simulations offer the most realistic description of molecular interactions but are computationally demanding and can be extremely time-consuming. Minimizing the size of the system while introducing minimal error is therefore a desirable way to reduce simulation time.

Fragment-to-lead techniques

Fragment-based drug discovery has garnered significant attention as a technique to develop novel and potent lead compounds. In this technique, binding cavities and hot-spot residues are identified for virtual screening, and libraries of low-molecular-weight compounds are docked into the binding pocket. Although these compounds do not form as many interactions as drug-like compounds and therefore have lower binding scores individually, growing, linking, and merging these fragments together can generate novel compounds with potent binding affinities. These growing, linking, and merging techniques must be carefully implemented by the researcher to obtain the desired result. When growing a molecule, it is important to test extensions in silico (on the computer) and in vitro. Generally, a fragment is grown only three or four atoms (excluding hydrogens) at a time using computational approaches before it needs to be tested again in the lab (de Souza Neto et al., 2020). Testing in the lab often requires a measurement of binding affinity or resolution of the binding pose using NMR or X-ray crystallography. If the desired effect is achieved, the process can be repeated to continue growing the compound. Additional techniques for growing a fragment include vector tracing through the binding pocket to avoid steric hindrance, docking fragment libraries exclusively into the adjacent part of the binding cavity, and consultations with expert
medicinal chemists to design an extension by hand (Lamoree & Hubbard, 2017). Linking is required to make a drug from two or more non-competitive (non-overlapping) fragments. Although linking fragments may seem like a simple task, substantial problems often arise with this technique. Most importantly, the implementation of a flexible linker greatly increases the entropic penalty of drug binding: the second law of thermodynamics favors increasing entropy, so when a drug with a high number of rotatable bonds binds to a target, it loses flexibility and entropy decreases. Linking fragments is certainly the most attractive technique for quickly increasing potency, but it can lead to problems if the linker is too flexible. Constructing a rigid linker can help constrain the molecule and decrease the entropic penalty (Chung et al., 2009). Merging occurs when two partially competitive (overlapping) fragments are combined into one by fusing their overlapping regions. This is perhaps the most straightforward approach, because the extensions and linker are pre-defined by the fragments and their overlap (de Souza Neto et al., 2020). Developing a lead compound from a library of docked fragments is also a problem well-suited to a machine learning technique known as deep learning. The process can be divided into four steps: (1) large libraries of simplified molecular input line entry system (SMILES) files are fed to the latent space through an encoder to train the system to recognize appropriate syntax, (2) transfer learning is used to input top results from molecular docking experiments, (3) the results generated by the latent space are then filtered by ADME and other quantitative structure-activity relationships, and (4) the results from the latent space are decoded to SMILES for the researcher to visualize (Gupta et al., 2018).
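As an illustration of the kind of filtering applied in step (3), here is a minimal sketch using the open-source RDKit library (not named in the article); Lipinski's rule of five stands in for a fuller ADME/QSAR filter, and the example SMILES strings are arbitrary:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_lipinski(smiles):
    """Crude drug-likeness filter: Lipinski's rule of five."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # reject syntactically invalid SMILES
        return False
    return (Descriptors.MolWt(mol) <= 500
            and Descriptors.MolLogP(mol) <= 5
            and Descriptors.NumHDonors(mol) <= 5
            and Descriptors.NumHAcceptors(mol) <= 10)

# Strings a generative model might emit; one is deliberately invalid.
generated = ['CCO', 'c1ccccc1C(=O)O', 'not_a_molecule']
leads = [s for s in generated if passes_lipinski(s)]
print(leads)
```

A real pipeline would combine several such property filters with predicted binding scores before decoding candidates back to the researcher.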
"Developing a lead compound from a library of docked fragments is also a problem well-suited to a machine learning technique known as deep learning."
Equation 5: Quantification of protein-drug interaction (K_A, association constant; K_D, dissociation constant; k_on, forward rate constant; k_off, reverse rate constant; ΔG, Gibbs free energy change; ΔH, enthalpy change; ΔS, entropy change).
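The equation image is not reproduced in this text; the standard relationships among these quantities, consistent with the caption, are

$$K_A = \frac{k_{on}}{k_{off}} = \frac{1}{K_D}, \qquad \Delta G = -RT \ln K_A = \Delta H - T\Delta S$$

where R is the gas constant and T the absolute temperature.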
"Once drug developers have identified a lead compound, they must synthesize the molecule to carry out further analyses in a process called organic synthesis."
Lead Compound Synthesis and Evaluation

How to approach lead compound synthesis

Synthetic chemists can approach the production of lead compounds in a number of ways. Target-oriented synthesis focuses on producing a single, specified drug-like target molecule (Nadin et al., 2012). The pathway to synthesizing the target is determined via retrosynthesis, an analysis in which chemists theoretically deconstruct a desired product to determine an appropriate starting material and a route to that product. Although straightforward, this method does not cover the ground necessary for drug design, which requires a vast number of molecules to be available in drug-screening libraries and dozens of molecules to be tested against each protein target. Therefore, other synthesis approaches have been developed. Combinatorial synthesis aims to produce a large quantity of chemical products and works to expand the size of drug libraries (Nadin et al., 2012). Diversity-oriented synthesis works in a reagent-to-product direction to expand the diversity of drug-like compounds from a small group of starting materials (Schreiber, 2000). Lead-oriented synthesis, a more tailored approach, creates a set of drug-like compounds with similar chemical properties (Nadin et al., 2012). Function-oriented synthesis, a growing sector of medicinal chemistry, begins with naturally occurring starting materials and modifies them to make optimal therapeutic products (Wender et al., 2008). The use of multiple approaches such as combinatorial, diversity-oriented, lead-oriented, and function-oriented syntheses can expand the realm of available drug-like molecules and deliver pathways to specific leads.

Once drug developers have identified a lead compound, they must synthesize the molecule to carry out further analyses in a process called organic synthesis. While some drug developers may have the resources and knowledge to synthesize their own compounds, others may outsource their chemical syntheses to labs specializing in organic synthesis (Brown & Boström, 2016). The process of organic synthesis begins with retrosynthesis. Whereas drug design broadly considers different compounds as possible drugs, retrosynthesis is a tool within drug design that focuses specifically on the construction of one compound of interest. As such, drug developers typically approach lead compound
synthesis by first conducting a retrosynthetic analysis of the target molecule. Retrosynthetic analysis is one of the most challenging parts of organic synthesis because it depends on the chemist’s expert intuition, developed over years of practice (Almeida & Rodrigues, 2019). Recently, researchers have been investigating ways to automate retrosynthetic analysis using artificial intelligence (AI), as such technology could enable human chemists to see patterns otherwise not visible and devise more efficient paths to products (Almeida & Rodrigues, 2019). One retrosynthetic analysis AI tool being developed is 3N-MCTS, which operates on a “Monte Carlo tree search coupled to deep neural networks” (Almeida & Rodrigues, 2019). A Monte Carlo tree search is an algorithm most commonly used in games to determine the best move after a given turn; it works by selecting, expanding, simulating, and then updating the nodes in a tree to find the move with the highest probability of winning (Sharma, 2018); a generic sketch of this loop appears below. As such, in retrosynthesis, the Monte Carlo tree search allows researchers to determine the best path backwards from a desired product to a feasible starting compound based on known chemical principles. The 3N-MCTS program consists of three layers of neural networks – a method of machine learning in which a program learns to perform tasks after being trained on known data (Hardesty, 2017). The first layer focuses on expansion: the algorithm suggests a multitude of possible transformations working backward from the product to a starting material (Segler et al., 2018). The second layer predicts the feasibility of each proposed reaction from the first neural network (Segler et al., 2018). For these layers, the programmers first associated each reaction center with a set of transformation rules based on transformations seen in retrosynthetic analyses in the organic chemistry literature (Segler et al., 2018). This allows the program to reasonably expand from the target compound.
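To make the select/expand/simulate/update loop concrete, here is a minimal, generic MCTS sketch in Python (a toy illustration, not the 3N-MCTS implementation; the `state` interface with legal_moves(), apply(), is_terminal(), and reward() is a hypothetical stand-in for a game or retrosynthesis state):

```python
import math
import random

class Node:
    """One node in the search tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.untried = list(state.legal_moves())
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        # Upper confidence bound: balances exploiting good branches
        # against exploring rarely visited ones.
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, n_iter=1000):
    root = Node(root_state)
    for _ in range(n_iter):
        node = root
        # 1. Selection: descend via UCB1 until a node with untried moves.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for an untried move.
        if node.untried:
            move = node.untried.pop()
            node = Node(node.state.apply(move), parent=node)
            node.parent.children.append(node)
        # 3. Simulation (rollout): random moves to a terminal state.
        state = node.state
        while not state.is_terminal():
            state = state.apply(random.choice(list(state.legal_moves())))
        reward = state.reward()
        # 4. Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Most-visited child of the root is the recommended move.
    return max(root.children, key=lambda child: child.visits)
```

In 3N-MCTS, the random rollout and expansion steps are replaced by the neural networks described above, which propose and score chemically plausible transformations instead of random moves.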
Figure 7: Parts of an SPR experiment. Changes in the absorbed wavelengths can be used to quantify drug binding. Source: Wikimedia Commons
The program developers also limited the possible number of transformations to 50, since there are generally many ways to feasibly synthesize a compound (Segler et al., 2018). This allows the algorithm to avoid excessively complicated procedures when computing a retrosynthetic analysis. Additionally, the program is trained to determine the probability of each transformation based on a set of rules, and once the sum of the probabilities reaches 0.995, the program halts expansion even if the number of transformations is below 50 (Segler et al., 2018). Finally, the third layer, known as the “rollout phase,” evaluates the position value of the feasible transformations to determine the best one (Segler et al., 2018). Like the first and second layers, the third layer applies a unique set of rules observed in the organic chemistry literature to select the most reasonable transformation (Segler et al., 2018). While this technology has immense promise, it is still in development. Currently, it is not advanced enough to produce retrosynthetic analyses for natural products, as these are highly complex molecules and there is limited data available for training AI programs (Segler et al., 2018). Additionally, while 3N-MCTS can predict stereochemistry, or the 3-D arrangement of atoms in a molecule, it struggles to quantitatively predict enantiomeric and diastereomeric ratios, or the ratios of compounds that exhibit a right- or left-handedness at all or some chiral centers, respectively (Segler et al., 2018). These ratios are critical because many compounds are therapeutic in one enantiomeric form but ineffective or harmful in another, so the synthesis
of drugs often requires significant attention to stereoselectivity. Also, 3N-MCTS cannot predict reaction mechanisms or chemical equilibria that may be significant for a reaction, such as tautomerization (Segler et al., 2018). In tautomerization, a molecule equilibrates between two structural isomers, one of which may be favored and go on to form the major product. Thus, ignoring tautomerization or other equilibria may give results that are contrary to what is observed experimentally. Nevertheless, the integration of AI into synthetic organic chemistry shows great promise for the future of rational drug design. Besides assisting retrosynthetic analysis, computer-assisted organic synthesis software has applications in the prediction of reaction products, the optimization of reaction conditions, and the discovery of novel reactivity. Currently, the prediction of reaction products depends on an experienced chemist’s intuition; however, a “formalization” of this intuition by AI may improve the efficacy of such predictions. One computational tool, known as density functional theory (DFT), is able to generate molecular descriptors, especially electronic ones, that may aid these predictions (Almeida & Rodrigues, 2019). Recent advances in this field have also begun incorporating solvent information and descriptions of other chemical species that may influence a reaction product into the AI algorithms (Almeida & Rodrigues, 2019). However, while
"Besides assisting retrosynthetic analysis, computerassisted organic synthesis software has applications in the prediction of reaction products, the optimization of reaction conditions, and finding novel reactivity."
computational advances have the potential to transform organic synthesis, they are not a replacement for human intuition. Rather, these machine learning techniques may make organic synthesis more accessible to those with less training and improve the overall efficiency of lead compound synthesis. Although the number of known synthetic reactions is constantly growing, medicinal chemists rely on remarkably few reactions to carry out the vast majority of drug syntheses. A literature sweep by Brown and Boström (2016) found that the five most common reactions were amide formations, SNAr reactions, Boc protection or deprotection, ester hydrolysis, and Suzuki–Miyaura coupling. This narrow range of reactions is due to a number of factors, including historical precedence, availability of starting materials, ease of carrying out the reactions, and the diversity of chemical “building blocks” available through these reactions (Boström et al., 2018). In the future, incorporating a more diverse set of reactions could make far more drug-like compounds available for drug design.

Biochemical Assays
"After a compound has been purchased or synthesized, it must be tested against its target in the laboratory. Biochemical assays are a collection of techniques used to quantify drug-target association..."
After a compound has been purchased or synthesized, it must be tested against its target in the laboratory. Biochemical assays are a collection of techniques used to quantify drug-target association; measurements from these assays include equilibria (K_A, K_D, K_I), kinetics (k_on, k_off), and thermodynamics (ΔH, ΔS, ΔG), as well as structural studies to determine the binding pose of a drug. Equation 5 shows the relationships between many of these values. Strong binding constants and favorable thermodynamic performance both indicate a promising candidate compound that should continue to the next phase of drug development, and good values in biochemical assays are often a prerequisite before moving on to cell-based disease models or model organisms (Rishton, 2003). Within this section, several classes of biochemical assays will be discussed: fluorescence and radioligand assays are useful in measuring binding affinity, surface plasmon resonance is ideal for measuring kinetics, calorimetry is used to measure thermodynamics, and X-ray crystallography and NMR are commonly used to determine the structure of a protein-drug complex in vitro. In fluorescence-based assays, a fluorescent
compound (called a fluorophore) is attached to the drug or receptor to provide a robust signal. One type of fluorescence-based assay, called a fluorescence anisotropy assay, requires only a fluorophore-labeled drug. In this assay, polarized light is passed through a sample of protein and drug, and the degree of polarization is measured on the other side of the sample. If the drug is not bound to the target, it rotates freely, and the re-emitted light is less polarized. If the drug is bound to the target, it rotates slowly, and the re-emitted light is more polarized (Lea & Simeonov, 2011). Therefore, measuring the polarization of re-emitted light can give information about how much drug is bound to the target and can be used to calculate the association constant. However, a limitation of fluorescence anisotropy is that it cannot delineate specific binding (drug bound to the target’s functional domain) from nonspecific binding (drug bound to nonfunctional domains of the target). Förster resonance energy transfer (FRET) is another fluorescence-based assay that measures only specific binding. In this technique, two fluorophores with overlapping absorption and emission spectra are used to measure drug-target interactions -- one fluorophore is attached to the drug and the other is attached to the functional domain of the target. Energy transfer between these fluorophores is only possible when they are in close proximity (on the order of 10 nanometers or less), so a signal will only be detected if the drug is binding to the target’s functional domain (Stoddart et al., 2016). A significant limitation of fluorescence-based assays is that attaching a fluorophore to a drug and/or receptor changes its chemical and physical characteristics, making the measured binding constants unreliable. Radioligand assays overcome this limitation by synthesizing the drug from isotopically enriched starting materials. Radioactive isotopes give a robust signal but do not change the chemical properties of the compound (the electron density around the drug remains the same) (Bylund & Toews, 1993). Surface plasmon resonance (SPR) is a label-free technique (unaltered protein and drug) used to measure binding constants as well as the kinetics of drug-target association and dissociation. There are four basic components to SPR: a light source, a prism, a gold sensor chip, and a detector. Light is passed through the prism and focused onto the sensor chip where specific wavelengths are absorbed; this creates a dip in the reflected spectrum incident on the
Figure 8: Part of the GPCR signaling mechanism. cAMP is the target second messenger when an activated GPCR is bound to a Gs protein (Source: Wikimedia Commons).
detector. To measure ligand binding, the drug target (e.g., a protein) is immobilized on the gold sensor chip and the drug is passed over the chip in a water-buffer solution. Changes in the absorption spectrum are detected and can then be used to calculate binding constants as well as the rate constants for association and dissociation (Wong & Olivo, 2014). Calorimetry refers to a collection of techniques used to study the thermodynamics of chemical reactions (ΔH, ΔS, ΔG). Thermodynamics contribute substantially to drug binding, and knowing these values can help researchers understand and optimize their drug’s binding mechanism. Two types of calorimetry -- differential scanning and isothermal titration -- are commonly used to examine the thermodynamics of drug binding. In differential scanning calorimetry, changes in the denaturation temperature of protein targets (the temperature at which the protein unfolds) are detected at different drug concentrations. As a drug is added to the protein solution, it makes stabilizing interactions and raises the denaturation temperature. This increase can then be converted to thermodynamic values for protein-drug association (Weber & Salemme, 2003). In isothermal titration calorimetry, a drug is slowly added to an insulated well containing protein in a buffered solution. As the drug is added and binds, small amounts of heat are released or absorbed, and the instrument measures the heat that must be added or removed to keep the solution at constant temperature. From this measurement, researchers can calculate the thermodynamics and association constant
of the drug-protein interaction (Ward & Holdgate, 2001). In the process of rational drug design, it is extremely important to validate the results of computer simulations with laboratory experiments. In computer-aided drug design, the binding pose of a potential drug is resolved in molecular docking and refined in molecular dynamics, but the binding pose must also be determined through structural analysis in the lab. Co-crystallization of a protein and drug can give a detailed structure of the drug bound to the target in X-ray crystallography studies (Yin et al., 2014). In NMR, shifts in resonance peaks can be used to resolve the binding pose as well as the binding affinity and dynamics of drug-target association and dissociation (Meyer & Peters, 2003).

Cell-Based Assays

Cell-based assays examine the effects of a drug on entire cells by measuring levels of cellular activities and properties like signal transmission, cytotoxicity, or cell proliferation. These assays are generally categorized by the cellular property evaluated, and include cell viability assays and cell migration assays. Cell viability assays are used to detect whether a drug molecule facilitates cell proliferation or displays cytotoxic effects. Methods include, but are not limited to, the dye exclusion assay and the ATP assay; however, regardless of which is used, all cell viability assays measure the number of
"Cell-based assays examine the effects of a drug on entire cells by measuring levels of cellular activities and properties like signal transmission, cytotoxicity, or cell proliferation."
"Cell migration assay, also called a scratch assay or a wound healing assay, is used to study cell migration or cell-cell interaction."
viable cells at the beginning and the end of the experiment. One of the earliest dye exclusion methods is the trypan blue dye exclusion assay, in which dead cells appear blue while viable cells remain unstained. This assay is based on the principle that only dead cells take up trypan blue: lacking an intact cell membrane, they cannot exclude macromolecules like trypan blue from the cell (Stoddart, 2011). The ATP assay determines the number of viable cells in culture by quantifying ATP, which indicates the presence of metabolically active cells. The ATP readout is directly proportional to the number of viable cells, allowing researchers to monitor the growth or decline of the cell population and evaluate the drug's effect. The cell migration assay, also called a scratch assay or wound healing assay, is used to study cell migration or cell-cell interactions. The basic principle is to make a scratch, either chemically or physically, in a cell monolayer and observe how the cells migrate to close the gap. The formula to quantify cell migration (Equation 6) is straightforward:
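The equation image is not reproduced in this text; consistent with the Equation 6 caption, the rate of migration takes the form

$$R_M = \frac{W_i - W_f}{t}$$

where t, the elapsed time between the two width measurements, is an assumed symbol here (the caption defines only R_M, W_i, and W_f).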
Efforts are being made to optimize the interpretation of cell migration assays by modifying this basic formula. Direct rate average measurement takes the average wound-healing rate across different spots at a defined time, R̄_plate, and then accumulates R̄_plate over different time points to derive the total wound-healing rate, R̄_total. Regression rate average measurement first obtains R_spot, the slope of a linear regression of wound distance at a specific spot as a function of time, and takes the average of R_spot over all observed spots as R̄_total. Average distance regression rate measurement averages the wound widths W̄ of all spots, plots the change in W̄ as a function of time, and uses linear regression to obtain R̄_total. A previous study indicated that direct rate average and average distance regression rate are more resistant to outliers, whereas regression rate average is more sensitive to them (Dhillon et al., 2017). Scientists can thus choose the measurement best suited to their needs and interpret results more accurately. Nevertheless, cell-based assays cannot be simply categorized, owing to the astonishing
variety of cellular activities they probe. An example of this diversity is the set of assays for GPCR activity. G-protein-coupled receptors (GPCRs) are important signal transducers involved in many biological processes, like hormone response and cell-cell communication. Given the significance of their function, GPCRs are popular targets: they are targeted by many best-selling drugs and by approximately 34% of all FDA-approved drugs (Hauser et al., 2018). To test drug efficacy, many cell-based assays for GPCRs have been developed and tailored to different situations. GPCRs bind to and act through trimeric G proteins, which come in three types: Gi, Gs, and Gq. When a ligand binds to a GPCR and causes conformational changes, the corresponding α subunit of each kind of G protein triggers a different intracellular signaling response and induces changes in the abundance of second messengers like cyclic adenosine monophosphate (cAMP), Ca^(2+), and phosphatidylinositol. For the different signaling pathways, there exist specific cell-based assays to test how the ligand (drug) affects the intracellular signaling process.
Numerous reagent kits are available to measure intracellular cAMP levels. The majority of these assays detect intracellular cAMP through antibodies that specifically recognize it; the GloSensor cAMP assay by Promega, as an exception, utilizes a luciferase biosensor (Wang et al., 2004). A biosensor with a cAMP-binding domain is fused to a luciferase molecule, which is activated when a cAMP molecule binds to the biosensor. Luciferases are a class of oxidative enzymes that produce bioluminescence, so the GloSensor assay measures cAMP levels by quantifying the light output of the activated luciferase biosensor, which correlates positively with cAMP concentration. When GPCRs bind to Gi or Gq proteins, they affect intracellular Ca^(2+) levels, and two types of mature cell-based assays are generally available to detect these changes. Photoprotein-based assays utilize a Ca^(2+)-sensitive photoprotein like aequorin, which is isolated from a jellyfish species, to measure intracellular Ca^(2+) levels. The aequorin molecule contains Ca^(2+)-binding sites, and in certain conditions the binding of Ca^(2+) ions causes conformational changes that induce the emission of photons (Ma et al., 2017). Since the introduction of synthetic Ca^(2+)-sensitive fluorescent indicators, the second major type of assay, they
Equation 6: R_M represents the rate of cell migration, and W_i and W_f are the initial and final wound widths, respectively.
have become extensively used, especially in high-throughput studies. Common Ca^(2+) fluorescent indicators incorporate Ca^(2+) chelators (small molecules that bind very tightly to Ca^(2+) ions) with a fluorescent moiety. The binding of Ca^(2+) to the indicator alters the configuration of the fluorescent moiety, which either increases or decreases the fluorescence intensity drastically. There is a vast diversity of synthetic fluorescent indicators that vary in excitation and emission wavelengths, and thus in emission colors. Compared to the earlier photoprotein-based assays, Ca^(2+)-sensitive fluorescent indicators facilitate high-throughput studies of GPCR-triggered Ca^(2+) changes due to their ease of use, an advantage further amplified by the development of a complementary instrument, the fluorometric imaging plate reader (FLIPR).
Conclusion

Although the processes of target identification, design, and synthesis are lengthy and labor-intensive steps in producing a therapeutic compound, the drug development process is far from over. Drug development can be broken down into three sections: preclinical work, clinical trials, and manufacturing. The preclinical stage encompasses everything that determines whether the drug is ready to be tested in the clinical environment. This includes all of the steps outlined in this paper, as well as the completion of toxicology and pharmacology reports, substantial testing in cell and animal disease models, and a manufacturing report on how the drug could reliably be produced if approved. In the US, all of this data is included in the Investigational New Drug (IND) application sent to the Food and Drug Administration (FDA); similar processes with different nomenclature are followed in other countries. Once approval is gained, the drug can then move into the clinical trials phase (Van Norman, 2016). In the clinical trials stage, the drug is tested on human subjects to determine its safety and efficacy if it were to be made available to the public. In the United States, this stage can be further divided into phase I, phase II, phase III, and potentially phase IV if the FDA deems further testing necessary following the conclusions of the phase III trials (Van Norman, 2016). Phase I clinical trials ensure that the product is safe for further testing and use only a small group of subjects, anywhere from 15 to 80 patients, so that few people are exposed
if the drug causes adverse effects; these trials usually last from several months to a year. Next, phase II trials are conducted to ensure that the drug has the necessary concentration, biological interactions, and other properties for sufficient therapeutic efficacy, as measured by some disease biomarker or reduction in symptoms; these trials use anywhere from 100 to 500 subjects. Phase II trials usually last around two years because the metrics recorded are expanded from those studied in the previous stage, and the larger sample size means scientists need more time to recruit participants and analyze the data. Finally, the drug enters the phase III trial, which consists of a comprehensive review of both the safety and efficacy of the drug to build a strong case for FDA approval when the application is submitted. Because this is the last step before regulatory approval, sample sizes can range anywhere from 1,000 to 5,000 subjects, and the trial can take anywhere from one to four years to complete, depending on when sufficient evidence is found to make a compelling case for approval. Once these tests are completed, a portfolio of evidence for safety and efficacy from these studies is sent to the FDA; this report is known as a New Drug Application (NDA) (Mohs & Greig, 2017). Once the drug is accepted by the FDA, it finally moves to the manufacturing process, where the pharmaceutical company decides how it will produce the drug. The manufacturing process is highly dependent on the particular drug being produced and can require several different processes, such as cooling, solvent extraction, powder feeding and blending, milling, granulation, and hot extrusion (Mohs & Greig, 2017). As outlined, the drug development timeline is not just long but also extremely costly for pharmaceutical companies. Of every 10,000 drugs that enter the preclinical stage, on average only one achieves FDA approval (Dimachkie Masri et al., 2012). Therefore, companies identify many potential points for outsourcing that can help them cut down on costs and time. One particularly popular option is contract research organizations (CROs). While focusing initially on the clinical trials stage of drug development, CROs have since expanded to offer a wide range of services that span the entire drug development process, including data analytic services, toxicology testing, development of animal models for preclinical testing, clinical trial recruitment and management, manufacturing consulting services, and much
"While focusing initially on the clinical trials stage of drug development, CROs have since expanded to offer a wide range of services that span the entire drug development process, including data analytic services, toxicology testing, development of animal models for preclinical testing, clinical trials recruitment and management, manufacturing consulting services, and much more."
more (Getz et al., 2014). Other companies provide software services that integrate artificial intelligence and machine learning into drug development operations, performing many of the tasks necessary in the preclinical process and allowing researchers to reduce time spent on testing. These industries and many others have benefited from the complexity of the drug development process and leveraged their expertise to cut down on the timeline and related costs of producing drugs. However, despite the great increases in speed and resource efficiency offered by rational drug design, overall approval rates for clinical trials have not increased substantially alongside these techniques. This is largely due to the heterogeneity of disease mechanisms and the failure of any one drug target to offer a solution for all individuals with a specific set of symptoms (Dugger et al., 2018). Targeting a particular protein or biomolecule for drug development assumes that a disease depends on a single molecule; in reality, many diseases are polygenic (depending on many genes), and each individual with a common set of symptoms will have a unique genetic and molecular profile determining their responsiveness to a drug. Future efforts in drug development must not only focus on the molecular pathways altered in a disease population but must also consider each patient individually. As genetic testing in the clinic continues to grow, the individuals chosen for a particular clinical trial and the target population of a particular compound will likely become more specific. The promise of rational drug design is that the synthesis of a lead compound will be sped up to such a degree that many drug compounds may be created for a particular disorder. These drugs may then be matched to patients based on their individual genetic and laboratory profiles. Although rational drug design has not significantly increased the rate of FDA drug approval, these techniques are a triumph of modern biology, chemistry, physics, and computer science. They have facilitated extensive interdisciplinary collaboration and have revealed intimate links between the physical and biological sciences. Future efforts to characterize biomolecules will continue to enrich our understanding of disease, generate more drug candidates, and help tailor medicines to the individual. With growing advancements in this field, a world where a medication is created for an individual patient may not be too far away.

References
Almeida, A., & Rodrigues, T. (2019). Synthetic Organic Chemistry Driven by Artificial Intelligence. Nature Reviews Chemistry, 3. https://doi.org/10.1038/s41570-019-0124-0 Anderson, A. (2003). The Process of Structure-Based Drug Design. Chemistry & Biology. https://doi.org/10.1016/j. chembiol.2003.09.002 Avram, S., Halip, L., Curpan, R., & Oprea, T. I. (2020). Novel drug targets in 2019. Nature Reviews Drug Discovery, 19(5), 300–300. https://doi.org/10.1038/d41573-020-00052-w Batool, M., Ahmad, B., & Choi, S. (2019). A Structure-Based Drug Discovery Paradigm. International Journal of Molecular Sciences, 20(11), 2783. https://doi.org/10.3390/ijms20112783 Bordoli, L., Kiefer, F., Arnold, K., Benkert, P., Battey, J., & Schwede, T. (2009). Protein structure homology modeling using SWISS-MODEL workspace. Nature Protocols, 4(1), 1–13. https://doi.org/10.1038/nprot.2008.197 Boström, J., Brown, D. G., Young, R. J., & Keserü, G. M. (2018). Expanding the medicinal chemistry synthetic toolbox. Nature Reviews Drug Discovery, 17(10), 709–727. https://doi. org/10.1038/nrd.2018.116 Bylund, D. B., & Toews, M. L. (1993). Radioligand binding methods: Practical guide and tips. American Journal of Physiology-Lung Cellular and Molecular Physiology, 265(5), L421–L429. https://doi.org/10.1152/ajplung.1993.265.5.L421 Carroni, M., & Saibil, H. R. (2016). Cryo electron microscopy to determine the structure of macromolecular complexes. Methods (San Diego, Calif.), 95, 78–85. https://doi. org/10.1016/j.ymeth.2015.11.023 Chen, C., Huang, H., & Wu, C. H. (2017). Protein Bioinformatics Databases and Resources. Methods in Molecular Biology (Clifton, N.J.), 1558, 3–39. https://doi.org/10.1007/978-1-49396783-4_1 Cheng, Y., Glaeser, R. M., & Nogales, E. (2017). How CryoEM Became so Hot. Cell, 171(6), 1229–1231. https://doi. org/10.1016/j.cell.2017.11.016 Chung, S., Parker, J. B., Bianchet, M., Amzel, L. M., & Stivers, J. T. (2009). Impact of linker strain and flexibility in the design of a fragment-based inhibitor. Nature Chemical Biology, 5(6), 407–413. https://doi.org/10.1038/nchembio.163 Dhillon, P. K., Li, X., Sanes, J. T., Akintola, O. S., & Sun, B. (2017). Method comparison for analyzing wound healing rates. Biochemistry & Cell Biology, 95(3), 450–454. https://doi. org/10.1139/bcb-2016-0163 Dimachkie Masri, M., Ramirez, B., Popescu, C., & Reggie, E. M. (2012). Contract research organizations: An industry analysis. International Journal of Pharmaceutical and Healthcare Marketing, 6(4), 336–350. https://doi. org/10.1108/17506121211283226 Dolan, M. A., Noah, J. W., & Hurt, D. (2012). Comparison of Common Homology Modeling Algorithms: Application of User-Defined Alignments. In A. J. W. Orry & R. Abagyan (Eds.), Homology Modeling: Methods and Protocols (pp. 399–414). Humana Press. https://doi.org/10.1007/978-1-61779-5886_18 Doogue, M. P., & Polasek, T. M. (2013). The ABCD of clinical pharmacokinetics. Therapeutic Advances in Drug Safety, 4(1), 5–7. https://doi.org/10.1177/2042098612469335
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Drug Approvals—From Invention to Market...12 Years! (n.d.). MedicineNet. Retrieved July 23, 2020, from https://www. medicinenet.com/script/main/art.asp?articlekey=9877 Dugger, S. A., Platt, A., & Goldstein, D. B. (2018). Drug development in the era of precision medicine. Nature Reviews Drug Discovery, 17(3), 183–196. https://doi. org/10.1038/nrd.2017.226 Englander, J., Cohen, L., Arshava, B., Estephan, R., Becker, J. M., & Naider, F. (2006). Selective labeling of a membrane peptide with15N-amino acids using cells grown in rich medium. Biopolymers, 84(5), 508–518. https://doi.org/10.1002/ bip.20546 Fleming, A. (1932). Lysozyme. Proceedings of the Royal Society of Medicine, 26(2), 71–84. https://doi. org/10.1177/003591573202600201 Gardner, J. (2020, March 3). New estimate puts cost to develop a new drug at $1B, adding to long-running debate. BioPharma Dive. https://www.biopharmadive.com/news/ new-drug-cost-research-development-market-jamastudy/573381/ Gashaw, I., Ellinghaus, P., Sommer, A., & Khusru, A. (2011). What makes a good drug target? Drug Discovery Today, 16(23–24), 1037–1043. https://doi.org/10.1016/j. drudis.2011.12.008 Getz, K. A., Lamberti, M. J., & Kaitin, K. I. (2014). Taking the Pulse of Strategic Outsourcing Relationships. Clinical Therapeutics, 36(10), 1349–1355. https://doi.org/10.1016/j. clinthera.2014.09.008 Goga, N., Marin, I., Vasilateanu, A., Pavaloiu, I.-B., Kadiri, K. O., & Awodele, O. (2015). Improved GROMACS algorithms using the MPI parallelization. 2015 E-Health and Bioengineering Conference (EHB), 1–4. https://doi.org/10.1109/ EHB.2015.7391443 González, M. A. (2011). Force fields and molecular dynamics simulations. École Thématique de La Société Française de La Neutronique, 12, 169–200. https://doi.org/10.1051/ sfn/201112009 Guengerich, F. P. (2011). Mechanisms of Drug Toxicity and Relevance to Pharmaceutical Development. Drug Metabolism and Pharmacokinetics, 26(1), 3–14. https://doi. org/10.2133/dmpk.DMPK-10-RV-062 Hardesty, L. (2017, April 14). Explained: Neural networks. MIT News On Campus and Around the World. https://news.mit. edu/2017/explained-neural-networks-deep-learning-0414 Hauser, A. S., Chavali, S., Masuho, I., Jahn, L. J., Martemyanov, K. A., Gloriam, D. E., & Babu, M. M. (2018). Pharmacogenomics of GPCR Drug Targets. Cell, 172(1), 41-54.e19. https://doi. org/10.1016/j.cell.2017.11.033 Hill, A. D., & Reilly, P. J. (2015). Scoring Functions for AutoDock. In T. Lütteke & M. Frank (Eds.), Glycoinformatics (Vol. 1273, pp. 467–474). Springer New York. https://doi.org/10.1007/978-14939-2343-4_27 Hollingsworth, S. A., & Dror, R. O. (2018). Molecular Dynamics Simulation for All. Neuron, 99(6), 1129–1143. https://doi. org/10.1016/j.neuron.2018.08.011 Jacob, F., & Monod, J. (n.d.). Genetic regulatory mechanisms in
SUMMER 2020
the synthesis of proteins. 39. Jones, G., Willett, P., Glen, R. C., Leach, A. R., & Taylor, R. (1997). Development and validation of a genetic algorithm for flexible docking11Edited by F. E. Cohen. Journal of Molecular Biology, 267(3), 727–748. https://doi.org/10.1006/ jmbi.1996.0897 Kardos, N., & Demain, A. L. (2011). Penicillin: The medicine with the greatest impact on therapeutic outcomes. Applied Microbiology and Biotechnology, 92(4), 677–687. https://doi. org/10.1007/s00253-011-3587-6 Kempner, E. S. (2011). Direct effects of ionizing radiation on macromolecules. Journal of Polymer Science Part B: Polymer Physics, 49(12), 827–831. https://doi.org/10.1002/polb.22250 Kim, T.-R., Oh, S., Yang, J. S., Lee, S., Shin, S., & Lee, J. (2012). A simplified homology-model builder toward highly proteinlike structures: An inspection of restraining potentials. Journal of Computational Chemistry, 33(24), 1927–1935. https://doi. org/10.1002/jcc.23024 Kontoyianni, M. (2017). Docking and Virtual Screening in Drug Discovery. In I. M. Lazar, M. Kontoyianni, & A. C. Lazar (Eds.), Proteomics for Drug Discovery (Vol. 1647, pp. 255–266). Springer New York. https://doi.org/10.1007/978-1-4939-72012_18 Lage, O., Ramos, M., Calisto, R., Almeida, E., Vasconcelos, V., & Vicente, F. (2018). Current Screening Methodologies in Drug Discovery for Selected Human Diseases. Marine Drugs, 16(8), 279. https://doi.org/10.3390/md16080279 Lamoree, B., & Hubbard, R. E. (2017). Current perspectives in fragment-based lead discovery (FBLD). Essays in Biochemistry, 61(5), 453–464. https://doi.org/10.1042/ EBC20170028 Lea, W. A., & Simeonov, A. (2011). Fluorescence polarization assays in small molecule screening. Expert Opinion on Drug Discovery, 6(1), 17–32. https://doi.org/10.1517/17460441.20 11.537322 Lindsay, M. A. (2003). Target discovery. Nature Reviews Drug Discovery, 2(10), 831–838. https://doi.org/10.1038/nrd1202 Lobanovska, M., & Pilla, G. (2017). Penicillin’s Discovery and Antibiotic Resistance: Lessons for the Future? The Yale Journal of Biology and Medicine, 90(1), 135–145. Ma, Q., Ye, L., Liu, H., Shi, Y., & Zhou, N. (2017). An overview of Ca 2+ mobilization assays in GPCR drug discovery. Expert Opinion on Drug Discovery, 12(5), 511–523. https://doi.org/1 0.1080/17460441.2017.1303473 Macalino, S. J. Y., Gosu, V., Hong, S., & Choi, S. (2015). Role of computer-aided drug design in modern drug discovery. Archives of Pharmacal Research, 38(9), 1686–1701. https:// doi.org/10.1007/s12272-015-0640-5 Martí-Renom, M. A., Stuart, A. C., Fiser, A., Sánchez, R., Melo, F., & Šali, A. (2000). Comparative Protein Structure Modeling of Genes and Genomes. Annual Review of Biophysics and Biomolecular Structure, 29(1), 291–325. https://doi. org/10.1146/annurev.biophys.29.1.291 Medicine (US), I. of. (2012). The Current Landscape. In Genome-Based Therapeutics: Targeted Drug Discovery and Development: Workshop Summary. National Academies Press (US). https://www.ncbi.nlm.nih.gov/books/NBK116445/
353
Meyer, B., & Peters, T. (2003). NMR Spectroscopy Techniques for Screening and Identifying Ligand Binding to Protein Receptors. Angewandte Chemie International Edition, 42(8), 864–890. https://doi.org/10.1002/anie.200390233 Milne, J. L. S., Borgnia, M. J., Bartesaghi, A., Tran, E. E. H., Earl, L. A., Schauder, D. M., Lengyel, J., Pierson, J., Patwardhan, A., & Subramaniam, S. (2013). Cryo-electron microscopy—A primer for the non-microscopist. FEBS Journal, 280(1), 28–45. https:// doi.org/10.1111/febs.12078 Mohs, R. C., & Greig, N. H. (2017). Drug discovery and development: Role of basic biological research. Alzheimer’s & Dementia : Translational Research & Clinical Interventions, 3(4), 651–657. https://doi.org/10.1016/j.trci.2017.10.005 Muhammed, M. T., & Aki‐Yalcin, E. (2019). Homology modeling in drug discovery: Overview, current applications, and future perspectives. Chemical Biology & Drug Design, 93(1), 12–20. https://doi.org/10.1111/cbdd.13388 Nadin, A., Hattotuwagama, C., & Churcher, I. (2012). LeadOriented Synthesis: A New Opportunity for Synthetic Chemistry. Angewandte Chemie International Edition, 51(5), 1114–1122. https://doi.org/10.1002/anie.201105840 Pan, A. C., Jacobson, D., Yatsenko, K., Sritharan, D., Weinreich, T. M., & Shaw, D. E. (2019). Atomic-level characterization of protein–protein association. Proceedings of the National Academy of Sciences, 116(10), 4244–4249. https://doi. org/10.1073/pnas.1815431116 Passmore, L. A., & Russo, C. J. (2016). Specimen Preparation for High-Resolution Cryo-EM. In Methods in Enzymology (Vol. 579, pp. 51–86). Elsevier. https://doi.org/10.1016/bs.mie.2016.04.011 Patwardhan, B., Vaidya, A., Chorghade, M., & Joshi, S. (2008). Reverse Pharmacology and Systems Approaches for Drug Discovery and Development. Current Bioactive Compounds, 4(4), 201–212. https://doi.org/10.2174/157340708786847870 Peng, J., Wang, W., Yu, Y., Gu, H., & Huang, X. (2018). Clustering algorithms to analyze molecular dynamics simulation trajectories for complex chemical and biological systems. Chinese Journal of Chemical Physics, 31(4), 404–420. https:// doi.org/10.1063/1674-0068/31/cjcp1806147 Petros, R. A., & DeSimone, J. M. (2010). Strategies in the design of nanoparticles for therapeutic applications. Nature Reviews Drug Discovery, 9(8), 615–627. https://doi.org/10.1038/ nrd2591 Qian, T., Zhu, S., & Hoshida, Y. (2019a). Use of big data in drug development for precision medicine: An update. Expert Review of Precision Medicine and Drug Development, 4(3), 189–200. https://doi.org/10.1080/23808993.2019.1617632 Qian, T., Zhu, S., & Hoshida, Y. (2019b). Use of big data in drug development for precision medicine: An update. Expert Review of Precision Medicine and Drug Development, 4(3), 189–200. https://doi.org/10.1080/23808993.2019.1617632 RACI consortium, the GARNET consortium, Okada, Y., Wu, D., Trynka, G., Raj, T., Terao, C., Ikari, K., Kochi, Y., Ohmura, K., Suzuki, A., Yoshida, S., Graham, R. R., Manoharan, A., Ortmann, W., Bhangale, T., Denny, J. C., Carroll, R. J., Eyler, A. E., … Plenge, R. M. (2014). Genetics of rheumatoid arthritis contributes to biology and drug discovery. Nature, 506(7488), 376–381. https://doi. org/10.1038/nature12873 Reddy, T., & Rainey, J. K. (2010). Interpretation of biomolecular
354
NMR spin relaxation parametersThis paper is one of a selection of papers published in this special issue entitled “Canadian Society of Biochemistry, Molecular & Cellular Biology 52nd Annual Meeting — Protein Folding: Principles and Diseases” and has undergone the Journal’s usual peer review process. Biochemistry and Cell Biology, 88(2), 131–142. https://doi.org/10.1139/O09-152 Rishton, G. M. (2003). Nonleadlikeness and leadlikeness in biochemical screening. Drug Discovery Today, 8(2), 86–96. https://doi.org/10.1016/S1359644602025722 Rosell, M., & Fernández-Recio, J. (2018). Hot-spot analysis for drug discovery targeting protein-protein interactions. Expert Opinion on Drug Discovery, 13(4), 327–338. https://doi.org/10 .1080/17460441.2018.1430763 Salmaso, V., & Moro, S. (2018). Bridging Molecular Docking to Molecular Dynamics in Exploring Ligand-Protein Recognition Process: An Overview. Frontiers in Pharmacology, 9, 923. https://doi.org/10.3389/fphar.2018.00923 Sanchez-Lengeling, B., & Aspuru-Guzik, A. (2018). Inverse molecular design using machine learning: Generative models for matter engineering. Science, 361(6400), 360–365. https:// doi.org/10.1126/science.aat2663 Santos, R., Ursu, O., Gaulton, A., Bento, A., Donadi, R., Bologa, C., Karlsson, A., Al-Lazikani, B., Hersey, A., Oprea, T., & Overington, J. (2017). A Comprehensive Map of Molecular Drug Targets. Nature Reviews Drug Discovery, 19–34. https:// doi.org/10.1038/nrd.2016.230. Schmidt, T., Bergner, A., & Schwede, T. (2014). Modelling three-dimensional protein structures for applications in drug design. Drug Discovery Today, 19(7), 890–897. https://doi. org/10.1016/j.drudis.2013.10.027 Schneider, N., Lowe, D. M., Sayle, R. A., Tarselli, M. A., & Landrum, G. A. (2016). Big Data from Pharmaceutical Patents: A Computational Analysis of Medicinal Chemists’ Bread and Butter. Journal of Medicinal Chemistry, 59(9), 4385–4402. https://doi.org/10.1021/acs.jmedchem.6b00153 Schreiber, S. L. (2000). Target-Oriented and Diversity-Oriented Organic Synthesis in Drug Discovery. Science, 287(5460), 1964–1969. https://doi.org/10.1126/science.287.5460.1964 Schwede, T., Kopp, J., Guex, N., & Peitsch, M. C. (2003). SWISSMODEL: An automated protein homology-modeling server. Nucleic Acids Research, 31(13), 3381–3385. https://doi. org/10.1093/nar/gkg520 Segler, M., Preuss, M., & Waller, M. (2018). Planning Chemical Syntheses with Deep Neural Networks and Symbolic AI. Nature, 555, 589–604. https://doi.org/doi:10.1038/ nature25978 Senn, H. M., & Thiel, W. (2009). QM/MM Methods for Biomolecular Systems. Angewandte Chemie International Edition, 48(7), 1198–1229. https://doi.org/10.1002/ anie.200802019 Sharma, S. (2018, August 1). Monte Carlo Tree Search. Towards Data Science. https://towardsdatascience.com/monte-carlotree-search-158a917a8baa Singh, S., Malik, B. K., & Sharma, D. K. (2006). Molecular drug targets and structure based drug design: A holistic approach. Bioinformation, 1(8), 314–320. https://doi. org/10.6026/97320630001314
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Stoddart, L. A., White, C. W., Nguyen, K., Hill, S. J., & Pfleger, K. D. G. (2016). Fluorescence- and bioluminescence-based approaches to study GPCR ligand binding: Fluorescence and bioluminescence in ligand binding. British Journal of Pharmacology, 173(20), 3028–3037. https://doi.org/10.1111/ bph.13316 Stoddart, M. J. (2011). Cell Viability Assays: Introduction. In M. J. Stoddart (Ed.), Mammalian Cell Viability: Methods and Protocols (pp. 1–6). Humana Press. https://doi. org/10.1007/978-1-61779-108-6_1
Wender, P. A., Verma, V. A., Paxton, T. J., & Pillow, T. H. (2008). Function-Oriented Synthesis, Step Economy, and Drug Design. Accounts of Chemical Research, 41(1), 40–49. https:// doi.org/10.1021/ar700155p Werth, B., & Simon & Schuster. (2014). The billion-dollar molecule: The quest for the perfect drug. Simon & Schuster. Wong, C. L., & Olivo, M. (2014). Surface Plasmon Resonance Imaging Sensors: A Review. Plasmonics, 9(4), 809–824. https://doi.org/10.1007/s11468-013-9662-3
Takenaka, T. (2008). Classical vs reverse pharmacology in drug discovery: CLASSICAL VS REVERSE PHARMACOLOGY. BJU International, 88, 7–10. https://doi.org/10.1111/j.1464410X.2001.00112.x
Wuthrich, K. (1989). Protein structure determination in solution by nuclear magnetic resonance spectroscopy. Science, 243(4887), 45–50. https://doi.org/10.1126/ science.2911719
Tibbitts, J., Canter, D., Graff, R., Smith, A., & Khawli, L. A. (2016). Key factors influencing ADME properties of therapeutic proteins: A need for ADME characterization in drug discovery and development. MAbs, 8(2), 229–245. https://doi.org/10.10 80/19420862.2015.1115937
Xia, X. (2017). Bioinformatics and Drug Discovery. Current Topics in Medicinal Chemistry, 17(15), 1709–1726. https://doi. org/10.2174/1568026617666161116143440
Van Norman, G. A. (2016). Drugs, Devices, and the FDA: Part 1: An Overview of Approval Processes for Drugs. JACC: Basic to Translational Science, 1(3), 170–179. https://doi.org/10.1016/j. jacbts.2016.03.002 Vettoretti, G., Moroni, E., Sattin, S., Tao, J., Agard, D. A., Bernardi, A., & Colombo, G. (2016). Molecular Dynamics Simulations Reveal the Mechanisms of Allosteric Activation of Hsp90 by Designed Ligands. Scientific Reports, 6(1), 23830. https://doi. org/10.1038/srep23830 Wang, H.-W., & Wang, J.-W. (2017). How cryo-electron microscopy and X-ray crystallography complement each other: Cryo-EM and X-Ray Crystallography Complement Each Other. Protein Science, 26(1), 32–39. https://doi.org/10.1002/ pro.3022 Wang, J., Wolf, R. M., Caldwell, J. W., Kollman, P. A., & Case, D. A. (2004). Development and testing of a general amber force field. Journal of Computational Chemistry, 25(9), 1157–1174. https://doi.org/10.1002/jcc.20035
Xu, Y., Wang, S., Hu, Q., Gao, S., Ma, X., Zhang, W., Shen, Y., Chen, F., Lai, L., & Pei, J. (2018). CavityPlus: A web server for protein cavity detection with pharmacophore modelling, allosteric site identification and covalent ligand binding ability prediction. Nucleic Acids Research, 46(W1), W374– W379. https://doi.org/10.1093/nar/gky380 Yin, X., Scalia, A., Leroy, L., Cuttitta, C. M., Polizzo, G. M., Ericson, D. L., Roessler, C. G., Campos, O., Ma, M. Y., Agarwal, R., Jackimowicz, R., Allaire, M., Orville, A. M., Sweet, R. M., & Soares, A. S. (2014). Hitting the target: Fragment screening with acoustic in situ co-crystallization of proteins plus fragment libraries on pin-mounted data-collection micromeshes. Acta Crystallographica Section D Biological Crystallography, 70(5), 1177–1189. https://doi.org/10.1107/ S1399004713034603 Zhang, Z., Shi, Y., & Liu, H. (2003). Molecular Dynamics Simulations of Peptides and Proteins with Amplified Collective Motions. Biophysical Journal, 84(6), 3583–3593. https://doi.org/10.1016/S0006-3495(03)75090-5
Wang, T., Li, Z., Cvijic, M. E., Zhang, L., & Sum, C. S. (2004). Measurement of cAMP for Gαs- and Gαi Protein-Coupled Receptors (GPCRs). In G. S. Sittampalam, A. Grossman, K. Brimacombe, M. Arkin, D. Auld, C. P. Austin, J. Baell, B. Bejcek, J. M. M. Caaveiro, T. D. Y. Chung, N. P. Coussens, J. L. Dahlin, V. Devanaryan, T. L. Foley, M. Glicksman, M. D. Hall, J. V. Haas, S. R. J. Hoare, J. Inglese, … X. Xu (Eds.), Assay Guidance Manual. Eli Lilly & Company and the National Center for Advancing Translational Sciences. http://www.ncbi.nlm.nih.gov/books/ NBK464633/ Ward, W. H. J., & Holdgate, G. A. (2001). 7 Isothermal Titration Calorimetry in Drug Discovery. In Progress in Medicinal Chemistry (Vol. 38, pp. 309–376). Elsevier. https://doi. org/10.1016/S0079-6468(08)70097-3 Watt, G. (2006). Using patient records for medical research. The British Journal of General Practice: The Journal of the Royal College of General Practitioners, 56(529), 630–631. Weber, P. C., & Salemme, F. R. (2003). Applications of calorimetric methods to drug discovery and the study of protein interactions. Current Opinion in Structural Biology, 13(1), 115–121. https://doi.org/10.1016/S0959440X(03)00003-4
SUMMER 2020
355
The Rise of Regenerative Medicine

STAFF WRITERS: BRYN WILLIAMS '23, SUDHARSAN BALASUBRAMANI '22, DANIEL CHO '22, GEORGE SHAN '23, JENNY SONG '23, ARUSHI AGASTWAR (MONTA VISTA HIGH SCHOOL SENIOR)
BOARD WRITERS: NISHI JAIN '21 AND MEGAN ZHOU '21

Cover Image: Regenerative medicine is a powerful new technique for alleviating devastating illnesses. Source: Wikimedia Commons
What is regenerative medicine?

For stealing fire and bringing it to humanity, the titan Prometheus was punished by Zeus. Chained to a rock in the Caucasus Mountains, Prometheus had to endure an eagle eating part of his liver each day; each night, however, his liver would regrow, making his punishment eternal. The myth of Prometheus has inspired artists and scholars alike; for scientists and the medical field, it has become a symbol of the liver's regenerative capacity and serves as inspiration (albeit morbid) for the field of regenerative medicine. Regenerative medicine is a relatively young field that employs biotechnology to bring about the regeneration, replacement, or repair of a tissue or organ. Though it may be a young field, its capacity to advance medical science is tremendous. In instances of serious trauma or disease, transplantation to replace impaired organs is often the only way to help the patient; unfortunately, due to the lack of donors and the dangerous side effects of immunosuppression, transplantation is often as life-limiting as it is lifesaving (Bakalorz et al., 2019).
Regenerative medicine involves several approaches, ranging from gene therapy (a one-time medication that edits the genome of the patient) to cell therapy (a continuous medication that targets a cell's inherent machinery) to tissue engineering. Currently, the progress of this rapidly evolving field is closely associated with stem-cell-based therapy. Stem cells are undifferentiated cells with the ability to divide extensively, self-renew, and differentiate into other types of specialized cells in the body (Bakalorz et al., 2019). There are different classes of stem cells: embryonic, tissue-specific, induced pluripotent stem cells (iPSCs), and mesenchymal stem cells (MSCs) (ISSCR, 2020). As the name suggests, embryonic stem cells are derived from the inner cell masses of blastocysts in the human embryo.
These stem cells are pluripotent, meaning they can give rise to every type of cell in the human body except for the placenta and umbilical cord (ISSCR, 2020). Tissue-specific stem cells are multipotent, meaning that they are more specialized than embryonic stem cells: they can only generate the cell types of the specific tissue from which they are derived. In regenerative medicine, the most commonly used are MSCs – multipotent stem cells that differentiate into bone, cartilage, muscle, and fat cells – and iPSCs – specialized cells that have been reverse engineered into an embryonic-like pluripotent state. MSCs are special because they exist in many different kinds of tissues and organs, such as bone marrow, skin, blood, liver, and adipose tissue; they are therefore observed to have both multipotent and pluripotent potential (Rajabzadeh et al., 2019). The diversity of MSCs makes them well suited to treating various diseases in animals, including diseases that cannot be cured by conventional medicines (Rajabzadeh et al., 2019). Many case studies have shown that transplantation of bone-marrow-derived MSCs (BMSCs) into the heart improves cardiac function during heart failure. Other studies show that umbilical cord blood-derived MSCs are beneficial for human disorders that involve vascular deficits (Roura et al., 2015). In addition, MSCs have provided good resources for wound healing in the skin and teeth as well as for neurodegenerative diseases. iPSCs are similar in diversity but offer the ability to "custom-tailor" cells for the treatment of numerous diseases (Wu and Hochedlinger, 2011).
So far, iPSCs have shown promise in the same areas that MSCs have, and they have also been used in organ tissue modeling to simulate various human diseases for clinical trials (Wu and Hochedlinger, 2011).
Figure 1: The differences between totipotent, pluripotent, and multipotent embryonic stem cells. Source: Wikimedia Commons
Tissue engineering, on the other hand, is a subset of regenerative medicine defined in the early 1990s as an interdisciplinary field focused on assembling new tissues and organs that can improve and restore damaged tissues in patients (Caddeo et al., 2017). It employs a combination of scaffolds, cells, and biologically active molecules to create functional tissues that can either aid or substitute for preexisting tissues (NIH, 2020). Generally, when engineering tissues, a scaffold is created out of either plastic or host proteins (often collagen) to support the cells and the newly formed structure (NIH, 2020). If the generation is successful and growth factors are mixed in with the scaffold, the cells will start to multiply and form around the structure, creating the desired tissue (NIH, 2020). Significant strides toward the use of tissue engineering in patients have been made; currently, several engineered tissues are FDA approved, including artificial cartilage, skin, kidney, and tracheal tissue (NIH, 2020). With over 112,000 patients in the United States on the transplant waiting list, the generation of artificial tissues is a key step in providing treatment to those in need of transplants (HRSA, 2020). Currently, lab-grown tissues represent a small segment of treatment options because they are still experimental and costly; yet the possibilities of tissue transplantation are expanding, and it appears to be a promising future treatment option for those suffering from organ and tissue loss (Mao, 2015; NIH, 2020). In addition to tissue generation for the purpose of transplantation, artificial tissues already serve a secondary role as biosensors, helping to detect and analyze biological and chemical threats as well as to test the toxicity of new medications (NIH, 2020).
"MSCs are special because they exist in many different kinds of tissues and organs, such as bone marrow, skin, blood, liver, and adipose tissue. Therefore, they are observed to have both multipotent and pluripotent potential."
Gene therapy, as applied to regenerative medicine, aims to make site-specific modifications to DNA in order to enable new or improved function in human cells. Viruses are commonly used as delivery vehicles due to their natural ability to infiltrate cells and insert their own genetic material. To create these vectors, scientists modify viral particles to carry the CRISPR machinery, exploiting the inherent strength and robustness of the virus to insert functional, rather than mutated, DNA into the patient's cells (Gonçalves et al., 2017).
Figure 2: Despite ethical concerns raised by some critics, stem cell transplantation has shown promising prospects for curing human diseases that have previously been considered incurable. Source: Wikimedia Commons
The genetic material contained within the viral vector involves several components: the Cas9 protein, capable of cutting genomic DNA at the site of a mutation; template DNA to replace the mutation; and a guide RNA to locate the mutated region (Misra, 2013). After the patient's cells are transfected with the viral vectors, these components are deposited into the cells and work in tandem to fix undesirable DNA. Viruses are the most common method of transporting the adjusted genetic material because of their robustness, but they have their drawbacks: there is always the danger of an immune or inflammatory response, of potentially mutation-inducing effects, and of the gene being expressed for only a short period rather than the patient's entire life (Cong et al., 2013). Along with viral vectors, there is the added possibility of using plasmids to transfect cells with engineered DNA. Plasmids are small, circular, double-stranded bacterial DNA molecules that can be separated from the parent cell and replicate independently (Gonçalves et al., 2017). Plasmid manipulation is accomplished with restriction enzymes that recognize and cut the plasmid at a designated spot, creating "sticky ends" that allow engineered DNA to integrate with the plasmid and settle into the cut site. The plasmid ends are then glued together by the enzyme DNA ligase to form the final payload (Shintani et al., 2015), and the resulting product is used to transfect cells.
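To make that cut-and-paste logic concrete, here is a minimal sketch in Python of a restriction digest followed by ligation. It is an idealized illustration rather than a bioinformatics tool: the enzyme modeled (EcoRI, recognition site GAATTC) and both toy sequences are assumptions chosen for demonstration, and only the top strand is represented.

```python
# Idealized sketch of restriction cloning: an EcoRI-style enzyme cuts the
# plasmid at its recognition site, leaving "sticky ends" that let an insert
# with matching overhangs anneal; DNA ligase (modeled here as simple string
# joining) then seals the backbone. Top strand only; sequences are toys.

ECORI_SITE = "GAATTC"   # EcoRI recognition sequence
CUT_OFFSET = 1          # EcoRI cuts between G and AATTC on the top strand

def digest(plasmid: str) -> tuple[str, str]:
    """Cut a linearized plasmid sequence at its first EcoRI site."""
    i = plasmid.find(ECORI_SITE)
    if i == -1:
        raise ValueError("no EcoRI site found")
    cut = i + CUT_OFFSET
    return plasmid[:cut], plasmid[cut:]   # left arm ends ...G, right arm starts AATTC...

def ligate(left_arm: str, insert: str, right_arm: str) -> str:
    """Join the insert between the two arms (the ligase step, idealized)."""
    return left_arm + insert + right_arm

plasmid = "ATGCCGAATTCGGTTA"           # toy backbone with one EcoRI site
therapeutic_gene = "AATTCTTTGGGAAAG"   # toy insert whose ends mimic EcoRI overhangs

left, right = digest(plasmid)
recombinant = ligate(left, therapeutic_gene, right)
print(recombinant)  # both junctions regenerate the GAATTC site
```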
History of regenerative medicine
World War I and World War II significantly accelerated landmark clinical advances in regenerative medicine, given the large number of wound and trauma victims the conflicts produced. One example, discussed further in the next section, is blood transfusion – technically a cell therapy – which entered routine clinical practice to restore lost blood. In the early 1800s, the British obstetrician James Blundell performed blood transfusion procedures on over ten hemorrhage patients, only five of whom survived (Farmer, Isbister & Leahy, 2014). The discovery of the ABO blood group system in 1900 and the increased need for treating acute hemorrhage during World War I enabled blood transfusion to develop into an established medical practice (Cossu et al., 2018). The procedure then became more customary during the Spanish Civil War and World War II (Farmer, Isbister & Leahy, 2014).
The aftermath of World War II further stimulated the development of bone marrow transplantation in the 1960s. The effects of radiation from atomic bombs and nuclear devices lingered in civilians and caused irradiation-induced diseases (Perry & Linch, 1996). Earlier practices failed due to complications such as the absence of engraftment, graft rejection, and secondary syndromes (Perry & Linch, 1996). These complications were resolved by ensuring an immunological match between host and donor on the one hand, and by the development of immunosuppression on the other (Cossu et al., 2018). Another landmark clinical case is skin transplantation – the process of taking a skin graft from one part of the body and transplanting it to another. Although this modern clinical practice began in the 19th century, its earliest history dates back to ancient India, where gluteal fat and skin grafts were used to treat ear, nose, and lip mutilations (Ameer, Singh, & Kumar, 2013). In the 19th century, attempts were made to revive skin grafting, ranging from small experiments to the partial success of whole-thickness skin transplantation (Ameer, Singh, & Kumar, 2013). In 1869, Jacques-Louis Reverdin reported the first successful skin transplantation, and in 1929 the split-thickness grafting method was published (Shimizu & Kishi, 2012). Currently, skin grafting is used to cover less complicated wounds, most commonly in treating traumatic wounds, scar contracture release, and hair restoration (Shimizu & Kishi, 2012). More recently, gene therapy has expanded into wider clinical usage, showing promise for treating inherited and acquired human diseases (Dunbar et al., 2018).
Figure 3: In the process of artificial cloning, the nucleus from the donor cell is transferred to the enucleated egg cell. Whereas therapeutic cloning results in embryonic stem cells, reproductive cloning entails a genetically identical organism. Source: Wikimedia Commons
In December 1988, the Recombinant DNA Advisory Committee approved the first clinical trial for an early form of gene therapy (Wirth et al., 2013). Then, in 1990, S.A. Rosenberg conducted a trial to treat two melanoma patients using ex vivo modified tumor-infiltrating lymphocytes (Wirth et al., 2013). In the same year, the FDA approved a therapeutic gene therapy trial for patients with ADA deficiency (Wirth et al., 2013). However, this approval was not without risk – along the early path to the development of gene therapies were severe side effects and patient deaths that set the industry back considerably. It is only as a result of modern technology and a better understanding of genetics that gene therapy has started to take off. Today, gene therapy has been approved or is predicted to be approved for treating inherited immune disorders, hemophilia, eye and neurodegenerative disorders, and lymphoid cancers (Dunbar et al., 2018). Human embryonic stem cells (hESCs) are among the most recent additions to the cell therapy arsenal in regenerative medicine. Their promise is that they can renew themselves indefinitely and thus restore cell and tissue functions. After the first derivation of hESC lines in 1998, the FDA approved the first clinical trials of hESC-derived cells for spinal cord injury and macular degeneration in 2009 (Desai et al., 2015). In 2012, retinal pigment epithelial cells derived from hESCs were transplanted into a human patient to restore the retinal epithelial monolayer (Desai et al., 2015).
Despite the ethical and legal debates that this paper will address later, hESCs are said to hold promising prospects for treating lethal diseases in the 21st century.
Landmark lab discoveries

Since the start of the 21st century, the emergence of genome-editing technologies has enabled direct manipulation of the human genome – and this has become one of the most popular sectors of gene therapy. The fundamental basis of gene editing is to precisely introduce a DNA double strand break (DSB) at a target site in order to recruit the endogenous cellular repair machinery. Three common platforms are used to induce site-specific DSBs: zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the most recent CRISPR/Cas system. A ZFN is an artificial endonuclease consisting of a designed zinc finger protein fused to the FokI cleavage domain; it can cleave a chosen DNA target (Kim et al., 1996; Maeder & Gersbach, 2016). Although the function of ZFNs was indicated as early as 1996, only after nearly a decade did scientists successfully target genomic sites with ZFNs (Carroll, 2008; Lombardo et al., 2007; Porteus, 2006; Santiago et al., 2008). In 2009, a new DNA binding domain was reported in a family of proteins called TALEs. Prediction of TALE targets is enabled by the special one-to-one correspondence between certain amino acid patterns in the protein and the target nucleotides in the cell, so researchers could in turn construct TALENs to target designated DSBs (Boch et al., 2009; Christian et al., 2010; Moscou & Bogdanove, 2009).
"Since the start of the 21st century, the emergence of genome-editing technologies has enabled direct manipulation of the human genome – and has become one of the most popular sectors of gene therapy."
Despite extensive studies on ZFNs and TALENs, scientists at the Broad Institute sought a technology that was easier to program, more scalable, and more affordable, and thus developed the CRISPR/Cas system. This genetic tool uses RNAs to direct Cas9 nucleases to a specific sequence and induce the desired DSBs, and it is capable of multiplex genome editing in mammalian cells (Cong et al., 2013).
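The targeting logic of the guide RNA can be illustrated with a short sketch. The following Python snippet is a simplified model assuming the commonly cited SpCas9 rules – a 20-nucleotide spacer followed by an NGG PAM, with a blunt cut about 3 bp upstream of the PAM – and it scans only a single DNA strand; the sequences are invented for illustration.

```python
# Simplified sketch of SpCas9 target recognition: the guide RNA's 20-nt spacer
# pairs with DNA lying immediately 5' of an "NGG" PAM, and Cas9 cuts ~3 bp
# upstream of the PAM. Real guide design also scores off-targets, the opposite
# strand, and chromatin context, none of which is modeled here.

def find_target_sites(dna: str, spacer: str) -> list[int]:
    """Return approximate cut positions for a 20-nt spacer on the given strand."""
    assert len(spacer) == 20, "SpCas9 guides use a 20-nt spacer"
    sites = []
    start = dna.find(spacer)
    while start != -1:
        pam = dna[start + 20 : start + 23]
        if len(pam) == 3 and pam.endswith("GG"):   # NGG PAM check
            sites.append(start + 17)               # blunt cut ~3 bp upstream of PAM
        start = dna.find(spacer, start + 1)
    return sites

spacer = "ACGTACGTACGTACGTACGT"            # toy 20-nt protospacer
dna = "TTT" + spacer + "AGG" + "CCCGGG"    # target followed by an AGG PAM
print(find_target_sites(dna, spacer))      # -> [20]
```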
"Typically, when employing the CRISPR/Cas9 system, gene therapists use viral vectors as the mode of drug delivery."
A more specialized iteration of gene therapy has gradually gained attention since the 1990s, when academia saw the first issue of Human Gene Therapy (HGT), the first peer-reviewed journal covering research, methods, and clinical advances in gene therapy. Extensive research focused on gene transfer, at first just in mice or in human cells, until 1993, when the first clinical protocol of human gene therapy was published in the journal (Bordignon, 1993). For instance, severe combined immunodeficiency (SCID) is caused by genetic mutations and characterized by loss of function in immune T cells and B cells, and its ADA (adenosine deaminase) deficient variant was the first genetic disorder for which researchers sought treatment by human somatic cell gene therapy. Bordignon proposed in this paper a gene therapy method for ADA-SCID that transferred ADA genes into the patient's bone marrow cells and peripheral blood lymphocytes. Typically, when employing the CRISPR/Cas9 system, gene therapists use viral vectors as the mode of drug delivery. Depending on the specific demand, adenoviral, adeno-associated viral, and retroviral vectors are the three major categories readily available for use (Cossu et al., 2018). Adenovirus is effective in terms of receptivity by human cells and accommodation of larger DNA payloads, yet it is highly immunogenic, which makes it ideal for short-term treatment (Crystal, 2014). Adeno-associated viral vectors are less immunogenic and tend to allow transgene expression to be maintained in cells for a longer period of time; retroviruses and lentiviruses integrate into the host genome, which makes prolonged expression possible but also brings the danger of insertional mutagenesis (Cossu et al., 2018). In 1991, following the development of the adenoviral vector, researchers demonstrated the first in vivo gene transfer displaying evidence of organ-specific expression. To accomplish this, they deleted the E1 and E3 genes from a normal human adenovirus so that it was replication-deficient and would not cause illness (Crystal, 2014; Rosenfeld et al., 1991).
Another technique, artificial cloning, has been applied to the reproduction of entire organisms, certain genes, and embryonic stem cells. In a technique known as nuclear transfer, reproductive cloning involves the transfer of DNA from an undifferentiated or differentiated cell of an organism to an oocyte (unfertilized egg cell) or early embryo (NIH). The nucleus of the donor cell is transferred to an unfertilized egg cell after the egg cell's own nucleus has been removed. Two different methods have been employed to enucleate the egg cell: the nucleus is removed by a needle that penetrates the cell membrane, or an electrical current helps fuse the donor cell and its nucleus into the recipient egg cell (NIH). To observe embryo development and nuclear differentiation, this process was originally performed on an embryonic amphibian cell (a Rana pipiens frog embryo) in 1952 (Briggs & King, 1952). In this experiment, the embryo was able to develop properly, and the first method of enucleation (needle enucleation) was used. In 1975, a similar process was attempted on rabbit embryos; during this experiment, it was discovered that the embryo died after an initial stage of development (Bromhall, 1975). More than a decade later, nuclear transplantation techniques and electrofusion, the second method of enucleation mentioned above, were used to clone a sheep embryo. Although the embryo developed viably, it was unable to become an adult organism (Willadsen, 1986). Nevertheless, in 1997, when similar nuclear transfer techniques were used, a sheep was successfully cloned and grew into an adult. In this discovery, researchers from the Roslin Institute in Edinburgh, UK induced the donor nuclei into a state of dormancy (quiescence) before transferring them to the recipient egg cell (Wilmut et al., 1997). In their follow-up experiment, the researchers used a nuclear transfer technique known as somatic cell nuclear transfer (SCNT), in which the nucleus from a somatic (differentiated) cell, rather than an undifferentiated cell (germ cell or embryo), was transferred to the egg cell (Wilmut et al., 1997). After nuclei from a differentiated adult cell were transferred into 277 different embryos, the experiment resulted in a single sheep named Dolly. This discovery was significant because it revealed that the nuclei of adult cells could be used to reproduce another animal (Wilmut et al., 1997).
Techniques such as induced quiescence used in the nuclear transfer process of both experiments were stepping stones to further developments in artificial cloning. Eventually, induced quiescence enabled researchers to genetically modify the DNA of differentiated adult cell nuclei before nuclear transfer, because growth factor proteins no longer had the ability to alter the DNA after insertion (Schnieke et al., 1997; Bartlett, 2014). In the last two decades, nuclear transfer techniques such as cross-species nuclear transfer have emerged for amphibians and mammals alike (Sun, 2014). Overall, reproductive animal cloning has had implications for livestock breeding and the production of transgenic farm animals, among other things (Smith et al., 2000). This type of cloning has been reproduced multiple times since the triumph of Dolly, with cattle, swine, sheep, and goats all cloned to date (FDA, 2020). Yet the overall success rate is still low, with only about one out of every hundred manipulated oocytes developing into adulthood (Solter, 2000) – Dolly herself, for example, was the single success among the 277 embryos described above. The majority of cloning attempts result in high abortion and fetal mortality rates (Tian et al., 2003). Though the reasons behind the low success rate are largely unknown, incorrect reprogramming of the donor nuclear genome is considered the main cause of cloning failure (Solter, 2000). When the somatic cell cannot return to its totipotent state, growth cannot occur properly and the cloning fails (Solter, 2000; Tian et al., 2003).
The ability to clone mammals has significant implications for the future of agriculture and livestock (Solter, 2000). Many see cloning as a practical solution for producing ideal breeding stock with desired phenotypes, as well as for producing bioreactors – genetically engineered animals that produce certain molecules needed for modern medicine (Solter, 2000; FDA, 2020). Although most cell therapy trials have been achieved with postnatal stem cells, there has been an increasing number of applications of embryonic stem cells (ESCs). The predecessors of ESC studies date to as early as 1964, when research focused on mouse embryonal carcinoma (EC) cells, extracted from a teratocarcinoma grown in vivo – a kind of malignant tumor involving pluripotent stem cells (M. Evans, 2011). Initial research on ESCs was conducted on mouse blastocysts, while human ESC lines were derived almost two decades later (Cossu et al., 2018). The first pluripotent cell line isolated from a mouse blastocyst was successfully grown in tissue culture in vitro in 1981 (Evans & Kaufman, 1981). Based on this finding and previous studies on EC cells, scientists soon discovered the possibility of genetic manipulation of mouse ESCs. The sheer number of ESCs that can be maintained in tissue culture assists the transmission of genetic changes back into the organism (M. Evans, 2011). In 1987, Hprt became the first gene alteration transferred into a mouse germline, serving as a potential model for Lesch-Nyhan syndrome – a neurological disorder caused by an inherited deficiency of the protein HPRT (Kuehn et al., 1987).
Figure 4: The simplified process of cloning Dolly the sheep, who in 1996 became the first mammal to be cloned. Source: Wikimedia Commons
"In addition to breakthroughs in gene therapy, the use of embryonic stem cells has brought remarkable new insights into our ability to treat and cure challenging ailments with the newfound principle of pluripotency."
In addition to breakthroughs in gene therapy, the use of embryonic stem cells has brought remarkable new insights into our ability to treat and cure challenging ailments through the newfound principle of pluripotency. Scientists knew that these cells were incredibly valuable for research and that they could only be obtained from blastocysts created by in vitro fertilization (IVF) that were no longer needed, making them somewhat scarce. Additionally, ethical concerns created hesitancy around the technique. To counter this, scientists looked for a method to induce somatic cells into a pluripotent state (Wu and Hochedlinger, 2011). This search was informed by cloning studies in frogs (Briggs and King, 1952).
Figure 5: The mechanism of different gene-editing platforms. Source: Wikimedia Commons
"This discovery [ABO blood types] enabled doctors to realize the heterogeneity of blood among patients of different blood groups, thereby reducing the rate of the body’s rejection of blood transfusion by only transfusing among the same group."
In 1952, scientists transplanted nuclei from blastula cells into enucleated frog eggs and observed that they developed into genetically identical clones (Briggs and King, 1952; Stadtfeld and Hochedlinger, 2010). The conclusion from these findings was that, despite the differing nuclei, development imposes reversible epigenetic changes on the genome during cell differentiation (Stadtfeld and Hochedlinger, 2010). However, the admixture of transcription factors from the oocyte that reverted the nucleus to its primitive state remained elusive until the 21st century. In 2006, Shinya Yamanaka and Kazutoshi Takahashi at Kyoto University devised an experiment in which they screened a pool of 24 pluripotency-associated candidate genes (Takahashi and Yamanaka, 2006). These candidates were tested for their ability to activate a dormant drug resistance allele inserted into the ESC-specific Fbxo15 locus of mouse fibroblasts (Takahashi and Yamanaka, 2006). When expressed together, the 24 factors indeed activated Fbxo15 and induced the formation of drug-resistant colonies. The researchers repeatedly withdrew factors until identifying the minimal four genes that would induce a cell into a pluripotent state: Oct4, Sox2, Klf4, and c-Myc. Markers such as SSEA-1 and Nanog indicated that many of the resulting cells were only partially reprogrammed (Takahashi and Yamanaka, 2006). Indeed, Yamanaka noticed that these iPSCs expressed significantly lower levels of pluripotency markers than ESCs and confirmed that only partial reprogramming had occurred.
In subsequent tests, however, Yamanaka was able to reproduce his findings and found that some iPSCs have developmental potency equivalent to that of ESCs (Takahashi and Yamanaka, 2006). Since this initial finding, different groups have repeated the experiment with human cells.
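The logic of that screen – withdrawing candidates one at a time and keeping out any factor whose removal still permits colony formation – can be sketched as a simple elimination loop. In the toy simulation below, the `forms_colonies` function is a stand-in for the real Fbxo15 drug-resistance assay, and the hard-coded essential set exists only so the example runs; the original experiment was wet-lab work, not software.

```python
# Toy simulation of the Takahashi-Yamanaka elimination screen: drop each
# factor in turn; if reprogramming (colony formation) survives the removal,
# the factor is dispensable and stays out. What remains is a minimal set.

CANDIDATES = {f"factor_{i}" for i in range(1, 21)} | {"Oct4", "Sox2", "Klf4", "c-Myc"}
ESSENTIAL = {"Oct4", "Sox2", "Klf4", "c-Myc"}  # stand-in for the real biology

def forms_colonies(factors: set[str]) -> bool:
    """Stand-in for the Fbxo15 drug-resistance colony assay."""
    return ESSENTIAL <= factors

def minimal_factor_set(pool: set[str]) -> set[str]:
    kept = set(pool)
    for factor in sorted(pool):
        trial = kept - {factor}
        if forms_colonies(trial):   # colonies still appear -> factor dispensable
            kept = trial
    return kept

print(minimal_factor_set(CANDIDATES))  # -> the four Yamanaka factors
```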
Landmark clinical cases

Although blood transfusion has been practiced since the early 19th century, the procedure has changed radically since then. In current medical practice, blood is transferred by components, such as red and white blood cells, plasma, and platelets. While whole blood transfusion has fallen out of practice in traditional hospitals due to its risks and inefficacy, it is still used by the U.S. military to resuscitate trauma patients who are acidotic, hypothermic, and coagulopathic (Repine et al., 2006). A landmark discovery that enabled the establishment of blood transfusion as a routine medical procedure was Karl Landsteiner's discovery of the ABO blood types in 1900 (Farmer et al., 2014). This discovery enabled doctors to recognize the heterogeneity of blood among patients of different blood groups, thereby reducing the rate of transfusion rejection by transfusing only within the same group. The grouping system allowed surgeons to be informed of antibodies and vastly improved the transfusion success rate (Maluf, 1954).
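The practical consequence of Landsteiner's discovery reduces to a simple rule: a donation is compatible only if the donor's red cells carry no ABO antigen that the recipient's plasma has antibodies against. The minimal sketch below encodes that rule in Python, considering ABO only and ignoring Rh and the many other blood group systems that real crossmatching must account for.

```python
# ABO red-cell compatibility: a recipient's plasma carries antibodies against
# every ABO antigen the recipient lacks, so a donation is compatible only if
# the donor's antigens are a subset of the recipient's.

ANTIGENS = {"O": set(), "A": {"A"}, "B": {"B"}, "AB": {"A", "B"}}

def compatible(donor: str, recipient: str) -> bool:
    return ANTIGENS[donor] <= ANTIGENS[recipient]

assert compatible("O", "AB")       # type O: universal red-cell donor
assert not compatible("A", "B")    # anti-A antibodies would destroy the transfused cells
```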
While the discovery of the ABO blood type system (among other innovations) has certainly made transfusions safer, there are still risks associated with the modern blood transfusion procedure – one of which is transfusion-transmitted disease. Notably, hepatitis B severely harmed U.S. Army forces during World War II, and hepatitis C infected 20 percent of all transfusion patients in the United States in the 1970s (Farmer et al., 2014). While these instances made people aware of the dangers of blood transfusion, critical concerns arose when cases of transfusion-transmitted HIV infection spiked in the 1980s (Zhou, 2016). An alternative to donor-recipient transfusion is autologous transfusion, a process in which the patient's own blood is used for their transfusions (Cossu et al., 2018). The advancement of this procedure was prompted by recent medical achievements, the shortage of donor blood, and the spread of disease (Vanderlinde et al., 2002). Autologous blood transfusion is divided into three types: preoperative autologous blood donation (blood collection from the patient in the days leading up to a surgery), acute normovolemic hemodilution (withdrawal of whole blood immediately before surgery, with fluid replacement), and intraoperative and postoperative autotransfusion (recovery of blood during and after surgery) (Zhou, 2016). Despite its benefits, the procedure can be highly inefficient, as up to half of the collected blood may be wasted (Vanderlinde et al., 2002). Bone marrow transplantation (BMT) became a necessity in the aftermath of World War II, as its use is historically linked to the period when civilians were exposed to nuclear radiation (Cossu et al., 2017). Like many forms of cell therapy, successful BMT relies on immunological matching between host and donor; a matched transfer allows long-term reconstitution of all the damaged blood cell types, resulting in permanent therapeutic effects. The first clinical case was in 1957, when Donnall Thomas reported that six patients were given BMTs to restore hemopoiesis (blood cell production) following ablation by radiation or drug toxicity (Simpson and Dazzi, 2019). At the time, since there was limited knowledge of the immune responses to transplant antigens, all of those patients died (Simpson and Dazzi, 2019).
Another landmark breakthrough soon followed the discovery of the human leukocyte antigen (HLA): in 1968, immune function was successfully transferred to an infant with severe combined immunodeficiency syndrome by BMT from his HLA-identical sister (Buckley, 2013). Around the same time, success was reported in allogeneic transplantation for an infant with Wiskott-Aldrich syndrome (Mortimer, 1970). These cases were followed by successful transplantations for aplastic anemia and, eventually, leukemia (Moore, 2017). More recently, large trials run by the BMT Clinical Trials Network have represented landmark advances in the field over the past decade (Khera, 2015). One such study examined the efficacy of peripheral blood (PB) and bone marrow for unrelated donor transplants, determining that PB stem cells reduce the risk of graft failure whereas bone marrow reduces the risk of chronic graft-versus-host disease (Anasetti et al., 2012). In general, the long-term results of BMT have been improved by a myriad of factors, such as advanced pretransplant chemoradiotherapy, use of immunosuppressive drugs, better antibiotics and isolation procedures, and carefully matched marrow donors (Goulmy et al., 1995). As of 2018, hundreds of thousands of BMTs have been successfully performed (Simpson and Dazzi, 2019).
"The first successful procedure for “skin transplantation” was in 1869 when Jacques-Louis Reverdin discovered that small, thin grafts would heal and successfully cover burns and open wounds..."
Skin transplantation also offers an interesting ex vivo approach to regenerative medicine. The first successful "skin transplantation" procedure came in 1869, when Jacques-Louis Reverdin discovered that small, thin grafts would heal and successfully cover burns and open wounds (Barker and Markmann, 2013). With the rise of the industrial revolution and industrial work accidents, skin injuries increased in frequency and severity throughout the 18th and 19th centuries, creating a significant need to treat patients with severe burns and other skin-related injuries. Autologous split-thickness skin grafts (STSGs) have been the gold standard since their conception in the 19th century for treating skin injuries that require healthy skin (Kaur et al., 2019). Following the first skin autotransplantation procedure in 1869, skin grafts have been adapted for use in a multitude of clinical situations, such as traumatic wounds, imperfections after oncologic resection, burn reconstruction, and more (Shimizu and Kishi, 2012).
Figure 6: Simplified procedure for inducing human somatic cells back into pluripotent stem cells that can further differentiate once again. Source: Wikimedia Commons
In 1929, a landmark study detailed both the benefits and harms of STSGs compared to full-thickness, intermediate-thickness, and epidermal grafts, helping establish the status quo of skin grafting principles (Shimizu and Kishi, 2012). This line of work also determined that identical twins would accept exchanged skin grafts (Barker and Markmann, 2013).
"...according to the FDA and as of the date of publishing this article, there are 18 products that are approved therapies that are given the FDA designation of “gene therapies” with thousands of clinical trials that are ongoing."
In the early 20th century, there were two focal points for skin transplantation: homografts (a skin graft from a cadaver or donor) and autografts (skin grafts from one's own body). Over time, however, homografts were found to be much less effective for long-term treatment of wounds. By the early 1930s, significant research had been conducted to ascertain the effectiveness of homografts, and many studies demonstrated their inevitable failure due to transplantation immunology (Barker and Markmann, 2013). As a result, proponents of skin homografts conceded their failures, and most physicians turned to autografts to treat skin-related injuries. Despite the long reign of STSGs, their efficacy has been limited by the massive scar burden imparted on patients, which requires additional treatment of both the donor and recipient sites (Aarabi et al., 2007). Until recently, these issues were considered minor in comparison to the benefits of the procedure; now, however, physicians and patients alike are asking for alternatives that improve its cosmetic outcome. Additionally, STSGs suffer from the limitation of being unable to treat full-thickness wounds (Kaur et al., 2019).
Looking toward the present: according to the FDA, as of the date of this article's publication there are 18 approved products carrying the FDA designation of "gene therapies," with thousands of clinical trials ongoing. Of these 18 approved products, 8 involve hematopoietic stem cell transplantation ("Approved Cellular and Gene Therapy Products"). Hematopoietic progenitor cell (HPC) products, related to the red bone marrow, contain progenitor cells, monocytes, lymphocytes, and granulocytes from human cord blood and are used primarily for blood infusions. These samples are recovered from umbilical cords and are used for patients whose hematopoietic systems are compromised as a result of an inherited or acquired condition ("HPC Cord Blood BLA," FDA 2020). Other prominent approved therapies include autologous cellular products such as Azficel-T, which takes the patient's own skin cells, suspends them in water and a series of salts, and then surgically returns them to the patient's face to reduce the appearance of wrinkles (Mayo Clinic, 2020). Others are topical treatments, such as allogeneic cultured keratinocytes and fibroblasts, which serve as scaffold proteins for the topical treatment of surgical vascular wounds in dental surgery ("GINTUIT," FDA 2020). In addition, there are several engineered autologous CAR T cell therapies, genetically modified to produce a CAR protein that identifies and eliminates CD19-positive malignant cancer cells; examples include tisagenlecleucel and axicabtagene ciloleucel (Maude et al., 2019; Gilead, 2020).
There are also several viral packages in use to treat other disorders, including cancers, retinopathies, and neuromuscular disease. Talimogene laherparepvec is an oncolytic virus genetically modified to preferentially replicate in melanoma cells and generate an immune response against the tumors (Conry et al., 2018). Voretigene neparvovec-rzyl was, at the time of its approval in 2017, the first gene therapy to target the mutation responsible for an inherited form of blindness. This drug, whose trade name is Luxturna, is approved for patients with a biallelic RPE65 mutation and works by delivering a normal copy of the gene into retinal cells using an adeno-associated virus (AAV) modified with recombinant techniques (FDA, 2017). Onasemnogene abeparvovec-xioi is an AAV vector that delivers a functional copy of the defective SMN1 gene into motor neuron cells as a treatment for spinal muscular atrophy (Mahajan et al., 2019). Another prominent injection consists of autologous cultured chondrocytes on a porcine collagen membrane – undifferentiated cartilage cells injected into the knee to repair cartilage defects (FDA, 2019). Finally, sipuleucel-T is an immunological agent thought to work through antigen-presenting cells (APCs) to induce an immune response against hormone-refractory prostate cancer (Anassi et al., 2011). Embryonic stem cells are at an interesting point in the FDA process as well. After 20 years of deriving gene therapies and studying human embryonic stem cells (hESCs), medications employing such techniques are emerging as possible solutions to degenerative diseases. As of 2019, about 30 clinical trials are ongoing with hESC-derived cells, some even including a combination of hESCs and hiPSCs (Eguizabal et al., 2019). These trials address macular degeneration, spinal cord injury, type I diabetes, heart disease, and Parkinson's disease (Trounson and DeWitt, 2016; Eguizabal et al., 2019). The first clinical trial involving hESCs was approved by the FDA on January 23rd, 2009 for spinal cord injuries. The investigational new drug (IND) application was filed by Geron Corporation, a California-based biotech firm, in hopes of utilizing GRNOPC1, a product derived from hESCs (Alper, 2009).
Geron's goal was to use GRNOPC1 to stimulate nerve growth and remyelinate damaged regions of the spinal cord. After a small delay from the FDA due to safety precautions surrounding hESCs, Geron launched its first trial in 2010, treating its first patient just two weeks after he sustained a spinal injury in a car accident (Eguizabal et al., 2019). Each patient was administered a single injection of GRNOPC1 containing around two million hESC-derived cells. After the first patient, three more volunteers participated, after which Geron discontinued the clinical trials for financial reasons (Eguizabal et al., 2019). Though official results have not been released, results presented at various regenerative medicine conferences indicate that no patients experienced adverse effects from the treatment and that some showed progress in recovery. Geron will continue to monitor the participants for the next six years (NIH, 2020). In 2014, Asterias Biotherapeutics restarted the GRNOPC1 trials in partnership with Geron, treating over 35 patients with different dosages, with promising results showing reduced cavitation, neuronal growth-stimulating factors, and improved myelin coating (Eguizabal et al., 2019). Other clinical trials involving hESCs include treatments for age-related macular degeneration (AMD) and Stargardt's disease. Advanced Cell Technology (ACT) initiated a phase I trial for AMD in 2011 and published results stating that vision improved in 17 of the 18 patients treated (Schwartz et al., 2010). The hESC-derived product used, MA09-hRPE, has also shown promise in phase I and II trials in South Korea. Since then, the FDA has approved over 20 more hESC products for clinical trials, and each year brings newer applications of the technology.
"To enhance the safety of nucleic acids transportation, scholars have put much effort into synthesizing non-viral gene vectors with a new generation of nanoparticles."
Current medical research landscape

A primary focus of current gene therapy studies is establishing the safety of this relatively immature technology. Viral vectors have been widely used for gene delivery into target cells despite the potential risk of producing an unwanted immune response. To enhance the safety of nucleic acid transport, scholars have put much effort into synthesizing non-viral gene vectors with a new generation of nanoparticles. Multivalent cationic vectors (MVCVs), with potentially high efficacy and safety, have therefore received much attention in the last decade. Researchers adopted a bottom-up strategy to assemble the supramolecular structure so that the product contains sites or structures for desired functions, such as binding to cells.
Figure 7: Despite the initial setbacks, blood transfusion is now an everyday procedure that has saved numerous lives since World War I. Source: Wikimedia Commons
Two essential parts of the vector are a positively charged cationic lipid, which interacts with the negatively charged nucleic acids within, and a helper lipid to assist transfection (Junquera & Aicart, 2016). Multivalent cationic lipids are now considered a useful tool for in vitro gene transfection and are expected to prove efficient in vivo as well.
"Most significantly, a team was able to develop a retinal pigment epithelium (RPE) patch composed of fully differentiated, human embryonic stem cellderived monolayer on a synthetic membrane to replace a patientâ&#x20AC;&#x2122;s damaged RPE."
The sheer variety of stem cell types highlights their potential for medical and clinical research. Recent studies have shown that mesenchymal stem cells in particular have major applications in wound healing, heart failure, and tooth regeneration (Rajabzadeh et al., 2019). Furthermore, advances in human pluripotent stem cells (hPSCs) have occurred alongside advances in genome engineering and genomic technologies; together, researchers are now able to tackle some of the most devastating disorders (Soldner and Jaenisch, 2018). While there were attempts to generate whole organs in the early 1990s, only recently have there been substantial breakthroughs (Hunter, 2019). Animal work has seen more complex solid organ growth for the heart, liver, kidney, and pancreas, but a key clinical breakthrough was skin replacement for a patient suffering from epidermolysis bullosa (EB), a group of previously incurable skin diseases. After replacing the defective genes in a skin patch taken from the patient, the treatment involved culturing and growing this skin into sheets large enough to be attached to the patient (Hunter, 2019). The skin settled down to a healthy state, indicating success. Eye-related disorders are also poised to yield the first clinically approved stem cell-based regenerative treatments on a large scale (Hunter, 2019). Most significantly, a team was able to develop a retinal pigment epithelium (RPE) patch composed of a fully differentiated, human embryonic stem cell-derived monolayer on a synthetic membrane to replace a patient's damaged RPE (Hunter, 2019). Currently, human pluripotent stem cells are being studied for their use in treating Parkinson's disease, as they can be robustly differentiated into midbrain dopaminergic neurons. In general, there is huge room for growth across a wide range of major therapeutic areas, and stem cell-based research remains at the forefront for autoimmune diseases such as lupus, multiple sclerosis, systemic sclerosis, and juvenile rheumatoid arthritis, as well as for gene therapy related to HIV infection, beta-thalassemia, and sickle cell disease (Moore et al., 2017). In vitro tissue transplantation has been the conventional model for tissue engineering, gathering tissue-matched cells from either a primary source or from stem cells. The utilization of primary-source (autologous) cells has been instrumental in the advances of regenerative medicine.
However, the medical community now advises against relying on autologous primary cells because of their drawbacks: they require invasive cell collection, suffer from low proliferative capacity, and carry the risk of being in a diseased state (Kurniawan, 2019). As a result, stem cells, despite the added steps, are being considered the new and preferred alternative, circumventing these issues and providing enduring treatment (Kurniawan, 2019). Another important aspect of tissue engineering is the use of scaffolds. It has recently been discovered that modifying factors such as biologically active proteins, drugs, and DNA can be included in scaffolds to promote tissue generation (Kurniawan, 2019). Moreover, the physical, structural, and mechanical properties of the scaffold itself can alter its efficacy (Kurniawan, 2019); hence, current research explores the optimal properties of these scaffolds. Looking forward, there are a few new advances in in vitro strategies. First, scaffold-free tissue engineering is being explored to bypass the innate problems associated with scaffolds. The main drawback of this method is the fragility of the final tissue constructs, since scaffolds normally provide structural support; the major benefit is the ability to fine-tune the properties of the tissue architecture (Kurniawan, 2019). Recent advances in this area have helped researchers design strategies that combine scaffold-based and scaffold-free approaches (Ovsianikov et al., 2018). Second, bioprinting is a method that deposits "suspensions containing cells as well as hydrogels biomaterials, growth factors, and any other desired bioactive molecules in a spatially controlled manner to achieve 3D tissue-like architectures" (Kurniawan, 2019). Since in vitro engineering has been the gold standard for so long, researchers have been working hard to formulate strategies that can best help patients. Additionally, in situ tissue engineering is a growing area of research. In situ engineering relies on hydrogels – networks of polymer chains forming a 3D structure – that can mimic the native extracellular matrix and the physicochemical and biological properties of the human body. The machines used to develop these hydrogels, called "bioreactors," provide a number of advantages: delivering nutrients and eliminating waste and metabolites, mechanically stimulating cells, building systemic models, enhancing pathways for cell-cell signaling and co-culture, and enabling long-term culture (Ahmed et al., 2019).
In situ tissue engineering has huge potential to eventually replace in vitro approaches, as it theoretically circumvents the complexity of manually developing functional tissues. Instead, this approach uses the body as a native bioreactor, "harnessing [its] innate regenerative properties" (Sengupta et al., 2014). For now, the main roadblock is ensuring that the body can mobilize these endogenously produced stem cells to the site of injury.
Ethics of Regenerative Medicine

The ethical and related political challenges of regenerative medicine and stem cell research have been discussed since at least the 1990s (Miguel-Beriain, 2014). Multiple legal cases have focused on the definition of what is and is not a human embryo, especially in the context of political and legal frameworks. For example, the 2011 Brüstle v. Greenpeace decision established that a process involving the removal of a stem cell from a human embryo at the blastocyst stage cannot be patented, as it entails the destruction of that embryo (Miguel-Beriain, 2014). The Obama Administration's stem cell policy was based on the "discarded-created distinction," in which it is immoral to create human embryos for the sole purpose of serving as a source of stem cell lines, but it is not immoral to use lines obtained from surplus embryos for this purpose (Miguel-Beriain, 2014). The argument holds that since discarded IVF embryos will die soon regardless, and since there are terrible diseases and disabilities that could be alleviated, there is no moral harm in destroying these embryos to create stem cell lines (Miguel-Beriain, 2014). While this guideline stands in the US, legislation regulating human embryonic stem cell (hESC) research varies around the world. Italy has prohibited all hESC-based research, a more extreme stance, while the UK allows hESC research but makes it illegal to perform nuclear transfer for therapeutic or reproductive purposes (Volarevic, 2018). Since it is of economic interest to be the next leader in regenerative medicine, regulations in some countries have become more relaxed. While this may be interpreted positively, as increasing research and clinical trials with stem cell-based medicines, it can unfortunately lead to detrimental consequences for the field of regenerative medicine at the global level (Sipp and Sleeboom-Faulkner, 2019).
"Multiple legal cases have focused on the definition of what is and what is not a human embryo, especially in the context of political and legal frameworks."
level (Sipp and Sleeboom-Faulkner, 2019). For example, the Korean FDA approved four of the world's first stem cell-based medical products between 2011 and 2014, but only one of them is covered by the national health insurance, owing to concerns over its efficacy. The Korean FDA's decision to lower clinical data standards for the sake of expedience has drawn strong international skepticism (Sipp and Sleeboom-Faulkner, 2019). Japan's drug regulator attracted similar criticism in December 2018 for approving a stem cell biologic for the treatment of spinal cord injury on the basis of a single small, uncontrolled, and unpublished study (Sipp and Sleeboom-Faulkner, 2019). Clearly, there are various controversies surrounding the approval and regulation of regenerative medicine. Stem cell research is on track to become a ubiquitous solution to many of today's health problems, but further regulation is needed to hold the science to high ethical standards.

References

Aarabi, S., Longaker, M. T., & Gurtner, G. C. (2007). Hypertrophic Scar Formation Following Burns and Trauma: New Approaches to Treatment. PLOS Medicine, 4(9), e234. https://doi.org/10.1371/journal.pmed.0040234

Ahmed, S., Chauhan, V. M., Ghaemmaghami, A. M., & Aylott, J. W. (2019). New generation of bioreactors that advance extracellular matrix modelling and tissue engineering. Biotechnology Letters, 41(1), 1–25. https://doi.org/10.1007/s10529-018-2611-7

Anassi, E., & Ndefo, U. A. (2011). Sipuleucel-T (Provenge) Injection. Pharmacy and Therapeutics, 36(4), 197–202.

Approved Cellular and Gene Therapy Products. (2019). FDA. https://www.fda.gov/vaccines-blood-biologics/cellular-gene-therapy-products/approved-cellular-and-gene-therapy-products

Azficel-T (Intradermal Route) Description and Brand Names—Mayo Clinic. (n.d.). Retrieved July 26, 2020, from https://www.mayoclinic.org/drugs-supplements/azficel-t-intradermal-route/description/drg-20074992

Barker, C. F., & Markmann, J. F. (2013). Historical Overview of Transplantation. Cold Spring Harbor Perspectives in Medicine, 3(4). https://doi.org/10.1101/cshperspect.a014977

Caddeo, S., Boffito, M., & Sartori, S. (2017). Tissue Engineering Approaches in the Design of Healthy and Pathological In Vitro Tissue Models. Frontiers in Bioengineering and Biotechnology, 5. https://doi.org/10.3389/fbioe.2017.00040

Cenciarini-Borde, C., Courtois, S., & La Scola, B. (2009). Nucleic acids as viability markers for bacteria detection using molecular tools. Future Microbiology, 4(1), 45–64. https://doi.org/10.2217/17460913.4.1.45

Cong, L., Ran, F. A., Cox, D., Lin, S., Barretto, R., Habib, N., Hsu, P. D., Wu, X., Jiang, W., Marraffini, L. A., & Zhang, F. (2013). Multiplex genome engineering using CRISPR/Cas systems. Science (New York, N.Y.), 339(6121), 819–823. https://doi.org/10.1126/science.1231143

Dolly at 20: The inside story on the world's most famous sheep: Nature News & Comment. (n.d.). Retrieved July 24, 2020, from https://www.nature.com/news/dolly-at-20-the-inside-story-on-the-world-s-most-famous-sheep-1.20187

FDA approves axicabtagene ciloleucel for large B-cell lymphoma. (2019). FDA. https://www.fda.gov/drugs/resources-information-approved-drugs/fda-approves-axicabtagene-ciloleucel-large-b-cell-lymphoma

FDA approves novel gene therapy to treat patients with a rare form of inherited vision loss. (2020, March 24). FDA. https://www.fda.gov/news-events/press-announcements/fda-approves-novel-gene-therapy-treat-patients-rare-form-inherited-vision-loss

GINTUIT (Allogeneic Cultured Keratinocytes and Fibroblasts in Bovine Collagen). (2019). FDA. https://www.fda.gov/vaccines-blood-biologics/cellular-gene-therapy-products/gintuit-allogeneic-cultured-keratinocytes-and-fibroblasts-bovine-collagen

Gonçalves, G. A. R., & Paiva, R. de M. A. (2017). Gene therapy: Advances, challenges and perspectives. Einstein, 15(3), 369–375. https://doi.org/10.1590/S1679-45082017RB4024

Kaur, A., Midha, S., Giri, S., & Mohanty, S. (2019). Functional Skin Grafts: Where Biomaterials Meet Stem Cells. Stem Cells International, 2019, 1–20. https://doi.org/10.1155/2019/1286054

Kim, Y. S., Smoak, M. M., Melchiorri, A. J., & Mikos, A. G. (2019). An Overview of the Tissue Engineering Market in the United States from 2011 to 2018. Tissue Engineering. Part A, 25(1–2), 1–8. https://doi.org/10.1089/ten.tea.2018.0138

Kurniawan, N. A. (2019). The ins and outs of engineering functional tissues and organs: Evaluating the in-vitro and in-situ processes. Current Opinion in Organ Transplantation, 24(5), 590–597. https://doi.org/10.1097/MOT.0000000000000690

MACI (Autologous Cultured Chondrocytes on a Porcine Collagen Membrane). (2019). FDA. https://www.fda.gov/vaccines-blood-biologics/cellular-gene-therapy-products/maci-autologous-cultured-chondrocytes-porcine-collagen-membrane

Mao, A. S., & Mooney, D. J. (2015). Regenerative medicine: Current therapies and future directions. Proceedings of the National Academy of Sciences of the United States of America, 112(47), 14452–14459. https://doi.org/10.1073/pnas.1508520112

Maude, S. L., Laetsch, T. W., Buechner, J., Rives, S., Boyer, M., Bittencourt, H., Bader, P., Verneris, M. R., Stefanski, H. E., Myers, G. D., Qayed, M., De Moerloose, B., Hiramatsu, H., Schlis, K., Davis, K. L., Martin, P. L., Nemecek, E. R., Yanik, G. A., Peters, C., … Grupp, S. A. (2018). Tisagenlecleucel in Children and Young Adults with B-Cell Lymphoblastic Leukemia. The New England Journal of Medicine, 378(5), 439–448. https://doi.org/10.1056/NEJMoa1709866

Misra, S. (2013). Human gene therapy: A brief overview of the genetic revolution. The Journal of the Association of Physicians of India, 61(2), 127–133.

Organ Donation Statistics | Organ Donor. (2018, April 10). https://www.organdonor.gov/statistics-stories/statistics.html

Ovsianikov, A., Khademhosseini, A., & Mironov, V. (2018). The Synergy of Scaffold-Based and Scaffold-Free Tissue Engineering Strategies. Trends in Biotechnology, 36(4), 348–357. https://doi.org/10.1016/j.tibtech.2018.01.005

A Primer on Cloning and Its Use in Livestock Operations. (2020). FDA. https://www.fda.gov/animal-veterinary/animal-cloning/primer-cloning-and-its-use-livestock-operations

Sengupta, D., Waldman, S. D., & Li, S. (2014). From In Vitro to In Situ Tissue Engineering. Annals of Biomedical Engineering, 42(7), 1537–1545. https://doi.org/10.1007/s10439-014-1022-8

Shimizu, R., & Kishi, K. (2012). Skin Graft. Plastic Surgery International, 2012. https://doi.org/10.1155/2012/563493

Shintani, M., Sanchez, Z. K., & Kimbara, K. (2015). Genomics of microbial plasmids: Classification and identification based on replication and transfer systems and host taxonomy. Frontiers in Microbiology, 6. https://doi.org/10.3389/fmicb.2015.00242

Tissue Engineering and Regenerative Medicine. (n.d.). Retrieved July 23, 2020, from https://www.nibib.nih.gov/science-education/science-topics/tissue-engineering-and-regenerative-medicine

U.S. FDA Approves Kite's Tecartus™, the First and Only CAR T Treatment for Relapsed or Refractory Mantle Cell Lymphoma. (n.d.). Retrieved July 26, 2020, from https://www.gilead.com/news-and-press/press-room/press-releases/2020/7/us-fda-approves-kites-tecartus-the-first-and-only-car-t-treatment-for-relapsed-or-refractory-mantle-cell-lymphoma
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE Hinman Box 6225 Dartmouth College Hanover, NH 03755 USA http://dujs.dartmouth.edu dujs@dartmouth.edu