DUJS
Dartmouth Undergraduate Journal of Science
SPRING 2020 | VOL. XXII | NO. 2

UNPRECEDENTED: WRITING OF A WORLD IN FLUX

Sleeping on the Job: America's Maternal Mortality Crisis, p. 6
Should Doctors be Playing Games?, p. 54
Lung Analysis of Biomarkers (LAB): The Future of Biomarker Detection, p. 106
Note from the Editorial Board

The spring of 2020 brought with it the unprecedented remodeling of the world and a rude separation from the collegiate atmosphere that we Dartmouth students have become so acclimated to. What once was “vox clamantis in deserto” (a voice crying out in the wilderness) became “te per orbem terrarum voces clamantium” (voices crying out from around the world) more quickly than we could possibly imagine. Fortunately, one of the many things that we were able to preserve was our Dartmouth Undergraduate Journal of Science print journal. And in the spirit of connectivity, we added a new element to the journal in which students were able to collaborate: joint print articles.

This term's joint print articles spanned an array of subjects, including oncology, economics, neuroscience, environmental science, and, naturally, COVID-19. Students spent a few weeks researching and outlining each paper together; each student then claimed part of the paper as their own to conduct a deeper literature search, ultimately dedicating part of the draft to their discoveries. Every week, students educated their fellow group members on their key findings, so that while the drafts for each segment were written by different people, everyone learned about the topic as a whole as the paper progressed to the final draft stage. Board members with experience in the subject in question oversaw the process, educating students on the importance of credible sources, the research and drafting process, and the elimination of bias in writing.

Individual articles continued as they traditionally do, and accordingly we saw the same incredible diversity as in past terms, albeit with some new faces. DUJS welcomed a number of new staff writers to our ranks, including Brittany Cleary '21, Jess Chen '21, Anna Kölln '22, Sophia Arana '22, Rachel Tiersky '23, Ben Newhall '23, Kat Lasonde '23, and Nina Klee '23, all of whom are featured in this issue. These eight students joined others in writing about everything from oncology to machine learning to electrochemistry, but there was a marked shift in some of the chosen topics due to the lasting impact of the pandemic. Kristal Wong '22 recognized the increasing trend of at-home workouts in the absence of gyms and wrote an article on HIIT, or High-Intensity Interval Training, describing the differing scientific discoveries on this movement and how people can best join in. Along the same lines, Rachel Tiersky '23 recognized the importance of mental health and its implications and wrote of psychological defense mechanisms that can manifest with a lack of social support. Audrey Herrald '23 went a step further and looked at possible solutions in remote psychotherapy, specifically how effective it would be across different personality subtypes.

The whole DUJS team has done a phenomenal job researching, understanding, and conveying the wealth of cutting-edge research being done in the academic community today. I hope that you have as good a time reading about these topics as we have had writing about them.

Warmest Regards,
Nishi Jain
Editor-in-Chief
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community and beyond by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EXECUTIVE BOARD
President: Sam Neff '21
Editor-in-Chief: Nishi Jain '21
Chief Copy Editors: Anna Brinks '21, Liam Locke '21, Megan Zhou '21

EDITORIAL BOARD
Managing Editors: Anahita Kodali '23, Dev Kapadia '23, Dina Rabadi '22, Kristal Wong '22, Maddie Brown '22
Assistant Editors: Aditi Gupta '23, Alex Gavitt '23, Daniel Cho '22, Eric Youth '23, Sophia Koval '21

STAFF WRITERS
Aditi Gupta '23, Allan Rubio '23, Anahita Kodali '23, Aniketh Yalamanchili '23, Anna Brinks '21, Anna Kölln '22, Audrey Herrald '23, Ben Newhall '23, Brittany Cleary '21, Bryn Williams '23, Catherine Zhao '20, Daniel Cho '22, Dev Kapadia '23, Dina Rabadi '22, Hubert Galan '23, Jess Chen '21, Kamren Khan '23, Kat Lasonde '23, Kristal Wong '22, Liam Locke '21, Love Tsai '23, Maanasi Shyno '23, Megan Zhou '21, Nina Klee '23, Nishi Jain '21, Rachel Tiersky '23, Sam Neff '21, Sophia Arana '22, Teddy Press '23

SPECIAL THANKS
Dean of Faculty
Associate Dean of Sciences
Thayer School of Engineering
Office of the Provost
Office of the President
Undergraduate Admissions
R.C. Brayshaw & Company

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs.dartmouth.science@gmail.com
Copyright © 2020 The Trustees of Dartmouth College
Table of Contents

Individual Articles

Sleeping on the Job: America's Maternal Mortality Crisis
Aditi Gupta '23, pg. 6

The Physiological Impacts of Nicotine Use
Anahita Kodali '23, pg. 12

Electrochemistry as a Tool for Understanding the Brain
Anna Kölln '22, pg. 18

Personality Type as an Indicator of Treatment Suitability in Remote Psychotherapy
Audrey Herrald '23, pg. 24

An Introduction to Quantum Computing and Quantum Optimization
Ben Newhall '23, pg. 42

Rural Healthcare Accessibility
Catherine Zhao '20, pg. 48

Should Doctors Be Playing Games?
Dev Kapadia '23, pg. 54

Plantibodies: How plants are shaping our medicine
Hubert Galan '23, pg. 62

Music Perception and Processing in the Human Brain
Kamren Khan '23, pg. 66

Quantum Machine Learning in Society Today
Kat Lasonde '23, pg. 72

HIIT or Miss? An Insight into the Promise of HIIT Workouts
Kristal Wong '22, pg. 78

Notch-1 as a Promising Target for Antibody Drug Conjugate Therapies
Nishi Jain '21, pg. 86

Psychological Defense Mechanism Subtypes Indicate Levels of Social Support Quality and Quantity
Rachel Tiersky '23, pg. 92

Straight for the Heart – Providing Resilience Before an Attack
Sam Neff '21, pg. 98

Lung Analysis of Biomarkers (LAB): The Future of Biomarker Detection
Sam Neff '21 and Dina Rabadi '22, pg. 106

Group Articles

A Homing Missile for Cancer: Antibody Drug Conjugates as a New Targeted Therapy
Staff Writers: Dina Rabadi, Allan Rubio, Love Tsai; Board Writers: Anna Brinks and Nishi Jain, pg. 124

Coronavirus: The Story of a Modern Pandemic
Staff Writers: Nina Klee, Teddy Press, Bryn Williams, Maanasi Shyno, Allan Rubio; Board Writers: Sam Neff and Megan Zhou, pg. 134

The Economics of Drug Development
Staff Writers: Kristal Wong, Dev Kapadia, Maanasi Shyno, Love Tsai, Aniketh Yalamanchili; Board Writers: Sam Neff and Nishi Jain, pg. 158

The Neurobiology of Eating Disorders
Staff Writers: Anahita Kodali, Daniel Cho, Audrey Herrald, Sophia Arana; Board Writers: Liam Locke and Megan Zhou, pg. 178

From Billiard Balls to Global Catastrophe: How Plastics Have Changed the World
Staff Writers: Jess Chen, Brittany Cleary, Teddy Press, and Allan Rubio; Board Writers: Anna Brinks and Liam Locke, pg. 194
Sleeping on the Job: America's Maternal Mortality Crisis

BY ADITI GUPTA '23

Cover Photo: The United Nations' (UN) symbol for maternal mortality. At the Millennium Summit in 2000, the UN set a goal to improve maternal health as one of eight key performance indicators to be met by 2015. (Source: Public Broadcasting Service (PBS))
How Many American Women Die from Pregnancy? Good Question. Questionable Answers.

The World Health Organization (WHO) defines maternal mortality as “the death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from accidental or incidental causes” (last updated 2020). The death of a mother is a tragic loss for families and whole communities. Even more devastating is the fact that 60% of pregnancy-related deaths in the United States are preventable; yet due to inconsistent and underfunded data keeping, it is nearly impossible for public health experts and healthcare professionals to prevent the dangerously neglected crisis of maternal deaths (The Centers for Disease Control and Prevention (CDC), 2018). The CDC's Pregnancy Mortality Surveillance System found that the number of pregnancy-related deaths (per 100,000 live births) increased from 7.2 deaths in 1987 to 16.9 deaths in 2016, but that the maternal mortality ratio had stabilized in recent years (CDC, 2020). Even the CDC admits that it is unclear whether the increase in pregnancy-related deaths reflects improved measurement tools, an overestimation of maternal deaths, or an actual increase in maternal deaths. The bottom line is that no one really knows how many American mothers die from pregnancy, because of egregious gaps in data collection.

The last time the National Center for Health Statistics (NCHS) published an official estimate of maternal mortality was 2007 (Hoyert & Miniño, 2020). To improve data tracking surrounding maternal death, a “pregnancy status checkbox” (see Figure 3) was added to the U.S. Standard Certificate of Death in 2003 (Hoyert & Miniño, 2020). However, since 1) states were gradual in adding the pregnancy checkbox to their death certificates and 2) the checkbox contributed to significant overreporting of maternal death, it became even more difficult for researchers to report an accurate estimate of maternal mortality in the United States. According to MacDorman and Declercq, the “overreliance on the pregnancy checkbox leads to seriously flawed United States maternal mortality data” because even if a different cause of death is written, a checked pregnancy checkbox is counted as a pregnancy-related death (2018, p. 106). So, when causes of death are “nonspecific,” the mortality rate is still overestimated (MacDorman & Declercq, 2018). In a separate study that corrected for the overreporting caused by the pregnancy checkbox, MacDorman and Declercq found that 48 states plus Washington, DC reported a 26.6% increase in estimated maternal mortality from 2000 to 2014 (MacDorman et al., 2016). Importantly, by dividing the adjusted increase in the maternal mortality rate (26.6%) by the unadjusted increase (132.3%), MacDorman and Declercq estimated that only 20.1% of the increase in the maternal mortality rate between 2000 and 2014 came from a “real increase in maternal mortality”; the remaining 79.9% was due to an “improved assessment” (MacDorman et al., 2016). Even this 2016 study has come under scrutiny, with another set of researchers suggesting that MacDorman and Declercq overestimated Texas' 2012 maternal death rate by about half (Baeva et al., 2018). The most current data from the NCHS show that the national maternal mortality rate in 2018 was 17.4 deaths per 100,000 live births (Hoyert & Miniño, 2020). While this statistic would mark an improvement in the maternal death rate compared to MacDorman and Declercq's 2000-2014 assessment, the NCHS's data could just as easily be too conservative where MacDorman and Declercq's was too liberal.
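The decomposition above is simple enough to check by hand. The sketch below reproduces the arithmetic using only the percentage increases quoted from MacDorman et al. (2016); the variable names are ours, purely for illustration:

```python
# Back-of-envelope check of the MacDorman et al. (2016) decomposition,
# using the 2000-2014 percentage increases quoted above.
unadjusted_increase = 132.3  # % rise in the reported maternal mortality rate
adjusted_increase = 26.6     # % rise after correcting for checkbox overreporting

# Share of the reported rise attributable to a real increase in deaths;
# the remainder is attributed to "improved assessment" (i.e., measurement).
real_share = adjusted_increase / unadjusted_increase
print(f"real increase: {real_share:.1%}")             # ~20.1%
print(f"measurement artifact: {1 - real_share:.1%}")  # ~79.9%
```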
But no matter how the data are analyzed, the United States' maternal mortality rate is much higher than that of other industrialized countries. In an interview with Vox, public health expert Dr. Eugene Declercq said, “If you limit the comparison to those similarly wealthy countries, the US would rank 10th — out of 10 countries… No matter how one analyzes the data, we still lag well behind other countries” (Belluz, 2020). With such murky and troubling data, it is difficult to quantify the scope of the maternal mortality crisis in the United States. But one fact is clear: maternal mortality is a crisis that kills about 700 American women every year and exposes the failures of the American healthcare system (CDC, 2019). In recent years, healthcare professionals and researchers have redoubled their efforts to scrutinize existing assessment methods and prevent maternal death, which disproportionately affects low-income and minority women.
What are the Racial and Social Determinants that Contribute to Maternal Mortality?

Healthcare inequity has long affected minorities and low-income patients, and maternal health is unfortunately no exception. In a country where the maternal mortality rate is considerably higher than that of its industrialized peers, scientists have turned their attention to which populations of American women are at greater risk for pregnancy-related deaths. In a study analyzing the demographic factors that contribute to differences in individual states' maternal mortality rates, Moaddab et al. found a significant correlation between a state's maternal mortality ranking and the percentage of black women living in the state (2018). Poignantly, they concluded that “much of the variation in statewide maternal mortality ratios in the United States is accounted for by social…factors—unintended pregnancy, unmarried mother, and non-Hispanic black race…and provide evidence for a strong contribution of racial disparity to maternal mortality ratio in the United States” (Moaddab et al., 2018). Moaddab et al.'s findings are echoed by the CDC: black women are three to four times more likely to die from pregnancy-related complications than white women (CDC, 2020). Black women are less likely to receive early prenatal care (57.7% for black women versus 76.2% for white women) (Bryant et al., 2010), more likely to have unintended pregnancies, which are significantly correlated with maternal mortality (Moaddab et al., 2018), and more likely to be poor and “reliant on public insurances” like Medicaid, which are often inaccessible before a woman's pregnancy (Bryant et al., 2010). Alaska Native and American Indian women are also at elevated risk for pregnancy-related deaths and complications, especially the 40% of them who live in rural counties (Kozhimannil et al., 2020). The incidence of severe maternal morbidity and mortality (M&M) was twice as high for indigenous women as for white women, with the highest incidence occurring among indigenous women living in rural counties (Kozhimannil et al., 2020). Clearly, geographical, social, and racial healthcare disparities affect maternal health. Thus, it is vitally important for public health experts to address community-based healthcare inequities to improve maternal health outcomes at the patient, local, and state levels.

Figure 2: Maternal mortality rate (per 100,000 births) across the United States. Source: Wikimedia Commons (published in 2015).

Figure 3: Pregnancy status checkbox on a death certificate. Studies have shown that the uneven incorporation of the checkbox and the overreporting of maternal mortality due to the checkbox have made it difficult for researchers to determine the actual maternal mortality rate in the United States. Source: ProPublica
How can Doctors and Midwives Improve Maternal Health Outcomes?

Improved maternal healthcare starts at the patient-physician interaction. Although the traditional model of American medicine relies on a 1:1 patient-to-physician interaction, there is increasing evidence that a team-based approach among healthcare professionals can improve maternal health (Cornthwaite et al., 2013). However, many physicians have been reluctant to adopt a team-based approach, and considerable friction exists among midwives, primary care physicians (PCPs), and obstetrician-gynecologists (OB-GYNs). Vedam et al. found that midwives are a “key determinant of optimal maternal-newborn outcomes” (2018). However, doctors and hospitals, who fear that midwives may steal their business, have been extremely apprehensive about allowing midwives to practice in America; unsurprisingly, then, there are very few jurisdictions where midwives are licensed to practice in America (Martin, 2018). As a result, only seven percent of babies in the United States are delivered by midwives; in similarly high-income countries, midwives deliver anywhere from 50-75% of babies (Goodman, 2007). Vedam et al. found that in states where midwives are integrated into the healthcare system, like Washington and Oregon, mothers and babies have much higher rates of healthy outcomes than in states like Ohio, where midwifery is less integrated (2018). Interestingly, states where more black babies are born had significantly lower densities of, and access to, midwives. Vedam et al. conclude that “with greater integration of midwives in these states, the associated reduced rates of neonatal mortality, preterm birth, and increased breastfeeding
success could confer important long-term health benefits…for African American mothers” (Vedam & Stoll, 2018). Despite such strong evidence in favor of midwifery, the practice remains legally and socially restricted in America.

Another area for improvement is the transition of mothers from obstetrics-gynecology to primary care, especially because maternal mortality includes deaths that occur within 42 days of giving birth (WHO, 2020). While OB-GYNs “regularly manage chronic medical conditions” during pregnancy and continue checkups for at least the first six weeks post-delivery, OB-GYNs look to PCPs to take an active role in treating the mother afterwards (Essien et al., 2018). According to Essien et al., “[c]ollaboration between OB-GYNs and PCPs is key to strengthening the transition between obstetrical and primary care for women and managing chronic conditions across the life course” (2018). However, very few studies have actually researched the transition of postpartum care. In a small study at a San Antonio clinic, researchers provided postpartum mothers with educational information about their transition from obstetrical to primary care and trained OB-GYNs on the importance of facilitating a smooth transition for their patients to primary care. After the intervention, 23 of 27 women (82%) scheduled an appointment with a PCP and 12 women (44%) went to the appointment; nationally, only 33% of women transition to a PCP (Matthews, 2016). Although the sample was too small to draw broad conclusions, the San Antonio study highlights the importance of educating both patients and physicians about the need for greater communication to “prevent adverse postpartum outcomes and improve the quality and safety of healthcare for women around pregnancy” (Essien et al., 2018).

How Can Systemic Healthcare and Policy Changes Improve Maternal Health?
Figure 4: Maternal mortality disproportionately affects women of color, particularly black women. Source: Kaiser Family Foundation
The dominant biomedical model in American healthcare treats patients like machines whose well-being is independent of their environment. However, it is abundantly clear that social determinants affect maternal health outcomes, often to the detriment of minority and low-income women. Although physicians are increasingly shifting towards more holistic models of medicine like the biopsychosocial model, which incorporates the psychological and social determinants of health, there is an urgent need for healthcare experts at the public and governmental levels to address the maternal mortality crisis at a macro level. Since individual risk factors (race, age, socioeconomic status) are “inadequate for explaining population-based health disparities” alone, Kramer et al. propose a comprehensive and tiered health equity framework for Maternal Mortality Review Committees (MMRCs) to use in their analyses of maternal mortality (2019). According to the CDC, MMRCs have access to clinical and non-clinical information and “are multidisciplinary committees that convene at the state or local level to comprehensively review deaths of women during or within a year of pregnancy” (updated 2019). MMRCs are crucial in combating the maternal mortality crisis because they exist to make informed suggestions to healthcare organizations about improving maternal healthcare based on medical and sociological data for regional populations (see Figure 5). Kramer et al. acknowledge that MMRCs may not have the tools to properly assess the extent
to which social and geographic determinants affect maternal health in their jurisdictions. Thus, Kramer et al. suggest a framework that incorporates the biomedical (e.g., cardiometabolic, neuroendocrine), contextual (e.g., structural racism, poverty), and environmental (e.g., healthcare and transportation access, social support) factors that inform a pregnant woman's life trajectory and pregnancy experience. Additionally, Kramer and his colleagues provide MMRCs with suggested sources of data on specific health determinants; for example, Kramer et al. suggest consulting county health rankings to research housing problems and food insecurity, both of which may impact maternal health. Finally, Kramer et al. implore MMRCs and clinical and community leaders to “identify novel underlying factors at the community level that enhance understanding of racial and geographic inequity in maternal mortality” (2019).

Figure 5: The top-down factors that Maternal Mortality Review Committees (MMRCs) consider before making suggestions to improve maternal health at the population level. Source: CDC Foundation
Conclusion
There is certainly a need for a multidisciplinary response to improve maternal health and prevent hundreds of annual maternal deaths at the patient, local, and state levels. Physicians and hospitals need to work with midwives and provide patient-centered care that extends beyond the first six weeks after delivery, and MMRCs and public health experts need to analyze the racial, social, and geographic determinants of maternal health. Because maternal health is intertwined with individual and communal factors at every level, it is crucial that healthcare and governmental agencies analyze the extent to which underlying social factors affect maternal healthcare outcomes. To analyze these various determinants, however, properly funded and consistent data collection regarding maternal deaths must be prioritized. Research professor Dr. Marian MacDorman (who published many groundbreaking studies with Dr. Eugene Declercq) emphasizes that “[m]aternal mortality is a sentinel public health indicator” (Belluz, 2020). If maternal mortality is a sentinel, then it has been fatally asleep on the job, at least according to the troubling statistics and the “embarrass[ing]” lack of data (Belluz, 2020). It is time to wake up.

References

Baeva, S., Saxton, D. L., et al. (2018). Identifying maternal deaths in Texas using an enhanced method, 2012. Obstetrics & Gynecology, 131(5), 762-769. doi: 10.1097/AOG.0000000000002565
Belluz, J. (2020, January 30). We finally have a new US maternal mortality estimate. It's still terrible. Vox. https://www.vox.com/2020/1/30/21113782/pregnancy-deaths-us-maternal-mortality-rate

Bryant, A. S., Worjoloh, A., Caughey, A. B., & Washington, A. E. (2010). Racial/ethnic disparities in obstetric outcomes and care: prevalence and determinants. American Journal of Obstetrics and Gynecology, 202(4), 335-343. https://doi.org/10.1016/j.ajog.2009.10.864

Building U.S. Capacity to Review and Prevent Maternal Deaths. (2018). Report from nine maternal mortality review committees. Retrieved from http://reviewtoaction.org/Report_from_Nine_MMRCs

Centers for Disease Control and Prevention. (2019, September 4). Data brief from 14 U.S. maternal mortality review committees, 2008-2017. https://www.cdc.gov/reproductivehealth/maternal-mortality/erase-mm/mmr-data-brief.html

Centers for Disease Control and Prevention. (2019). Maternal mortality. https://www.cdc.gov/reproductivehealth/maternal-mortality/index.html

Centers for Disease Control and Prevention. (2020). Pregnancy Mortality Surveillance System. https://www.cdc.gov/reproductivehealth/maternal-mortality/pregnancy-mortality-surveillance-system.htm

Cornthwaite, K., Edwards, S., et al. (2013). Reducing risk in maternity by optimising teamwork and leadership: an evidence-based approach to save mothers and babies. Best Practice & Research Clinical Obstetrics & Gynaecology, 27(4), 571-581. https://doi.org/10.1016/j.bpobgyn.2013.04.004

Goodman, S. (2007). Piercing the veil: The marginalization of midwives in the United States. Social Science & Medicine, 65(3), 610-621. doi: 10.1016/j.socscimed.2007.03.052

Hoyert, D. L., & Miniño, A. M. (2020). Maternal mortality in the United States: Changes in coding, publication, and data release, 2018. National Vital Statistics Reports, 69(2). Hyattsville, MD: National Center for Health Statistics.

Kozhimannil, K., Interrante, J. D., et al. (2020). Severe maternal morbidity and mortality among indigenous women in the United States. Obstetrics & Gynecology, 135(2), 294-300. doi: 10.1097/AOG.0000000000003647

Kramer, M., Strahan, A., et al. (2019). Changing the conversation: applying a health equity framework to maternal mortality reviews. American Journal of Obstetrics and Gynecology, 221(6), 609.e1-609.e9. https://doi.org/10.1016/j.ajog.2019.08.057

MacDorman, M., Declercq, E., et al. (2016). Is the United States maternal mortality rate increasing? Disentangling trends from measurement issues. Obstetrics & Gynecology, 128(3), 447-455. doi: 10.1097/AOG.0000000000001556

MacDorman, M., & Declercq, E. (2018). The failure of United States maternal mortality reporting and its impact on women's lives. Birth: Issues in Perinatal Care, 45(2), 105-108. https://doi.org/10.1111/birt.12333

Martin, N. (2018, February 22). A larger role for midwives could improve deficient U.S. care for mothers and babies.
ProPublica. https://www.propublica.org/article/midwives-study-maternal-neonatal-care

Matthews, C. J. (2016). A systems intervention for the transition of postpartum women to primary care. Doctor of Nursing Practice, 5. Retrieved from https://athenaeum.uiw.edu/cgi/viewcontent.cgi?article=1005&context=uiw_dnp

Moaddab, A., Dildy, G. A., Brown, H. L., Bateni, Z. H., Belfort, M. A., Sangi-Haghpeykar, H., & Clark, S. L. (2018). Health care disparity and pregnancy-related mortality in the United States, 2005-2014. Obstetrics & Gynecology, 131(4), 707-712. doi: 10.1097/AOG.0000000000002534

Vedam, S., Stoll, K., MacDorman, M., Declercq, E., Cramer, R., Cheyney, M., et al. (2018). Mapping integration of midwives across the United States: Impact on access, equity, and outcomes. PLoS ONE, 13(2). https://doi.org/10.1371/journal.pone.0192523

World Health Organization. (2020). Maternal mortality ratio (per 100 000 live births). https://www.who.int/healthinfo/statistics/indmaternalmortality/en/
The Physiological Impacts of Nicotine Use
BY ANAHITA KODALI '23

Cover: During the late 1800s and early 1900s, there was a boom of antismoking campaigns. In fact, in 1905, Indiana, Nebraska, and Wisconsin passed cigarette prohibition laws. After the laws passed, newspapers would publish ads like these to convince other states to follow suit, and indeed, Arkansas, Kansas, Minnesota, South Dakota, and Washington also passed laws prohibiting cigarettes in the following years. However, the laws were easy to evade, and cigarettes were popular with soldiers, so by the 1930s the antismoking laws had been repealed nationwide and the antismoking campaigns were waning (U.S. Department of Health and Human Services). Source: Wikimedia Commons
Section 1: Levels of Nicotine-Containing Substance Use in the US

Smoking became popular in the US after World War II, which is also around the time that physicians and researchers began documenting the health impacts of cigarettes. In 1964, the Surgeon General released a national report consolidating over 15 years' worth of research on cigarette smoking's negative effects. In doing so, he started a revolutionary movement in the US to limit cigarette use (U.S. Department of Health and Human Services). In 1965, the government began a nationwide antismoking campaign and also started to track national cigarette smoking statistics. Federal efforts have been successful: between 1965 and 2017, the percentage of current cigarette smokers in the US dropped 67% (“Cigarette Smoking Among U.S. Adults”). Despite the fact that cigarette smoking amongst adults has been declining over the past several decades, there are still a significant number of smokers in the US. In 2017, the CDC reported that about 38 million American adults are current smokers (with current smokers defined as people who smoke every day or some days), showing that smoking is still a public health issue (“Smoking is down”). Along
the same lines, in recent years researchers have found that cigarette use is still the most significant cause of preventable and premature death in the US; in 2014, for example, 480,000 deaths in the US were caused by cigarette smoking (Bonnie, Stratton, Kwan 2015). In addition to the sustained prevalence of tobacco use, vaping (the use of e-cigarettes or similar devices) in the US has spiked, especially amongst high-school students. Despite being promoted as a safer alternative to smoking and as a way to quit cigarettes, e-cigarettes carry several dangers that are made more alarming by their popularity amongst young adults and children, which now exceeds that of their cigarette predecessor. In 2014, about 17% of 12th-grade students reported vaping; by 2019, that number had shot up to about 45% (Schraufnagel 2015; “Tobacco/Nicotine and Vaping”). While individual e-cigarette devices may be less harmful than cigarettes, the e-cigarette market is still largely unregulated, and the long-term effects of e-cigarettes are not fully understood (Schraufnagel 2015). With e-cigarette use on the rise and cigarette use still widespread in the US, it is important to fully understand the impacts that use of these devices and
chemicals have on the body. The inhalants and chemicals within both cigarettes and e-cigarettes can be incredibly harmful. Smoking, with either cigarettes or e-cigarettes, causes poor blood circulation, reduces the flow of saliva, dulls the sense of taste, can lead to several types of cancer, and increases the risk of coronary heart disease (see Figure 1) (Gometz 2011). Potentially more dangerous than the other chemicals, however, is the presence of nicotine. Nicotine is a drug that acts as both a stimulant and a sedative. While vaping is considered healthier than the use of traditional tobacco products, studies have shown that nicotine still poses several health hazards, including increased risk of certain disorders, decreased immune response, and impacts on reproductive health. Additionally, despite not being considered a carcinogen (a cancer-causing agent), nicotine can affect cellular processes that lead to cancer, including cell proliferation, apoptosis, and DNA mutation. Furthermore, it can affect tumor metastasis and lower the effectiveness of chemotherapies and radiological treatments (Mishra et al. 2015). Since nicotine is a highly addictive drug with the potential to be carcinogenic, a better understanding of the physiological impacts that nicotine has on both the body and the brain is key to developing effective therapies against nicotine addiction. This paper will discuss the mechanisms by which nicotine is absorbed by the body and causes addiction. It then goes on to discuss some of the body systems most significantly impacted by nicotine, including the cardiovascular system, the metabolic
system, the endocrine system, and the immune system. Although this paper largely focuses on the effects of nicotine, it is important to note that when studying cigarettes and vapes, many of the other chemicals work in concert with and propagate nicotine’s effects.
Section 2: Mechanisms of Nicotine Absorption and Addiction

There are several non-traditional mechanisms for nicotine intake, including nicotine patches, gum, and injections. The most common method, as discussed in the introduction, is inhalation of smoke or vapor from cigarettes or e-cigarettes. When nicotine enters the lungs, it is quickly absorbed into the bloodstream and travels to the brain. In the brain, it binds to ligand-gated ion channels known as nicotinic cholinergic receptors (nAChRs) (Benowitz 2010). These receptors are composed of several subunits, with α4 and α5 being the most critical for the mediation of nicotine dependence. These receptors were implicated in nicotine dependence by studies in mice showing that a single-nucleotide point mutation in the α4 subunit causes hypersensitivity to nicotine and makes mice more responsive to nicotine as a reward (Benowitz 2009). Gene variants in the α5 subunit also changed nicotine responsiveness in mice, with some variants heightening sensitivity while others decreased it (Benowitz 2010). The other subunits in the receptors help mediate nicotine's different effects on the body, including its cardiovascular impacts, and also help moderate the sensitivity of the receptors themselves (Benowitz 2009).
Figure 1: Due to the harmful chemicals found in cigarettes, several states (pictured here in red) have strict laws banning smoking in workplaces, restaurants, and bars. Others (pictured here in orange) ban smoking in two of the three locations. States pictured in tan have either limited or no restrictions on smoking in these areas. Image Source: Wikimedia Commons

When nAChRs are stimulated, the brain releases
several hormones and neurotransmitters. Perhaps the most critical hormonal effect in addiction responses is the release of dopamine. Dopamine is the body's “feel-good” hormone and has a vital role in the reward pathways of the brain. It is widely implicated in addiction because as dopamine levels spike, the user experiences an intense drug-related euphoria (Olguín et al. 2015). Nicotine causes release of dopamine into the mesolimbic area, the corpus striatum, and the frontal cortex, portions of the brain that help control reward responses and cognitive processes (Benowitz 2010; Alcaro, Huber, Panksepp 2008; Báez-Mendoza, Schultz 2013; Kennerley, Walton 2011). Other molecules released include, but are not limited to, serotonin, norepinephrine, and acetylcholine (Quattrocki, Yurgelun-Todd 2000).
When nicotine is used frequently, the brain adapts and develops a tolerance to it: nAChRs become less sensitive to nicotine, and less dopamine is released. In addition, the number of nAChRs in the brain goes up with prolonged nicotine use, which is thought to reflect upregulation in response to receptor desensitization. Both desensitization to nicotine and the increased number of nAChRs contribute to the withdrawal symptoms that chronic nicotine users experience when they stop using nicotine for long periods of time (Benowitz 2008). It is important to note that recent studies have found that tolerance is not closely associated with dependence on tobacco (Perkins 2002). This suggests that more research needs to be done to understand exactly why nicotine and nicotine-containing devices are so addictive, as factors besides dopamine release may contribute significantly.
Section 3: Nicotine's Impacts on the Cardiovascular System

When ingested, nicotine immediately impacts the heart. In tests with nicotine nasal spray, chewing gum, and intravenous injection, nicotine raised the heart rate by 10 to 15 beats per minute and increased blood pressure by 5 to 10 mmHg. Nicotine also increases myocardial contractility, the heart's ability to forcefully contract (Benowitz, Gourlay 1997; “Heart Muscle Contractility”). In certain parts of the body, like the skin and fingers, nicotine causes blood vessels to constrict (“vasoconstriction”) and therefore reduces blood flow (Benowitz, Gourlay 1997). On the other hand, other regions, like skeletal muscle, experience greater blood flow because blood vessels dilate (“vasodilation”). For the
coronary arteries, it has been suggested that nicotine causes a constriction that does not come immediately. First, the heart increases blood output to the large coronary vessels, potentially due to the increases in heart rate and myocardial contractility. After this initial increase, however, blood flow decreases, apparently due to vasoconstriction of the coronary arteries, though the exact biochemical processes are still being studied (Benowitz, Gourlay 1997). Nicotine also has long-term effects on the heart. Nicotine can contribute to chronic hypertension and can cause irregular heart rhythms. Nicotine decreases the ventricular fibrillation threshold, the minimum amount of electricity needed to cause the heart to fibrillate, thereby promoting ventricular fibrillation (Benowitz, Burbank 2016; Valentinuzzi, del Valle, da Costa 1984). Ventricular fibrillation refers to fibrillations that occur in the lower two chambers of the heart. In addition, repeated nicotine use contributes to atrial fibrosis (scarring of the atrial portion of the heart), which increases the risk of atrial fibrillation, the irregular fibrillations occurring in the upper two chambers of the heart (see Figure 2) (Benowitz, Burbank 2016). In particular, ventricular fibrillation increases the risk of sudden cardiac death, and both ventricular and atrial fibrillation can increase the risk of heart damage and other heart diseases (Koplan, Stevenson 2009).
Section 4: Nicotine's Impacts on the Metabolic System

Nicotine use has been associated with significant weight loss. Body weight is the consequence of the body's equilibrium between energy metabolism and caloric intake, as well as several other factors (including genetics, age, and race), and can be easily manipulated using hormones (Galgani, Ravussin 2008; Institute of Medicine (US)). First, nicotine increases the body's metabolism by promoting release of metabolic hormones like norepinephrine and epinephrine, both of which help control weight by promoting thermogenesis, the body's process of turning fat into heat (Audrain-McGovern, Benowitz 2011; Ste et al. 2005). Norepinephrine and epinephrine also increase heart rate and myocardial contractility, increasing the body's energy expenditure and further promoting fat burning. On average, metabolism increases by 10% for up to 24 hours after ingesting the nicotine equivalent to
smoking just one cigarette. While this increase may seem insignificant, over the course of one year and with no changes to caloric intake, frequent smokers can lose about 22 pounds (Audrain-McGovern, Benowitz 2011). On top of increasing metabolism, nicotine suppresses appetite. The effects of nicotine on the hormones that suppress eating are complex and still not completely understood, but it is clear that in the short term nicotine activates neural systems and pathways that decrease appetite and increase metabolism. The long-term impacts of nicotine use on appetite are less clear, since some changes are consistent with changes that increase appetite and decrease metabolism. Generally speaking, however, drugs that increase levels of norepinephrine, serotonin, and dopamine (including nicotine) facilitate weight loss (Audrain-McGovern, Benowitz 2011). Finally, nicotine alters the body's weight distribution. Smokers have been found to have a higher percentage of visceral fat (fat in the abdomen) than nonsmokers; this relation is thought to be related to insulin, as insulin can affect both male and female sex hormones that affect weight (Audrain-McGovern, Benowitz 2011).
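A quick back-of-envelope calculation shows that the 10% figure and the ~22-pound figure quoted above are roughly consistent. The sketch below assumes a baseline energy expenditure of about 2,000 kcal per day and the common approximation of roughly 3,500 kcal per pound of body fat; neither number comes from the cited study, and both are textbook-level simplifications:

```python
# Rough plausibility check of the ~22 lb/year figure quoted above.
# Assumed values (not from Audrain-McGovern & Benowitz 2011):
# ~2,000 kcal/day baseline expenditure and ~3,500 kcal per pound of fat.
baseline_kcal_per_day = 2000
metabolic_increase = 0.10     # sustained 10% rise in metabolism
kcal_per_pound_fat = 3500

extra_kcal_per_year = baseline_kcal_per_day * metabolic_increase * 365
pounds_per_year = extra_kcal_per_year / kcal_per_pound_fat
print(f"{pounds_per_year:.0f} lb/year")  # ~21 lb, close to the cited ~22 lb
```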
Section 5: Nicotine's Impact on the Endocrine System

The endocrine system is the body's hormonal messaging system, and there are several more hormones that are impacted by nicotine
than those previously discussed in this paper (including serotonin, norepinephrine, acetylcholine, and epinephrine). Although many of the endocrine changes are related to generalized cigarette use rather than isolated nicotine use, isolated nicotine still has significant impacts on hormones. For example, the nicotine content in cigarettes has been directly associated with short-term increases in blood plasma levels of pituitary hormones, including prolactin, adrenocorticotropin, growth hormone, and arginine vasopressin (Kapoor, Jones 2005). Specifically, in male chronic smokers, cortisol, prolactin, and growth hormone levels are significantly raised above baseline for at least an hour after nicotine is absorbed from cigarette use (Wilkins et al. 1982).
Figure 2: The heart is split into four main chambers: the left atrium, the right atrium, the left ventricle, and the right ventricle. The right atrium receives oxygen-poor blood from the body and pumps it into the right ventricle, which in turn sends it to the lungs to be oxygenated. The oxygen-rich blood enters the left atrium, which then sends it to the left ventricle, which pumps blood back into the body. While damage to any of the four chambers is serious, ventricular damage (particularly left ventricular damage) is considered more harmful than atrial damage because the ventricles are the parts of the heart that receive and send blood from the body. Image Source: Wikimedia Commons

Adrenal hormones, which help regulate the kidneys, are also impacted. Nicotine generally raises levels of adrenocortical activity, likely caused by nicotine-induced increases in sympathetic nervous system and catecholamine activity (catecholamines are aromatic amines such as dopamine and norepinephrine) (Kershbaum et al. 1968). Nicotine causes an increase in vasopressin levels, which can also lead to increases in adrenocorticotropin and cortisol (Kapoor, Jones 2005).
Nicotine also affects sex hormones in both men and women. In women, nicotine reduces circulating estrogen levels and inhibits estrogen signaling in the brain (Raval 2011). In men, nicotine reduces fertility and decreases testosterone levels, the latter of which returns to normal after stopping nicotine intake (Oyeyipo, Raji, Bolarinwa 2013).
Section 6: Nicotine's Impact on the Immune System

There has been some research into the direct impacts of nicotine on the immune system, and many of the findings have been contradictory, suggesting that nicotine's impact on the immune system is complex and highly variable. In general, nicotine weakens and slows the body's inflammatory response (Piao et al. 2009). It has also been found to impair immune response through impacts on T-cell behavior. T-cells are critical to the immune response, as they are at the core of the body's adaptive immunity. Nicotine was shown to inhibit T-cell proliferation after exposure to disease molecules. This is because proliferation is a direct response to T-cell receptor ligation with antibody and
an increase in calcium concentration; nicotine inhibits this increase in calcium levels, thus impairing T-cell proliferation (Kalra et al. 2004). In addition, nicotine can affect the function of certain lymphocytes in vitro: natural-killer cells were less effective at lysing foreign cells when cultured with nicotine. In humans, increased T-lymphocyte suppressor activity was noted, and, in mice, nicotine altered the structure of some T-cell surface proteins (Mcallister-Sistilli et al. 1998). It is important to note that these same effects have not been found in vivo. Where the controversy sets in is the effect of nicotine on the body's autoimmune response. Several studies have found that smoking in general suppresses humoral autoimmunity in inflammatory diseases like systemic lupus erythematosus and ulcerative colitis. However, other studies have shown that smoking may worsen other autoimmune diseases like Crohn's disease and multiple sclerosis. Nicotine itself has been found to weaken the autoimmune response, adding to the complexity of understanding its impact on the immune system (Piao et al. 2009). To define nicotine's role in autoimmune diseases more conclusively, more studies need to be completed.
Section 7: Conclusions
This review covers four bodily systems that nicotine impacts: cardiovascular, metabolic, endocrine, and immune. It is important to note that nicotine affects virtually all body systems. In addition, the harmful effects of tobacco and e-cigarette use go far beyond the negative effects of isolated nicotine, since both deliver thousands of other chemical irritants. It is important to recognize that even with the breadth of research on nicotine, there are gaps in the existing literature, particularly regarding the effects of nicotine on the endocrine and immune systems. Therefore, it is critical to continue researching the physiological effects of nicotine on the body, not just to understand the negative impacts of smoking and e-cigarette use, but also to discover novel therapeutic avenues.
References

Alcaro, A., Huber, R., & Panksepp, J. (2008). Behavioral Functions
of the Mesolimbic Dopaminergic System: an Affective Neuroethological Perspective. Brain Research Reviews, 56(2), 283-321. doi: 10.1016/j.brainresrev.2007.07.014

Audrain-McGovern, J., & Benowitz, N. L. (2011). Cigarette Smoking, Nicotine, and Body Weight. Clinical Pharmacology & Therapeutics, 90(1), 164-168. doi: 10.1038/clpt.2011.105

Benowitz, N. L. (2008). Neurobiology of Nicotine Addiction: Implications for Smoking Cessation Treatment. The American Journal of Medicine, 121(4), S3-S10. doi: 10.1016/j.amjmed.2008.01.015

Benowitz, N. L. (2009). Pharmacology of Nicotine: Addiction, Smoking-Induced Disease, and Therapeutics. Annual Review of Pharmacology and Toxicology, 49, 57-71. doi: 10.1146/annurev.pharmtox.48.113006.094742

Benowitz, N. L. (2010). Nicotine Addiction. The New England Journal of Medicine, 362(24), 2295-2303. doi: 10.1056/NEJMra0809890

Benowitz, N. L., & Burbank, A. D. (2016). Cardiovascular Toxicity of Nicotine: Implications for Electronic Cigarette Use. Trends in Cardiovascular Medicine, 26(6), 515-523. doi: 10.1016/j.tcm.2016.03.001

Benowitz, N. L., & Gourlay, S. G. (1997). Cardiovascular Toxicity of Nicotine: Implications for Nicotine Replacement Therapy. Journal of the American College of Cardiology, 29(7), 1422-1431. doi: 10.1016/S0735-1097(97)00079-X

Bonnie, R. J., Stratton, K., & Kwan, L. Y. (2015, July 23). Public Health Implications of Raising the Minimum Age of Legal Access to Tobacco Products. Washington (DC): National Academies Press (US). Available from: https://www.ncbi.nlm.nih.gov/books/NBK310413/

Báez-Mendoza, R., & Schultz, W. (2013). The role of the striatum in social behavior. Frontiers in Neuroscience, 7. doi: 10.3389/fnins.2013.00233

Cigarette Smoking Among U.S. Adults Lowest Ever Recorded: 14% in 2017. (2018, November 8). Centers for Disease Control and Prevention. Retrieved from https://www.cdc.gov/media/releases/2018/p1108-cigarette-smoking-adults.html

Galgani, J., & Ravussin, E. (2008). Energy metabolism, fuel selection and body weight regulation. International Journal of Obesity, 32(Suppl 7), S109-S119. doi: 10.1038/ijo.2008.246

Gometz, E. D. (2011). Health Effects of Smoking and the Benefits of Quitting. AMA Journal of Ethics, 13(1), 31-35. doi: 10.1001/virtualmentor.2011.13.1.cprl1-1101

Heart Muscle Contractility. (n.d.). Science Direct. Retrieved from https://www.sciencedirect.com/topics/medicine-and-dentistry/heart-muscle-contractility

Institute of Medicine (US) Subcommittee on Military Weight Management. (2004). Weight Management: State of the Science and Opportunities for Military Programs. Washington (DC): National Academies Press (US). 3, Factors That Influence Body Weight. Available from: https://www.ncbi.nlm.nih.gov/books/NBK221834/

Kalra, R., Singh, S. P., Pena-Philippides, J. C., Langley, R. J., Razani-Boroujerdi, S., & Sopori, M. L. (2004). Immunosuppressive and Anti-Inflammatory
Effects of Nicotine Administered by Patch in an Animal Model. Clinical Diagnostic Laboratory Immunology, 11(3), 563-568. doi: 10.1128/cdli.11.3.563-568.2004

Kapoor, D., & Jones, T. J. (2005). Smoking and hormones in health and endocrine disorders. European Journal of Endocrinology, 152(4), 491-499. doi: 10.1530/eje.1.01867

Kennerley, S. W., & Walton, M. E. (2011). Decision Making and Reward in Frontal Cortex. Behavioral Neuroscience, 125(3), 297-317. doi: 10.1037/a0023575

Kershbaum, A., Pappajohn, D. J., Bellet, S., Hirabayashi, M., & Shafiiha, H. (1968). Effect of Smoking and Nicotine on Adrenocortical Secretion. JAMA, 203(4), 275-278. doi: 10.1001/jama.1968.03140040027006

Koplan, B. A., & Stevenson, W. P. (2009). Ventricular Tachycardia and Sudden Cardiac Death. Mayo Clinic Proceedings, 84(3), 289-297. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2664600/

Mcallister-Sistilli, C. G., Caggiula, A. R., Knopf, S., Rose, C. A., Miller, A. L., & Donny, E. C. (1998). The effects of nicotine on the immune system. Psychoneuroendocrinology, 23(2), 175-187. doi: 10.1016/s0306-4530(97)00080-2

Mishra, A., Chaturvedi, P., Datta, S., Sinukumar, S., Joshi, P., & Garg, A. (2015). Harmful effects of nicotine. Indian Journal of Medical and Paediatric Oncology, 36(1), 24-31. doi: 10.4103/0971-5851.151771

Olguín, H. J., Guzmán, D. C., García, E. H., & Mejía, G. B. (2015). The Role of Dopamine and Its Dysfunction as a Consequence of Oxidative Stress. Oxidative Medicine and Cellular Longevity, 2016. doi: 10.1155/2016/9730467
Smoking is down, but almost 38 million American adults still smoke. (2018, January 18). Centers for Disease Control and Prevention. Retrieved from https://www.cdc.gov/media/releases/2018/p0118-smoking-rates-declining.html

Ste, M. L., Luquet, S., Curtis, W., & Palmiter, R. D. (2005). Norepinephrine- and epinephrine-deficient mice gain weight normally on a high-fat diet. Obesity Research, 13(9), 1518-1522. doi: 10.1038/oby.2005.185

Tobacco/Nicotine and Vaping. (n.d.). National Institute on Drug Abuse. Retrieved from https://www.drugabuse.gov/drugs-abuse/tobacco-nicotine-vaping

U.S. Department of Health and Human Services. (2000). Reducing Tobacco Use: A Report of the Surgeon General. Atlanta, Georgia: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion, Office on Smoking and Health.

Valentinuzzi, M. E., del Valle, R. E., & da Costa, C. P. (1984). Ventricular fibrillation threshold in the three-toed sloth (Bradypus tridactylus). Acta Physiologica et Pharmacologica Latinoamericana, 34(3), 313-322. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/6241785

Wilkins, J. N., Carlson, H. E., Van Vunakis, H., et al. (1982). Nicotine from cigarette smoking increases circulating levels of cortisol, growth hormone, and prolactin in male chronic smokers. Psychopharmacology, 78, 305-308. https://doi.org/10.1007/BF00433730
Oyeyipo, I. P., Raji, Y., & Bolarinwa, A. F. (2013). Nicotine alters male reproductive hormones in male albino rats: The role of cessation. Journal of Human Reproductive Sciences, 6(1), 40-44. doi: 10.4103/0974-1208.112380

Perkins, K. A. (2002). Chronic tolerance to nicotine in humans and its relationship to tobacco dependence. Nicotine & Tobacco Research, 4(4), 405-422. doi: 10.1080/1462220021000018425

Piao, W. H., Campagnolo, D., Dayao, C., Lukas, R. J., Wu, J., & Shi, F.-D. (2009). Nicotine and inflammatory neurological disorders. Acta Pharmacologica Sinica, 30(6), 715-722. doi: 10.1038/aps.2009.67

Quattrocki, E., Baird, A., & Yurgelun-Todd, D. (2000). Biological aspects of the link between smoking and depression. Harvard Review of Psychiatry, 8(3), 99-110. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/10973935

Raval, A. P. (2011). Nicotine Addiction Causes Unique Detrimental Effects on Women's Brains. Journal of Addictive Diseases, 30(2), 149-158. doi: 10.1080/10550887.2011.554782

Schraufnagel, D. E. (2015). Electronic Cigarettes: Vulnerability of Youth. Pediatric Allergy, Immunology, and Pulmonology, 28(1), 2-6. doi: 10.1089/ped.2015.0490
Electrochemistry as a Tool for Understanding the Brain

BY ANNA KÖLLN '22

Cover: A neuron generating and receiving neurochemical signals. Source: Flickr, National Institutes of Health Image Gallery
The Brain and the Significance of Dopamine

The brain is one of the most challenging and vast “final frontiers” of science. Unlike in many other fields of science, an overarching theory of brain function has yet to be derived, due to the brain's overwhelming specificity and complexity. The human brain contains an estimated 100 trillion synapses (Alivisatos et al., 2013), which convey neurochemical signals at concentrations in the micro- to nanomolar range on the second to millisecond timescale (Ferapontova, 2017). Because of this lack of understanding, it is particularly difficult to detect and analyze faults in brain function, especially in diagnosing neurological diseases. High-resolution instruments for in vivo measurements of the brain are therefore critical to scientists' ability to fully model its systems.

Neurotransmitters are constantly shaping humans' emotions and perceptions of reality. Over one hundred chemicals have been identified as neurotransmitters in the human brain (Alivisatos et al., 2013). Disruption of the delicate balance of these neurochemicals can lead to a myriad of chronic conditions, including depression, Alzheimer's Disease, Parkinson's Disease, and schizophrenia. Catecholamines, a class of small-molecule neurotransmitters including dopamine (DA) and epinephrine, have received a great deal of attention due to their roles in a vast range of neurological conditions. As suggested by
their name, catecholamines are structurally characterized by the presence of a catechol, or 1,2-dihydroxybenzene, group and an amine group [Figure 1]. In the brain, catecholamines are found at relatively low concentrations; DA concentrations, for instance, typically fluctuate within the 10 nanomolar to 1 micromolar range (Ferapontova, 2017). DA monitoring exhibits enormous potential to aid both the diagnosis and treatment of pathophysiology. For example, in schizophrenic patients, DA activity has been shown to be abnormally high. It is thought that this overrun system causes schizophrenic individuals to struggle to assign importance to their surroundings, distorting their perception of reality (Grace, 2016). In contrast, DA activity in those suffering from depression and Alzheimer's tends to be lower than normal (Zhang et al., 2004). Clearly, it is critical that the activity of neurotransmitters, and DA in particular, be thoroughly understood and monitored in order to accurately diagnose and treat these neurological conditions. Alterations in the DA system come in many forms and can progress either rapidly or over a lifetime (Grace, 2016). It is of utmost importance that scientists develop catecholamine detection methods that capture nuanced neurotransmitter activity accurately, efficiently, and recurrently.
Current methods for measuring brain function lack the resolution and multidimensionality needed to understand and model the pathways of the brain. Imaging techniques such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), for instance, offer a snapshot of the entire brain at a particular moment. While vastly helpful for understanding the brain at the macroscale, these techniques do not achieve the spatial and temporal resolution necessary to understand the highly complex pathways of thought and pathophysiology (Alivisatos et al., 2013). Microdialysis, another sampling technique, which measures analytes in the extracellular fluid of the brain, is invasive and unable to detect neurochemicals in real time (Ferapontova, 2017). In order to acquire holistic data on the brain, detection methods must be fast and capable of collecting thousands to millions of samples.

The Application of Electrochemistry to in vivo Detection

The use of in vivo monitoring presents a solution to many of the barriers surrounding the detection of neurotransmitters. Sampling the chemical activity of the brain in real time could provide tremendous insight into the minutiae of neurotransmission and allow for more accurate diagnosis and more effective treatment of neurological disorders. For example, this may be useful in better understanding Parkinson's Disease, a condition characterized by abnormally low levels of DA that typically affects the elderly. The cause of this disease is still unknown, but it likely involves complicated long-term disruption of neurotransmitter pathways. Currently, the most effective treatment for Parkinson's is levodopa (L-DOPA), a DA analog which converts to DA in the brain. L-DOPA is generally administered orally in daily doses on the order of 100 milligrams (Hermanowicz, 2007). However, negative side effects have been observed in patients who were administered too much L-DOPA (Dorszewska et al., 2014). In vivo observation of the chemicals at play would allow for the fine-tuning of L-DOPA treatment—wherein doctors could select the optimal dose for individual patients—and for the monitoring of the long-term effects of Parkinson's (Ferapontova, 2017).
Electrochemistry is currently being studied as a potential method of in vivo detection. In an electrochemical sensor, two or three electrodes act as interfaces between an electrical circuit and a solution containing relevant chemical analytes, such as DA [Figure 2]. Varying electric potentials are applied to the electrodes in the electrochemical cell, and as chemicals react at the surface of the working electrode (WE), a quantifiable current flows through the circuit. Each analyte in solution will undergo an oxidation and reduction reaction at the surface of the WE at a unique potential, known respectively as its oxidation potential and reduction potential. Thus, the analytes are detected by observing spikes in the current flowing through the electrode system (Bard & Faulkner, 2001) [Figure 3]. In the context of brain implantation, electrodes have the potential to be fine-tuned in shape and miniaturized for minimally invasive sensing.
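To make this detection principle concrete, the following minimal Python sketch simulates a voltammogram-style current trace and assigns its dominant peak to the analyte with the nearest known oxidation potential. Both the trace and the reference potentials are invented for illustration; real oxidation potentials vary with the electrode material and the reference electrode used.

    import numpy as np

    # Applied potential sweep (volts) and a synthetic current response with a
    # Gaussian oxidation peak near +0.20 V (values invented for illustration).
    potential = np.linspace(-0.2, 0.8, 500)
    current = 0.05 * potential + 2.0 * np.exp(-((potential - 0.20) / 0.04) ** 2)

    # The analyte is identified by the potential at which the current spikes.
    peak_potential = potential[np.argmax(current)]

    # Hypothetical reference oxidation potentials (V); real values depend on
    # the electrode and reference system used.
    known_analytes = {"dopamine": 0.20, "ascorbic acid": 0.25, "uric acid": 0.35}
    match = min(known_analytes, key=lambda name: abs(known_analytes[name] - peak_potential))
    print(f"Peak at {peak_potential:+.2f} V; closest known analyte: {match}")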
Figure 1: Structure of the catecholamine dopamine. Source: Created by author in Chem3D
Figure 2: A diagram of a three-electrode electrochemical cell in which (1), (2), and (3) respectively represent the working, counter, and reference electrodes. (A) represents an ammeter, which measures current, and (V) represents a voltmeter, which measures the cell potential. Source: Wikimedia Commons

Electrochemical sensing offers a variety of advantages as a detection method. First, electrochemical measurements can be taken within seconds and yield immediate results, allowing data to be collected rapidly and frequently (Ferapontova, 2017). This is especially important because changes in DA levels occur on a second-to-millisecond timescale (Baranwal & Chandra, 2018). Additionally, because chemicals undergo redox reactions at unique potentials, electrochemistry can be used to selectively identify specific analytes. Various electrodes have achieved exceedingly low limits of detection for neurochemicals, on the order of 100 nanomolar down to 10 nanomolar (Sajid et al., 2016). This is a critical advantage, as many neurotransmitters, DA among them, exist at nanomolar concentrations physiologically. Finally, electrochemistry holds increasing potential for practical application because it uses relatively low-cost instrumentation (Baranwal & Chandra, 2018).
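The limit-of-detection figures above come from calibration experiments. As a rough illustration of that workflow, the sketch below fits a linear calibration of peak current against concentration and applies the common 3σ/slope estimate of the detection limit; all numbers are hypothetical.

    import numpy as np

    # Hypothetical calibration standards (nM) and measured peak currents (nA).
    conc_nM = np.array([50.0, 100.0, 250.0, 500.0, 1000.0])
    peak_nA = np.array([0.9, 1.8, 4.6, 9.1, 18.3])

    # Linear calibration: peak current = slope * concentration + intercept.
    slope, intercept = np.polyfit(conc_nM, peak_nA, 1)

    # Common estimate of the limit of detection: 3 x (blank noise) / sensitivity.
    sd_blank_nA = 0.06  # hypothetical standard deviation of the blank signal
    lod_nM = 3 * sd_blank_nA / slope
    print(f"Sensitivity: {slope:.4f} nA/nM; estimated LOD: {lod_nM:.0f} nM")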
The Challenges of Electrochemistry
Although electrochemistry is an extremely promising method of neurochemical detection, several fundamental challenges must be overcome before it can be implemented. One of the biggest problems with DA and other catecholamines is that their oxidation potentials are very similar to one another and to those of other chemicals commonly found in biological fluids, such as ascorbic acid (AA) and uric acid (UA). This similarity is exacerbated by the fact that AA and UA are present at concentrations several orders of magnitude higher than many relevant neurotransmitters (Sajid et al., 2016). The result is a muddled electrochemical signal from which it is functionally impossible to determine which analytes are present. Another issue encountered during in vivo experiments is electrode fouling, which occurs when a coating forms over the working electrode during data collection. The layer can be formed either by the detected analyte itself or by other chemical species in solution, and it reduces the sensitivity of the WE because the relevant analyte is blocked from interacting with the electrode surface. This problem often arises in DA sensing, as the oxidized species of DA polymerizes and adheres strongly to the electrode (Peltola et al., 2018).
Figure 3: Schematic of the process of the electrochemical detection of dopamine. Source: Adapted from Wikimedia Commons in ChemDraw and Microsoft PowerPoint
Advances in Materials Chemistry

The field of materials chemistry focuses on the synthesis, characterization, and application of novel molecules and chemical structures. Scientists have recently made great strides in nanochemistry, which, as the name suggests, deals with chemical structures on the nanoscale. In the context of electrochemical sensing, materials chemistry investigates the composition and structure of the electrodes. Nanomaterials pose a particularly promising opportunity for the optimization of electrodes, as neurotransmission itself occurs on the micro- to nanoscale (Alivisatos et al., 2013).

To address the limitations of electrochemistry and improve performance and biocompatibility, many chemical materials are currently being investigated. These materials constitute the surface of the WE and participate directly in redox chemistry with the detectable analytes. The current standard electrode material is carbon, in forms such as glassy carbon or carbon fiber. These electrodes function well because they are good conductors, relatively sensitive, biocompatible, and composed of abundant, inexpensive material (Cao et al., 2019). However, the previously discussed issues surrounding selectivity and electrode fouling limit the capability of carbon electrodes in vivo. Researchers are therefore investigating chemical materials that are more sensitive, selective, and tunable, and less prone to fouling.

Metal-organic frameworks (MOFs) are a promising class of material being studied for their ability to detect neurotransmitters. Two-dimensional MOFs are composed of a plane of metal ions coordinated in a network of organic ligands. Their extended structure often results in a porous surface, which increases the functional surface area of the material. Because both the metal and the ligand can be interchanged, the materials are highly tunable for specific applications. Certain conductive MOFs are redox-active and solution-stable, making them well suited for electrochemical sensing. Recently published data on the MOF Ni3(HHTP)2 show that these materials have physiologically applicable limits of detection and are able to differentiate between DA, AA, and UA in a multianalyte solution (Ko et al., 2020). These results show promise for the use of such materials in in vivo detection [Figure 4].
Developments in Electrochemical Probe Devices

Another important aspect of the implementation of electrochemical sensing is the development of electrochemical devices engineered for in vivo application. A potential route for chemical detection is implantable devices, which would allow for long-term monitoring of the neurochemical systems of patients (Hejazi et al., 2020). Miniaturization is crucial for the electrodes of these devices to be minimally invasive in the body. Detection via an array of electrodes would allow for the simultaneous collection of multiple data points, yielding a more holistic look at neurotransmitter activity (Cao et al., 2019).

The application of in vivo electrochemical detection is already under investigation in small animal brains, such as those of rats. In such experiments, a sensing device is attached to the skull and electrodes are inserted into relevant parts of the brain for neurotransmitter detection (Baranwal & Chandra, 2018). The less invasive these sensing devices are, the more practical they become, as the animal subjects are able to behave as normally as possible—an imperative consideration for monitoring brain function. To this end, researchers are miniaturizing sensing devices and utilizing wireless systems (Roberts & Sombers, 2018). With these developments, devices are approaching a level of sophistication at which they will be ready to study human systems, both ex vivo and in vivo.

Currently, probe-based electrochemical experiments are the most common. In these investigations, the electrodes are fabricated as extremities which reach into the region of detection. However, these devices could potentially be difficult to miniaturize and cohesively arrange into an array (Wolfrum et al., 2016). Chip-based technologies are a promising alternative to address these issues. A single chip could contain a large number of microscale electrodes and arrange them in crossed or stacked patterns to maximize the use of space. In addition, new printing technologies could provide a way to deposit materials onto such chips (Wolfrum et al., 2016). Although these electrode configurations will require further study, they could prove practically applicable as implantable devices for in vivo monitoring.
Figure 4: Schematic representing how metal-organic framework materials are used to modify glassy carbon electrodes and detect catecholamines using redox chemistry. Source: Reprinted with permission from Ko, M., Mendecki, L., Eagleton, A. M., Durbin, C. G., Stolz, R. M., Meng, Z., & Mirica, K. A. (2020). Employing Conductive Metal-Organic Frameworks for Voltammetric Detection of Neurochemicals. Journal of the American Chemical Society.
Conclusions

The ability to monitor neurochemicals in real time will vastly expand scientists' understanding of the brain. This knowledge will contribute to improving the diagnosis and treatment of neurological conditions that affect a considerable portion of the human population. Electrochemical sensing is a fast-growing approach to detecting neurotransmitters in vivo. Although electrochemical methods currently face limitations, many chemical materials are under investigation to enhance their performance. In time, these materials will be applied to electrochemical devices that can be implanted and operated in vivo. With these cutting-edge tools, scientists will be significantly closer to modelling, and ultimately deriving general theories of, brain function.
References

Alivisatos, A. P., Andrews, A. M., Boyden, E. S., Chun, M., Church, G. M., Deisseroth, K., Donoghue, J. P., Fraser, S. E., Lippincott-Schwartz, J., Looger, L. L., Masmanidis, S., McEuen, P. L., Nurmikko, A. V., Park, H., Peterka, D. S., Reid, C., Roukes, M. L., Scherer, A., Schnitzer, M., … Zhuang, X. (2013). Nanotools for Neuroscience and Brain Activity Mapping. ACS Nano, 7(3), 1850–1866. https://doi.org/10.1021/nn4012847

Baranwal, A., & Chandra, P. (2018). Clinical implications and electrochemical biosensing of monoamine neurotransmitters in body fluids, in vitro, in vivo, and ex vivo models. Biosensors and Bioelectronics, 121, 137–152. https://doi.org/10.1016/j.bios.2018.09.002

Bard, A. J., & Faulkner, L. R. (2001). Electrochemical Methods: Fundamentals and Applications (2nd ed.). John Wiley & Sons.

BruceBlaus. (2017). An illustration showing the dopamine pathway. Own work. https://commons.wikimedia.org/wiki/File:Dopamine_Pathway.png

Cao, Q., Puthongkham, P., & Venton, B. J. (2019). Review: New insights into optimizing chemical and 3D surface structures of carbon electrodes for neurotransmitter detection. Analytical Methods, 11(3), 247–261. https://doi.org/10.1039/C8AY02472C

Chem461S16Group5. (2016). Shows how different a voltammogram can look at different scan rates. Own work. https://commons.wikimedia.org/wiki/File:Scan_rate_5.jpg

Dorszewska, J., Prendecki, M., Lianeri, M., & Kozubski, W. (2014). Molecular Effects of L-dopa Therapy in Parkinson's Disease. Current Genomics, 15(1), 11–17. https://doi.org/10.2174/1389202914666131210213042

Ferapontova, E. E. (2017). Electrochemical Analysis of Dopamine: Perspectives of Specific In Vivo Detection. Electrochimica Acta, 245, 664–671. https://doi.org/10.1016/j.electacta.2017.05.183

Grace, A. A. (2016). Dysregulation of the dopamine system in the pathophysiology of schizophrenia and depression. Nature Reviews Neuroscience, 17(8), 524–532. https://doi.org/10.1038/nrn.2016.57

Hejazi, M. A., Tong, W., Stacey, A., Soto-Breceda, A., Ibbotson, M. R., Yunzab, M., Maturana, M. I., Almasi, A., Jung, Y. J., Sun, S., Meffin, H., Fang, J., Stamp, M. E. M., Ganesan, K., Fox, K., Rifai, A., Nadarajah, A., Falahatdoost, S., Prawer, S., … Garrett, D. J. (2020). Hybrid diamond/carbon fiber microelectrodes enable multimodal electrical/chemical neural interfacing. Biomaterials, 230, 119648. https://doi.org/10.1016/j.biomaterials.2019.119648

Hermanowicz, N. (2007). Drug Therapy for Parkinson's Disease. Seminars in Neurology, 27(2), 97–105. https://doi.org/10.1055/s-2007-971177

Ko, M., Mendecki, L., Eagleton, A. M., Durbin, C. G., Stolz, R. M., Meng, Z., & Mirica, K. A. (2020). Employing Conductive Metal-Organic Frameworks for Voltammetric Detection of Neurochemicals. Journal of the American Chemical Society. https://doi.org/10.1021/jacs.9b13402

National Institutes of Health Image Gallery. (2016). Signaling in Neurons [Photo]. https://www.flickr.com/photos/nihgov/29408129610/

Peltola, E., Sainio, S., Holt, K. B., Palomäki, T., Koskinen, J., & Laurila, T. (2018). Electrochemical Fouling of Dopamine and Recovery of Carbon Electrodes. Analytical Chemistry, 90(2), 1408–1416. https://doi.org/10.1021/acs.analchem.7b04793

Rędzikowski, A. (2013). Three-electrode setup for measurement of potential. Own work, based on: https://commons.wikimedia.org/wiki/File:Three_electrode_setup.svg

Roberts, J. G., & Sombers, L. A. (2018). Fast-Scan Cyclic Voltammetry: Chemical Sensing in the Brain and Beyond. Analytical Chemistry, 90(1), 490–504. https://doi.org/10.1021/acs.analchem.7b04732

Sajid, M., Nazal, M. K., Mansha, M., Alsharaa, A., Jillani, S. M. S., & Basheer, C. (2016). Chemically modified electrodes for electrochemical detection of dopamine in the presence of uric acid and ascorbic acid: A review. TrAC Trends in Analytical Chemistry, 76, 15–29. https://doi.org/10.1016/j.trac.2015.09.006

Wolfrum, B., Kätelhön, E., Yakushenko, A., Krause, K. J., Adly, N., Hüske, M., & Rinklin, P. (2016). Nanoscale Electrochemical Sensor Arrays: Redox Cycling Amplification in Dual-Electrode Systems. Accounts of Chemical Research, 49(9), 2031–2040. https://doi.org/10.1021/acs.accounts.6b00333

Zhang, L., Zhou, F.-M., & Dani, J. A. (2004). Cholinergic Drugs for Alzheimer's Disease Enhance in Vitro Dopamine Release. Molecular Pharmacology, 66(3), 538–544. https://doi.org/10.1124/mol.104.000299
Personality Type as an Indicator of Treatment Suitability in Remote Psychotherapy
BY AUDREY HERRALD '23
Cover: Comparing modes of remote psychotherapy. Source: Unsplash; Creator: Al Hakiim

Abstract
Keywords: remote psychotherapy, telemental health, messaging-mediated, videochat-mediated, experience, personality type, motivation.
Remote psychotherapy is a form of psychiatric treatment in which a licensed provider administers psychotherapy (goal-oriented communication guided by psychological principles) to patients via phone, video, online messaging, or some other technological means. Studies suggest that remote psychotherapy is a promising solution to many of the access barriers that limit engagement in traditional psychotherapy. However, remote psychotherapy takes numerous forms, and few empirical comparisons between these different modes of remote psychotherapy have been conducted. This paper contains a literature review that highlights the need for empirical comparisons between treatment modes. It also includes a two-arm, individually randomized exploratory study comparing two forms of remote engagement: videochat-mediated and messaging-mediated psychotherapy. The study was conducted with a small sample size in a non-clinical environment and is intended only to generate hypotheses for future empirical comparisons. Results identify extroversion, low levels of intrinsic motivation, and an inclination for schedule adherence as personality traits that might correlate more strongly with treatment success in videochat-mediated than in messaging-mediated therapy. Other potential predictors of treatment-mode success are discussed, each with the aim of generating hypotheses for future empirical comparisons of different remote therapy modes.
Introduction

Guidelines established by the American Psychological Association (APA) advise that psychotherapy should play a central role in the treatment of mental illness. These guidelines are supported by numerous studies that find psychotherapy effective in reducing symptoms across a broad range of mental disorders and patient populations (APA Recognition of Psychotherapy Effectiveness, 2012). However, access barriers like transportation issues, the need for childcare, perceived stigmatization of mental disorders, and difficulty leaving work can limit engagement in traditional psychotherapy (Bee et al., 2010; Mohr et al., 2005; Zivin et al., 2009). Remote psychotherapy—delivered via telephone, videochat, text message, or other electronic media—has been shown to greatly improve treatment access (Brenes et al., 2011; Langarizadeh et al., 2017; Tutty et al., 2005). But remote psychotherapy takes many different forms, and few studies provide empirical comparisons between the different remote treatment options.

One problem associated with the lack of empirical comparison is that it inhibits pre-treatment screening for effective treatment "fit" (Watzke et al., 2010). In pre-treatment screening, patients referred to a psychiatrist receive a set of recommended psychotherapy types based on their unique needs and circumstances. Screening might be offered by a medical provider, but self-screening with the help of online tools is also common. Both forms of preparation reduce the likelihood of patient dropout, significantly
improving treatment efficacy (Mohr et al., 2006; Ogrodniczuk et al., 2005). Such screening is possible for traditional psychotherapy due to a well-developed body of comparative treatment-type research; remote psychotherapy currently lacks this body of research. Empirical comparisons between different remote treatment types will enable providers to offer evidence-based recommendations and patients to pursue the most suitable treatment type.

This paper addresses the lack of empirical comparisons between remote treatment modes in two parts. First, a literature review highlights the need for further research. Then, an exploratory study compares the effectiveness of two different remote treatment modes with respect to participant personality type. The study consists of two treatment groups, one of which engaged in messaging-mediated interactions and the other in videochat-mediated interactions. A post-study questionnaire generated three modes of comparison between treatment groups: (1) assessment of overall success rates (quantified as weekly goal achievement), (2) assessment of success rates with respect to participant personality type, and (3) assessment of success rates with respect to perceived treatment efficacy. The study was conducted with a limited number of participants (n=6). Accordingly, the results lack sufficient statistical power for terminal conclusions and should not serve as a basis for clinical action. Rather, this exploratory study is intended to help generate hypotheses for future investigations by identifying potential links between personality type, perceptions, and the perceived efficacy of different remote treatment modes.

Remote psychotherapy is a promising treatment method for mental illness; recent investigations find that remote psychotherapy is more effective than "no treatment" in reducing symptoms of mental illness (Castro et al., 2020; Lamb et al., 2018). Though a small number of studies suggest that remote psychotherapy is less effective than in-person treatment (Brenes et al., 2011; Mohr et al., 2011), others find it equally effective (Langarizadeh et al., 2017) or even more effective than in-person treatment (Lovell et al., 2006). Some researchers note that remote treatments might come with unforeseen legal and ethical issues (Berryhill et al., 2018; Reynolds et al., 2013). The APA advises clinicians who are considering remote treatment to "tread carefully," stressing
that online platforms should be surveyed for compliance with HIPAA, state licensing laws, and other psychiatric regulations (Guidelines for the Practice of Telepsychology, n.d.). Legal and ethical implications should be taken into account by any clinician who seeks to administer remote psychotherapy, but these issues are far from prohibitive; careful documentation paired with review of existing regulations should enable safe administration of this promising treatment (Barnett & Scheetz, 2003).

As studies continue to demonstrate the potential value of remote psychotherapy, different modes of remote treatment have emerged. Four of the most common modes, identified here and further explored in the literature review, are telephone-mediated therapy, videochat-mediated therapy, messaging-mediated therapy, and internet cognitive behavioral therapy (ICBT). Telephone- and videochat-mediated therapies typically consist of weekly patient-provider engagements that are run much like a traditional therapy session, except that all interaction takes place over the phone or through a videochat. In messaging-mediated therapy, clients and therapists chat asynchronously (often with a maximum response-time latency of 24 hours). Finally, ICBT provides a self-guided online "course" complete with therapist-created modules and prerecorded videos. The client might have to submit "assignments," and the therapist delivers periodic check-ins via email or phone during treatment. This paper will first address the variation in these treatment types through a review of the literature on remote psychotherapy, intended to highlight the need for empirical comparisons between remote treatment modes by outlining the gap in the existing literature.
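As a compact restatement of this taxonomy, the sketch below encodes the four modes and their engagement cadences as a small Python data structure. The field names and cadence descriptions are paraphrases chosen for illustration, not terminology from the literature.

    from dataclasses import dataclass

    @dataclass
    class RemoteMode:
        name: str
        synchronous: bool   # live interaction vs. asynchronous exchange
        cadence: str        # typical engagement pattern, as described above

    MODES = [
        RemoteMode("telephone-mediated", True, "weekly session, run like traditional therapy"),
        RemoteMode("videochat-mediated", True, "weekly session, run like traditional therapy"),
        RemoteMode("messaging-mediated", False, "asynchronous chat; replies within ~24 hours"),
        RemoteMode("ICBT", False, "self-guided modules with periodic provider check-ins"),
    ]

    for mode in MODES:
        kind = "synchronous" if mode.synchronous else "asynchronous"
        print(f"{mode.name:20s} {kind:12s} {mode.cadence}")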
Literature Review

Remote psychotherapy certainly benefits from recent technological developments, but the practice existed years before the first cell phones were developed; its first known implementation occurred in 1960, when the Nebraska Psychiatric Institute established a closed-circuit television link with Norfolk State Hospital (Services & Medicine, 2012). This connection allowed psychiatrists to provide psychiatric diagnoses to Norfolk State patients from hundreds of miles away. Now, remote psychiatric care accomplishes more than just diagnoses and is certainly not limited to closed-circuit television, yet its intent—improving access to psychiatric care—remains the same.

Since 1960, researchers have generated an extensive body of scientific literature on the subject of "remote psychotherapy." The present review surveys a subset of this literature: peer-reviewed, English-language journal articles published in the year 2000 or later, archived in the PubMed or EBSCO PsycINFO databases. A bibliographic search of the PubMed and PsycINFO databases was conducted with the search terms "remote psychotherapy," "telemental health," "telepsychiatry," "videophone therapy," "text-message therapy," and all related terms generated by database search thesauri. The present review was conducted under moderate time constraints, and its results do not account for the entirety of the telepsychiatry-related scientific literature. However, the review does reveal an important gap in the existing literature: despite a wide range of remote psychotherapies, few studies provide empirical comparisons between these treatment types. This assessment arises from observation of the selected works with respect to two categories: (1) the mode(s) of remote communication studied and (2) the method of comparison. Each of these categories is defined in the following sections.
Remote Treatment Mode
The earliest forms of telepsychiatry relied upon closed-circuit television systems (Shore, 2015). In modern times, however, most remote psychiatric treatments take one or some combination of the following forms: telephone-mediated therapy, videoconferencing-mediated therapy, text-mediated therapy, or internet cognitive-behavioral therapy (ICBT). It is important to note that many psychiatrists offer remote connections as a supplement to in-person engagements (i.e., text message conversations between weekly appointments, or video-recorded diagnoses followed by in-person meetings) (Sayal et al., 2019; Schubert et al., 2019). However, for the purposes of this review, "telepsychiatry" will refer to psychiatric treatments delivered entirely via remote platforms.

Of the literature identified, studies regarding telephone-mediated therapy are the most common (Mohr et al., 2005, 2008; Simon et al., 2004). In recent years, however, the trend has begun to shift towards videochat-mediated engagements (Hantke et al., 2019; Koblauch et al., 2018; Pierce et al., 2020; Varker et al., 2019). Both methods
are capable of diagnosing and treating disorders across a number of populations (children, adults, the elderly, the socioeconomically disadvantaged, minorities, veterans) and for a wide range of psychiatric disorders (including mood, anxiety, and eating disorders) (Hilty et al., 2013). Patient-provider concordance, a common metric for the perceived effectiveness of psychiatric treatment, is high in most studies of telephone- and videochat-mediated treatment, and the patient retention rate is often comparable to that of traditional therapy (Hilty et al., 2013; Jenkins-Guarnieri et al., 2015). Studies that identify one of these modes as superior to the other tend to do so with their own commentary (e.g., "telephone-based interventions may confer specific advantages in terms of widespread availability and ease of operation" or "telephone consultations are... less demanding than video consultations"), but these assessments lack the support of empirical comparisons (Bee et al., 2008; van den Berg et al., 2011).

Telephone- and videochat-mediated engagements are not the only two modes of remote psychotherapy: recent years have also seen an increase in text messaging-mediated treatment. However, few empirical comparisons have been made between messaging-mediated treatment and other forms of remote therapy. Early studies on messaging-mediated psychiatric treatment tend to investigate the efficacy of messaging in aiding medication adherence, easing transitions between inpatient and outpatient treatment, or supplementing another form of psychiatric treatment (Bauer et al., 2012; Levin et al., 2019; Naslund et al., 2017). Recently, a rise in message-mediated therapy platforms (e.g., Better Help and Talkspace) has necessitated the investigation of standalone messaging-mediated therapy. A collection of small-scale studies tracked the experiences of Better Help participants and found evidence for symptom reduction and patient satisfaction (Marcelle et al., 2019). A 2013 study found that the alliances created between patient and provider in messaging-mediated therapy were similar to, and sometimes better than, alliances created during in-person treatment (Reynolds et al., 2013). Likewise, a review by Hoermann and colleagues found that studies of messaging-mediated therapy showed significant and sustained improvements in mental health outcomes compared to no-treatment control groups. The outcomes were equivalent to in-person treatment, suggesting a promising
future for messaging-mediated treatments as an alternative to traditional psychotherapy (Hoermann et al., 2017).

ICBT (internet cognitive-behavioral therapy), delivered through online modules, is another prevalent mode of remote psychotherapy (Christensen et al., 2004). Though relatively new (few studies regarding ICBT could be identified prior to 2015), ICBT is emerging as an effective mode of remote therapy (Romijn et al., 2019; Titov et al., 2018). ICBT differs slightly from the three previously mentioned modes of remote treatment; while videochat-, messaging-, and telephone-mediated therapy simply shift in-person conversations to an online platform, ICBT involves substantially less patient-provider interaction than traditional treatment. Participants work through online modules designed by their therapist and take "quizzes" to track their progress. Occasional therapist interaction might occur via email, but such communication is not intended to be an integral part of most ICBT treatments. Even without substantial patient-provider interaction, ICBT has proven successful in recent trials (Andersson et al., 2005, 2006, 2014; Andersson & Titov, 2014). Such results are particularly powerful in bolstering support for remote therapy, given the demonstrated potential for effective treatment without direct patient-provider interaction.
Method of Comparison

The mode of remote treatment investigated, as detailed above, is one defining characteristic of the reviewed literature—but equally important are the types of comparison that researchers perform. Most investigations compare remotely delivered treatment to in-person treatment (De Las Cuevas et al., 2006; Mohr et al., 2011; Morland et al., 2010, 2014; Nelson et al., 2003; O'Reilly et al., 2007; Ruskin et al., 2004), and many of these studies find remotely delivered psychiatric treatment to be as effective as in-person treatment, though a small number of publications find remote treatment less effective (Koblauch et al., 2016; Mitchell et al., 2008). Comparisons between remote and in-person treatments are important, and these investigations were particularly critical when remote therapy was still emerging as a novel form of psychiatric treatment. Now, however, remote therapy is generally accepted as a valuable means of improving access to psychiatric care (Hilty et al., 2013; Hoermann et al., 2017; Hubley et al., 2016; Koblauch et al.,
2016). For example, while journal articles from the early 2000s tend to question the feasibility and effectiveness of remote psychiatry, articles from the recent decade are more likely to address expansion issues, such as the logistics of remote-therapy implementation or telepsychiatry use among prisoners, refugees, veterans, and other special populations (Folker et al., 2018; Hassan et al., 2019; Nicholas et al., 2019; Shreck et al., 2020). At this point, the expansion of remote psychotherapy depends not on further comparison to in-person treatment, but on empirical assessment of the strengths and weaknesses of different remote psychotherapy modes compared to one another.

A small number of studies approach comparison between remote treatment modes, though no direct comparisons could be identified. Some investigations, like a recent study conducted by Erbe and colleagues, assess the feasibility of "blended" remote treatments (Erbe et al., 2017). These blended treatments combine elements of multiple remote treatment modes (e.g., ICBT supplemented with daily text message reminders or weekly telephone check-ins). Blended treatment feasibility studies address the relative strengths and weaknesses of remote therapies, and some have found (for example) that patients tend to express greater confidence in videochat-mediated methods than in texting or internet-based arrangements (Erbe et al., 2017). Another such study concluded that telephone-mediated treatments were superior to videochat because of the "prevalence of cellphones," presumably under the assumption that videochat engagements would require supplemental technology (Bee et al., 2008). The beginnings of remote treatment comparisons exist, but a review of the literature reveals them to be few and far between, and often based upon limited evidence.
In short, the current body of literature provides ample support for the feasibility of remote psychotherapy as an alternative to traditional in-person treatment. Importantly, no studies could be identified that offered empirical comparisons between different modes of remote therapy. This issue is especially relevant now, given the burgeoning large-scale transition towards telemedicine and the growing acceptance of remote psychotherapy; providers face increasing expectations to offer some form of remote therapy, but a substantial lack of research comparing remote treatment modes leaves
providers with limited evidence upon which to base their decisions. Future research in this area would enable providers to make informed decisions in remote treatment implementation, leading to better-tailored patient experiences and greater access to psychiatric care.
Methodology
Overview

For four weeks, six voluntary participants engaged remotely with the experimenter (referred to as the "coach") to discuss and bolster their progress towards a self-defined goal. This design intentionally mirrors the aim of psychotherapy as defined by the APA: to help participants "modify their behaviors, cognitions, emotions, and/or other personal characteristics in directions that the participants deem desirable" (APA Recognition of Psychotherapy Effectiveness, 2012). However, all participants were made explicitly aware that they would not be receiving psychotherapy and that the "coach" was not a licensed psychotherapist, counselor, or mental healthcare provider. All participant-coach interactions were conducted in a manner consistent with the cordiality that would exist between a motivation-seeking individual and their motivating friend. The specifics of "coach" conduct and interaction style are addressed in the subsequent section.

After defining their goals, study participants were randomly assigned to one of two treatment groups: a videochat-mediated group (Group V) or a messaging-mediated group (Group M). Treatment Group V engaged weekly with the coach via FaceTime during scheduled 30-minute calls. Treatment Group M engaged with the coach via continuous, asynchronous text messaging. Group M participants were allowed to send as many text messages as they desired, at any time of day, on any day of the week. The coach responded to all messages within 24 hours of receiving them. At the end of four weeks, participants' progress towards their measurable goals was discussed and assessed (via text for Group M and FaceTime for Group V). Participants also completed an online questionnaire at the conclusion of the study. The questionnaire, available in Appendix A or upon request, assessed participant personality type, goal achievement, and perception of treatment efficacy.

Participants

Six participants were assigned to one of two treatment groups according to an alphabetized
random-number generator. All participants were between the ages of 18 and 22. Ten potential participants were invited to take part in the study; of these ten, six chose to participate. All ten potential participants were approached on the basis of five contingencies:

1. The participants had a well-established positive relationship with the "coach."
2. The participants were 16-18 years in age.
3. The participants would be willing and able to participate in one 30-minute video call per week.
4. The participants would be willing and able to send periodic text messages to the experimenter.
5. The participants would be willing and able to work with the experimenter in defining a measurable goal that they genuinely hoped to achieve.

Each of the above contingencies limited potential confounding variables by maintaining consistency in participant characteristics. Contingency (1) reduced confounding that might occur as a result of drastic differences in the pre-treatment relationship between the coach and the participant; an extended period of familiarization or trust-building, typical in a therapeutic setting and necessary for fruitful engagement, might have detracted from goal achievement by shortening the duration of effective post-familiarization engagement. Contingency (2) minimized the intervening variable of age. Contingencies (3) and (4) were logistically essential for treatment administration, and contingency (5) allowed for replication of the genuine client desire for behavioral change that is present in true psychotherapy.

Independent Variables

The independent variables for this study are the two different modes of coach-participant interaction:

Messaging-mediated interactions consisted of an unlimited number of asynchronous text messages sent at any time of day. The coach responded to all participant messages within 24 hours of receiving them.

Videophone-mediated interactions consisted of weekly 30-minute FaceTime calls. The calls were scheduled in advance and occurred on a regular basis.

Treatment Group V. Upon treatment group
Table 1: Treatment Group V Schedule
assignment, participants were briefed with a "Study Introduction" document (available in Appendix A or upon request). The document instructs participants to arrange an initial time and date for their recurring videochat sessions. All subsequent sessions were conducted according to the schedule shown in Table 1.

Treatment Group M. Group M participants were also briefed with a "Study Introduction" document (available in Appendix A or upon request). On the same day, participants received an initial text message from the experimenter. This initial message followed the template displayed in Figure 1. From here, the participant and coach engaged in asynchronous text-messaging conversations for a period of four weeks. During this time, both the participant and the coach were free to send as many text messages as desired, on any day, at any time, to the other individual. The coach responded to all participant messages within 24 hours of receiving them. Just as in messaging-mediated therapy, the coach was free to respond with as many or as few messages as she believed would benefit the participant and as she had time to send. "Benefit" here is defined as the participant feeling supported and making measurable progress towards their self-identified goal. Messaging proceeded according to the schedule shown in Table 2.

Figure 1: Initial Message Template

Table 2: Treatment Group M Schedule

Dependent Variables

The observed dependent variables in this study are as follows:

1) Quantitative progress towards measurable goals, as defined by the coach and participant within the first 7 days of the study and assessed via anonymous post-study questionnaire.
2) Participant evaluation of experience via an anonymous, electronic post-study questionnaire. The categories of evaluation are as follows:
   a) Goal Achievement: (i) number of goal-achieved days per week; (ii) number of intentional goal-related thoughts per day
   b) Participant Personality Type: (i) self-assessed level of extroversion; (ii) typical levels of intrinsic motivation; (iii) inclination for schedule adherence
   c) Participant Perceptions: (i) treatment efficacy; (ii) level of coach investment in participants' goal achievement

Participants also provided a written qualitative assessment of the experience, highlighting any difficulties or unanticipated benefits, especially those related to communication. The full-length questionnaire and anonymized questionnaire responses are included in Appendix A (or available upon request).
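As a concrete illustration of how dependent variable (1) and category (a) could be tallied, the short Python sketch below counts goal-achieved days per week from a 28-day daily log. The data and function name are hypothetical; the study's actual responses are in Appendix A.

    from typing import List

    def goal_days_per_week(daily_log: List[bool]) -> List[int]:
        """Count goal-achieved days within each 7-day block of the study."""
        return [sum(daily_log[i:i + 7]) for i in range(0, len(daily_log), 7)]

    # Hypothetical 28-day log: True marks a day on which the goal was achieved.
    log = [True, False, True, True, False, True, True] * 4
    print(goal_days_per_week(log))  # -> [5, 5, 5, 5]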
Relation to Therapy

These goal-oriented engagements are not therapy sessions, nor is the "coach" a therapist. The study mimics certain aspects of remote psychotherapy in full recognition of the fact that participant experiences do not mirror those of clients who receive true remote therapy. The following points address four of the most prominent therapy-mimicking methodology decisions.

(1) The willingness of participants to engage in messaging or video chats: If real-world clients did not want to use a remote therapy platform because they were uncomfortable or unconvinced of its efficacy, then one can presume that they would not pay to do so. Participants were chosen whose comfort communicating via technology might mirror the comfort levels of those who might choose to participate in real-world remote therapy.

(2) The relationship between coach and selected participants: In remote and traditional psychotherapy, the relationship between provider and participant is important (Marmarosh et al., 2009). Had some portion of study participants been significantly less comfortable with the coach than others, the resulting differences in treatment experience might have overshadowed the potential treatment-related differences in experience.

(3) The age of participants: The selected age cohort (16-18 years) is representative of a student population. Some studies suggest that college students are particularly likely to experience the emergence of mental illnesses and substance abuse issues (Pedrelli et al., 2015). The busy schedules, low income, and high degree of mental health-related stigma among students might also exacerbate barriers to traditional psychotherapy, making findings related to the efficacy of remote therapy among students particularly relevant.

(4) The choice of self-identified goals: In a typical therapist-client relationship, the client pays for the therapist's services because they harbor an intrinsic desire to change some aspect(s) of their life. For the purposes of this study, giving all participants a uniform goal (e.g., get at least 8 hours of sleep each night) would have greatly simplified data collection, progress tracking, and "coaching," as only one set of goal-achievement strategies would be necessary (as opposed to strategies for six entirely different and fairly nuanced goals). However, the self-identified goal was a critical element of the study because it helped approximate the genuine client desire for behavioral change, and the uniqueness of each client-therapist engagement, that are typical of true psychotherapy.

Results and Discussion

Summary

Multiple factors have been identified that might render a patient better suited for particular modes of in-person treatment (e.g., cognitive therapy vs. exposure therapy), but few equivalent assessments exist for clinicians who must choose between various modes of remote therapy (Kaczkurkin & Foa, 2015). Importantly, providers who perform pre-treatment assessments are better positioned to provide their clients with personalized treatment (Ogrodniczuk et al., 2005). This exploratory study aims to identify personality traits and other engagement factors that could be examined for their predictive power with regard to remote treatment effectiveness in
future empirical work. The engagement factors identified in the study emerged from a post-study questionnaire and participant interview, both of which assessed three elements of participant experience: goal achievement, personality type, and participant perceptions.

Weekly goal achievement was higher on average and more consistent between weeks in Group M, but Group V participants showed a steady increase in weekly goal achievement and higher average week-4 goal achievement than Group M. These results suggest that messaging-mediated engagements might lead to more immediate behavioral changes, while videochat-mediated engagements might have an overall stronger (but slower) effect on behavior. We also asked participants whether they were more, less, or equally likely to achieve their goal when they had recently interacted with the coach. One participant from each treatment group answered "more," and though the Group M respondent showed consistent daily goal achievement, the Group V respondent tended to achieve their daily goal less often on the days when the most time had elapsed since a videochat engagement. Thus, for patients whose behavior tends to be strongly affected by provider interaction, the constancy of messaging-mediated interaction might prove more beneficial than periodic videochat-mediated engagements.

Extroverted personality types tended to improve more through videochat-mediated than messaging-mediated engagements, suggesting that extroversion could be a relevant trait for clinicians to assess during treatment assignment. Goal achievement was
more strongly correlated with "typically high" intrinsic motivation for Group V participants than for Group M participants, suggesting that patients who self-identify as intrinsically motivated might benefit more from videochat-mediated therapy, while less-motivated patients might benefit from messaging-mediated treatment. Finally, Group V participants who expressed an inclination for schedule adherence demonstrated higher levels of goal achievement than Group M participants who expressed the same. Inclination for schedule adherence, then, could be another relevant personality trait for consideration during treatment assignment.

The participant perception of coach investment (i.e., how much participants thought the coach cared about helping them achieve their goal) varied more widely in Group M than in Group V. Studies identify patient perception of clinician investment as a direct factor in therapeutic success (Kallergis, 2019). These findings, coupled with our results, suggest that message-mediated engagements might necessitate additional effort on the clinician's part to ensure more positive perceptions of clinician investment. Participant perceptions of treatment-mode efficacy were also assessed. All participants, regardless of treatment group, perceived videochat-mediated therapy as more effective than, or equally as effective as, messaging-mediated therapy. Like perceived clinician investment, patient perceptions of treatment effectiveness correlate strongly with treatment outcomes (Vîslă et al., 2018). Thus, our results suggest that the effectiveness of messaging-mediated treatments might be limited by a lack of patient preference and that patients'
confidence in a remote treatment mode should be taken into account during treatment assignment.

Goal Achievement

The main findings associated with "goal achievement" are outlined in the preceding summary section: goal achievement was higher on average and more consistent between weeks in Group M, but Group V participants saw a steady increase in weekly goal achievement and higher average week-4 goal achievement than Group M. The present section provides further exploration and discussion of these central findings. Table 3 displays a summary of these findings, and Figure 2 presents daily goal achievement for all study participants, grouped by treatment type. By the fourth week, participants in both treatment groups showed daily goal achievement during more than half (≥ 4 days) of the week. However, this ≥ 50% goal achievement rate was reached differently depending on participants'
treatment group, and these differences are the focus of this section.

The pattern of change in goal achievement for Group M stayed more stable over the course of the study than that for Group V participants; in Group M, just one week of treatment (daily messages) proved sufficient to produce an average goal achievement rate of about 6 days per week. Conversely, in Group V, the first week of treatment (just one videochat call) yielded a much lower average goal achievement of 3 days per week. However, with each additional week of treatment (each additional videochat call), Group V goal achievement rates increased until they surpassed those of Group M in week 4. Group M goal achievement rates were high (avg. 6 days/wk.) after just one week of messages, but three additional weeks of messaging did not elicit the strong average goal achievement rates seen in Group V after 4 weeks. One possible explanation is the development of a closer interpersonal relationship through videochat engagements than through messaging; the videochat participants might have grown closer to, and perhaps felt more inclined to impress or succeed for, the "coach" after each subsequent videochat. For messaging-mediated participants, the relationship might not have developed as strongly, hence a lower overall goal achievement, but the motivation did come more quickly and consistently (via daily messages) from the moment that treatment began, which could explain the initially high goal achievement.

Table 3: Summary Findings: Goal Achievement, Personality Type, and Participant Perception. Key: "M1" – Treatment Group M, Participant 1 (2, 3, etc.); "V1" – Treatment Group V, Participant 1 (2, 3, etc.)

Table 4: Goal Achievement: Summary Data

The value of messaging-mediated interactions is also reflected by daily "intentional goal-related thoughts," a metric shown in Figure 3. Daily goal achievement correlates positively with daily "goal-related thoughts," and Group M participants show a higher total number of intentional goal-related thoughts than Group V. These results are consistent with those represented in Figure 2; Group M participants showed less improvement, but higher average daily goal achievement, than Group V participants. A small sample size (n=3 for each group) greatly limits the reliability of statistical tests, but a two-sample independent t-test for the difference between average daily goal thoughts in the two groups (Avg.M, Avg.V) gives weak evidence for rejecting the null hypothesis that Avg.M = Avg.V in favor of Avg.M > Avg.V, with p = 0.08 at α = 0.10. (See Data Analysis: A Model Comparison Approach, p. 204, for a discussion of flexible statistical significance in exploratory studies, as well as "Practical Interpretation of Hypothesis Tests" from the American Statistical Association [1987, p. 246] for commentary on 0.10-level significance in the case of small sample sizes.)
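For readers who want to reproduce this style of comparison, the sketch below runs the corresponding one-sided two-sample t-test in Python. The per-participant averages are invented placeholders; the study's actual questionnaire data are in Appendix A.

    from scipy import stats

    # Hypothetical per-participant averages of daily intentional goal-related
    # thoughts (n = 3 per group); placeholders, not the study's real data.
    avg_m = [5.1, 4.6, 5.8]  # Group M (messaging-mediated)
    avg_v = [3.9, 4.2, 3.5]  # Group V (videochat-mediated)

    # One-sided alternative Avg.M > Avg.V, evaluated against alpha = 0.10.
    t_stat, p_value = stats.ttest_ind(avg_m, avg_v, alternative="greater")
    print(f"t = {t_stat:.2f}, one-sided p = {p_value:.3f}")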
Figure 2: Daily Goal Achievement by Treatment Group. Goal achievement was higher on average and more consistent between weeks in Group M, but Group V participants saw a weekly increase in goal achievement and higher average week-4 goal achievement than Group M.

Figure 3: Participants' Estimated Intentional "Goal Thought" by Treatment Group. Message-mediated participants (Group M) reported a higher total number of "intentional goal-related thoughts" than the videochat-mediated group (Group V).

A related metric is participants' perceptions of the effect that the proximity of coach engagements had on daily goal achievement. In other words, were participants any more or less likely to achieve their goal on the day immediately before or after a coach engagement? Overall, two-thirds of participants responded "more likely" to this question in the post-study questionnaire. These results, too, are consistent with the findings that Group M participants had a more consistent and higher average goal achievement than their Group V counterparts; Group M participants usually interacted with the coach on a daily basis, providing ample opportunity for this coach-associated increase in motivation to bolster their goal achievement, whereas Group V participants spent six days a week without any form of coach engagement. For clinicians, this knowledge, in conjunction with the "Goal Achievement" section, could prove valuable in treatment assignment. For example, a highly interactive patient who has indicated (either in prior treatment or in
consultation) strong responses to therapist engagements might benefit most from a daily messaging-mediated treatment, while a "long haul," quieter, or less easily influenced patient might benefit more from videochat-mediated psychotherapy.

Personality Type

Personality type serves as the second area of analysis in the post-study questionnaire. Extraversion, intrinsic motivation, and schedule adherence were self-assessed, and Table 3 displays the responses in relation to treatment group and goal achievement. Two out of three Group V participants and one out of three Group M participants described themselves as extroverts. The Group V extroverts (P2 and P3 in Figure 2, Group V) demonstrated the most significant increase in daily goal achievement over the course of the study. However, these results did not hold for the Group M extrovert (P3 in Figure 2, Group M). For this participant, daily goal achievement was the same (and low: 4 days/wk.) in week 4 as in week 1. These results, shown in Figure 4, suggest that personality type might indeed play a role in the effectiveness of different remote therapy modes and that self-defined extroverts appear to benefit more from videochat-mediated engagements than from messaging-mediated engagements.

Two additional personality-type questions were assessed: typical intrinsic motivation levels and inclination for schedule adherence. Self-declared levels of typical motivation were assessed in the following form: "In most cases, I would describe myself as highly motivated" [agree/disagree assessment scale]. The results are displayed in Figure 6. Goal achievement was more strongly correlated with "typically high" intrinsic
motivation for Group V participants than for Group M participants. Thus, highly motivated individuals seem more likely to benefit from messaging-mediated therapy than less-motivated individuals; a less-motivated patient might easily neglect therapist text messages, but disengagement in a live videochat would require active avoidance. The final personality-type assessment pertained to participants’ inclination for schedule adherence. Group V participants who expressed an inclination for schedule adherence demonstrated higher levels of goal achievement than Group M participants who expressed the same. These results are shown in Figure 6. As with extraversion and intrinsic motivation, clinicians could benefit from considering a patient’s preference (or lack thereof) for schedule adherence; patients who tend to thrive on rigid schedule adherence might benefit more from planned, periodic videochat engagements than from more constant, but unscheduled, messaging-mediated interactions.
Table 5: Personality Type: Summary Data
Figure 4: Daily Goal Achievement for Self-Described Extroverts by Remote Treatment Mode. Self-defined extroverts receiving videochat engagement (blue lines) saw more improvement in daily goal achievement than the self-defined messaging-group extrovert (red line).
Participant Perception
A final area of investigation in the post-study questionnaire pertains to participant perceptions. Two participant perception topics were addressed: perceived level of coach investment and perceived treatment efficacy. Table 6 summarizes the results. The first question addressed participants’ perceived level of coach investment. The patient-provider relationship, or the extent to which the patient feels valued, cared for, and respected by their clinician, has long been observed as a predictor of therapeutic success (Marsh et al., 2012). In this case, the perceived level of coach investment probed the patient-provider relationship. All Group V participants reported a perceived “high degree” of investment from the coach (i.e., they agreed with the statement “I felt as if the coach cared about my goal-achievement nearly as much as I did”). However, no two Group M participants reported the same perceived degree of coach investment. Responses ranged from “moderate degree” to “highest degree.” These results are shown in Figure 7. A close look at Figure 7 reveals similarity in the levels of perceived coach investment between treatment groups: while there is clearly less variation on the subject among Group V participants than among members of Group M, both treatment groups have the same average perceived coach investment (“moderate,” or “2” on the 1-3 scale in Figure 7). These results suggest that even when clinician investment
is relatively constant between patients, messaging-mediated therapy leaves more room for varied patient interpretation. Further research could investigate these preliminary findings and assess their implications; these initial results suggest that messaging-mediated providers should be aware of variation in patients’ perceptions of their investment levels and that patients might benefit from increased efforts to facilitate stable interpersonal connections during message-mediated therapy. Participants were also asked about the perceived effectiveness of the opposite treatment: videochat-mediated engagements for Group M, and messaging-mediated engagements for Group V. One out of three Group M participants responded with the expectation that their engagements would have been “more motivating” if delivered via videochat, while the same proportion of Group V participants believed that their engagements
Figure 5: Self-Assessed Levels of Typical Intrinsic Motivation and Daily Goal Achievement. Videochat-mediated treatment group participants who identified as having “typically high” levels of intrinsic motivation showed higher average daily goal achievement than messaging-mediated participants who identified the same.
Figure 6: Participants’ Self-Assessed Inclination for Schedule Adherence and Daily Goal Achievement. Messaging-mediated treatment group participants who responded “no” to the question of whether they functioned best on a schedule showed higher average daily goal achievement than participants in the videochat-mediated treatment group who answered the same.
would have been “less motivating” if delivered via text. In general, videochat-mediated engagements were perceived as more desirable than messaging-mediated ones, even though data indicate that average goal achievement was higher among Group M participants. Due to the small sample size and variability of the results, these findings do not provide sufficient evidence to support the efficacy of one treatment mode over another. They do, however, add to the existing evidence that perceptions of remote therapy efficacy tend to favor videochat-mediated methods over messaging-mediated methods, despite patients’ familiarity with messaging and an equal, if not better, rate of goal achievement during messaging-mediated engagements.
Limitations
This study was conducted with limited personnel, time, and clinical expertise. As previously stated, the study sought to explore a broad swath of potential links between personality type and remote engagement mode in the hope of generating hypotheses for future empirical investigations. The results of this study should not be regarded as definitive conclusions or as clinically relevant findings. The small sample size (n=6) makes statistical inference unreliable, and the limited age range of participants (18-22 years) limits generalizability beyond the young-adult cohort. Additionally, though the study design was intended to mimic certain aspects of psychotherapy to best generate clinically relevant hypotheses, no psychotherapy was administered, nor were any screenings for mental illness performed. The engagements between coach and participant were simply intended to foster goal achievement, and qualitative differences between remote platforms were assessed via the metric of daily goal achievement. Finally, since the primary aim of the study was to compare participant experiences between two different modes of remote engagement, resources were allocated to the establishment of two treatment groups rather than a control group. The inclusion of a control group (e.g., in-person interactions) would have strengthened the reliability of the data by providing a point of comparison for both modes of remote engagement aside from the opposite treatment group. With these limitations in mind, the present study should serve primarily to generate hypotheses for future investigations, a critical first step towards clinically relevant findings.
Conclusion
In conclusion, both messaging-mediated and videochat-mediated forms of therapy could prove effective in improving treatment access for individuals who cannot access traditional in-person therapy. Numerous studies have investigated the efficacy of remote therapy compared to in-person treatments, but few empirical comparisons between different remote
Table 6: Participant Perception: Summary Data
Figure 7: Perceived Levels of Coach Investment: Treatment Groups V (top) and M (bottom). Though the average perception of coach investment was the same (moderate perceived investment, a “2” in Figure 7) for both treatment groups, perceptions varied more widely in the messaging-mediated treatment group.
treatment modes exist. The results of our study suggest that messaging-mediated and videochat-mediated remote psychotherapy likely offer different advantages. Further empirical comparisons between remote treatment types will enable screening for patient-treatment fit, thus reducing treatment dropout rates, improving treatment efficacy, and broadening access to much-needed psychological care. The following hypotheses should be investigated in future empirical comparisons:
• The early but less salient behavioral shifts in the messaging-mediated treatment group suggest that messaging might be better suited to serve less severe day-to-day struggles, while videochat-mediated treatment might be more effective in more severe or long-term circumstances.
• Messaging-mediated treatment might serve patients who desire consistent, near-continuous motivation better than videochat treatments.
• Videochat treatment might be more effective than messaging treatment for extroverts, individuals with an inclination for schedule adherence, and those with typically low levels of intrinsic motivation.
• Patient perceptions of their therapist’s investment could vary more widely in messaging treatment than in videochat; messaging-mediated providers might benefit from heightened efforts to facilitate connection.
• Participants perceived videochat-mediated treatment as more effective than messaging-mediated treatment, regardless of actual treatment efficacy, suggesting that messaging-mediated treatment might be underutilized compared to videochat when both options are presented.
References
A growing wave of online therapy. (n.d.). American Psychological Association. Retrieved May 23, 2020, from https://www.apa.org/monitor/2017/02/online-therapy
Andersson, G., Bergström, J., Holländare, F., Carlbring, P., Kaldo, V., & Ekselius, L. (2005). Internet-based self-help for depression: Randomised controlled trial. The British Journal of Psychiatry, 187(5), 456–461. https://doi.org/10.1192/bjp.187.5.456
Andersson, G., Carlbring, P., Holmström, A., Sparthan, E., Furmark, T., Nilsson-Ihrfelt, E., Buhrman, M., & Ekselius, L. (2006). Internet-based self-help with therapist feedback and in vivo group exposure for social phobia: A randomized controlled trial. Journal of Consulting and Clinical Psychology, 74(4), 677–686. https://doi.org/10.1037/0022-006X.74.4.677
Andersson, G., Cuijpers, P., Carlbring, P., Riper, H., & Hedman, E. (2014). Guided Internet-based vs. face-to-face cognitive behavior therapy for psychiatric and somatic disorders: A systematic review and meta-analysis. World Psychiatry, 13(3), 288–295. https://doi.org/10.1002/wps.20151
Andersson, G., & Titov, N. (2014). Advantages and limitations of Internet-based interventions for common mental disorders. World Psychiatry, 13(1), 4–11. https://doi.org/10.1002/wps.20083
American Psychological Association. (2012, August). APA recognition of psychotherapy effectiveness. https://www.apa.org/about/policy/resolution-psychotherapy
American Psychiatric Association. (2002). Practice guideline for the treatment of patients with bipolar disorder (revision). American Psychiatric Publishing.
Barnett, J. E., & Scheetz, K. (2003). Technological advances and telehealth: Ethics, law, and the practice of psychotherapy. Psychotherapy: Theory, Research, Practice, Training, 40(1–2),
86–93. https://doi.org/10.1037/0033-3204.40.1-2.86
Bauer, S., Okon, E., Meermann, R., & Kordy, H. (2012). Technology-enhanced maintenance of treatment gains in eating disorders: Efficacy of an intervention delivered via text messaging. Journal of Consulting and Clinical Psychology, 80(4), 700–706. https://doi.org/10.1037/a0028030
Bee, P. E., Bower, P., Lovell, K., Gilbody, S., Richards, D., Gask, L., & Roach, P. (2008). Psychotherapy mediated by remote communication technologies: A meta-analytic review. BMC Psychiatry, 8, 60. https://doi.org/10.1186/1471-244X-8-60
Bee, P. E., Lovell, K., Lidbetter, N., Easton, K., & Gask, L. (2010). You can’t get anything perfect: “User perspectives on the delivery of cognitive behavioural therapy by telephone.” Social Science & Medicine, 71(7), 1308–1315. https://doi.org/10.1016/j.socscimed.2010.06.031
Berryhill, M. B., Culmer, N., Williams, N., Halli-Tierney, A., Betancourt, A., Roberts, H., & King, M. (2018). Videoconferencing psychotherapy and depression: A systematic review. Telemedicine and E-Health, 25(6), 435–446. https://doi.org/10.1089/tmj.2018.0058
Brenes, G. A., Ingram, C. W., & Danhauer, S. C. (2011). Benefits and challenges of conducting psychotherapy by telephone. Professional Psychology: Research and Practice, 42(6), 543–549. https://doi.org/10.1037/a0026135
Castro, A., Gili, M., Ricci-Cabello, I., Roca, M., Gilbody, S., Perez-Ara, M. Á., Seguí, A., & McMillan, D. (2020). Effectiveness and adherence of telephone-administered psychotherapy for depression: A systematic review and meta-analysis. Journal of Affective Disorders, 260, 514–526. https://doi.org/10.1016/j.jad.2019.09.023
Christensen, H., Griffiths, K. M., & Jorm, A. F. (2004). Delivering interventions for depression by using the internet: Randomised controlled trial. BMJ, 328(7434), 265.
De Las Cuevas, C., Arredondo, M. T., Cabrera, M. F., Sulzenbacher, H., & Meise, U. (2006). Randomized clinical trial of telepsychiatry through videoconference versus face-to-face conventional psychiatric treatment. Telemedicine Journal and E-Health, 12(3), 341–350. https://doi.org/10.1089/tmj.2006.12.341
Erbe, D., Eichert, H.-C., Riper, H., & Ebert, D. D. (2017). Blending face-to-face and Internet-based interventions for the treatment of mental disorders in adults: Systematic review. Journal of Medical Internet Research, 19(9). https://doi.org/10.2196/jmir.6588
Folker, A. P., Mathiasen, K., Lauridsen, S. M., Stenderup, E., Dozeman, E., & Folker, M. P. (2018). Implementing internet-delivered cognitive behavior therapy for common mental health disorders: A comparative case study of implementation challenges perceived by therapists and managers in five European internet services. Internet Interventions, 11, 60–70. https://doi.org/10.1016/j.invent.2018.02.001
Guidelines for the practice of telepsychology. (n.d.). American Psychological Association. Retrieved May 23, 2020, from https://www.apa.org/practice/guidelines/telepsychology
Hantke, N., Lajoy, M., Gould, C. E., Magwene, E. M., Sordahl, J., Hirst, R., & O’Hara, R. (2019). Patient satisfaction with geriatric psychiatry services via video teleconference. The American Journal of Geriatric Psychiatry. https://doi.org/10.1016/j.jagp.2019.08.020
Hassan, A., & Sharif, K. (2019). Efficacy of telepsychiatry in refugee populations: A systematic review of the evidence. Cureus, 11(1). https://doi.org/10.7759/cureus.3984
Hilty, D. M., Ferrer, D. C., Parish, M. B., Johnston, B., Callahan, E. J., & Yellowlees, P. M. (2013). The effectiveness of telemental health: A 2013 review. Telemedicine Journal and E-Health, 19(6), 444–454. https://doi.org/10.1089/tmj.2013.0075
Hoermann, S., McCabe, K. L., Milne, D. N., & Calvo, R. A. (2017). Application of synchronous text-based dialogue systems in mental health interventions: Systematic review. Journal of Medical Internet Research, 19(8), e267. https://doi.org/10.2196/jmir.7023
Hubley, S., Lynch, S. B., Schneck, C., Thomas, M., & Shore, J. (2016). Review of key telepsychiatry outcomes. World Journal of Psychiatry, 6(2), 269–282. https://doi.org/10.5498/wjp.v6.i2.269
Jenkins-Guarnieri, M. A., Pruitt, L. D., Luxton, D. D., & Johnson, K. (2015). Patient perceptions of telemental health: Systematic review of direct comparisons to in-person psychotherapeutic treatments. Telemedicine Journal and E-Health, 21(8), 652–660. https://doi.org/10.1089/tmj.2014.0165
Kallergis, G. (2019). [The contribution of the relationship between therapist-patient and the context of the professional relationship]. Psychiatrike = Psychiatriki, 30(2), 165–174. https://doi.org/10.22365/jpsych.2019.302.165
Koblauch, H., Reinhardt, S. M., Lissau, W., & Jensen, P.-L. (2018). The effect of telepsychiatric modalities on reduction of readmissions in psychiatric settings: A systematic review. Journal of Telemedicine and Telecare, 24(1), 31–36. https://doi.org/10.1177/1357633X16670285
Lamb, T., Pachana, N. A., & Dissanayaka, N. (2018). Update of recent literature on remotely delivered psychotherapy interventions for anxiety and depression. Telemedicine and E-Health, 25(8), 671–677. https://doi.org/10.1089/tmj.2018.0079
Langarizadeh, M., Tabatabaei, M. S., Tavakol, K., Naghipour, M., Rostami, A., & Moghbeli, F. (2017). Telemental health care, an effective alternative to conventional mental care: A systematic review. Acta Informatica Medica, 25(4), 240–246. https://doi.org/10.5455/aim.2017.25.240-246
Levin, J. B., Sajatovic, M., Rahman, M., Aebi, M. E., Tatsuoka, C., Depp, C., Cushman, C., Johnston, E., Cassidy, K. A., Blixen,
C., Eskew, L., Klein, P. J., Fuentes-Casiano, E., & Moore, D. J. (2019). Outcomes of psychoeducation and a text messaging adherence intervention among individuals with hypertension and bipolar disorder. Psychiatric Services, 70(7), 608–612. https://doi.org/10.1176/appi.ps.201800482
Lovell, K., Cox, D., Haddock, G., Jones, C., Raines, D., Garvey, R., Roberts, C., & Hadley, S. (2006). Telephone administered cognitive behaviour therapy for treatment of obsessive compulsive disorder: Randomised controlled non-inferiority trial. BMJ, 333(7574), 883. https://doi.org/10.1136/bmj.38940.355602.80
Marcelle, E. T., Nolting, L., Hinshaw, S. P., & Aguilera, A. (2019). Effectiveness of a multimodal digital psychotherapy platform for adult depression: A naturalistic feasibility study. JMIR mHealth and uHealth, 7(1), e10948. https://doi.org/10.2196/10948
Marmarosh, C. L., Gelso, C. J., Markin, R. D., Majors, R., Mallery, C., & Choi, J. (2009). The real relationship in psychotherapy: Relationships to adult attachments, working alliance, transference, and therapy outcome. Journal of Counseling Psychology, 56(3), 337–350. https://doi.org/10.1037/a0015169
Marsh, J. C., Angell, B., Andrews, C. M., & Curry, A. (2012). Client-provider relationship and treatment outcome: A systematic review of substance abuse, child welfare, and mental health services research. Journal of the Society for Social Work and Research, 3(4), 233–267. https://doi.org/10.5243/jsswr.2012.15
Mitchell, J. E., Crosby, R. D., Wonderlich, S. A., Crow, S., Lancaster, K., Simonich, H., Swan-Kremeier, L., Lysne, C., & Myers, T. C. (2008). A randomized trial comparing the efficacy of cognitive-behavioral therapy for bulimia nervosa delivered via telemedicine versus face-to-face. Behaviour Research and Therapy, 46(5), 581–592. https://doi.org/10.1016/j.brat.2008.02.004
Mohr, D. C., Carmody, T., Erickson, L., Jin, L., & Leader, J. (2011). Telephone-administered cognitive behavioral therapy for veterans served by community-based outpatient clinics. Journal of Consulting and Clinical Psychology, 79(2), 261–265. https://doi.org/10.1037/a0022395
Mohr, D. C., Hart, S. L., Julian, L., Catledge, C., Honos-Webb, L., Vella, L., & Tasch, E. T. (2005). Telephone-administered psychotherapy for depression. Archives of General Psychiatry, 62(9), 1007–1014.
Mohr, D. C., Vella, L., Hart, S., Heckman, T., & Simon, G. (2008). The effect of telephone-administered psychotherapy on symptoms of depression and attrition: A meta-analysis. Clinical Psychology: Science and Practice, 15(3), 243–253.
Morland, L. A., Greene, C. J., Rosen, C. S., Foy, D., Reilly, P., Shore, J., He, Q., & Frueh, B. C. (2010). Telemedicine for anger management therapy in a rural population of combat veterans with posttraumatic stress disorder: A randomized noninferiority trial. The Journal of Clinical Psychiatry, 71(7), 855–863. https://doi.org/10.4088/JCP.09m05604blu
Morland, L. A., Mackintosh, M.-A., Greene, C. J., Rosen, C. S., Chard, K. M., Resick, P., & Frueh, B. C. (2014). Cognitive processing therapy for posttraumatic stress disorder delivered to rural veterans via telemental health: A
randomized noninferiority clinical trial. The Journal of Clinical Psychiatry, 75(5), 470–476. https://doi.org/10.4088/JCP.13m08842
Naslund, J. A., Aschbrenner, K. A., Araya, R., Marsch, L. A., Unützer, J., Patel, V., & Bartels, S. J. (2017). Digital technology for treating and preventing mental disorders in low-income and middle-income countries: A narrative review of the literature. The Lancet Psychiatry, 4(6), 486–500. https://doi.org/10.1016/S2215-0366(17)30096-2
Nelson, E.-L., Barnard, M., & Cain, S. (2003). Treating childhood depression over videoconferencing. Telemedicine Journal and E-Health, 9(1), 49–55. https://doi.org/10.1089/153056203763317648
Nicholas, J., Ringland, K. E., Graham, A. K., Knapp, A. A., Lattie, E. G., Kwasny, M. J., & Mohr, D. C. (2019). Stepping up: Predictors of ‘stepping’ within an iCBT stepped-care intervention for depression. International Journal of Environmental Research and Public Health, 16(23), 4689. https://doi.org/10.3390/ijerph16234689
O’Reilly, R., Bishop, J., Maddox, K., Hutchinson, L., Fisman, M., & Takhar, J. (2007). Is telepsychiatry equivalent to face-to-face psychiatry? Results from a randomized controlled equivalence trial. Psychiatric Services, 58(6), 836–843. https://doi.org/10.1176/ps.2007.58.6.836
Pedrelli, P., Nyer, M., Yeung, A., Zulauf, C., & Wilens, T. (2015). College students: Mental health problems and treatment considerations. Academic Psychiatry, 39(5), 503–511. https://doi.org/10.1007/s40596-014-0205-9
Pierce, B. S., Perrin, P. B., & McDonald, S. D. (2020). Demographic, organizational, and clinical practice predictors of US psychologists’ use of telepsychology. Professional Psychology: Research and Practice, 51(2), 184–193. https://doi.org/10.1037/pro0000267
Practice guideline for the treatment of patients with eating disorders (revision). American Psychiatric Association Work Group on Eating Disorders. (2000). The American Journal of Psychiatry, 157(1 Suppl), 1–39.
Practice guideline for the treatment of patients with major depressive disorder (revision). American Psychiatric Association. (2000). The American Journal of Psychiatry, 157(4 Suppl), 1–45.
Reynolds, D. J., Jr., Stiles, W. B., Bailer, A. J., & Hughes, M. R. (2013). Impact of exchanges and client-therapist alliance in online-text psychotherapy. Cyberpsychology, Behavior and Social Networking, 16(5), 370–377. https://doi.org/10.1089/cyber.2012.0195
Romijn, G., Batelaan, N., Kok, R., Koning, J., van Balkom, A., Titov, N., & Riper, H. (2019). Internet-delivered cognitive behavioral therapy for anxiety disorders in open community versus clinical service recruitment: Meta-analysis. Journal of Medical Internet Research, 21(4), e11706. https://doi.org/10.2196/11706
Ruskin, P. E., Silver-Aylaian, M., Kling, M. A., Reed, S. A., Bradham, D. D., Hebel, J. R., Barrett, D., Knowles, F., & Hauser,
P. (2004). Treatment outcomes in depression: Comparison of remote treatment through telepsychiatry to in-person treatment. The American Journal of Psychiatry, 161(8), 1471–1476. https://doi.org/10.1176/appi.ajp.161.8.1471
Sayal, K., Roe, J., Ball, H., Atha, C., Kaylor-Hughes, C., Guo, B., Townsend, E., & Morriss, R. (2019). Feasibility of a randomised controlled trial of remotely delivered problem-solving cognitive behaviour therapy versus usual care for young people with depression and repeat self-harm: Lessons learnt (e-DASH). BMC Psychiatry, 19(1), 1–12. https://doi.org/10.1186/s12888-018-2005-3
Schubert, N., Backman, P., Bhatla, R., & Corace, K. (2019). Telepsychiatry and patient-provider concordance. Canadian Journal of Rural Medicine, 24(3), 75–82.
Board on Health Care Services & Institute of Medicine. (2012). The evolution of telehealth: Where have we been and where are we going? In The role of telehealth in an evolving health care environment: Workshop summary. National Academies Press (US). https://www.ncbi.nlm.nih.gov/books/NBK207141/
Shore, J. (2015). The evolution and history of telepsychiatry and its impact on psychiatric care: Current implications for psychiatrists and psychiatric organizations. International Review of Psychiatry, 27(6), 469–475. https://doi.org/10.3109/09540261.2015.1072086
Shreck, E., Nehrig, N., Schneider, J. A., Palfrey, A., Buckley, J., Jordan, B., Ashkenazi, S., Wash, L., Baer, A. L., & Chen, C. K. (2020). Barriers and facilitators to implementing a US Department of Veterans Affairs Telemental Health (TMH) program for rural veterans. Journal of Rural Mental Health, 44(1), 1–15. https://doi.org/10.1037/rmh0000129
Simon, G. E., Ludman, E. J., Tutty, S., Operskalski, B., & Von Korff, M. (2004). Telephone psychotherapy and telephone care management for primary care patients starting antidepressant treatment: A randomized controlled trial. JAMA, 292(8), 935–942.
Titov, N., Dear, B., Nielssen, O., Staples, L., Hadjistavropoulos, H., Nugent, M., Adlam, K., Nordgreen, T., Bruvik, K. H., Hovland, A., Repål, A., Mathiasen, K., Kraepelien, M., Blom, K., Svanborg, C., Lindefors, N., & Kaldo, V. (2018). ICBT in routine care: A descriptive analysis of successful clinics in five countries. Internet Interventions, 13, 108–115. https://doi.org/10.1016/j.invent.2018.07.006
Turvey, C. L., & Myers, K. (2013). Research in telemental health: Review and synthesis. In K. Myers & C. L. Turvey (Eds.), Telemental health (pp. 397–419). Elsevier. https://doi.org/10.1016/B978-0-12-416048-4.00019-1
Tutty, S., Ludman, E. J., & Simon, G. (2005). Feasibility and acceptability of a telephone psychotherapy program for depressed adults treated in primary care. General Hospital Psychiatry, 27(6), 400–410. https://doi.org/10.1016/j.genhosppsych.2005.06.009
Ursano, R. J., Bell, C., Eth, S., Friedman, M., Norwood, A., Pfefferbaum, B., Pynoos, J. D. R. S., Zatzick, D. F., Benedek, D. M., McIntyre, J. S., Charles, S. C., Altshuler, K., Cook, I., Cross, C. D., Mellman, L., Moench, L. A., Norquist, G., Twemlow, S. W., Woods, S., … Steering Committee on Practice Guidelines. (2004). Practice guideline for the treatment of patients with acute stress disorder and posttraumatic stress disorder. The American Journal of Psychiatry, 161(11 Suppl), 3–31.
van den Berg, N., Grabe, H.-J., Freyberger, H. J., & Hoffmann, W. (2011). A telephone- and text-message based telemedical care concept for patients with mental health disorders: Study protocol for a randomized, controlled study design. BMC Psychiatry, 11, 30. https://doi.org/10.1186/1471-244X-11-30
Varker, T., Brand, R. M., Ward, J., Terhaag, S., & Phelps, A. (2019). Efficacy of synchronous telepsychology interventions for people with anxiety, depression, posttraumatic stress disorder, and adjustment disorder: A rapid evidence assessment. Psychological Services, 16(4), 621–635. https://doi.org/10.1037/ser0000239
Vîslă, A., Constantino, M. J., Newkirk, K., Ogrodniczuk, J. S., & Söchting, I. (2018). The relation between outcome expectation, therapeutic alliance, and outcome among depressed patients in group cognitive-behavioral therapy. Psychotherapy Research, 28(3), 446–456. https://doi.org/10.1080/10503307.2016.1218089
Watzke, B., Rüddel, H., Jürgensen, R., Koch, U., Kriston, L., Grothgar, B., & Schulz, H. (2010). Effectiveness of systematic treatment selection for psychodynamic and cognitive-behavioural therapy: Randomised controlled trial in routine mental healthcare. The British Journal of Psychiatry, 197(2), 96–105. https://doi.org/10.1192/bjp.bp.109.072835
Zivin, K., Pfeiffer, P. N., McCammon, R. J., Kavanagh, J. S., Walters, H., Welsh, D. E., Difranco, D. J., Brown, M. M., & Valenstein, M. (2009). “No-shows”: Who fails to follow up with initial behavioral health treatment? The American Journal of Managed Care, 15(2), 105–112.
MATERIALS SCIENCE
An Introduction to Quantum Computing and Quantum Optimization
BY BEN NEWHALL '23
Cover: A zoomed-in picture of a D-Wave quantum computer. Source: Wikimedia Commons
Abstract
Quantum computing has the potential to transform computation. However, most educational tools for quantum computing are created for graduate students. The goal of this paper is to explain quantum optimization, one small part of quantum computing, using only a basic familiarity with math notation, a bit of chemistry and physics, and an open mind. To accomplish this, I will first define computation and use that definition to show how quantum computers work. Then I will introduce the concept of optimization and draw an analogy between classical and quantum optimization problems. Finally, I will explain VQE, a type of quantum optimization, and end with a discussion of contemporary quantum computing and the uses of quantum optimization.
What is computation?
Computation is based around sets of instructions called algorithms. Algorithms existed long before modern computers, with some algorithms dating back to the Ancient Babylonians. Algorithms allow humans to efficiently solve complex problems, like finding the square root of a number, even without the help of a computer. While algorithms have been around at least since 1800 BCE, it took until 1936 for the notion of a computer to be formalized (Knuth, 1972). Alan Turing created the modern idea of a computer in his response to the decision problem. The decision problem asks if, for every yes/no question that exists, there is an algorithm that can decide yes or no. These yes/no questions are called decision problems. In answering this question, Turing concluded that every algorithm that can decide yes or no for a problem can be solved using a Turing machine (Turing, 1937). The Turing machine kickstarted the whole discipline of computer science by allowing people to not only design algorithms
to solve, but to create new ways of solving them. Modern computers, however, are still versions of the Turing Machine. In a 1937 paper, Alan Turing proved that all decision problems that can be solved can be solved on a Turing Machine. And any computer ever built – from smartphones and calculators to cuneiform tablets – can be classified as a Turing Machine (Nielsen & Chuang, 2010). In fact, the study of artificial intelligence really boils down to one question: can a human be simulated by a universal Turing machine (Muggleton, 2014)? Of course, knowing whether a problem is solvable is completely different from actually solving the problem. Turing only proved that a Turing Machine could solve problems, not how long it would take. Many problems can take billions of years to solve on the most powerful computers that exist today. So, what kind of computer will solve a certain problem most efficiently? Specifically, classical computers have trouble simulating some physical systems. A physical system could be a ball rolling down a hill or the electrons in an atom. In 1985, David Deutsch hypothesized that creating a computer from a physical system could be more efficient. Nielsen and Chuang note: “Deutsch attempted to define a computational device that would be capable of efficiently simulating an arbitrary physical system. Because the laws of physics are ultimately quantum mechanical, Deutsch was naturally led to consider computing devices based upon the principles of quantum mechanics.” (Nielsen & Chuang, 2010, p. 6) These computing devices are what we now call quantum computers.
How do quantum computers work?
To understand quantum computers, one first needs to understand a bit about quantum mechanics. Much of quantum mechanics can be described by versions of the Schrödinger equation, like this one:

$$i\hbar \frac{d}{dt}\,|\psi(t)\rangle = H\,|\psi(t)\rangle$$

For right now, the only part of the equation we will consider is $|\psi(t)\rangle$. $|\psi(t)\rangle$ is the state of a quantum system at a time $t$. So $|\psi(t_0)\rangle$ is the state
of a quantum particle at time $t_0$. According to Newtonian mechanics, it only makes sense to say that an object is in one place at any given time. A baseball is only either in the pitcher’s hand, in the air, or hitting a bat at a given time. In quantum mechanics, however, a particle at a certain time could be anywhere. If a baseball were a quantum particle, it could be in the pitcher’s hand, the air, or hitting the bat at any given time. All of this information representing the state of a quantum particle is contained in the symbol $|\psi\rangle$. Another important part of quantum mechanics is that a quantum state can be a superposition of multiple states:

$$|\psi\rangle = a\,|\phi_1\rangle + b\,|\phi_2\rangle$$
Superposition is just the combination of multiple states. So, some state $|\psi\rangle$ is the state $|\phi_1\rangle$ multiplied by constant $a$ plus the state $|\phi_2\rangle$ multiplied by constant $b$. Superposition can be geometrically represented by the addition of vectors [Figure 1]. In modern computers, information is manipulated using bits. A bit can only be one of two states. A bit can represent true or false, yes or no, or just one or zero. When Deutsch and his colleagues decided to use quantum mechanics for computing, they first had to show that quantum systems can be used to represent arbitrary information. So, while in quantum theory $|\psi\rangle$ is the state of a quantum particle in the physical world, in quantum computing $|\psi\rangle$ is used to represent states of information. In quantum computing, the basic unit of information is the qubit, in place of the bit of classical computing (Nielsen & Chuang, 2010).
Why is representing information with qubits useful?
What advantage does a qubit have compared to a normal bit? Bits can only be two states, 1 or 0, whereas a qubit can represent any number of states. However, these qubit states can also be represented as a superposition of 1 and 0. So, a qubit in superposition between 1 and 0 can be represented as

$$|\psi\rangle = a\,|0\rangle + b\,|1\rangle$$
Figure 1: In this picture, $|\psi\rangle$ is a superposition of $|\phi_1\rangle$ and $|\phi_2\rangle$. Retrieved from Wikimedia Commons
But quantum mechanics adds another twist: a qubit can only be in an arbitrary state when it isn’t measured. When a qubit is measured, it becomes 1 with probability $|b|^2$ or 0 with probability $|a|^2$. This means if you measured the equal-superposition qubit $\frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$ an infinite number of times, it would become 1 half the time and become 0 half the time. Much like normal superposition, qubits can be represented with vector addition. The vectors $|1\rangle$ and $|0\rangle$ are on the graph’s axes, and the vector $|\psi\rangle$ represents the qubit. Because of its constraints, this graph is a sphere [Figure 2]. In practice, quantum computing has its difficulties. Much like digital computers, quantum computers are circuits with gates that perform operations on qubits. A gate takes in a qubit and changes it. It is hard to perform operations on qubits accurately without measuring them. Because of this, qubits are inherently error-prone. Also, current physical implementations of qubits can’t maintain their quantum state for very long, so quantum circuits can only be built with a limited number of gates. While these problems make quantum computing difficult, quantum computers can still provide an advantage over classical computers in multiple instances. One such instance is their capacity for optimization.
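Before moving on to optimization, the measurement rule above can be made concrete with a small classical simulation. This is only a numpy sketch of the measurement statistics for a single qubit, not code for quantum hardware; the amplitudes are set to the equal superposition purely for illustration.

```python
import numpy as np

# Amplitudes of a qubit a|0> + b|1>; here the equal superposition,
# so |a|^2 = |b|^2 = 0.5. Any a, b with |a|^2 + |b|^2 = 1 would work.
a = 1 / np.sqrt(2)
b = 1 / np.sqrt(2)

rng = np.random.default_rng(seed=0)
shots = 100_000  # number of simulated measurements

# Each measurement yields 0 with probability |a|^2 or 1 with
# probability |b|^2, matching the rule described in the text.
outcomes = rng.choice([0, 1], size=shots, p=[abs(a)**2, abs(b)**2])

print("fraction of 0s:", np.mean(outcomes == 0))  # approximately 0.5
print("fraction of 1s:", np.mean(outcomes == 1))  # approximately 0.5
```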
What is Optimization?
Optimization is the process of finding the maximum and minimum values of a function. Sometimes optimization is used to find the maximum or minimum of a function on a certain boundary, but more often than not it is used to find the absolute maximum or minimum of an entire function. Optimization is incredibly fundamental to everyday life. A typical problem could look like: You are in the process of creating a pig pen for your farm, but you only have 50 feet of fencing lying around. What dimensions of the pig pen will give your pigs the most space? A first-year calculus student would recognize that the 50 feet of fencing is a constraint on the perimeter of the pen, and that the area of the pen is the quantity that you want to optimize. Then the student would use calculus to solve the rest of the problem, but in essence the problem, and optimization in general, is much simpler.
No matter what method you use to optimize, you are using a form of the variational principle:

$$f_{\min} \;\le\; f(p_1, p_2, \ldots, p_n) \;\le\; f_{\max}$$

Each function $f$ has certain parameters $p_1, \ldots, p_n$ that you can vary to change the value of the function. This value must always be greater than or equal to the absolute minimum of the function, $f_{\min}$, or less than or equal to the absolute maximum of the function, $f_{\max}$. In the case of the pig pen problem, the width and length of the pig pen are the parameters of the area function. These parameters might have to obey certain constraints, however, like being a whole number, an odd number, or an integer. For the pig pen problem, the perimeter of the pen must equal 50 feet.
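To make this concrete, here is the pig pen problem worked out in full; this standard single-variable calculus derivation is added for illustration and is not part of the original article:

```latex
% Maximize the pen area A subject to the 50-foot fencing (perimeter) constraint.
\begin{align*}
  \text{Constraint:} \quad & 2\ell + 2w = 50 \;\Rightarrow\; w = 25 - \ell \\
  \text{Objective:}  \quad & A(\ell) = \ell\,w = \ell\,(25 - \ell) = 25\ell - \ell^{2} \\
  \text{Optimize:}   \quad & A'(\ell) = 25 - 2\ell = 0 \;\Rightarrow\; \ell = 12.5 \\
  \text{Result:}     \quad & \ell = w = 12.5\ \text{ft}, \qquad A = 156.25\ \text{ft}^{2}
\end{align*}
```

No variation of the parameters $\ell$ and $w$ that respects the constraint can produce an area above 156.25 square feet, which is exactly what the variational principle's inequality expresses.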
The variational principle states that for a minimization problem there is some minimum of the function that each variation of parameters has to be greater than or equal to (Rieger, 2009). Another common optimization scenario is the travelling salesman problem. In the travelling salesman problem, you have to find the shortest route through a group of cities and back home. Anybody can find the correct solution for a few cities and, with enough time, could probably plot the shortest route between ten cities, but the problem gets prohibitively harder the more cities you add. The travelling salesman problem is known as an NP-Hard problem. NP-Hard problems involve a series of decision problems
for which it is easy to check a potential solution, but inefficient to find the solution in the first place (Kirkpatrick, Gelatt, & Vecchi, 1983). The closely related P versus NP question, along with some other problems in computer science and math, has been designated a millennium problem. Each millennium problem has a bounty of one million dollars (Clay Mathematics Institute, 2018), and only one millennium problem has ever been solved. The mathematician who solved it declined the prize money (Mackenzie, 2006). The travelling salesman problem has yet to be solved efficiently, but at its heart, it is still another optimization problem.
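The sketch below shows what brute-force search looks like for the travelling salesman problem and why it breaks down as cities are added: the number of tours to check grows factorially with the number of cities. The coordinates are made up for illustration.

```python
import itertools
import math

# Hypothetical city coordinates; the salesman starts and ends at city 0.
cities = [(0, 0), (2, 4), (5, 1), (6, 5), (8, 0)]

def tour_length(order):
    """Total length of the closed tour 0 -> ...order... -> 0."""
    path = [0, *order, 0]
    return sum(math.dist(cities[path[i]], cities[path[i + 1]])
               for i in range(len(path) - 1))

# Brute force: try every ordering of the remaining cities.
# (n-1)! permutations -- fine for 5 cities, hopeless for 50.
best = min(itertools.permutations(range(1, len(cities))), key=tour_length)
print("best tour:", [0, *best, 0], "length:", round(tour_length(best), 2))
```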
Classical Optimization
Since there is an immense variety of optimization problems, there is also an immense number of methods to solve them. Two examples – gradient descent and genetic algorithms – are vastly dissimilar and work in different ways on different types of problems. Some methods find exact maxima and minima, while others can only find good approximations. These differences can be illustrated with Black Box Optimization Benchmarking (BBOB), which defines a set of optimization problems to test optimization algorithms with. Depending on the shape of the problem, some algorithms will perform better than others, so BBOB gives researchers a way to decide exactly which algorithms are good for which problems (Hansen, Finck, Ros, & Auger, 2009).
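As a concrete taste of one classical method, here is a bare-bones gradient descent on a simple one-dimensional function; the function, starting point, and step size are arbitrary choices for illustration.

```python
# Bare-bones gradient descent on f(x) = (x - 3)^2, whose minimum is at x = 3.
def f(x):
    return (x - 3) ** 2

def grad_f(x):
    return 2 * (x - 3)  # derivative of f

x = 0.0             # starting guess
learning_rate = 0.1
for step in range(100):
    x -= learning_rate * grad_f(x)  # step against the gradient

print(f"x = {x:.4f}, f(x) = {f(x):.6f}")  # converges toward x = 3
```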
Quantum Optimization
Optimization is also fundamental to quantum mechanics. Another form of the Schrödinger equation shows that

$$H\,|\psi\rangle = E\,|\psi\rangle$$

In this equation, $|\psi\rangle$ again corresponds to the state of a quantum system, whereas $H$ determines the total energy of any system. For example, a baseball has kinetic and gravitational potential energy, and its total energy is determined by its $H$. This fundamental principle still stands for quantum mechanics. $H$ for a quantum system determines its total energy. In combination, $H|\psi\rangle$ deals with both the total energy and the quantum state of a particle. The way that $H$ determines the total energy $E$ of a system in state $|\psi\rangle$ is (Bès, 2012, p. 144):

$$E = \langle\psi|H|\psi\rangle$$

This equation can be further modified to look like:

$$\langle\psi|H|\psi\rangle \geq E_0$$

where $E_0$ is the minimum possible energy of a quantum system (also known as the ground state energy). Notice that this looks a lot like an optimization problem. The expected total energy of a quantum particle can never be less than the ground state energy, and by varying $|\psi\rangle$ it is possible to change the expected total energy of the particle. So, by applying the variational principle, you can find both the ground state and the ground state energy of a quantum particle.
This sort of problem is hard to solve with a classical computer. Classical computing is deterministic and quantum mechanics deals with a lot of uncertainty. Quantum computing, however, is particularly suited to solve this problem because it emulates quantum mechanics. Since quantum states can be entirely represented by qubits, varying a quantum state is relatively simple.
Figure 2: An image of a Bloch sphere. Retrieved from Wikimedia Commons

VQE
A Variational Quantum Eigensolver, or VQE, uses qubits, classical optimization, and the variational principle together to solve optimization problems like finding ground state energies of quantum particles. Since qubits can represent a quantum state, operations on a group of these qubits can be used to find the energy associated with a quantum state. These operations are performed by a quantum
circuit composed of multiple gates. Once an energy is calculated by a set of qubits, a classical optimization algorithm uses the variational principle to prepare another set of qubits. This process is then repeated many times until a minimum energy is found – the ground state energy of the quantum particle (Peruzzo et al., 2014).
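The sketch below mimics the VQE loop described above entirely on a classical computer: a parametrized “ansatz” state stands in for the quantum circuit, and a classical optimizer varies the parameter until the expected energy stops decreasing. The 2x2 Hamiltonian is made up for illustration, numpy and scipy are assumed, and a real VQE runs the state-preparation and measurement steps on quantum hardware (Peruzzo et al., 2014).

```python
import numpy as np
from scipy.optimize import minimize

# A made-up 2x2 Hamiltonian for a single qubit; real VQE targets
# molecular Hamiltonians evaluated on actual quantum devices.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """Parametrized single-qubit state cos(t/2)|0> + sin(t/2)|1>.
    (A full single-qubit ansatz would also include a phase parameter;
    real amplitudes suffice for this real, symmetric H.)"""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """Expected energy <psi|H|psi> -- the quantity that, by the
    variational principle, can never fall below the ground state energy."""
    psi = ansatz(theta[0])
    return psi @ H @ psi

# The classical optimizer repeatedly re-prepares the state and
# re-evaluates the energy, mimicking the VQE feedback loop.
result = minimize(energy, x0=[0.0], method="Nelder-Mead")

exact_ground = np.linalg.eigvalsh(H).min()
print(f"Variational estimate:      {result.fun:.6f}")
print(f"Exact ground state energy: {exact_ground:.6f}")
```

Because a 2x2 matrix can be diagonalized directly, the exact answer is printed alongside for comparison; for molecular Hamiltonians, exact diagonalization quickly becomes intractable, which is where quantum hardware is meant to help.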
Conclusion
Because optimization problems are so prevalent in everyday life, the capacity to solve them with VQE is of significant societal value. On this point, to promote their quantum research, Microsoft has started a video series to show how quantum computing can impact the human condition. In the first episode, Microsoft illustrates how quantum optimization can help environmental scientists with land use optimization. By decreasing land use and maximizing land productivity, in addition to other quantum computing applications, Microsoft hopes quantum optimization can create real, practical change (Microsoft, 2020). While quantum computers are great at solving problems involving the incredibly microscopic, like finding the energies of molecules, they can also benefit our comparatively macroscopic lives. Quantum computers are difficult to make. But in the over 30 years since quantum computing’s invention, quantum computers have already become very sophisticated. On October 23, 2019, Google published a paper claiming that its 53-qubit computer was able to complete a problem in 200 seconds that would take IBM’s most powerful supercomputer ten thousand years to solve (Arute et al., 2019). IBM quickly published a rebuttal saying that it would take only two and a half days to run this problem on their supercomputer (Pednault, Gunnels, Maslov, & Gambetta, 2020). Regardless, this anecdote demonstrates the rapid progress of quantum computing.
References
Arute, F., Arya, K., Babbush, R., Bacon, D., Bardin, J. C., Barends, R., … Martinis, J. M. (2019). Quantum supremacy using a programmable superconducting processor. Nature, 574(7779), 505–510. https://doi.org/10.1038/s41586-019-1666-5
Bès, D. (2012). Quantum mechanics: A modern and concise introductory course (3rd ed.). Springer.
Clay Mathematics Institute. (2018). Rules for the Millennium Prizes. Retrieved from https://www.claymath.org/millennium-problems/rules-millennium-prizes
Hansen, N., Finck, S., Ros, R., & Auger, A. (2009). Real-parameter black-box optimization benchmarking 2009: Noiseless functions definitions (Research Report RR-6829). INRIA.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing. Science, 220(4598), 671–680. https://doi.org/10.1126/science.220.4598.671
Knuth, D. E. (1972). Ancient Babylonian algorithms. Communications of the ACM, 15(7), 671–677. https://doi.org/10.1145/361454.361514
Mackenzie, D. (2006). Mathematics. Perelman declines math’s top prize; three others honored in Madrid. Science, 313(5790), 1027–1028. https://doi.org/10.1126/science.313.5790.1027a
Microsoft. (2020). Quantum impact: Computing a more sustainable future (Ep. 1) [Video file]. Retrieved from https://www.youtube.com/watch?v=JUG9MXB7UK4
Muggleton, S. (2014). Alan Turing and the development of Artificial Intelligence. AI Communications, 27(1), 3–10. https://doi.org/10.3233/AIC-130579
Nielsen, M., & Chuang, I. (2010). Quantum computation and quantum information (10th anniversary ed.). Cambridge University Press.
Pednault, E., Gunnels, J., Maslov, D., & Gambetta, J. (2020). On “Quantum Supremacy.” Retrieved from https://www.ibm.com/blogs/research/2019/10/on-quantum-supremacy/
Peruzzo, A., McClean, J., Shadbolt, P., Yung, M.-H., Zhou, X.-Q., Love, P. J., … O’Brien, J. L. (2014). A variational eigenvalue solver on a photonic quantum processor. Nature Communications, 5(1), 4213. https://doi.org/10.1038/ncomms5213
Rieger, H. (2009). Optimization problems and algorithms from computer science. Encyclopedia of Complexity and Systems Science, 6407–6425. https://doi.org/10.1007/978-0-387-30440-3_378
Turing, A. (1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230–265. https://doi.org/10.1112/plms/s2-42.1.230
CHEMISTRY
Rural Healthcare Accessibility
BY CATHERINE ZHAO '20 Cover: An example of a rural region. Source: Unsplash
Introduction
If you are living in a rural area, you may not have direct access to healthcare. Instead of driving a couple of minutes for a checkup with a doctor or for an emergency visit at the hospital, you may have to travel hours to receive care (Figures 1, 2). Because of this disparity in access, rural residents spend more time traveling to see healthcare providers and are more likely to receive dangerously late diagnoses for potentially fatal diseases, including cancer and heart disease (Marquis-Eydman, 2019). The differences in disease management, life expectancy, and mortality rates all signify the disparity in healthcare accessibility between urban and rural areas. This disparity may be exacerbated as rural doctors begin to retire while newer doctors choose to practice in more urban settings (Jaret, 2020). The need for rural physicians is growing every day across nations worldwide. The pressing challenge of increasing healthcare accessibility is one that needs to be addressed immediately. The United States is plagued by healthcare inaccessibility in rural areas (Figure 1). There
are a few main reasons for this. First, there is a lack of healthcare professionals and hospitals in these areas. Around 60 million Americans (19.3% of the population) live in rural areas, but only 9% of healthcare professionals work in these areas (Estes, 2020). There are even fewer specialty doctors in rural areas, since many choose to work in urban areas. Second, rural hospitals are shutting down at an alarming rate (Cossman, 2017). Of the nation’s 1,844 rural hospitals, 120 have closed over the last 10 years, accounting for 7% of rural hospitals (Estes, 2020). Another 453 rural hospitals are considered vulnerable to closure because they share the conditions of hospitals that have already closed (Estes, 2020). The most significant contributor to the closure of rural hospitals is cost, specifically due to insurance. The percentage of the population that is uninsured is high in rural areas, despite the fact that many of these residents should be covered by Medicaid. Unfortunately, many rural states have not adopted Medicaid expansion, leaving hospitals that cannot bill Medicaid operating at a loss (Estes, 2020). Coupled with the lack of health insurance and hospital closures, even if
more doctors were to become available, they may not be of much use to the residents in the area because most rural residents do not have conventional health insurance (Rosenblatt, 2000). China is another example of a nation with poor healthcare access in rural areas (Figure 2). With the rapid economic growth of China in recent decades, the gap between rich urban areas and poor rural areas, including in their access to healthcare, has also grown (Lee, 2001). The 70% of Chinese citizens who live in rural areas face financial barriers to receiving healthcare (Lee, 2001). In addition to these financial setbacks, those in rural areas have access to fewer healthcare workers and fewer top-tier trained healthcare workers (Lee, 2001). In China, the three levels of doctors are countryside doctors, health school trained doctors, and university medical school trained doctors. Countryside doctors and health school trained doctors have the least education and training, yet they are the most prevalent in rural areas (Lee, 2001). University medical school trained doctors and other top-level doctors rarely work in rural areas (Lee, 2001). Overall, healthcare is not immediately accessible in rural areas in China. Many rely on home remedies or long travel for more serious conditions (Project Partner).
Community Building
Some believe that the solution to increasing healthcare accessibility in rural areas is to start early in a physician’s career. During medical school, rural immersion programs and rural residency programs are provided (Marquis-Eydman, 2019; Jaret, 2020). The goal of these programs is to demonstrate the value of rural areas to medical students, such as the opportunity for more hands-on experience, and to entice future doctors to work in rural areas. Studies also show that those who grow up in a rural area far removed from urban areas are more likely to return to rural areas (Jaret, 2020). Medical schools try to reduce barriers for students from such rural areas. They make an extra effort to reach out and encourage students from rural areas to pursue a healthcare profession, in hopes that they will be more likely to return to work in a rural area than students who grew up in urban environments. Medical schools are also addressing the lack of community in rural areas, since this deters medical students from practicing there. They are working on building a community and network for rural doctors early on during medical school by providing specialized seminars on medical practices in rural areas, forming student networks, and offering rural immersion experiences, all in the hopes of building a network where students and doctors can brainstorm ideas to help rural communities while also helping students understand the significance of their work in
Figure 1. This picture shows an area of rural America, in Midwest Nebraska. Residents in rural areas often have to travel long distances in order to receive quality healthcare. Source: Pixabay
Figure 2. This picture shows a rural area in Changchun, Jilin, China. Most doctors who live in rural areas in China are not well trained. Source: Wikimedia Commons
Inaccessibility to healthcare in rural areas is a problem that impacts countries across the globe. Each country has its own unique issues, related to other systems like government and the overall state of healthcare. However, the consensus is that healthcare is lacking in quality and accessibility for rural patients.
Figure 3. An example of telemedicine. Doctors can view patient information and monitor patients virtually. Source: Wikimedia Commons
these communities (Jaret, 2020).
Telemedicine
Figure 4. An example of telemedicine. Doctors can see and provide care for patients virtually. Source: https://www.minot.af.mil/News/Article-Display/Article/1933933/telemedicinevirtual-care/
Another growing solution to healthcare inaccessibility in rural areas is telemedicine. Part of telemedicine is the utilization of technology to provide remote clinical services, allowing doctors to interact with and prescribe treatment to patients virtually (Nittari, 2020; Rural Health Information Hub, Telehealth Use in Rural Healthcare) (Figures 3, 4, 5). Telemedicine can bring higher-quality healthcare services to rural areas at a lower price. It allows specialty doctors to see rural patients virtually for services including oncology, emergency care, therapy, pharmacy services, and long-term care (Rural Health Information Hub, Telehealth Use in Rural Healthcare). If rural hospital doctors do not have enough experience with a medical situation, they can use telemedicine to receive help from doctors all over the world (Darves, 2013). This service increases collaboration and develops a community of doctors for those working in rural areas. Telemedicine also saves on the costs of emergency hospital transfers as well as transportation costs for patients in rural areas (Natafgi, 2018). The utilization of telemedicine is increasing in both urban and rural areas. In 2014, a third of rural hospitals had telehealth services, while the other two-thirds were beginning to implement these services (Rural Health Information Hub, Telehealth Use in Rural Healthcare). In the United States, the government helps rural communities adopt telemedicine through grants like the Rural Health Information Technology Network Development and the Rural Health Workforce Development grant programs (Darves, 2013). The Rural Health Information Technology Network Development grant offers
up to $100,000 to rural healthcare networks for one year of planning to develop the network. Development of the network includes improving system efficiency, expanding healthcare access, and strengthening the rural healthcare system within the community. In order to receive the grant, the organization must be a rural nonprofit organization, public entity, or federally recognized tribe that has not previously received funding from the grant (Rural Health Information Hub, Rural Health Network Development Planning Program). There are several grants relating to the Rural Health Workforce Development program. Each grant has different requirements, but all share a focus on offering education or support to healthcare workers in rural communities. For example, the Rural Residency Planning and Development Program
aims to expand the physician workforce in rural communities, and the Nurse Education, Practice, Quality and Retention Simulation Education Training Program works to improve nursing education and to strengthen the nursing workforce (Rural Health Network, Rural Healthcare Workforce – Funding & Opportunities). However, the application of telemedicine does come with its own barriers, such as reimbursement, cross-border licensure issues, administrative and financial burdens, and issues pertaining to the patient-doctor relationship, such as professional conduct and patient safety liabilities as well as data privacy concerns (Center for Medicare and Medicaid Services, 2018; Nittari, 2020). While telemedicine is spreading in both developed and developing countries, all face similar burdens from a lack of unified legislation (Nittari, 2020). For example, in Africa, the spread of telemedicine is important for providing healthcare to rural areas, but countries there face ethical and legal issues regarding telemedicine across borders (Mars, 2013). Cross-border issues refer to doctor liability, jurisdiction, and licensure between different states and countries (Mars, 2013). North America deals with telemedicine practices and licenses that vary from state to state (Nittari, 2020). In Taiwan, there are no clear-cut regulations or procedures for dealing with telemedicine disputes (Nittari, 2020). Addressing these various security, legal, and ethical issues could raise the costs of telemedicine. While technology does have its limits, the rise of telemedicine has proved to be extremely beneficial to rural patients. It allows patients to receive care without the burdens of
transportation and the lack of healthcare providers, allowing for long-term care via remote patient monitoring (Schulte, 2019). Across the many branches of healthcare, the efficacy of these treatments varies. On one hand, telemedicine can alleviate the burden of traveling for rural patients with chronic conditions that require regular checkups and can increase the number of patients who receive treatment for mental health conditions; on the other hand, telemedicine is a barrier to the trust that can be built through in-person interactions between doctors and patients. Many patients count on their doctor to understand them and their history and may be wary of telemedicine services, which can involve unfamiliar doctors and can decrease the personalization that usually goes hand in hand with face-to-face visits. Even so, many in-person visits can be replaced by virtual ones. Telemedicine can be improved with more robust infrastructure and regulations; however, it has addressed a major problem that plagues rural communities: limited access to healthcare.
Figure 5. An increasing amount of healthcare can be provided virtually, including through web and mobile applications. Some telemedicine applications include swyMed (emergency medicine), Curology (dermatology), and Teladoc (virtual doctor visits). Source: Pixabay
Conclusion

Healthcare accessibility needs to be increased in rural areas around the world. In several nations, there are gaping disparities between healthcare systems in urban versus rural areas. This is a challenge that nations are seeking to address through education, the innovation of technology, and the development of new regulations and initiatives. Especially with the growth of the telehealth and telemedicine industries, technology can play an important role in allowing rural residents to receive a full range of higher quality healthcare services without having to travel for hours or forgo treatment. There are still barriers to overcome in implementing and disseminating telemedicine. First, an in-person interaction cannot fully be replaced with technology. The virtual experience can continually be improved upon in order to provide as professional and useful a healthcare experience as possible. Additionally, new regulations regarding the rise of technology and AI in telemedicine need to be established in order for the field to grow, allowing more doctors to practice telemedicine. It is especially critical to advocate for nationwide policies and regulations regarding technology and AI. Currently, this is a nebulous space that has been dictated largely by private companies. While the advancement
in the field has been rapid due to deregulation, more problems with technology, and more distrust toward it, may arise until additional regulation is established. This could impede the potential applications of technology, including in healthcare. A sense of distrust and disconnection with technology and telemedicine is already an issue, so it is crucial to address these concerns in order to bring awareness and advancement to telemedicine globally. In the future, more steps need to be taken to flesh out a robust, long-term plan to bring seamless access to healthcare to patients in rural areas.
References

Center for Medicare and Medicaid Services. (2018). CMS Rural Health Strategy. https://www.cms.gov/About-CMS/Agency-Information/OMH/Downloads/RuralStrategy-2018.pdf

Cossman, J., Wesley, J., & Wolf, J. K. (2017). The differential effects of rural health care access on race-specific mortality. SSM Population Health, 3, 618-623. https://doi.org/10.1016/j.ssmph.2017.07.013

Darves, B. (2013). Telemedicine: Changing the Landscape of Rural Physician Practice. NEJM CareerCenter. https://www.nejmcareercenter.org/article/telemedicine-changing-the-landscape-of-rural-physicianpractice/

Estes, C. (2020). 1 In 4 Rural Hospitals Are At Risk Of Closure And The Problem Is Getting Worse. Forbes. https://www.forbes.com/sites/claryestes/2020/02/24/1-4-rural-hospitals-are-at-risk-ofclosure-and-the-problem-is-getting-worse/#2e3e52631bc0

Jaret, P. (2020). Attracting the next generation of physicians to rural medicine. Association of American Medical Colleges. https://www.aamc.org/news-insights/attracting-next-generation-physicians-ruralmedicine

Lee, C. (2001). Healthcare in rural China: a view from otolaryngology. Journal of the Royal Society of Medicine, 94, 190-193. https://doi.org/10.1177/014107680109400414

Marquis-Eydman, T. (2019). Strong medicine is needed to solve America's rural health crisis. STAT. https://www.statnews.com/2019/07/25/ruralhealth-crisis-physicians/

Mars, M. (2013). Telemedicine and Advances in Urban and Rural Healthcare Delivery in Africa. Progress in Cardiovascular Diseases, 56(3), 326-335. https://doi.org/10.1016/j.pcad.2013.10.006

Natafgi, N., Shane, D. M., Ullrich, F., MacKinney, A. C., Bell, A., & Ward, M. M. (2018). Using tele-emergency to avoid patient transfers in rural emergency departments: An assessment of costs and benefits. J Telemed Telecare, 24(3), 193-201. https://doi.org/10.1177/1357633X17696585

Nittari, G., Khuman, R., Baldoni, S., Pallotta, G., Battineni, G., Sirignano, A., Amenta, F., & Ricci, G. (2020). Telemedicine Practice: Review of the Current Ethical and Legal Challenges. Telemedicine and e-Health. https://doi.org/10.1089/tmj.2019.0158

Project Partner. (n.d.). Poverty and Healthcare in Rural China. https://projectpartner.org/poverty/poverty-healthcare-ruralchina/

Rosenblatt, R. A., & Hart, L. G. (2000). Physicians and rural America. Western Journal of Medicine, 173(5), 348-351. https://doi.org/10.1136/ewjm.173.5.348

Rural Health Information Hub. (n.d.). Telehealth Use in Rural Healthcare. https://www.ruralhealthinfo.org/topics/telehealth#different-from-telemedicine

Rural Health Information Hub. (n.d.). Rural Health Network Development Planning Program. https://www.ruralhealthinfo.org/funding/218

Rural Health Information Hub. (n.d.). Rural Healthcare Workforce – Funding & Opportunities. https://www.ruralhealthinfo.org/topics/health-care-workforce/funding

Schulte, A., Majerol, M., & Nadler, J. (2019). Narrowing the rural-urban health divide. Deloitte. https://www2.deloitte.com/us/en/insights/industry/public-sector/virtual-health-telemedicine-rural-areas.html
PSYCHOLOGY
Should Doctors Be Playing Games? BY DEV KAPADIA '23
Cover: This is an example of the expectations for augmented reality to assist surgeons in the future. Using AR, surgeons can see key features in the body without having to make incisions or place foreign objects into small cuts. This could not only increase the probability of success for surgeries but also decrease infection in these patients. Source: Flickr

Introduction
When Hollywood directors set their movies in futuristic scenes using flying cars and robotic assistants, one feature that is almost always present is the use of augmented reality (AR) or virtual reality (VR). Movies like The Matrix, Ready Player One, Tron, and even Spy Kids all made intensive use of virtual reality and augmented reality, but these technologies are no longer features of the future; rather, they are now features of the present. AR uses glasses and headsets to add virtual elements to the world that users see and interact with (The Important Difference Between Virtual Reality, Augmented Reality and Mixed Reality, n.d.). Users can still see the real-life books on their bookshelf, but an augmented reality headset could add a layer of red to that bookshelf to help them see if it could use a new coat of paint. AR is more familiar and accessible to the general population, especially to young adults. Examples include Pokémon Go and Snapchat filters, which can add layers of Pokémon, Pokéballs, dog ears, and flower tiaras to a user's view of the world, thereby augmenting reality.
VR, on the other hand, takes the user completely out of reality and instead places their perspective in a world generated entirely by the VR technology, as shown in Figure 1, where the user's reality is that of an archer (The Important Difference Between Virtual Reality, Augmented Reality and Mixed Reality, n.d.). One extremely popular example of VR is the early-stage company Oculus. Oculus creates headsets, such as the Quest, that completely immerse the user in the virtual world the device creates. The technology was seen as so revolutionary that Mark Zuckerberg claimed it would be the new medium of online social interaction when he bought the company for $3 billion in 2014, just two years after the former Oculus CEO launched a Kickstarter campaign for the company's first headsets (Cava, 2017). Now Oculus is one of the biggest VR companies, competing with giants like Apple and Samsung for sales. VR and AR, however, potentially have benefits outside of the realm of entertainment. Medicine, in particular, can benefit from VR and AR technology in a variety of ways. Because of the delicate nature of the human body, medical professionals must be extremely careful when performing operations or
Figure 1: VR is especially popular in the entertainment sector. Placed in their own gaming world, users can feel as though they are actually going on adventures, playing in a football game, or even shooting flaming arrows at enemies. Source: Wikimedia Commons
administering injections to ensure a mistake is not made that puts the patient's life in jeopardy. Furthermore, they must conduct these operations with limited visibility, as every cut that is made or camera that is placed in the body is another action that disrupts the patient's usual bodily functions. Even training for these operations can be costly, frustrating, and many times simply unavailable. Because VR and AR can create complete views of what healthcare workers need to see, either for training or treatment, they can improve the way that healthcare is provided. VR and AR have the potential to help medical professionals through medical training, surgery, and even patient pain management.
Virtual Reality in Medical Training

Before doctors can even begin their first operation on or treatment of a patient, they must first be trained so that they know how to do so safely. This entails an extensive education, which includes a pre-medicine track in their undergraduate education, the completion of medical school, a medical residency where they get the opportunity to work with real patients, and even fellowships for students who want to further specialize in their respective field. While the steps may be slightly different, countries across the globe have a form of this tiered educational system for aspiring doctors and other types of medical professionals to secure the training they need to perform their duties well (Wallenburg et al., 2010). But to improve the doctors of tomorrow, today's training processes and curricula are constantly being revised and expanded.
In the Netherlands, students in postgraduate medical programs are asking for more transparency in the medical process to provide efficient care to patients (Wallenburg et al., 2010). In Russia, along with understanding their field and securing experience, doctors are expected to develop fluency in Microsoft Office and skills in teamwork and analysis to better contribute to innovation in the form of medical research (Akopov et al., 2014). In the Philippines, limited access to resources along with inadequate training programs has led students to call on the government to reform the way that medical resources are managed and the way that curricula are constructed (Guinto, 2012). Lastly, in the United States, training programs are being implemented to emphasize quality of treatment and patient safety (Wallenburg et al., 2010). With these major changes in medical education, institutions are starting to modernize by integrating technology into training programs to improve the efficiency of medical education as well as address many of the problems that the reform programs listed above are trying to solve. Optimized resource management, efficient learning programs, and limited use of human bodies are all important considerations that universities are taking into account when improving training programs. This is where VR technology can greatly help the medical industry. One important factor to consider in medical training is the safety of the student. Doctors use a variety of needles and sharp laboratory equipment, and a simple loss of focus for even a second can cause injury to not just the
patient but also to the doctor. A needlestick or sharp injury (NSI) is an injury that is sustained due to accidental piercing by a sharp piece of medical equipment. NSIs are a threat to students worldwide – almost 40% of students report that they have sustained an NSI sometime in their training period (Ghasemzadeh et al., 2015). The NSI rate was found to be particularly high in areas of Taiwan, which led Wu et al. (2020) to develop a VR game designed to increase familiarity and ease anxiety with using sharp objects. As a result of using the game, interns' confidence and familiarity with handling sharps more than doubled, and the NSI rate in Taiwanese subjects decreased from an average of close to 80% for both groups of interns to an average of 35% for medical interns and 31% for nursing interns in the test groups (Wu et al., 2020).
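The distinction between an absolute and a relative reduction is easy to blur when reading figures like these. The short Python sketch below works through the arithmetic using the approximate rates quoted above; the numbers are rounded values taken from the text for illustration only, not the study's raw data.

```python
# Illustrative arithmetic only, using the approximate NSI rates quoted
# above from Wu et al. (2020); these are rounded figures from the text,
# not the study's raw data.
baseline = 0.80  # approximate pre-training NSI rate for both groups
post_training = {"medical interns": 0.35, "nursing interns": 0.31}

for group, rate in post_training.items():
    absolute = baseline - rate      # drop in percentage points
    relative = absolute / baseline  # drop as a share of the baseline
    print(f"{group}: {absolute:.0%} absolute, {relative:.0%} relative reduction")
```

Read this way, a drop from roughly 80% to 35% is a 45-percentage-point absolute reduction, but a reduction of more than half relative to the baseline rate.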
VR training has also proven to be an excellent tool for students learning the anatomy of the human body. Traditionally, the most common forms of anatomical teaching have been lectures, videos, and dissections of cadavers. While lectures and videos can be informative, they do not allow students to physically see how parts of the body look or to macroscopically interact with parts of the body. Cadavers (deceased human bodies) do allow for this dissection, but there are high costs associated with purchasing and maintaining cadavers. Furthermore, there is only a limited supply, and the quality of a cadaver decreases the longer students wait to dissect it (Falah et al., 2014). These problems could be addressed by using VR technology to create 3D visuals of body parts that would replace traditional cadavers; this would allow students to get real hands-on experience with anatomy without needing an actual body.
Falah et al. (2014) further outlined the benefits of this virtual body experience by identifying three potential improvements upon lectures and cadavers that VR can provide to enhance students' understanding of anatomy: learning through interaction with the virtual body, the ability to manipulate relative sizes to enhance views of certain structures, and a wider availability of VR technology than traditional cadavers or scheduled lectures. After providing a VR interface designed for anatomical exploration of the heart, ten medical professionals were asked to complete a ten-question questionnaire asking how much they agreed or disagreed that the VR systems designed by Falah's team optimized the students' learning processes for different aspects. All ten professionals reported some level of agreement with every question on the questionnaire (19% slightly agree, 54% mostly agree, and 27% completely agree). VR technology can also be utilized to help train aspiring surgeons on how to perform surgeries. During their training, junior doctors observe senior surgeons during their operations, and sometimes the junior doctors get their own hands-on experience with intense oversight from senior surgeons (Li et al., 2017). As shown in Figure 2, VR technology has been developed to allow students to gain the technical skills required for surgery through repetition and practice operations without using real people. Even better, VR surgeries do not require supervision and can provide metrics such as the duration of the surgery, the number of incidents and injuries, and which anatomic structures the student identified. Further, because of a student's ability to access VR technology on demand, wider access to training can
Figure 2: An ophthalmologist prepares for an upcoming surgery through a VR simulation walkthrough of the actual process. Through VR technology like this, medical professionals will have greater opportunity to practice before hands-on surgery. Source: Wikimedia Commons
be provided to users of surgical VR simulations. VR can also simulate unusual patient features to give students practice with cases that won't respond well to traditional protocols. Simply put, the customizability, interactivity, and accessibility of VR technology can greatly improve surgical training. For quantifiable results, Seymour et al. (2002) found that VR-trained residents performed gallbladder dissections 29% faster, were five times less likely to injure the gallbladder, and were nine times less likely to transiently fail to make progress in the surgery compared to non-VR-trained residents. In some programs, VR is seen as so beneficial to surgical training that it has become a prerequisite for actual surgical operations (Li et al., 2017).
Virtual Reality and Augmented Reality During Surgery

While VR and AR can greatly improve the training process for many doctors, the technology can also help experienced surgeons with operations. Just as there are concerns about students not being able to see body parts in full detail, surgeons are always looking for ways to build the most complete view of a patient's body while limiting the number of times they make incisions or put foreign objects, like cameras or sensors, in the patient's body. VR can be a great asset to surgeons, as the technology can produce 3D models of a patient's body parts so that points of interest can be highlighted and inspected to better understand where the problem is and how a surgeon should address it. Surgeons at Stanford University have already started to incorporate VR technology that takes in medical imaging such as CT scans, MRIs, and angiograms to build 3D models that surgeons say provide much more detail of the body than traditional methods do (Virtual reality system helps surgeons, reassures patients, n.d.). The benefit of the technology extends beyond the operating room, as it can reassure patients that the surgeon understands how the condition is affecting them specifically, making the patient far calmer and more relaxed than when surgeons have only various scans on which to base their entire surgery. However, surgeons eventually have to step out of the computer-generated world of VR and into the actual operating room to perform the operation. In this setting, AR is much more helpful than VR, as it can augment the perspective and capabilities of the surgeon. For instance, many surgeons must interact with a wide area of the patient's internal body in order to successfully perform the surgery. This often entails making a small incision and using a long instrument with cameras to see what they are actually doing inside the body. However, if augmented reality were incorporated into the operation, AR could build a 3D view of the body and allow doctors to potentially "see through" the skin in real time. Then, the doctors could see where their instrument is in relation to the entire section of the body instead of just relying on the perspectives of the multiple cameras (Khor et al., 2016). In fact, this technology has been proposed to extend to helping students interact with different parts of an AR-developed human body, as shown in Figure 3.

Even outside of this sophisticated application of AR technology to medicine, current AR devices have been well-received by the surgical community. Google Glass was first released in 2013 and received a significant amount of attention. It was particularly popular within the surgical community because of its hands-free documentation and recording capabilities, along with the ability to search for rare medical diseases, terms, and billing codes (Muensterer et al., 2014). While improvements needed to be made to address privacy concerns and support specialized medical applications, the surgical community was optimistic about the continued development of surgical AR technology by Google and others. Unfortunately, Google Glass was discontinued in 2015 because of its high price point and privacy concerns that inhibited its sales potential to the general public (Google Glass Lives on in the Workplace. The Latest Pair Costs $999 - CNN, n.d.). Nevertheless, Google Glass demonstrated the real potential AR has to succeed in the field of medicine, which has spurred many medical AR startups today, and it is just a matter of time before one company releases a truly revolutionary product.
Virtual Reality in Patient Care

While VR and AR technology can be widely beneficial to medical professionals, they can also be extremely helpful in facilitating patient care. There are different types of pain and, consequently, different means of pain management. Acute pain is defined as pain that arises suddenly and has some obvious cause. Chronic pain, by contrast, is pain that results from an underlying condition and continues for longer periods of time (Acute Pain vs. Chronic Pain, n.d.).
Figure 3: Just as the image above allows children to zoom into specific parts of the body, augmented reality is expected to greatly benefit surgeons in the future by allowing them to construct clear images of the body without making incisions. Though AR technology is not yet sophisticated enough to provide these functions, AR companies are growing more aware of the demand for their technology in the medical field, and the surgical field in particular. Source: Wikimedia Commons
Traditional methods of acute pain management involve using anesthetics or other drugs that act as analgesic (pain-relieving) agents (Kehlet & Holte, 2001). VR, however, could be introduced into the clinical setting to distract the patient from their pain. Much like hypnosis, the colors, noises, and visuals observed by individuals while they are invested in an activity take their attention away from the acute pain they are feeling and instead redirect their focus to the game being played. Several studies have shown that VR can be effective in reducing the pain perceived by patients (Li et al., 2017). In particular, VR distraction was shown to be effective with adolescent patients. Chronic pain, on the other hand, has been studied far less than acute pain. Because of the longevity of chronic pain, there are many more options concerning the timing and use of VR in pain management, which makes it harder to study in a controlled manner. However, one study by Sato et al. (2010) demonstrated that coupling VR gaming with pain treatment once a week for five to eight weeks produced a long-term pain reduction of more than 50% in four of five patients, indicating VR technology may be a promising method of pain management not just for acute pain, but also chronic pain.
Economic Analysis

In order for VR and AR technologies to be introduced and maintained in the economy, they must be economically sustainable. Given
the use cases described above, there are three primary markets for VR and AR technology in the medical field: the learning market at medical universities and institutions, the doctor-facing hospital market (for assistance with surgeries and everyday activities), and lastly the patient-facing hospital market (for pain management and other aspects of patient care). Each of these markets should be addressed differently due to the differing dynamics that underlie the wants and needs that lead to purchases of technology. Due to the novelty of the medical VR and AR industry, along with the complexity of the industry in general, extreme sophistication is required to conduct an accurate economic analysis, as each medical institution has a unique set of needs; existing economic analyses are therefore scarce (Lin et al., 2018). That being said, Delshad et al. (2018) conducted an economic analysis for pain management VR that has uncertainty built into the model by testing a variety of inputs. With uncertainty built in, the researchers can help analyze whether VR will be profitable in the future depending on the different ways that technological innovation proceeds, thereby making no absolute claims on profitability because the technology is currently not sophisticated enough. The two methods that account for this uncertainty are a sensitivity analysis, a method that presents outcomes as a table based on varying the inputs, and a Monte Carlo simulation, which models complex systems that cannot easily be predicted because of randomness inherent within the system.
By using these two methods, the researchers accounted for some of the complexity of the medical market and were able to test the cost-effectiveness of a variety of inputs. Analyzing the cost-effectiveness of VR pain management technology at a variety of inputs provides thresholds for the lowest possible values at which VR technology is still profitable for hospitals to implement. If hospitals find it profitable, then there will be demand that can ensure sales for companies providing this VR technology, and the economic feasibility of these companies can be evaluated.

Delshad et al.'s (2018) sensitivity analysis produced six major results. Firstly, at least 14.6% of patients must utilize VR therapy. Secondly, the fixed costs of VR programs, which are initial payments that do not vary with the number of products produced, must be less than $326,872. Thirdly, VR variable costs, which are costs incurred every time a product is produced, must be less than $31.27. Fourthly, the probabilities of minor and major adverse health effects as a result of using VR technology must be less than 21.8% and 0.06%, respectively. Fifthly, there must be at least 11,485 patient admissions to the hospital every year. Lastly, VR must reduce the marginal costs of the last day of hospitalization by at least 14.6%. While the researchers' estimates for all of these variables were well within the thresholds, if any of these requirements are not met, then VR programs will, on average, not save costs for hospitals. According to the Monte Carlo probabilistic sensitivity analysis, 1000 simulations with random parameters for the fixed costs of VR ($100,000 to $1,000,000) and hospital admissions per year (5,000 to 50,000) resulted in VR remaining cost-saving in 89.2% of trials (Delshad et al., 2018); a simplified sketch of both methods appears below. Ultimately, the research team concluded that VR will reduce costs if it results in a decrease in the length of a patient's hospital stay, an effect which will be determined in large part by the efficacy of the actual technology.

Unfortunately, promising products that could be applicable to the medical market are not always targeted to this market, and the lack of specialization for the medical field leaves a product vulnerable to being considered a "failure" if it does not meet expectations in the general economy (an example of this is Google Glass). Technological sophistication, especially in the field of medical devices, is a huge competitive advantage for a startup.
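To make the two methods concrete, here is a minimal Python sketch of a one-way sensitivity sweep and a Monte Carlo run, assuming a deliberately simplified cost model. Only the two Monte Carlo sampling ranges come from the study as quoted above; the per-patient cost, savings figure, and cost formula are hypothetical placeholders, so the output is not expected to reproduce the published 89.2% figure.

```python
import random

# Toy cost model -- NOT Delshad et al.'s actual model. Only the two
# Monte Carlo sampling ranges below are quoted in the text; the other
# parameters are hypothetical placeholders for illustration.
VARIABLE_COST = 25.0        # per-patient cost of VR therapy (hypothetical)
SAVING_PER_PATIENT = 120.0  # avoided hospitalization costs (hypothetical)

def vr_is_cost_saving(fixed_cost, admissions, utilization):
    """True if total savings exceed total costs under the toy model."""
    vr_patients = admissions * utilization
    total_cost = fixed_cost + VARIABLE_COST * vr_patients
    return SAVING_PER_PATIENT * vr_patients > total_cost

# One-way sensitivity analysis: hold every other input fixed, vary one.
for utilization in (0.05, 0.10, 0.146, 0.20, 0.30):
    result = vr_is_cost_saving(fixed_cost=500_000, admissions=20_000,
                               utilization=utilization)
    print(f"utilization {utilization:.1%}: cost-saving = {result}")

# Monte Carlo probabilistic sensitivity analysis: sample the uncertain
# inputs from the ranges quoted above ($100k-$1M fixed costs,
# 5,000-50,000 annual admissions) and count cost-saving trials.
rng = random.Random(0)
trials = 1000
wins = sum(
    vr_is_cost_saving(rng.uniform(100_000, 1_000_000),
                      rng.uniform(5_000, 50_000),
                      utilization=0.20)
    for _ in range(trials)
)
print(f"VR remained cost-saving in {100 * wins / trials:.1f}% of trials")
```

The design point is the same one the study exploits: rather than committing to a single forecast, the sweep exposes the threshold at which the conclusion flips, and the Monte Carlo run reports how robust the conclusion is across the plausible parameter space.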
Once technology is advanced enough to effectively improve medical training and surgical procedures, there will be ample demand to sustain a business. This is especially true if the cost of the devices decreases, which would unlock the potential to sell to low-income regions that are currently incapable of buying medical VR and AR technology that can cost hundreds of thousands of dollars in today's market (Parham et al., 2019). However, once the industry matures further, the variable requirements listed by Delshad et al. should be met to have the best chance at sustaining VR and AR in the medical device market. Another important consideration for medical VR and AR technology suppliers should be the possibility of making their products open source (Muensterer et al., 2014). Open source is a designation indicating that the source code for the technology is available for download and modification by consumers. Open source should be an important consideration in the minds of general VR and AR developers (and medical VR and AR developers in particular) because it allows users to innovate on their own. This can stimulate the creation of applications for VR and AR that address problems that manufacturers did not know existed. This phenomenon has been demonstrated to work in many consumer products, especially in the smartphone industry. Many people consider Apple to be the king of the smartphone market by far; however, Apple is just the king of profiting from this industry. In reality, Android held an estimated 87% of the market share for smartphone technology in 2019 and has seen its market share increase since 2014 (Android vs iOS Market Share 2023, n.d.). One factor that contributes to Android's widespread acceptance is the open source nature of Android phones, allowing innovative users to customize their phones the way they want to. One of the biggest complaints that surgeons had when using Google Glass during surgical procedures was that there were no apps specific to medical use, which could be rectified if the technologies were made open source (Muensterer et al., 2014). Although there is a possibility for users to change source code and accidentally harm patients, there is little chance that modified designs could be used without some regulatory approval, thereby preventing harm to patients and liability to companies while still allowing users to experiment. In general, if VR and AR producers made their products open source, consumers could develop their own suite
of medical apps to augment the technology themselves, which the company could then buy to incorporate into their technology.
Conclusion
Although the technology is still in the early stages of development, early research on medical VR and AR has demonstrated benefits in a variety of markets. Medical VR and AR technology can be utilized in medical training, medical operations, and even pain management for patient care in hospitals. The economic analysis, however, remains somewhat unclear, as the industry is still quite new and many variables are uncertain. And even those optimistic about VR and AR applications state the need for continued testing of these technologies (State Of Innovation The Next US Medtech Hotspot, n.d.). However, the integration of VR and AR is an exciting prospect, and with the momentum behind futuristic innovation, the future of virtual medicine may come a lot sooner than we think.

References

Acute Pain vs. Chronic Pain: What it is & Differences. (n.d.). Cleveland Clinic. Retrieved May 20, 2020, from https://my.clevelandclinic.org/health/articles/12051-acute-vs-chronicpain

Akopov, V. S., Otstavnov, S. S., & Breusov, A. V. (2014). Problems in Development of Medical Industry: Personnel Training Aspect. Biomedical Engineering, 47(6), 279–281. https://doi.org/10.1007/s10527-014-9390-9

Android vs iOS market share 2023. (n.d.). Statista. Retrieved May 20, 2020, from https://www.statista.com/statistics/272307/market-share-forecast-for-smartphone-operating-systems/

Cava, M. della. (2017, January 17). Oculus cost $3B not $2B, Zuckerberg says in trial. USA Today. Retrieved May 2, 2020, from https://www.usatoday.com/story/tech/news/2017/01/17/oculus-cost-3billion-mark-zuckerberg-trial-dallas/96676848/

Delshad, S. D., Almario, C. V., Fuller, G., Luong, D., & Spiegel, B. M. R. (2018). Economic analysis of implementing virtual reality therapy for pain among hospitalized patients. Npj Digital Medicine, 1(1), 1–8. https://doi.org/10.1038/s41746-018-0026-4

Falah, J., Khan, S., Alfalah, T., Alfalah, S. F. M., Chan, W., Harrison, D. K., & Charissis, V. (2014). Virtual Reality medical training system for anatomy education. 2014 Science and Information Conference, 752–758. https://doi.org/10.1109/SAI.2014.6918271

Ghasemzadeh, I., Kazerooni, M., Davoodian, P., Hamedi, Y., & Sadeghi, P. (2015). Sharp Injuries Among Medical Students. Global Journal of Health Science, 7(5), 320. https://doi.org/10.5539/gjhs.v7n5p320

Google Glass lives on in the workplace. The latest pair costs $999—CNN. (n.d.). Retrieved May 20, 2020, from https://www.cnn.com/2019/05/20/tech/google-glass-enterprise-edition-2/index.html

Guinto, R. L. (2012). Medical education in the Philippines: Medical students' perspectives. The Lancet, 380, S14. https://doi.org/10.1016/S0140-6736(13)60300-1

Kehlet, H., & Holte, K. (2001). Effect of postoperative analgesia on surgical outcome. BJA: British Journal of Anaesthesia, 87(1), 62–72. https://doi.org/10.1093/bja/87.1.62

Khor, W. S., Baker, B., Amin, K., Chan, A., Patel, K., & Wong, J. (2016). Augmented and virtual reality in surgery - the digital surgical environment: Applications, limitations and legal pitfalls. Annals of Translational Medicine, 4(23), 454. https://doi.org/10.21037/atm.2016.12.23

Li, L., Yu, F., Shi, D., Shi, J., Tian, Z., Yang, J., Wang, X., & Jiang, Q. (2017). Application of virtual reality technology in clinical medicine. American Journal of Translational Research, 9(9), 3867–3880.

Lin, Y., Cheng, A., Hecker, K., Grant, V., & Currie, G. R. (2018). Implementing economic evaluation in simulation-based medical education: Challenges and opportunities. Medical Education, 52(2), 150–160. https://doi.org/10.1111/medu.13411

Muensterer, O. J., Lacher, M., Zoeller, C., Bronstein, M., & Kübler, J. (2014). Google Glass in pediatric surgery: An exploratory study. International Journal of Surgery (London, England), 12(4), 281–289. https://doi.org/10.1016/j.ijsu.2014.02.003

Parham, G., Bing, E. G., Cuevas, A., Fisher, B., Skinner, J., Mwanahamuntu, M., & Sullivan, R. (2019, March 19). Creating a low-cost virtual reality surgical simulation to increase surgical oncology capacity and capability. https://doi.org/10.3332/ecancer.2019.910

Sato, K., Fukumori, S., Matsusaki, T., Maruo, T., Ishikawa, S., Nishie, H., Takata, K., Mizuhara, H., Mizobuchi, S., Nakatsuka, H., Matsumi, M., Gofuku, A., Yokoyama, M., & Morita, K. (2010). Nonimmersive Virtual Reality Mirror Visual Feedback Therapy and Its Application for the Treatment of Complex Regional Pain Syndrome: An Open-Label Pilot Study. Pain Medicine, 11(4), 622–629. https://doi.org/10.1111/j.1526-4637.2010.00819.x

Seymour, N. E., Gallagher, A. G., Roman, S. A., O'Brien, M. K., Bansal, V. K., Andersen, D. K., & Satava, R. M. (2002). Virtual Reality Training Improves Operating Room Performance: Results of a Randomized, Double-Blinded Study. Annals of Surgery, 236(4), 458–464.

State Of Innovation The Next US Medtech Hotspot. (n.d.). Retrieved May 20, 2020, from https://www.meddeviceonline.com/doc/state-of-innovation-the-next-u-s-medtechhotspot-0001

The Important Difference Between Virtual Reality, Augmented Reality and Mixed Reality. (n.d.). Forbes. Retrieved May 20, 2020, from https://www.forbes.com/sites/bernardmarr/2019/07/19/the-important-difference-between-virtual-reality-augmented-reality-and-mixed-reality/#37fbe7cf35d3

Virtual reality system helps surgeons, reassures patients. (n.d.). Medical Center Development. Retrieved May 20, 2020, from https://medicalgiving.stanford.edu/news/virtual-realitysystem-helps-surgeons-reassures-patients.html

Wallenburg, I., van Exel, J., Stolk, E., Scheele, F., de Bont, A., & Meurs, P. (2010). Between trust and accountability: Different perspectives on the modernization of postgraduate medical training in the Netherlands. Academic Medicine: Journal of the Association of American Medical Colleges, 85(6), 1082–1090. https://doi.org/10.1097/ACM.0b013e3181dc1f0f

Wu, S.-H., Huang, C.-C., Huang, S.-S., Yang, Y.-Y., Liu, C.-W., Shulruf, B., & Chen, C.-H. (2020). Effects of virtual reality training on decreasing the rates of needlestick or sharp injury in new-coming medical and nursing interns in Taiwan. Journal of Educational Evaluation for Health Professions, 17, 1. https://doi.org/10.3352/jeehp.2020.17.1
Plantibodies: How plants are shaping our medicine BY HUBERT GALAN '23
Cover: A greenhouse amidst a garden filled with plants (The Kew Gardens Palm House in London). Source: Wikimedia Commons
Introduction

Antibodies are proteins that are produced by lymphocytes - a class of specialized white blood cells - in the blood of vertebrates. They act as defense mechanisms by recognizing specific antigens on pathogens and activating immunological processes that degrade those pathogens. Immunological responses fall into three broad categories: neutralization, opsonization, and complement system activation (Kozel & Follette, 1981). Neutralization is the process by which an antibody binds to surface proteins of a pathogen, preventing the pathogen from further propagating. Opsonization is a specialized process that involves coating a pathogen with particles so that it is recognized for phagocytosis by macrophages. Lastly, complement system activation is a process that involves the interaction of plasma proteins to facilitate pathogenic lysis, or cell membrane rupturing (Kozel & Follette, 1981).
A plantibody is an antibody that is produced synthetically in the cells of transgenic plants. Transgenic plants are plants that undergo specific genetic modification through the introduction of gene sequences into their cells. The intricate biological factories within transgenic plant cells are taken advantage of through two main processes: stable DNA integration into the nuclear genome or viral transient gene expression (Simpson, 2001). DNA integration is facilitated by the plant-specific bacterium Agrobacterium, which carries genetic vectors (T-DNA) into the nucleus of a plant cell, where they become integrated into the plant's genome. Viral transient gene expression is the process of temporarily expressing foreign genomic strands through the introduction of viral particles that carry DNA into the cells of a plant. Both methods have been developed to effectively induce plant cells to create protein structures, in this case antibodies, using foreign DNA (Shakya & Sharma, 2018).
Figure 1 shows a general model of the immunization process. A virus undergoes phagocytosis, a process in which a phagocyte engulfs the pathogen, in turn providing helper T cells with a template for recognizing antigen particles. The B cells then proceed to develop antibodies that neutralize pathogenic particles, such as a typical virus. Source: Wikimedia Commons

Plantibody Advantages

One of the largest motivating factors for plantibody production is the economic efficiency of the technique. The production cost is very low, and production yields a relatively high output rate. Plants are also readily available, so scientists do not have trouble getting access to cells to genetically modify. Production can be flexible and easily adjusted to the needs of current economic markets (Shakya & Sharma, 2018). Additionally, using plants as subjects avoids any of the potential ethical issues that arise with the use of transgenic animals, which may be forced to undergo harmful and painful scientific procedures to produce antibodies. Also, using transgenic animals as a reservoir for antibody production significantly increases the risk of mammalian viral contamination, whereby antibodies are contaminated with viral particles that can facilitate human infection (Shakya & Sharma, 2018). Given the eukaryotic machinery of plant cells, they can efficiently facilitate the assembly of full multimeric proteins from chains of peptides, something that is nearly impossible in their transgenic prokaryotic counterparts. As antibodies for prevailing diseases increase in demand, it is imperative to approach manufacturing in an ethical and efficient way (Oluwayelu & Adebiyi, 2016). By decreasing the high cost of production, the cost to consumers will be lower as well, increasing the accessibility of plantibodies internationally.

Limitations

Despite the stated advantages of using plantibodies, there are significant downsides as well. Plant-derived antibodies have been shown to facilitate increased levels of immunogenicity, or negative immunological responses. That is, plantibodies may cause extensive allergic reactions in patients who are exposed to them (Oluwayelu & Adebiyi, 2016). Furthermore, using plants as the site for antibody production poses the risk of contamination; plantibodies may be contaminated with harmful substances such as herbicides, pesticides, and mycotoxins, which present significant risk to potential patients. In addition, further post-translational modification of these proteins is very difficult, limiting the capacity of scientists to favorably manipulate the antibodies that they create (Oluwayelu & Adebiyi, 2016). Given these concerns, there are currently many clinical trials running to proactively test plantibodies to ensure their safety in medical practice.

Applications
Plantibodies have various applications in the field of biomedical science. The two most significant applications are passive immunization and vaccine development. Passive immunization refers to a medical procedure in which physicians directly transfer antibodies to patients to provide immediate and temporary protection against infectious pathogens (Virdi & Depicker, 2013). The main challenge with this medical technique is that it requires a large amount of antibodies and can therefore prove to be quite expensive (Nair, 2017). Yet the generation of transgenic plant species has provided medical professionals cheap access to a plethora of antibodies, and this cost efficiency is a major reason why plantibodies are used so heavily for passive immunization.
Plantibodies have also been used to develop unconventional vaccines known as oral vaccines. When consumed, these vaccines provide passive immunity. The importance and urgency of this type of vaccine was first demonstrated in a study conducted by Tacket et al. The most consistent and burdensome forms of human disease tend to target the mucosal sites of the respiratory tract, the genital tract, and the gastrointestinal tract (Tacket & Mason, 1999). The scientists found that the most effective way of achieving immunization in these epithelial cells was through direct application of a vaccine.

Figure 2 shows an image of the corn product of genetically modified plants, formally known as transgenic plants. By altering the genetic composition of plants, scientists are able to phenotypically manipulate corn samples. Source: pikist.media
Figure 3 shows a diagram of HIV gaining entry into a helper T cell by binding its gp120 glycoprotein to the CD4 receptor. Source: Wikimedia Commons
Researchers proposed use of an "edible vaccine" to achieve this form of immunization for mucosal cells. Other scientists suggested the use of genetically engineered plants that possessed plant-based antibodies, as this would provide an effective delivery method for the antigens. Antibodies carried within the durable cells of transgenic plants would be protected as they pass through hostile bodily environments, like our intestines and stomach (Herbers & Sonnewald, 1999). As a result, plantibodies have the potential to quickly induce immune responses that would lead to immunization.
Case Study: HIV
It is projected that roughly 36.7 million people around the world are infected with the Human Immunodeficiency Virus (HIV). In 2016, about 1 million deaths due to HIV/AIDS were reported, despite massive advancements in treatment of the virus. A new class of broadly neutralizing antibodies (bNAbs), antiretroviral treatments used for HIV, has recently been synthesized. This class of bNAbs, called CAP256-VRC26, possesses a high potency for neutralizing HIV and shows promise for further research. CAP256-VRC26 bNAbs target the V1V2 region of the gp120 envelope glycoprotein, leading to the eventual degradation of viral particles, and show distinct potency against HIV A and C strains. The bNAbs' efficiency derives from their unique CDR (complementarity determining region), which is characterized by a "protruding O-sulfated tyrosine." An HIV particle accomplishes infection by binding to the CD4 receptors on helper T cells, an event driven by the affinity of the gp120 envelope glycoprotein for the receptor. The bNAbs' O-sulfated tyrosine is attracted to the gp120 glycoprotein the way the gp120 glycoprotein is attracted to CD4. When bNAbs bind, they neutralize HIV particles, rendering them useless (Singh, Pooe, and Kwezi, 2020). In a 2020 study, Singh et al. found that plantibodies show great potential for the large-scale production of these types of bNAbs. They found that the plant Nicotiana benthamiana showed greater efficiency in producing anti-HIV antibodies compared to those engineered in mammalian cells. Specifically, the plant-derived bNAbs maintained high levels of potency that directly matched the bNAbs
produced in mammalian cells. Even though this advantage was clearly noted, many downsides of using transgenic plants to produce these macromolecules were discovered as well (Singh, Pooe, and Kwezi, 2020). Specifically, the transgenic plants possessed enzymes that occasionally interfered with bNAb production, creating recombinant variants that were nonfunctional. In all, plant-based systems were used to create a specific variant of bNAbs known as CAP256-VRC26 that showed a strong affinity for the V1V2 region of the gp120 glycoprotein, which in turn prevented any binding to the CD4 receptor of helper T cells, the host cell for HIV (Singh, Pooe, and Kwezi, 2020). Despite some limitations in transgenic plant use, these scientists found significant evidence that emphasizes the usefulness of plants in antibody production. The scientists predict that these types of bNAbs could be used for passive immunization in humans as a preventative measure against the virus.
Conclusion

The use of transgenic plants for antibody synthesis has significant implications for the medical community. Plantibodies provide potential solutions to problems plaguing healthcare systems around the world. The reduced cost of production, in comparison to the cost of mammalian antibody production, allows for extensive production of plantibodies. Having the ability to manufacture a large number of antibodies is particularly advantageous, as it makes passive immunization a greater possibility. Additionally, because of plantibodies, a new avenue of medical practice has arisen: oral vaccines. Oral vaccines have the potential to provide a new and efficient delivery system that would allow for rapid immunization and, in turn, could be used to facilitate international vaccine programs more easily. However, using transgenic plants poses the risk of contamination from mycotoxins and pesticides, which may cause significant harm to patients. It is imperative that scientists conduct clinical trials to ensure that plantibodies are a safe and efficient treatment.
References

Kozel, T. R., & Follette, J. L. (1981). Opsonization of encapsulated Cryptococcus neoformans by specific anticapsular antibody. Infection and Immunity, 31(3), 978-984. https://doi.org/10.1128/iai.31.3.978-984.1981

Simpson, D. (2001, June 1). A Review of the Progression of Transgenic Plants Used to Produce Plantibodies For Human Usage. Retrieved May 2, 2020, from http://legacy.jyi.org/volumes/volume4/issue1/articles/ferrante.html?ref=Guzels.TV

Shakya, P., & Sharma, V. (2018). Plantibodies as biopharmaceuticals: A review. Journal of Pharmacognosy and Phytochemistry. Retrieved from http://www.phytojournal.com/archives/2018/vol7issue5/PartAJ/7-5-151-698.pdf

Oluwayelu, D. O., & Adebiyi, A. I. (2016). Plantibodies in human and animal health: a review. African Health Sciences, 16(2), 640. https://doi.org/10.4314/ahs.v16i2.35

Virdi, V., & Depicker, A. (2013). Role of plant expression systems in antibody production for passive immunization. The International Journal of Developmental Biology, 57(6-7-8), 587–593. https://doi.org/10.1387/ijdb.130266ad

Nair, B. J. (2017). Plantibodies: Paving Novel Avenues for Immunotherapy. MOJ Surgery, 4(4). https://doi.org/10.15406/mojs.2017.04.00078

Singh, A. A., Pooe, O., Kwezi, L., et al. (2020). Plant-based production of highly potent anti-HIV antibodies with engineered posttranslational modifications. Scientific Reports, 10, 6201. https://doi.org/10.1038/s41598-020-63052-1

Herbers, K., & Sonnewald, U. (1999). Production of new/modified proteins in transgenic plants. Current Opinion in Biotechnology, 10, 163-168.

Tacket, C. O., & Mason, H. S. (1999). A review of oral vaccination with transgenic vegetables. Microbes and Infection, 1, 777-783.

Viral transformation. (2020, April 7). Retrieved from https://en.wikipedia.org/wiki/Viral_transformation

Transgenic Plant; Corn Modification. (n.d.). Retrieved from https://www.pikist.com/search?q=modified

HIV Membrane fusion panel.svg. (n.d.). Retrieved from https://commons.wikimedia.org/wiki/File:HIV_Membrane_fusion_panel.svg
Music Perception and Processing in the Human Brain BY KAMREN KHAN '23

Cover: Music interacts with the brain through a complex network of pathways and systems. Source: Shutterstock; Creator: Zirconicusso
Introduction

Like many other animals, humans rely on vocal systems of communication. However, humans are unique in the extent of their capacity for vocal learning (Fitch, 2006). This capacity for vocal learning, paired with the complex anatomy of human vocal systems, enabled a high level of interpersonal communication. One such learned mechanism of communication is music. It is important to note that 'music' is distinct from innate vocalization, which arises largely independently of environmental factors. While other animals, by convergent evolution, exhibit simpler forms of vocal learning and consequently have developed musical forms of communication, the advent of tools by hominid primates allowed for an additional level of complexity and instrumentation. The use of instruments dates back to the Neanderthal use of a bone flute approximately 55,000 years ago (D'Errico et al., 1998).
Music carries immense emotive and social value across all cultures and the sense of musicality is innate to most humans. Music therefore constitutes a valuable entry point from which to consider the complexities of human emotion in the brain and mental processing. A discussion of the cognitive and psychological implications of music must begin with the system through which it is perceived.
The human auditory system

In the air, sound travels as longitudinal waves defined by wavelength, amplitude, speed, and frequency. The pinna (the conical-shaped outer ear) channels these sound waves into the ear canal, which subsequently directs them onto the tympanic membrane, more commonly known as the eardrum. The eardrum vibrates in response to the sound waves, and the ossicular chain, composed of three small bones called the malleus, incus, and stapes, amplifies and transmits these vibrations to the inner ear
(Luers & Hüttenbrink, 2016). The vibrations then reach the cochlea, a series of fluid-filled membranous tubes (Robles & Ruggero, 2001). At this point, the sound waves propagating through the air have been translated to vibration of the ossicles and then to ripples in the cochlear fluid. These ripples disturb hair cells on the basilar membrane, causing the stereocilia atop these hair cells to bend. The bending of the stereocilia prompts the opening of ion channels and a subsequent influx of potassium ions into the cell that creates an electrical signal (De Boer, 1991). Finally, the auditory nerve transmits this signal to the brain.

2.1 The dimensions of sound

This complex process allows for the translation of mechanical energy across a variety of mediums into an electrical impulse that ultimately enables the detection of sound in the brain. However, human hearing expands far beyond mere detection. Just as the frequency of light yields the visual perception of color, the frequency of sonic waves corresponds to the audition of pitch. Notably, while the visual system is sensitive to under one octave of frequencies, the auditory system detects frequencies from 20 Hz to 20,000 Hz, or ten octaves (Oxenham, 2018). Each frequency in this vast range corresponds to a specific point along the basilar membrane at which the hair cells best respond to that frequency. This organizational principle, referred to as 'place coding', reappears in the auditory cortex. Frequency, described in musical terminology as pitch, is integral to both the perception of language and the subtleties of speech such as intonation (Allen et al., 2017). Another element of sound, timbre, contributes to discrimination between different sources and types of sounds. For example, a note sung by a human will sound different than that same note played on a violin at the same volume. The acoustic foundation of timbre is not completely understood but is thought to depend on a variety of auditory cues (Town & Bizley, 2013). Formant position, referring to an area of localized acoustic energy, constitutes one such auditory cue but fails to completely explain the level and specificity of sound discrimination through distinctions of timbre (Bladon, 1983). At this point, timbre can only vaguely be understood as another dimension of sound that enables discrimination beyond the basis of pitch or other elements and further specifies sound identity.
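Two of the quantitative claims above lend themselves to a brief illustration: the ten-octave span of human hearing follows directly from the fact that each octave doubles frequency, and place coding can be roughly approximated with the Greenwood function, a standard published frequency-to-place map for the human cochlea. The Python sketch below uses Greenwood's commonly cited human constants, which are assumptions not drawn from the articles cited in this section, so treat the positions as rough approximations.

```python
import math

# Each octave doubles frequency, so the span of human hearing is
# log2(20,000 / 20) octaves -- roughly ten, as stated above.
octaves = math.log2(20_000 / 20)
print(f"20 Hz to 20 kHz spans {octaves:.2f} octaves")

# Place coding: the Greenwood function f = A * (10**(a * x) - k) maps a
# fractional position x along the basilar membrane (0 = apex, 1 = base)
# to its best frequency. A, a, and k are Greenwood's commonly cited
# human values, included here as assumptions for illustration.
A, a, k = 165.4, 2.1, 0.88

def place_from_frequency(freq_hz):
    """Invert the Greenwood function to find the membrane position x."""
    return math.log10(freq_hz / A + k) / a

for f in (20, 440, 4_000, 20_000):
    print(f"{f:>6} Hz -> position x = {place_from_frequency(f):.2f}")
```

Running the sketch shows low frequencies mapping near the apex and 20 kHz mapping near the base, the same systematic frequency-to-place layout that the text describes reappearing in the auditory cortex.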
Figure 1. A visualization of sound waves of varying amplitudes, frequencies, and speeds traveling through a medium. Source: Shutterstock; Creator: Jackie Niam
Lastly, amplitude and duration constitute two far simpler dimensions of sound. Sound waves of greater amplitude naturally cause greater disturbance of the hair cells on the basilar membrane and therefore correspond to the perception of louder sound. Duration simply refers to the length over which a sound is perceived. Through these and other dimensions of sound, humans interact with an auditory environment.
Music and emotion
The vibrations and electrical impulses generated by music can elicit powerful emotions in the listener. In this way, music invokes a response beyond the mere semantics of perceived pitch. The perception of music in the auditory cortex therefore must recruit other brain regions implicated in emotion. Additionally, it appears that not all formal elements of music contribute equally to the manipulation of emotion. Instead, crescendos, the introduction of vocals, and harmonic shifts have been shown to elicit particularly strong emotional arousal (Schaefer, 2017).
“...it appears that not all formal elements of music contribute equally to the manipulation of emotion. Instead, crescendos, the introduction of vocals, and harmonic shifts have been shown to elicit particularly strong emotional arousal.”
3.1 Manipulation of expectation
Musical manipulation of expectation can also incite emotion. Naturally, the notion of expectation implies the presence of an underlying formal structure that guides the composition of and interaction with music. The study of musical structure is known as music theory, but the properties it describes are apparent both to trained musicians and to individuals with no music background. For example, two pitches played together can be considered consonant or dissonant. This distinction relates to the earlier discussion of pitch as a product of specific frequencies corresponding to
Figure 2. Anatomy of the inner, outer, and middle ear Source: Shutterstock; Creator: Pretty Vectors
“Similarly, musical scales constitute another element of musical structure that increases emotional arousal.”
Figure 3. The path of sound from the cochlea to auditory centers in the brain Source: Shutterstock; Creator: Alila Medical Media
specific locations along the basilar membrane. Generally, if the ratio of two frequencies is simple, as in the case of a fifth or an octave (3:2 and 2:1, respectively), the interval is perceived as consonant and pleasing to the ear. Conversely, more complex frequency ratios yield dissonant tones, which are far less pleasing to the ear. Studies have found that individuals lacking music training (Peretz & Zatorre, 2005) and even infants (Schellenberg & Trehub, 1994) can discriminate between consonant and dissonant sounds. Music can arouse emotion, at least in part, through the tension created by dissonance and the resolution created by consonance. This distinction between dissonance and consonance is also useful in thinking about the distinction between major and minor keys. Major tonalities are associated with positive emotional valence, and major passages have been found to be less dissonant on average than minor passages (Parncutt, 2014). Similarly, musical scales constitute another element of musical structure that increases emotional arousal. Most individuals hear music in terms of relative pitch, meaning listeners attend to changes between adjacent notes (McDermott & Oxenham, 2008). From relative pitch, listeners can discern the contour, or direction of change (Dowling & Fujitani, 1971). Contours, in addition to the established neural networks arising from previous experience with music, lead to the priming, or expectancy, of future notes. This priming occurs as each individual note is associated with other notes through interval relations which can be deemed likely or unlikely. Therefore, when a note is played, other notes are expected to follow with varying probabilities. As in the case of the interjection of dissonance, the subversion of expectations can also be a source
of arousal or musical tension.
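The simple-ratio account of consonance above lends itself to a numerical illustration. The sketch below (a minimal example; equal-tempered tuning and the small-denominator cutoff are illustrative assumptions, not part of the studies cited) approximates each interval's frequency ratio by the nearest small-integer fraction:

```python
from fractions import Fraction

def equal_tempered_ratio(semitones: int) -> float:
    """Frequency ratio of an interval in 12-tone equal temperament."""
    return 2 ** (semitones / 12)

for name, semitones in [("octave", 12), ("perfect fifth", 7), ("tritone", 6)]:
    ratio = equal_tempered_ratio(semitones)
    nearest = Fraction(ratio).limit_denominator(10)  # nearest ratio with denominator <= 10
    print(f"{name}: {ratio:.4f} ~ {nearest}")
# octave        -> 2    (simple ratio, consonant)
# perfect fifth -> 3/2  (simple ratio, consonant)
# tritone       -> 7/5  (more complex ratio, heard as dissonant)
```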
The neurophysiological impact of music
The emotional and general impact of music is accompanied by neurophysiological changes. In one study, subjects listened to what they deemed to be intensely pleasurable music and self-reported experiencing "chills" or "shivers down the spine." These intense physical reactions to music were accompanied by changes in heart rate and increased cerebral blood flow. Additionally, through positron emission tomography (PET), researchers saw changes in blood flow patterns consistent with responses to sex, food, and drugs (Blood & Zatorre, 2001). The reason music engages these phylogenetic systems of valuation and motivation remains unclear. The response of this system to sex and food is logical given the evolutionary importance of sex and the necessity of nutrition for survival. Additionally, drugs activate these systems of valuation and motivation through their manipulation of neurotransmitter activity. However, music is not necessary for survival or reproduction. Some have attempted to explain this ambiguity by framing music as an element of sexual selection, interpersonal interaction, and mood regulation (McDermott, 2008).
Clinical applications of music
Given its ability to invoke neurophysiological changes, music has a variety of clinical applications. One of the most apparent intersections of clinical practice and music is the augmentation of anesthesia. A randomized prospective study found that, when compared to a control group, patients
undergoing a septorhinoplasty (reconstructive nose surgery) showed lower levels of sedation and agitation, lower postoperative pain, and lower intake of analgesic drugs when music was played during surgery (Gokçek & Kaydu, 2019). Similarly, a study of patients undergoing abdominal surgery under general anesthesia found that those exposed to music during surgery had a lower incidence of intraoperative awareness, higher satisfaction, calmer recovery, and a more stable hemodynamic profile (Kahloul et al., 2017). These findings are in line with previous studies indicating that music can decrease pain and anxiety (Panteleeva et al., 2018; Lee, 2016). However, the effectiveness of music in these capacities has been found to vary widely by context of implementation. The advantages of this clinical application are clear: music is less expensive, easier to administer, and carries less risk than other interventions for anxiety and pain reduction. While music cannot be used as a sole treatment in these contexts, its supplemental use can improve patient experience and potentially reduce usage of opioids. Additionally, music appears to have a beneficial effect not only on patients but on health-care providers as well. Music (of the surgeon's choice) has been shown to increase speed and accuracy of non-surgical task performance and reduce autonomic reactivity in surgeons. However, it is important to note that these tasks were non-surgical, and participants only included surgeons who reported listening to music while operating (Allen & Blascovich, 1994).
Conclusion
Human perception and processing of music reflects the complexity of human interaction with the external environment. Some elements of music processing appear to be unconscious, as in the case of discriminating between consonant and dissonant sounds, yet rely on extremely complex neural pathways. Additionally, certain processes rely on exposure, familiarity, and culture, as seen in the development of pitch processing through Hebbian learning, that is, learning through changes in synaptic weights in response to external stimuli (Erfanian Saeedi et al., 2016). Furthermore, the consistency of the neurophysiological response to music with phylogenetic systems of valuation and motivation seems to indicate the ancestral importance of music in the lives of early humans. Finally, the clinical relevance of music
reflects the value in better understanding and applying this construct that seems so integral to the human mind.
References
Allen, E. J., Burton, P. C., Olman, C. A., & Oxenham, A. J. (2017). Representations of pitch and timbre variation in human auditory cortex. The Journal of Neuroscience, 37(5), 1284–1293. https://doi.org/10.1523/JNEUROSCI.2336-16.2016
Allen, K., & Blascovich, J. (1994). Effects of music on cardiovascular reactivity among surgeons. JAMA, 272(11), 882–884.
Bladon, A. (1983). Two-formant models of vowel perception: Shortcomings and enhancements. Speech Communication, 2, 305–313. https://doi.org/10.1016/0167-6393(83)90047-X
Blood, A. J., & Zatorre, R. J. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences USA, 98, 11818–11823.
De Boer, E. (1991). Auditory physics. Physical principles in hearing theory. III. Physics Reports, 203, 125–231.
D'Errico, F., Villa, P., Llona, A., & Idarraga, R. (1998). A Middle Paleolithic origin of music? Using cave-bear bone accumulations to assess the Divje Babe I bone 'flute'. Antiquity, 72(275), 65–79. https://doi.org/10.1017/S0003598X00086282
Dowling, W. J., & Fujitani, D. S. (1971). Contour, interval, and pitch recognition in memory for melodies. The Journal of the Acoustical Society of America, 49(2), 524+. https://doi.org/10.1121/1.1912382
Erfanian Saeedi, N., Blamey, P. J., Burkitt, A. N., & Grayden, D. B. (2016). Learning pitch with STDP: A computational model of place and temporal pitch perception using spiking neural networks. PLoS Computational Biology, 12(4), e1004860. https://doi.org/10.1371/journal.pcbi.1004860
Fitch, W. T. (2006). The biology and evolution of music: A comparative perspective. Cognition, 100(1), 173–215. https://doi.org/10.1016/j.cognition.2005.11.009
Gokçek, E., & Kaydu, A. (2019). The effects of music therapy in patients undergoing septorhinoplasty surgery under general anesthesia. Brazilian Journal of Otorhinolaryngology, S1808-8694(18)30606-2. Advance online publication. https://doi.org/10.1016/j.bjorl.2019.01.008
Kahloul, M., Mhamdi, S., Nakhli, M. S., Sfeyhi, A. N., Azzaza, M., Chaouch, A., & Naija, W. (2017). Effects of music therapy under general anesthesia in patients undergoing abdominal surgery. The Libyan Journal of Medicine, 12(1), 1260886. https://doi.org/10.1080/19932820.2017.1260886
Lee, J. H. (2016). The effects of music on pain: A meta-analysis. Journal of Music Therapy, 53(4), 430–477. https://doi.org/10.1093/jmt/thw012
Luers, J. C., & Hüttenbrink, K. B. (2016). Surgical anatomy and pathology of the middle ear. Journal of Anatomy, 228(2), 338–353. https://doi.org/10.1111/joa.12389
McDermott, J. (2008). The evolution of music. Nature, 453, 287–288.
McDermott, J. H., & Oxenham, A. J. (2008). Music perception, pitch, and the auditory system. Current Opinion in Neurobiology, 18(4), 452–463. https://doi.org/10.1016/j.conb.2008.09.005
Oxenham, A. J. (2018). How we hear: The perception and neural coding of sound. Annual Review of Psychology, 69, 27–50. https://doi.org/10.1146/annurev-psych-122216-011635
Panteleeva, Y., Ceschi, G., Glowinski, D., Courvoisier, D. S., & Grandjean, D. (2018). Music for anxiety? Meta-analysis of anxiety reduction in non-clinical samples. Psychology of Music, 46(4), 473–487. https://doi.org/10.1177/0305735617712424
Parncutt, R. (2014). The emotional connotations of major versus minor tonality: One or more origins? Musicae Scientiae, 18(3), 324–353.
Peretz, I., & Zatorre, R. J. (2005). Brain organization for music processing. Annual Review of Psychology, 56, 89–114. https://doi.org/10.1146/annurev.psych.56.091103.070225
Robles, L., & Ruggero, M. A. (2001). Mechanics of the mammalian cochlea. Physiological Reviews, 81(3), 1305–1352. https://doi.org/10.1152/physrev.2001.81.3.1305
Schaefer, H. E. (2017). Music-evoked emotions: Current studies. Frontiers in Neuroscience, 11, 600. https://doi.org/10.3389/fnins.2017.00600
Schellenberg, E. G., & Trehub, S. E. (1994). Frequency ratios and the discrimination of pure tone sequences. Perception & Psychophysics, 56(4), 472–478. https://doi.org/10.3758/bf03206738
Town, S. M., & Bizley, J. K. (2013). Neural and behavioral investigations into timbre perception. Frontiers in Systems Neuroscience, 7, 88. https://doi.org/10.3389/fnsys.2013.00088
Yinger, O. S., & Gooding, L. F. (2015). A systematic review of music-based interventions for procedural support. Journal of Music Therapy, 52, 1–77. https://doi.org/10.1093/jmt/thv004
Quantum Machine Learning in Society Today
BY KAT LASONDE '23
Cover: Recently, computer technology has been moving in a new direction. Rather than using classical physics to advance computing, scientists have begun using the properties of extremely tiny particles to create a new class of computers based on quantum mechanics. While still a developing field, quantum computers are capable of machine learning, with significant implications for biomedical, chemical, agricultural, and meteorological fields. Source: Unsplash

Introduction
Recent developments in quantum physics and computer science have led to quantum machine learning: the synthesis of classical machine learning and quantum computing. Processing data with quantum systems, an idea first proposed by Richard Feynman in 1982, is becoming increasingly feasible. This paper aims to introduce the field of quantum machine learning at the undergraduate level and will build upon the concepts of quantum computing and machine learning to demonstrate their symbiotic relationship.
Quantum Mechanics Classical computers use classical physics — the physics taught in most introductory college courses — to solve problems. Classical physics describes the way the world works in most daily interactions. On the other hand, quantum
physics describes the way the world works on an atomic level. Quantum computing is based on the unique properties of quantum mechanics. On a quantum level, matter can be in a state known as "superposition." If a particle is in superposition, it simultaneously exists in more than one state at a given time. The position of the particle cannot be pinpointed to a single location or unique value. However, once the particle is measured, its superposition state collapses, and it assumes a classical state. In other words, when observed, a quantum particle behaves classically (Forcer, 2002). A famous thought experiment known as "Schrödinger's cat" exemplifies the phenomenon of superposition. In this thought experiment, a cat and a potentially explosive bomb are placed in a closed box. While the box is closed, an outside observer would not know if the bomb exploded, and thus if the cat were dead or alive. In the time before the box is opened and the state of the cat is observed, physicists would consider the cat to be equally dead and alive. Rather than having two states which describe the cat, there is only one — a superposition state (Hobson, 2018). Furthermore, inside the box, the states of the cat and the bomb (dead vs. alive, exploded vs. still) are correlated. Either the bomb goes off and the cat dies, or the bomb doesn't explode and the cat survives. It is impossible for the bomb to explode and the cat to live, or for the bomb not to explode and the cat to die. Since the states of the bomb and the cat are correlated, they are 'entangled' (Hobson, 2018). However, when the box is opened and an observation is made, the superposition state of the cat collapses. The cat is either dead or alive, but not both. Additionally, the entanglement stops, as the states are no longer correlated.

There is a difference between a probabilistic mixture and a 'superposition' mixture of probabilities, more accurately known as a coherent mixture (Molina-Terriza et al., 2005). If a person flips a coin into the air and catches it, there is a 50% chance that the coin lands heads up and a 50% chance that it lands tails up. The coin cannot be simultaneously heads and tails up, only one or the other [Table 1].

Table 1: Classical coin-toss probabilities: 50% heads, 50% tails, and 0% both at once.

While the cat also has a 50% chance of being dead and a 50% chance of being alive when observed, on a quantum level this is not the case. In the quantum mechanical interpretation of the situation, when the cat is in a superposition state, there is a 100% probability that the cat is dead or alive (and thus a 0% probability that it is neither dead nor alive) [Table 2]. Furthermore, the process of observation, which reduces a quantum state to a classical state, is known as decoherence. Decoherence can be thought of as an example of measurement, or extraction of information from the system,
which occurs when a system is observed. When observation, or decoherence, occurs, the cat returns to the 50/50 odds of being dead or alive. The cat will never be observed to be half dead and half alive.
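The difference between a coherent superposition and a mere 50/50 mixture can be made concrete with a few lines of linear algebra. In the sketch below (a minimal illustration in plain numpy, not any particular quantum library), applying a Hadamard gate twice returns a qubit to its starting state with certainty, something a classical random coin can never do:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                   # |0>, a definite state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

superposed = H @ ket0            # coherent 50/50 superposition of |0> and |1>
print(np.abs(superposed) ** 2)   # measurement probabilities: [0.5, 0.5]

back = H @ superposed            # second Hadamard: the amplitudes interfere
print(np.abs(back) ** 2)         # [1.0, 0.0] -- certainly |0> again

# A classical probabilistic mixture has no amplitudes to interfere:
p = np.array([0.5, 0.5])                   # 50/50 coin
flip = np.array([[0.5, 0.5], [0.5, 0.5]])  # another fair randomization
print(flip @ p)                  # still [0.5, 0.5]; the mixture never "un-mixes"
```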
Quantum Computing
Returning to the difference between quantum computers and classical computers: the basis of classical computation is the binary digit, or bit. A bit represents a logical state and can only have a value of zero or one. Quantum computers, on the other hand, use quantum bits, or qubits, rather than bits. While bits are either a zero or a one, qubits can be in a superposition of the zero and one states. Qubits are objects that can be represented by vectors in a vector space, unlike classical bits, which each take a single numerical value. Programmers can apply matrix multiplication and other linear algebra operations to qubits without them losing their quantum mechanical properties. Qubits can simulate entanglement phenomena because large vector and matrix operations can be applied to them without losing the correlations between entries. Thus, operations on qubits can reflect the changing state of an electron. Since qubits have more states than bits, there are more degrees of freedom for quantum computers to store information (Rieffel, 2000). As in the Schrödinger's cat example, when a measurement of a qubit is taken, it collapses from the superposition into a single state; thus, to an outside observer, a qubit and bit
“When observation, or decoherence, occurs, the cat returns to the 50/50 odds of being dead or alive. The cat will never be observed to be half dead and half alive.”
Figure 1: The above image demonstrates the various scenarios of the cat in the box. The first reflects the cat being in a box without any bomb. The second shows the cat being in a box with a bomb. Since the cat's state is unobserved, it is unknown if the cat is dead or alive. The cat is in a superposition state of being dead and alive. The third and fourth boxes show the possible outcomes for the cat—either dead or alive. Source: Pixabay
Table 2: Quantum superposition: before observation, the probability that the cat is dead or alive is 100%, and the probability that it is neither is 0%; observation yields a 50/50 outcome.
would appear the same (Rieffel, 2000).
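Because qubits are just vectors and gates are just matrices, the entanglement described above can also be sketched in a few lines of numpy (an illustrative toy, not production quantum software). Two qubits live in a four-dimensional vector space built with the Kronecker product, and a Bell state puts all probability on the correlated outcomes, much like the bomb-and-cat pair:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Two-qubit basis states via the Kronecker (tensor) product:
ket00 = np.kron(ket0, ket0)   # both qubits 0  ("bomb still, cat alive")
ket11 = np.kron(ket1, ket1)   # both qubits 1  ("bomb exploded, cat dead")

# Bell state: an equal coherent superposition of the two correlated outcomes.
bell = (ket00 + ket11) / np.sqrt(2)

probs = np.abs(bell) ** 2
print(probs)  # [0.5, 0, 0, 0.5]: outcomes 00 and 11 each 50%; 01 and 10 impossible
```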
“... quantum computers can run as a classical computer in addition to running quantum algorithms that are impossible to do without quantum mechanics.”
Figure 2: The above is a picture of part of a quantum computer. Quantum computers operate using qubits rather than bits and rely on principles of superposition and entanglement to run. Source: Pixabay
Therefore, quantum computers can run as classical computers in addition to running quantum algorithms that are impossible without quantum mechanics. Even so, traditional non-quantum algorithms typically, though not always, run faster on quantum computers than on conventional computers. This difference is due to quantum computers' ability to utilize the vector space (where qubits reside) that allows for entanglement and superposition of information. Consider the quantum computers designed by Google. In 2019, Google announced that one of its quantum computers needed "only 200 seconds to complete a problem that would have required 10,000 years on a supercomputer" (Ornes, 2019). While quantum computers can run classical algorithms, there are also quantum algorithms unique to quantum computers. For example, Shor's factoring algorithm utilizes quantum properties to determine the prime factors of large numbers (Rieffel, 2000). The ability to quickly break down large numbers has enormous implications for cryptography. Information online is often encrypted with integers composed of two prime factors, a scheme known as 'RSA encryption' (Politi, 2009). Factoring these numbers on a classical computer could take thousands of years, and thus the information is generally considered secure. However, with Shor's algorithm, this factorization will take exponentially less time. Although the exact amount of time is not yet known, the entire online encryption system will no longer be safe (Politi, 2009). New methods will need to be developed to encrypt information online. While quantum computers have not yet been implemented on a large scale, quantum simulators are currently utilized to model quantum systems that are not feasible on classical computers (Biamonte, 2017). For example, Dartmouth's Whitfield group has created an educational quantum physics platform called qBraid, which can be reached at https://qbraid.com/. qBraid runs quantum programs on a state-vector simulator, allowing users to develop their own quantum circuits.
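To see why fast factoring threatens RSA, consider a deliberately tiny example (a toy sketch with absurdly small primes; real RSA moduli are thousands of bits long, and this is not a secure implementation):

```python
# Toy RSA: security rests entirely on the difficulty of factoring n = p * q.
p, q = 61, 53                # secret primes (tiny, for illustration only)
n = p * q                    # public modulus: 3233
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # Euler's totient, computable only if p and q are known
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi

message = 42
cipher = pow(message, e, n)  # encrypt with the public key (e, n)
print(pow(cipher, d, n))     # 42 -- decrypt with the private key d

# An attacker who can factor n recovers p and q, then phi, then d.
# Shor's algorithm would make that factoring step fast on a quantum computer.
```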
Machine Learning
Machine learning is the process of teaching computers to perform computations without oversight or explicit instructions (Watt, 2016). To train a network to "learn," researchers will usually input a set of training data and then define the features that distinguish different data points (Watt, 2016). Depending on the dataset and experiment, the machine learning algorithm typically undergoes a predetermined number of rounds of testing and training before being used to categorize new data (Zhang, 2019). There are many types of machine learning, but the two major classes are 'supervised learning' and 'unsupervised learning' (Das Sarma, 2019). Supervised machine learning is when an algorithm is trained with pre-labeled data, with scientists telling the computer how to categorize the data (Zhang, 2019). Applications include identifying images of cats versus dogs or changes in the phase of matter. On the other hand, unsupervised machine learning uses unlabeled training data and identifies meaningful patterns in the data without the user's bias towards pre-defined distinguishing features (Zhang, 2019). Unsupervised learning is often associated with clustering, a kind of statistical analysis where training data are divided into groups based on patterns the computer itself identifies. These groups can be used to categorize data in a new way (Das Sarma, 2019). Machine learning algorithms that aim to learn the characteristics that distinguish data are known as 'discriminative models.' Algorithms that aim to produce new objects similar to the data inputted are known as 'generative models' (Das Sarma, 2019).
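A minimal sketch of the two classes of learning (using scikit-learn with a made-up toy dataset; the library choice and the data are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy data: ten 2-D points forming two loose clumps.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.3], [0.3, 0.0], [0.2, 0.2],
              [2.0, 2.1], [2.1, 1.9], [1.9, 2.2], [2.2, 2.0], [2.0, 2.0]])

# Supervised: we supply labels (first clump is class 0, second is class 1).
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.1, 0.1], [2.1, 2.1]]))   # -> [0 1]

# Unsupervised: no labels; k-means clustering discovers the clumps itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # two groups, though the cluster numbering is arbitrary
```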
Many types of machine learning algorithms use entities known as 'tensors' to represent and perform large computations. A tensor can be thought of as a generalization of scalars, vectors, and matrices, where the rank refers to the number of indices [Table 3]. In essence, tensors are multidimensional arrays whose rank can range from zero to some integer n. A tensor of rank one is a single-column matrix, or vector; a tensor of rank two is a matrix, indexed by rows and columns. These tensors encode data and, when used together in large numbers, form entities known as neural networks. Datasets with thousands of data points rely on neural networks to store all of the data that machine learning algorithms manipulate. Modern machine learning algorithms are often run on neural networks, which can be thought of as a "net" consisting of interconnected, often layered, information-processing tensors or nodes (Hardesty, 2017). When these nodes take in data, they optimize network parameters based on that data and respond by manipulating the data into a new output. Each layer in a neural network may have a distinct parameterization, or set of adjustable weights, which must be optimized with its own training data (Hardesty, 2017). One of the most common optimization methods is 'gradient descent,' where the computer repeatedly follows a local linear model of the loss function downhill in search of global minima (Du, 2019). The ideal parameterization and model for a neural network is often found at the minimum of that function (Watt, 2016).

Table 3: Tensor rank and what it generalizes: rank 0, a scalar (no indices); rank 1, a vector (one index); rank 2, a matrix (two indices); higher ranks, multidimensional arrays.

There are hundreds of kinds of neural networks, ranging from the simple perceptron network to the extremely complex deep convolutional inverse graphics network. The type of network used depends on the data and the machine learning algorithm (Tch, 2017). Current applications of neural networks in machine learning include facial recognition, self-driving cars, Google Translate, and natural language processing (Zhang, 2019).

Figure 3: Google, along with Microsoft, IBM, Intel, and many other companies, have strong quantum departments. They are developing both quantum software and hardware to create new and better ways to process information. Source: Pixabay
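The gradient-descent update described above is short enough to write out in full (a minimal sketch on a toy bowl-shaped loss; real networks apply the same step to millions of parameters):

```python
import numpy as np

def loss(w: np.ndarray) -> float:
    """A toy quadratic loss with its minimum at w = (3, -2)."""
    return (w[0] - 3) ** 2 + (w[1] + 2) ** 2

def grad(w: np.ndarray) -> np.ndarray:
    """Analytic gradient of the loss above."""
    return np.array([2 * (w[0] - 3), 2 * (w[1] + 2)])

w = np.zeros(2)           # initial parameters
lr = 0.1                  # learning rate (step size)
for _ in range(100):
    w = w - lr * grad(w)  # step downhill along the local slope
print(w, loss(w))         # w approaches (3, -2); loss approaches 0
```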
Quantum Computing Benefits Machine Learning
Quantum computing has the potential to make significant impacts in the field of machine learning. Since both quantum computing and machine learning are based on linear algebra, their operating systems are mathematically compatible (Das Sarma, 2019). As hinted above, quantum computers can be exponentially faster than classical computers, a phenomenon known as 'quantum speedup' (Biamonte, 2017). Quantum computing can speed up machine learning algorithms through various techniques, such as the Harrow-Hassidim-Lloyd (HHL) quantum algorithm (Dervovic, 2018). The HHL algorithm can solve a system of linear equations exponentially faster than a classical computer, so many algorithms based on linear algebra can be sped up (Dervovic, 2018). For example, the aforementioned Shor's algorithm brings factorization from an exponential classical runtime down to roughly O((log n)³) (Dervovic, 2018). (Big O notation denotes an upper bound on runtime: the runtime grows no faster than the term in parentheses. For O(log n), the algorithm's runtime grows at most logarithmically with some quantity n, often the input size; an O(n) algorithm runs in time at most linearly proportional to n.) Machine learning and artificial intelligence are heavily based on matrices; thus, quantum computing could dramatically affect these fields.
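The practical gap between these growth rates is easy to see numerically (a small illustration of Big O scaling, not tied to any particular algorithm):

```python
import math

# How differently bounded runtimes scale as the input size n grows:
for n in [10, 1_000, 1_000_000]:
    print(f"n={n:>9,}  log2(n)={math.log2(n):6.1f}  n^2={n**2:>16,}")
# The logarithmic bound barely grows while the quadratic one explodes,
# which is why trading polynomial or exponential runtimes for
# logarithmic ones matters so much.
```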
“Since both quantum computing and machine learning are based on linear algebra, their operating systems are mathematically compatible.”
Machine Learning Benefits Quantum Computing
Figure 4: Computers are able to run machine learning algorithms on neural networks. Applications of quantum machine learning include image recognition, speech recognition, traffic prediction, online fraud detection, and stock market trading. Source: Unsplash
“At Dartmouth College, graduate student Jun Yang in the Whitfield Group uses quantum machine learning techniques to identify phase transitions in unique phases of matter.”
Figure 5: A neural network can be thought of like a brain within a computer that can learn. Neural networks are excellent at modeling quantum states, especially for deep learning algorithms when multiple neural networks are used. Source: Flickr
While quantum computing has been shown to be beneficial for machine learning, the reverse is also true: machine learning has helpful applications in quantum computation. Neural networks are often used to represent quantum states. For a network to model a quantum state, each tensor in the network is assigned to a qubit; the complete set of tensors describes the quantum state. Because neural networks can efficiently represent even massively entangled states, problems that were previously unsolvable with conventional methods may now be solved (Das Sarma, 2019). Additionally, deep learning algorithms — machine learning algorithms that use multiple neural network layers to process data — are also integrable with many types of quantum computing hardware, such as quantum annealers and programmable optical arrays, two types of quantum information processors (Biamonte, 2017). This means that there is potential for machine learning to benefit quantum computing in ways that scientists may not even understand yet. The Restricted Boltzmann Machine (RBM) and the Deep Boltzmann Machine are two common neural networks used to represent quantum states (Das Sarma, 2019). Restricted Boltzmann machines are used to "efficiently" describe topological states by directly using quantum measurements to create a quantum state. In the context of classical and quantum computation, "efficient" means in a reasonable amount of time, such as polynomial rather than exponential time (Gamble, 2019). Deep Boltzmann machines are RBMs with an additional hidden layer; because of this extra layer, they can represent almost all ground states of certain quantum systems in polynomial time, speeding up quantum computation even further (Das Sarma, 2019). Tensors are a great way to represent a physical system on a quantum level, and there are already
relatively popular and efficient neural networks representing quantum systems. At Dartmouth College, graduate student Jun Yang in the Whitfield Group uses quantum machine learning techniques to identify phase transitions in unique phases of matter. He uses machine learning to study the spin-lattice model where interacting electron spins can randomly rotate on the vertices. Once all of the electrons meet certain spin conditions — the spins tend to agree — then a model of the temperature versus the topology is generated. Machine learning algorithms have already significantly impacted research using quantum computation on Dartmouth’s campus.
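As one concrete picture of how a network can stand in for a quantum state, the sketch below evaluates the restricted-Boltzmann-machine wavefunction ansatz from this literature for a toy three-spin system (a minimal sketch with random, untrained weights; in practice the weights are trained so the amplitudes match the target state):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_visible, n_hidden = 3, 4                  # three spins, four hidden units
a = rng.normal(scale=0.1, size=n_visible)   # visible biases
b = rng.normal(scale=0.1, size=n_hidden)    # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # couplings

def amplitude(v: np.ndarray) -> float:
    """Unnormalized RBM amplitude for one spin configuration v (entries +/-1)."""
    return np.exp(a @ v) * np.prod(2 * np.cosh(b + v @ W))

# Enumerate all 2^3 spin configurations and normalize into a quantum state.
configs = [np.array(v) for v in product([-1, 1], repeat=n_visible)]
psi = np.array([amplitude(v) for v in configs])
psi /= np.linalg.norm(psi)
print(np.abs(psi) ** 2)  # Born-rule probabilities over the 8 basis states; sums to 1
```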
Conclusion
Quantum computing and machine learning have a symbiotic relationship, and the field of quantum machine learning has fantastic potential to fuel future scientific breakthroughs. However, there are still many hurdles to overcome in both areas. Challenges in quantum computing lie mainly in designing and controlling quantum hardware systems. Current quantum computing hardware is small-scale because, as the number of qubits in a computation increases, there is a higher likelihood of error. Furthermore, quantum error correction, a set of techniques for protecting quantum information from noise and decoherence, is still an emerging field and is exceptionally challenging due to the finicky nature of quantum computers. Quantum algorithms also do not provide many, if any, advantages in reading or writing data compared to classical algorithms.
Scientists have not yet figured out how to capitalize on quantum properties for reading and writing data (Biamonte, 2017). Thus, the time it takes for a quantum algorithm to read or write data may prevent it from having a faster runtime than its classical counterpart. Additionally, very little is known about the number of logic gates used in various quantum machine learning algorithms, so it can be hard to estimate their runtimes. There also hasn't been extensive testing of the runtimes of many quantum machine learning algorithms, so it is currently impossible to assert that they are superior to their classical counterparts. Looking towards the future, because quantum computers utilize quantum rather than classical algorithms, there is considerable potential to recognize new data patterns in non-human-designed systems (Wiggers, 2019). This would include future applications in weather, climate, and DNA modeling (Watt, 2016). Currently, Case Western Reserve University is working in conjunction with Microsoft to increase the speed of MRI scans threefold. Even so, because there is so much scientists still do not know about quantum machine learning, it is hard to tell how it will change the world as we know it within our lifetime. Quantum computation is becoming increasingly integrated into our science and society, and breakthroughs may be closer than they seem to those who are unfamiliar with the field.

References
Biamonte, Jacob, et al. 2017. "Quantum Machine Learning." Nature 549(7671): 195–202.
Carrasquilla, Juan, and Roger G. Melko. 2017. "Machine Learning Phases of Matter." Nature Physics 13(5): 431–34.
Das Sarma, Sankar, Dong-Ling Deng, and Lu-Ming Duan. 2019. "Machine Learning Meets Quantum Physics." Physics Today 72(3): 48–54.
Du, Simon S., et al. 2019. "Gradient Descent Finds Global Minima of Deep Neural Networks." arXiv:1811.03804 [cs, math, stat]. http://arxiv.org/abs/1811.03804 (May 15, 2020).
Forcer, Tim, Tony Hey, Douglas Ross, and Peter Smith. 2002. "Superposition, Entanglement and Quantum Computation." Quantum Information & Computation 2.
Gamble, Sara. 2019. Frontiers of Engineering: Reports on Leading-Edge Engineering from the 2018 Symposium. Quantum Computing: What It Is, Why We Want It, and How We're Trying to Get It. National Academies Press (US). https://www.ncbi.nlm.nih.gov/books/NBK538701/ (May 15, 2020).
Hardesty, Larry. 2017. "Explained: Neural Networks." MIT News. http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414 (May 17, 2020).
Hobson, Art. 2018. "Review and Suggested Resolution of the Problem of Schrodinger's Cat." Contemporary Physics 59(1): 16–30.
Molina-Terriza, G., A. Vaziri, R. Ursin, and A. Zeilinger. 2005. "Experimental Quantum Coin Tossing." Physical Review Letters 94(4): 040501.
Ornes, Stephen. 2019. "Quantum Computers Finally Beat Supercomputers in 2019." Discover Magazine. https://www.discovermagazine.com/the-sciences/quantum-computers-finally-beat-supercomputers-in-2019 (May 17, 2020).
Politi, Alberto, Jonathan C. F. Matthews, and Jeremy L. O'Brien. 2009. "Shor's Quantum Factoring Algorithm on a Photonic Chip." Science 325(5945): 1221.
Richter-Laskowska, M. 2018. "A Machine Learning Approach to the Berezinskii-Kosterlitz-Thouless Transition in Classical and Quantum Models." https://arxiv.org/abs/1809.09927 (May 15, 2020).
Rieffel, Eleanor, and Wolfgang Polak. 2000. "An Introduction to Quantum Computing for Non-Physicists." ACM Computing Surveys (CSUR) 32(3): 300–335.
Schuld, Maria, and Francesco Petruccione. 2018. "Introduction." In Supervised Learning with Quantum Computers, Quantum Science and Technology, eds. Maria Schuld and Francesco Petruccione. Cham: Springer International Publishing, 1–19. https://doi.org/10.1007/978-3-319-96424-9_1 (May 15, 2020).
Tch, Andrew. 2017. "The Mostly Complete Chart of Neural Networks, Explained." Medium. https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464 (May 15, 2020).
Watt, Jeremy, Reza Borhani, and Aggelos Konstantinos Katsaggelos. 2016. Machine Learning Refined: Foundations, Algorithms, and Applications. New York: Cambridge University Press.
Wiggers, Kyle. 2019. "IBM Announces 'High-Precision' Weather Model, New Quantum Computer Design, and Enhanced Project Debater." VentureBeat. https://venturebeat.com/2019/01/08/ibm-announces-high-precision-weather-model-new-quantum-computer-design-and-enhanced-project-debater/ (May 15, 2020).
Zhang, Jie M., Mark Harman, Lei Ma, and Yang Liu. 2019. "Machine Learning Testing: Survey, Landscapes and Horizons." arXiv:1906.10742 [cs, stat]. http://arxiv.org/abs/1906.10742 (May 15, 2020).
HIIT or Miss? An Insight into the Promise of HIIT Workouts
BY KRISTAL WONG '22
Cover: HIIT workouts are extremely versatile and can be done anywhere and by anyone: jump roping, swimming, group fitness classes, at home, and in the neighborhood. Source: health.gov
“HIIT involves an alternation of short - typically only a few minutes long - periods of high intensity exercises followed by short periods of rest or low-intensity activity.”
Introduction to HIIT
The benefits of exercise are indisputable. However, the amount of time dedicated to exercise varies greatly from person to person. In the modern era of decreased attention spans, instant gratification, and overpacked schedules, many adults around the world struggle to find time for daily exercise. According to the United Health Foundation's Annual Report, 23.8% of adults in the United States report physical inactivity (United Health Foundation, 2019). Many of these surveyed adults argue that it is hard to fit a proper workout session between work, family, and friends. For this reason, the world of fitness has seen a massive change in the last 20 years: fitness classes now offer shortened, 40-minute sessions, and many fitness bloggers offer 30-minute "minimalist workouts" (At Home Workouts, 2020). The demand for shorter workouts has fueled the rise in popularity of the HIIT workout: High Intensity Interval Training.
HIIT involves an alternation of short – typically only a few minutes long – periods of high-intensity exercise followed by short periods of rest or low-intensity activity. Many of these HIIT workouts promise quick fat burn with all the benefits of a full Moderate Intensity Continuous Training (MICT) workout (e.g., jogging, cycling, rowing) in half the time. HIIT workouts are not new; they have been a popular training method for decades, primarily as a tool for long-distance runners seeking to increase endurance (Brzyski, 2020). An early study of HIIT was conducted in 1972 and involved cardiac rehabilitation patients. The study showed that individuals cycling at higher expenditure rates for one minute, with thirty seconds of rest between each minute, were able to exercise for at least twice as long as those who practiced continuous cycling (Weston et al., 2014). But more recently, HIIT has surged in popularity amongst a wide range of professional and recreational athletes, even
ranking second on the list of Top Fitness Trends of 2020 from the American College of Sports Medicine's global survey (Thompson, 2019). One variation of HIIT, called Tabata, consists of a four-minute cycle of eight rounds of exercise: 20 seconds of activity followed by 10 seconds of rest. HIIT workouts such as Tabata have seen many variations in schools, gyms, and home-workout channels, ranging from four to forty minutes (Tabata et al., 1996). The allure of these HIIT workouts is that they are extremely versatile and short; they can be done in a gym, a spin studio, or even in a living room in well under an hour. Though the efficiency and flexibility that HIIT provides are alluring, it is worth asking: is HIIT actually as effective as the typical hour-and-a-half treadmill and weights session at the gym? Will HIIT workouts satisfy the World Health Organization's (WHO) recommendation of "30 minutes of exercise 5 days a week"? While people aged 18-64 should perform at least 150 minutes of moderate-level physical activity each week (e.g., walking, cycling to work, gardening), vigorous exercises such as HIIT are recommended for at least 75 minutes a week (Physical Activity and Adults, 2020). This article will explore the benefits of HIIT as a form of daily exercise and compare the impacts of HIIT on people with prevalent diseases where recommended exercise is often part of treatment, such as Type 2 Diabetes (T2D), heart disease, and obesity (Marín-Peñalver et al., 2016).
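For readers curious to try it, the four-minute Tabata cycle described above is simple enough to script (a minimal sketch; the exercise list is an illustrative placeholder, and a real session should include a warm-up):

```python
import time

# Classic Tabata structure: 8 rounds of 20 s work / 10 s rest = 4 minutes total.
EXERCISES = ["burpees", "squats", "push-ups", "jumping jacks"] * 2  # placeholder list

for round_number, exercise in enumerate(EXERCISES, start=1):
    print(f"Round {round_number}/8: {exercise} for 20 seconds!")
    time.sleep(20)   # work interval
    print("Rest for 10 seconds.")
    time.sleep(10)   # rest interval
print("Done: 8 x (20 s + 10 s) = 4 minutes.")
```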
A HIIT for Improving Health for All
One study from Latin America examined the effects of HIIT on the release of brain-derived neurotrophic factor (BDNF), a protein which works to maintain or improve several cerebral functions involving the central nervous system, including memory and learning. The team compiled data showing increases in BDNF concentrations after HIIT workouts, an important finding especially as many health conditions and metabolic diseases such as T2D, cardiovascular disease, and obesity are associated with effects on brain function (Weston et al., 2014). Since HIIT workouts have a high-intensity component, how safe is it to perform HIIT regularly? Fitness coach, celebrity trainer, and founder of Twenty-Two Training, Dalton Wong suggests limiting HIIT to 2-3 days a week with
“It's important to understand whether HIIT is beneficial for the elderly since exercise is still important for geriatric health.”
Figure 1: The allure of HIIT workouts is that it can be done by anyone virtually anywhere, including at the gym (top) or in the neighborhood (bottom). Source: health.gov
24 hours of recovery period in between. In fact, if a HIIT workout is performed to the absolute maximum of effort and exertion, a HIIT workout the next day wouldn't be feasible, and the proper benefits of such a workout wouldn't be realized. Thus, fitness trainers suggest resistance training, yoga, or spin classes between days of HIIT workouts (Bubnis, 2019). Most of the excitement around these workouts pertains to younger or middle-aged individuals, and most scientific studies include samples of middle-aged adults rather than individuals from senior and aging populations. It is important to understand whether HIIT is beneficial for the elderly since exercise is still important for geriatric health. It is also crucial to assess the safety of these exercises, since the elderly are more prone to injury during workouts due to muscle wasting and weakness and may steer towards lower-intensity exercises as a result (Delmonico et al., 2009). To test whether HIIT workouts would be beneficial in aging men, Dr. Fergal Grace in Victoria, Australia conducted a clinical trial with 39 aging male participants. The team found HIIT to be "safe and promising" for aging males, even improving cardiovascular function and metabolic (MET) capacity (Grace
et al., 2018).
“The team found that 12 weeks of HIIT training improved peak oxygen consumption, VO2peak, in both the younger and older group significantly while other forms of exercise, including resistance training (RT), did not.”
One group from the Mayo Clinic in Minnesota compared the physiological and genetic effects of HIIT in young (18-30 years old) and older (65-80 years old) men and women without underlying cardiovascular, metabolic, or blood clotting diseases. The team found that 12 weeks of HIIT training improved peak oxygen consumption, VO2peak, in both the younger and older group significantly, while other forms of exercise, including resistance training (RT), did not. While the younger group increased relative VO2peak by a greater percentage than the older group (~28% vs. 17%, respectively), maximal absolute mitochondrial respiration in older adults increased by a greater percentage (69% vs. 49%) than in the younger adults. The data also show increases in mitochondrial respiration in the HIIT cohort, but not in the non-HIIT exercise groups. This is a key finding since increases in mitochondrial oxidative capacity are related to greater cardiorespiratory fitness and decreased development of insulin resistance in elderly populations, where there is greater risk of insulin resistance, sedentary lifestyle, and increases in adiposity. The study also argues that exercise status plays a bigger role in insulin sensitivity than mitochondrial capacity, further championing the importance of exercise, of which HIIT is one possible medium. Additionally, the team found that the HIIT group, regardless of age, experienced the largest increase in gene expression and transcription after the program, particularly in genes involving the mitochondria, muscle growth, and insulin pathways. The data also show that HIIT had a more pronounced effect on transcription of genes typically downregulated with age, indicating a cellular age-related benefit of HIIT compared to other workouts (Robinson et al., 2017).
A HIIT for Type 2 Diabetes and Insulin Resistance
Diabetes is a globally prevalent disease, affecting an astounding 10.9% of Americans (United Health Foundation, 2019). There are two types of diabetes: Type 1 Diabetes (T1D), also known as juvenile diabetes, in which the pancreas produces inadequate amounts of insulin, and Type 2 Diabetes (T2D), which is often associated with obesity and lack of exercise. Both involve problems with insulin, a hormone which controls glucose levels in the blood. Since T1D patients are insulin-dependent, treatment for these individuals involves a management plan of insulin injections and a healthy
lifestyle involving exercise to minimize health complications. Patients with Type 2 Diabetes don't respond properly to insulin, but fortunately for some, the lack of insulin-derived action in the body can be reversed with a change to a healthier lifestyle: proper diet, adequate exercise, and weight loss. One specific form of T2D is Type 2 Diabetes Mellitus (T2DM), which can cause severe complications in the kidneys, brain, retina, and heart. Since a crucial part of diabetes treatment and management plans consists of exercise, the shorter workouts of HIIT may be more appealing and feasible. To test whether HIIT workouts were as effective as MICT workouts in diabetic and pre-diabetic patients, a team from Finland tested the effectiveness of sprint interval training (SIT), a form of HIIT, against MICT. The team measured the glucose uptake (GU) of 26 male and female patients with T2D and pre-diabetes to compare the efficacy of the two workouts (Sjoros et al., 2017). GU is an important component of the study since people with T2D have impaired GU, and exercise has been shown to stimulate GU (Stanford & Goodyear, 2014). After comparing data from just 6 training sessions, the team found a 25% increase in whole-body insulin-stimulated GU in the diabetic patients (Sjoros et al., 2017). Researchers in the Physical Activity and Sports Science Department at Los Lagos University in Chile studied 40 women with low and high insulin resistance (IR) in a 10-week HIIT program and found significant data designating HIIT as a means to prevent cardiometabolic disease progression. They concluded that there was either a decrease or no change in anthropometric (body proportion) measures and no significant change in cardiovascular and muscle performance before and after the 10-week HIIT cycling regimen, independent of IR level, suggesting that HIIT may be a useful tool in combatting disease progression. In addition, the data also show the low-IR (LIR) group exhibiting a significant decrease in blood pressure. This study is particularly important to pre-diabetic patients since insulin resistance often exists as a pre-diabetic state, further extending HIIT's benefits to pre-diabetic patients (Alvarez, 2019). In light of these findings, it is important to explore HIIT as an option for diabetic treatment, as the number of people with T2D globally is expected to rise to 360 million by 2030 (Agrawal, 2008).
Is HIIT Heart Healthy?
Cardiovascular (CVD) and cardiometabolic diseases have been on the rise, with the United States averaging 260.4 deaths per 100,000 people (United Health Foundation, 2019). Since the 1950s, when cardiovascular disease led to a prescription of immobility, physical activity, and specifically HIIT, has become one of the most sought-out means to decrease the risk of developing coronary heart disease and CVD such as hypertension (Meyers, 2003; Weston et al., 2014; Wu et al., 2015). A known predictor of CVD prognosis and mortality is aerobic fitness, measured by VO2peak (Ito, 2019). A systematic review and meta-analysis of 10 studies involving patients with lifestyle-induced cardiometabolic disease showed, overall, that HIIT resulted in significantly greater increases in VO2peak than MICT (Weston et al., 2014). One systematic review from Sports Medicine comparing HIIT and MICT found thirteen of twenty-four studies showing evidence of increased aerobic fitness with HIIT, five showing equal improvements between HIIT and MICT, and eight showing greater improvements with HIIT compared to continuous moderate exercise (Kessler et al., 2012). As introduced in the diabetes section, the Chilean study on IR individuals investigated cardiometabolic disease progression and found significant decreases in blood pressure for both HIIT and MICT participants (Alvarez, 2017). Similarly, another study investigated HIIT's impact on vascular function, using cardiorespiratory fitness (CRF) as an indicator. The team measured brachial artery flow-mediated dilation (FMD) as a proxy for vascular function and, comparing pre- and post-intervention levels, found significantly greater improvements in brachial artery FMD in the HIIT cohort than in the MICT
cohort. Additionally, the group performing HIIT had, on average, better outcomes in CRF, typical CVD risk factors, oxidative stress, inflammation, and insulin resistance. This finding is important since vascular dysfunction leads to atherosclerotic conditions, which in turn increase the risk of cardiovascular complications such as heart attack and stroke (Ramos et al., 2015). In addition, cardiometabolic diseases are marked by elevated plasma triglycerides, elevated blood pressure, and decreased high-density lipoprotein cholesterol (HDL-C). Treatments for such conditions include medication but are almost always prescribed after, or simultaneously with, lifestyle changes such as exercise. In the same Sports Medicine review mentioned earlier in this section, ten of the twenty-four studies found improvement in HDL-C in participants after 8+ weeks of HIIT training (Kessler et al., 2012). Similarly, in the 10-study review and analysis, two of the included studies indicated significant decreases in blood pressure upon completion of the exercise programs, one of which also indicated a significant decrease in systolic blood pressure (Weston et al., 2014).
“'True HIIT is like sprinting, and it should make you feel like your gas tank is completely empty' (Dalton Wong).”
A HIIT for Obesity and Weight Loss?
"True HIIT is like sprinting, and it should make you feel like your gas tank is completely empty." Dalton Wong, professional performance coach, celebrity personal trainer, and founder of Twenty-Two Training
Source: Bubnis, 2019. greatist.com/fitness/hiit-workouts-should-be-done-how-often
A popular reason for exercise is the goal of weight loss. In a meta-analysis of thirteen studies in Etiology and Pathophysiology, data suggest that HIIT can be efficient in managing weight in obese and overweight individuals. HIIT and MICT yielded the same average decreases in whole-body fat mass and

Figure 2: The structure of a molecule of β-endorphin. Source: Wikimedia Commons
waist circumference, but HIIT required about 40% less time commitment (Wewege et al., 2017). However, it is important to note that this short-term analysis did not show significant changes in body weight. In the twenty-four-study review, Kessler indicated that a minimum of 12 weeks of HIIT is required before notable changes in body weight and/or percent body fat appear (Kessler et al., 2012). However, tackling weight loss and obesity also requires proper and balanced diets, not just exercise.
“On the other hand, many experts are wary of the claim that short HIIT workouts are a method for fat loss, citing the psychological toll of high-intensity activity.”
On the other hand, many experts are wary of the claim that short HIIT workouts are a method for fat loss, citing the psychological toll of high-intensity activity. As Dr. Ekkekakis at Iowa State University explains, individuals who already associate MICT with sweat and pain may see HIIT as even more unpleasant than MICT (Ekkekakis, 2020). Here, it is important to note that there are different levels of HIIT: the ones that force full energy expenditure require individuals to push themselves as hard as possible, thus leading to uncomfortable experiences (Bubnis, 2019). Ekkekakis argues that enjoyment should weigh more heavily in determining the benefits of a particular workout (Ekkekakis, 2020). Studies have shown mixed results on the enjoyment of HIIT compared to MICT: some argue that HIIT offers enjoyment equal to Moderate Intensity Interval Training (MIIT) or Continuous Training (CT), but others argue that HIIT offers a lower level of enjoyment. A Canadian study investigated this idea and questioned whether HIIT should be advocated for those
generally inactive. They found that despite HIIT producing elevated heart rate and perceived exertion, there was no significant difference in preference or enjoyment between HIIT and MICT (Stork et al., 2018). Similarly, the ten-study review by Weston et al. reported HIIT to be more enjoyable than MICT. However, only three of those studies reported greater improvements in quality of life with HIIT, and one showed comparable enhancements with both programs (Weston et al., 2014). The mixed positive and negative responses on HIIT enjoyment suggest that the true benefits of HIIT may vary from person to person rather than follow a single scientific extrapolation.
A Psychological HIIT for Mental Health
Exercise can have immense positive effects on mental health through stress reduction and mood elevation, and on brain function and cognitive performance. It has also been demonstrated to reduce symptoms of depression and anxiety. And the promise of a shorter workout with HIIT can only add to these benefits. Post-exercise euphoria has been linked to increases during exercise in the levels of a type of neurotransmitter called β-endorphins. These proteins bind to μ-opioid receptors (MOR), which control feelings of positive reward and euphoria. A Finnish study conducted at the Turku University Hospital PET Center used positron emission tomography (PET) scans to track β-endorphin activity via MOR availability. The team found decreased availability of MOR after HIIT, but not with MICT. They suggest that perhaps the more
Figure 3: Former President Barack Obama and Vice President Joe Biden taking a jog amidst their busy schedules. Source: https://letsmove.obamawhitehouse.archives.gov/
strenuous and pain-inducing exercise in HIIT may have caused more opioid release and MOR activation, producing analgesia (the inability to feel pain) and moderating the body's stress response (Saanijoki et al., 2017). In addition, a review of 12 intervention studies investigated the role of HIIT in people with mental illnesses. The findings from the meta-analysis indicated a statistically significant reduction in depression severity as well as improved physiological fitness, with an increase in VO2max (Martland et al., 2019). Another team from Taiwan investigated HIIT as a tool for improving the mental and physical health of chronic schizophrenia patients. The 8-week program showed positive results, with progressive and statistically significant improvements in the Negative Scale and General Psychopathology Scale mental health scores (Wu et al., 2015). Given the immense benefits exercise has shown for its participants, the more strenuous but more efficient workouts of HIIT programs should be more widely practiced by people across all conditions of mental health, including those suffering from chronic mental health conditions.
Conclusion
HIIT workouts have received much hype in the media and, with it, attention from the scientific community. Many recommend the expansion of HIIT into medicine as a treatment option, not just for recreational use. However, it is important to note that while HIIT is more time-efficient than MICT, it is not magic, as readers of the 2016 New York Times article "One Minute of All-Out Exercise May Have Benefits of 45 Minutes of Moderate Exertion" may have believed when the journalist proposed a one-minute workout (Reynolds, 2016). But for those hoping to begin or modify their healthy lifestyles, HIIT should remain an appealing option for anyone, from a busy college student to a diabetic patient. There are many avenues of HIIT training waiting to be explored in the future, including its benefits for children. And as seen in this article, it is becoming increasingly clear that HIIT parallels, and may even rival, traditional forms of exercise. However, it is important to note that the benefits of HIIT do not unveil themselves overnight (Kessler et al., 2012). And along with exercise, it is important to prioritize
a healthy and balanced diet as well.

Abbreviations
HIIT: High Intensity Interval Training
MICT: Moderate Intensity Continuous Training
T2D: Type 2 Diabetes
MET: Metabolic
VO2peak: Peak Oxygen Consumption
RT: Resistance Training
T1D: Type 1 Diabetes
T2DM: Type 2 Diabetes Mellitus
GU: Glucose Uptake
IR: Insulin Resistance
LIR: Low Insulin Resistance
CVD: Cardiovascular Disease
CRF: Cardiorespiratory Fitness
FMD: Flow-Mediated Dilation
HDL-C: High-Density Lipoprotein Cholesterol
CT: Continuous Training
MIIT: Moderate Intensity Interval Training
MOR: μ-Opioid Receptors
PET: Positron Emission Tomography
“The findings from the meta-analysis indicated a statistically significant reduction in mental health/depression severity as well as an improved physiological fitness with an increase in VO2max.”
References
Agrawal, S., Dimitrova, N., Nathan, P., Udayakumar, K., Lakshmi, S. S., Sriram, S., Manjusha, N., & Sengupta, U. (2008). T2D-Db: An integrated platform to study the molecular basis of Type 2 diabetes. BMC Genomics, 9, 320. https://doi.org/10.1186/1471-2164-9-320
Álvarez, C., Ramírez-Campillo, R., Ramírez-Vélez, R., & Izquierdo, M. (2017). Prevalence of Non-responders for Glucose Control Markers after 10 Weeks of High-Intensity Interval Training in Adult Women with Higher and Lower Insulin Resistance. Frontiers in Physiology, 8. https://doi.org/10.3389/fphys.2017.00479
Brzyski, L. (2020, April 21). 7 Ways to Stay Motivated Despite a Postponed Race. Philadelphia Magazine. https://www.phillymag.com/be-well-philly/2020/04/21/postponed-racerunning-tips/
Bubnis, D. (n.d.). How Often Should You Really Do HIIT Workouts? Greatist. Retrieved May 30, 2020, from https://greatist.com/fitness/hiit-workouts-should-be-done-howoften
Delmonico, M. J., Harris, T. B., Visser, M., Park, S. W., Conroy, M. B., Velasquez-Mieyer, P., Boudreau, R., Manini, T. M., Nevitt, M., Newman, A. B., & Goodpaster, B. H. (2009). Longitudinal study of muscle strength, quality, and adipose tissue infiltration. The American Journal of Clinical Nutrition, 90(6), 1579–1585. https://doi.org/10.3945/ajcn.2009.28047
Dun, Y., Smith, J. R., Medina-Inojosa, J. R., MacGillivary, M. C., Thomas, R. J., Liu, S., Cai, Y., & Olson, T. P. (2019). Effect of High-Intensity Interval Training on Total and Abdominal Fat Mass in Outpatient Cardiac Rehabilitation Patients with Myocardial Infarction. Journal of the American College of Cardiology, 73(9 Supplement 2), 13. https://doi.org/10.1016/S0735-1097(19)33775-1
Ekkekakis. (2020, October). High-intensity workouts send the wrong message, says Iowa State professor. News Service,
Iowa State University. https://www.news.iastate.edu/news/2017/10/03/hiit
Grace, F., Herbert, P., Elliott, A. D., Richards, J., Beaumont, A., & Sculthorpe, N. F. (2018). High intensity interval training (HIIT) improves resting blood pressure, metabolic (MET) capacity and heart rate reserve without compromising cardiac function in sedentary aging men. Experimental Gerontology, 109, 75–81. https://doi.org/10.1016/j.exger.2017.05.010
Hannan, A. L., Hing, W., Simas, V., Climstein, M., Coombes, J. S., Jayasinghe, R., Byrnes, J., & Furness, J. (2018). High-intensity interval training versus moderate-intensity continuous training within cardiac rehabilitation: A systematic review and meta-analysis. Open Access Journal of Sports Medicine, 9, 1–17. https://doi.org/10.2147/OAJSM.S150596
Ito, S. (2019). High-intensity interval training for health benefits and care of cardiac diseases—The key to an efficient exercise protocol. World Journal of Cardiology, 11(7), 171–188. https://doi.org/10.4330/wjc.v11.i7.171
Kessler, H. S., Sisson, S. B., & Short, K. R. (2012). The potential for high-intensity interval training to reduce cardiometabolic disease risk. Sports Medicine (Auckland, N.Z.), 42(6), 489–509. https://doi.org/10.2165/11630910-000000000-00000
Marín-Peñalver, J. J., Martín-Timón, I., Sevillano-Collantes, C., & del Cañizo-Gómez, F. J. (2016). Update on the treatment of type 2 diabetes mellitus. World Journal of Diabetes, 7(17), 354–395. https://doi.org/10.4239/wjd.v7.i17.354
Metzger, N. (2019, December 16). This 20-Minute Tabata Workout Is WAY Better Than An Hour Of Running. Women’s Health. https://www.womenshealthmag.com/fitness/a20703439/tabata-workout-routine/
Reynolds, G. (2016, April 27). 1 Minute of All-Out Exercise May Have Benefits of 45 Minutes of Moderate Exertion. Well. https://well.blogs.nytimes.com/2016/04/27/1-minute-of-all-out-exercise-may-equal-45-minutes-of-moderate-exertion/
Robinson, M. M., Dasari, S., Konopka, A. R., Johnson, M. L., Manjunatha, S., Esponda, R. R., Carter, R. E., Lanza, I. R., & Nair, K. S. (2017). Enhanced Protein Translation Underlies Improved Metabolic and Physical Adaptations to Different Exercise Training Modes in Young and Old Humans. Cell Metabolism, 25(3), 581–592. https://doi.org/10.1016/j.cmet.2017.02.009
Saanijoki, T., Tuominen, L., Tuulari, J. J., Nummenmaa, L., Arponen, E., Kalliokoski, K., & Hirvonen, J. (2018). Opioid Release after High-Intensity Interval Training in Healthy Human Subjects. Neuropsychopharmacology, 43(2), 246–254. https://doi.org/10.1038/npp.2017.148
Sjöros, T. J., Heiskanen, M. A., Motiani, K. K., Löyttyniemi, E., Eskelinen, J.-J., Virtanen, K. A., Savisto, N. J., Solin, O., Hannukainen, J. C., & Kalliokoski, K. K. (2018). Increased insulin-stimulated glucose uptake in both leg and arm muscles after sprint interval and moderate-intensity training in subjects with type 2 diabetes or prediabetes. Scandinavian Journal of Medicine & Science in Sports, 28(1), 77–87. https://doi.org/10.1111/sms.12875
Stanford, K. I., & Goodyear, L. J. (2014). Exercise and type 2 diabetes: Molecular mechanisms regulating glucose uptake in skeletal muscle. Advances in Physiology Education, 38(4), 308–314. https://doi.org/10.1152/advan.00080.2014
Tabata, I., Nishimura, K., Kouzaki, M., Hirai, Y., Ogita, F., Miyachi, M., & Yamamoto, K. (1996). Effects of moderate-intensity endurance and high-intensity intermittent training on anaerobic capacity and VO2max. Medicine and Science in Sports and Exercise, 28(10), 1327–1330. https://doi.org/10.1097/00005768-199610000-00018
Thompson, W. R. (2019). Worldwide survey of fitness trends for 2020. ACSM’s Health & Fitness Journal, 23(6), 10–18. https://doi.org/10.1249/FIT.0000000000000526
United Health Foundation. (2019). Annual Report 2019 (Annual Report No. 20; p. 118).
Weston, K. S., Wisløff, U., & Coombes, J. S. (2014). High-intensity interval training in patients with lifestyle-induced cardiometabolic disease: A systematic review and meta-analysis. British Journal of Sports Medicine, 48(16), 1227–1234. https://doi.org/10.1136/bjsports-2013-092576
Wewege, M., van den Berg, R., Ward, R. E., & Keech, A. (2017). The effects of high-intensity interval training vs. moderate-intensity continuous training on body composition in overweight and obese adults: A systematic review and meta-analysis. Obesity Reviews, 18(6), 635–646. https://doi.org/10.1111/obr.12532
WHO. (n.d.). Physical Activity and Adults. WHO; World Health Organization. Retrieved May 17, 2020, from https://www.who.int/dietphysicalactivity/factsheet_adults/en/
Notch-1 as a Promising Target for Antibody Drug Conjugate Therapies
BY NISHI JAIN '21
Cover: Cancer cells in non-small cell lung cancer – a Notch signaling ADC would be an apt way to mitigate the invasiveness of the cancer by taking advantage of the cell’s inherent dynamics. Source: Flickr
“Antibody drug conjugates are a novel therapy that has been shown to have increased potency and reduced side effects in treating multiple kinds of cancer."
Introduction
Antibody drug conjugates are a novel therapy that has been shown to have increased potency and reduced side effects in treating multiple kinds of cancer. These characteristics are by virtue of their design: antibody drug conjugates (ADCs) are composed of an antibody that selectively binds a particular antigen overexpressed on cancer cells; a cytotoxic drug that serves to kill the target tumor cells; and a linker that connects the antibody and cytotoxin. This design combines the selectivity of an antibody with the potency of a cytotoxic drug into an effective therapy. Given the characteristics of the Notch signaling pathway, the Notch-1 transmembrane protein suggests itself as a favorable ADC target, and this paper will discuss its viability in this space.
Notch Signaling Overview
Notch cell signaling is a highly evolutionarily conserved pathway that consists of four single-pass transmembrane receptor isoforms (Notch1, Notch2, Notch3, and Notch4) and five canonical Notch ligands (Jagged1, Jagged2, Delta-like1, Delta-like3, and Delta-like4) (Fleming et al., 1998). Notch1 displays medium to high levels of expression in a variety of cancers, including cervical, colorectal, pancreatic, breast, stomach, ovarian, and lung cancers. Notch1 also presents a very high turnover rate (the rate at which the protein is recycled back to the surface after being internalized), so it would be a good ADC target for multiple cancers (Trang et al., 2019). The Notch signaling pathway is initiated by the binding of one of the five Notch ligands to the extracellular binding site of one of the four Notch receptors, which causes two proteolytic cleavages followed by the release of the Notch intracellular domain (NICD). The NICD then translocates to the nucleus and drives transcription of several target genes that affect cell fate decisions, proliferation, differentiation, tissue homeostasis, cell survival, and cell death (Kageyama et al., 1999; Hori et al., 2013). Notch signaling begins when ligand-presenting cells and receptor-presenting cells are in close proximity; the Notch ligand binds to the Notch receptor on the opposite cell. This binding then triggers the two cleavages. The first cleavage occurs within the receptor's extracellular domain and is mediated by the metalloprotease tumor necrosis factor α‐converting enzyme (TACE); the cleaved extracellular fragment, still bound to its ligand, is trans-endocytosed into the ligand-presenting cell to initiate signaling (a process controlled by ubiquitin ligases). The second cleavage occurs in the intracellular portion of the Notch receptor and is mediated by γ‐secretase activity; the cleaved intracellular component is the NICD, which upon cleavage travels to the nucleus and drives further transcriptional activation (Wu et al., 2010). Notch1 has been shown to have both oncogenic and anti-tumor effects, depending on the context. Notch signaling has been shown to be anti-tumor in a variety of cancer indications including skin cancer, hepatocellular carcinoma, and small cell lung cancer. However, it appears to have oncogenic properties in a majority of other indications, including head and neck, cervical, colorectal, pancreatic, breast, stomach, ovarian, and lung cancer (Rizzo et al., 2008).
Section II: Notch-1 in Disease Indications
Notch-1, as a transmembrane protein, is expressed in a variety of disease indications and can therefore serve as a diverse ADC target across these indications.

Cervical Cancer
Cervical cancer is one of the most common malignancies among women and has shown increased expression of Notch and NICD proteins, especially in cases with metastatic potential to the lymph nodes and with parametrial involvement. Human papillomavirus proteins E6 and E7 (the virus is linked with the progression of cervical cancer) are also causally linked with the increased expression of Notch1 in cervical cancer tissues (Sun et al., 2009). There have, however, also been reports showing that higher expression of Notch1 stops growth in cervical cancer lines, which highlights a more complex role for the protein (Maliekal et al., 2008).

Colorectal Cancer
Upregulation of Notch1 has been found in intestinal adenomas, and its expression increases progressively from normal colonic mucosa to stage IV metastatic cancer, with the highest levels found in liver metastases compared to normal colonic mucosa or liver parenchyma (Reedijk et al., 2008; Jemal et al., 2009). Additionally, Notch1 has been shown to be involved in resistance to chemotherapeutic agents such as oxaliplatin, a resistance that operates by inducing NICD and Hes-1 through up-regulation of γ-secretase activity. Inhibition of Notch1 with siRNA or γ-secretase inhibitors (GSIs) decreased chemotherapeutic resistance to both oxaliplatin and fluorouracil (Meng et al., 2009).
Figure 1: Notch signaling pathway. Source: Wikimedia Commons
“Notch signaling begins when ligand presenting cells and receptor presenting cells are in close proximity...”
Figure 2: Positive Notch1 expression in colorectal cancer as depicted by immunohistochemistry staining. Source: Human Protein Atlas
Figure 3: Positive Notch1 expression in breast cancer as depicted by immunohistochemistry staining. Source: Human Protein Atlas
“[For breast cancer] Although not explicitly a prognostic factor, higher levels of Notch1 expression are implicated in poor prognosis with lower overall survival rates.”
Figure 4: Positive Notch1 expression in lung cancer as depicted by immunohistochemistry staining. Source: Human Protein Atlas
Pancreatic Cancer
Pancreatic cancer is one of the most dangerous indications, with its 5-year survival rate hovering just below 5% (Jemal et al., 2009). Pancreatic cancer cells have been found to overexpress both Notch1 and Notch2. These proteins have also been implicated in the TGF-α-induced acinar-to-ductal transition, such that inhibition of Notch1, either through downregulation or by GSIs, led to decreased proliferative rates, increased apoptosis, reduced metastasis and migration, and a decrease in the invasive nature of the cancer cells (Sawey et al., 2007; Miyamoto et al., 2003). Notch1 is also upregulated in gemcitabine-resistant pancreatic cancer tissue, so this target could work to reduce cancerous growth in patients who have already acquired resistance to the standard of care (Wang et al., 2009). Furthermore, Notch1 is involved in microRNA-34 regulation of cancer stem cell renewal, which is related to the epithelial-mesenchymal transition (EMT). Targeting this mechanism of cell differentiation and renewal by inhibiting Notch signaling could present an effective way of killing cancer stem cells (Ji et al., 2009).

Breast Cancer
Although not explicitly a prognostic factor, higher levels of Notch1 expression are implicated in poor prognosis with lower overall survival rates (Stylianou et al., 2006). Notch has additionally been implicated in the PI3K/Akt pathway, since the NICD frequently acts to inhibit p53, causing resistance to chemotherapeutic breast cancer treatments that is reversed only by rapamycin treatment (Mungamuri et al., 2006). The Notch-dependent erythropoietin receptor increases the numbers of stem cells and breast cancer cells’ self-renewing capacity. GSIs and the downregulation of Notch blocked this effect, which implies a possible role for Notch signaling in the maintenance of breast cancer stem cells
(Farnie & Clarke, 2007).

Stomach Cancer
Notch has also been implicated in the progression of stomach cancer, one of the most common cancers. The expression of Notch1 was significantly higher in stomach cancer cells than in normal gastric tissue, and Notch1 expression was additionally correlated with tumor differentiation, invasion, and size. Notch has also been investigated as a possible prognostic marker, as the three-year survival rate is substantially worse in Notch1-positive patients than in Notch1-negative patients (Li et al., 2007).

Ovarian Cancer
Most advanced-stage ovarian cancer patients die either from the disease itself or from the intense regimen of chemotherapy and resection surgery to which they are subjected. There has been extensive work on ovarian cancer therapies, and recently Notch1 has emerged as a target due to its high expression on the surface of ovarian cancer cells and the expression of NICD in advanced-stage patients. Additionally, the downregulation of Notch1 results in limited growth, suggesting that Notch1 has a role in cell proliferation (Rose et al., 2010).

Lung Cancer
There are two main subsets of lung cancers: small cell lung cancer (SCLC), which accounts for 15-20% of all lung neoplasms, and non-small cell lung cancer (NSCLC), which accounts for the remainder of malignancies. Notch1 has been shown to have anti-tumor effects in SCLC but oncogenic properties in NSCLC. In NSCLC, 50% of patients diagnosed at an advanced stage show Notch1 overexpression, and only 20% have a positive response to chemotherapy (Howlader et al., 2015; Schiller et al., 2002). Notch1 is overexpressed in NSCLC and is associated with a greater likelihood of lymph node
Figure 5: The structure of an antibody drug conjugate. Source: Wikimedia Commons
metastasis and reduced overall survival in patients (Yuan et al., 2015). It is also specifically implicated in the EMT, which leads to increased metastatic potential (Christiansen et al., 2006). Notch1 activation in endothelial cells leads to several morphological, phenotypic, and functional changes that are consistent with a mesenchymal transformation, so Notch1 can be considered a viable target to inhibit the EMT. Notch1 knockdown has also been shown to reverse the EMT phenotype and restore sensitivity to the EGFR inhibitor gefitinib in gefitinib‐resistant lung cancer cells (conversely, Notch1 expression is highly upregulated in gefitinib‐resistant lung cancer cells, and activation of Notch1 results in an EMT phenotype) (Yuan et al., 2014). Notch1 is also important for cells in a hypoxic environment, as recent studies show that inhibition of Notch1 signaling (by a γ-secretase inhibitor or by genetic downregulation) results in the induction of apoptosis in oxygen-deprived tumor microenvironments. Notch1 signaling in hypoxic NSCLC conditions allows for cell survival through both the inhibition of phosphatase and tensin homolog (PTEN) expression and the positive regulation of IGF-1 and its receptor IGF-1R (Galluzzo et al., 2011). Thus, Notch1 inhibition may represent an invaluable strategy to target hypoxic tumor regions, which are traditionally resistant to standard chemotherapy.
Section III: Existing Pre-Clinical Work on Notch-1 ADCs
The most prominent current therapies for Notch1 modulation are γ‐secretase inhibitors (GSIs); although there is not yet evidence that Notch-targeting GSIs can directly inhibit tumor growth, there is evidence of synergy between various chemotherapeutic agents and GSIs. BMS-906024, for instance, has shown promise in enhancing the efficacy of the chemotherapeutic paclitaxel in lung adenocarcinoma (Morgan et al., 2017). The inhibition of Notch1 (which is markedly elevated in lung adenocarcinoma cell lines) via γ-secretase inhibitor has been found to induce apoptosis, an effect that can be rescued by the reintroduction of active Notch1 (Chen et al., 2007). A disintegrin and metalloprotease (ADAM) carries out the first proteolytic cleavage of Notch. ADAM inhibitors have additionally shown promise, with a recent study citing that an ADAM-17 inhibitor named ZLDI8 stops proliferation and metastasis through reversal of the EMT and the Notch pathway (Lu et al., 2007). Additionally, a compound called δ-tocotrienol has been shown to stop tumor proliferation, induce apoptosis, and reduce cancer cell invasion; part of this mechanism is due to the downregulation of Notch1 (Ji et al., 2011; Ji et al., 2012). Daurinoline is another emerging treatment whose proposed mechanism of action includes Notch1; it has shown promise in restoring sensitivity to chemotherapy-resistant
“Notch1 is important for cells in a hypoxic environment, as recent studies show that inhibition of Notch1 signaling results in the induction of apoptosis in oxygen-deprived tumor microenvironments.”
cells with anti-proliferative and anti-metastatic effects (Li et al., 2010). Although low-dose cisplatin treatment can induce doxorubicin and paclitaxel tolerance, this effect was shown to be weakened by treatment with GSI or Notch1 shRNAs, which prevented the activation of the Notch pathway (Liu et al., 2013).
“According to the Human Protein Atlas, Notch1 is highly expressed in multiple indications. However, Notch1 expression in the corresponding normal tissues is also high, presenting complications for treatment.”
Additionally, there are multiple GSIs in Phase 1 and 2 clinical trials, including the aforementioned AL101 or BMS-906024 (BMS), MRK-560 and MK-0752 (Merck), RO4929097 (Roche), Nirogacestat or PF-03084014 (Pfizer), and Crenigacestat or LY3039478 (Eli Lilly). Myriad was developing a related technology, γ‐secretase modulators, which act by inhibiting the final substrate cleavage of γ‐secretase; their only product, MPC-7869, failed in Phase 3 clinical trials. Stapled peptides that interfere with the Notch nuclear co-activators are also being assessed preclinically: cell-permeable stabilized α-helical peptides interfering with the function of the Notch transactivation domain have shown promise (Moellering et al., 2009). There are also a few available antibodies that can selectively target Notch1, including Mab604.107, CAB022466 (Merck), and anti-NRR1. Along the same lines, there is another antibody against one of the canonical ligands (Jagged-1) that has shown promise as a method for Notch regulation (Lafkas et al., 2015). A mAb by the name of brontictuzumab, developed by OncoMed Pharmaceuticals, binds to Notch1 on the cell surface to inhibit Notch signaling and is currently in Phase I clinical trials. According to the Human Protein Atlas, Notch1 is highly expressed in multiple indications. However, Notch1 expression in the corresponding normal tissues is also high, presenting complications for treatment. The high levels of normal tissue expression in the adrenal gland, lung, colon, rectum, gallbladder, testis, and appendix mean that a Notch1-targeting agent would need added specificity such that it affects tumor cells without affecting healthy tissue.
Section IV: Notch-1 ADC as a Possible Therapeutic?
Current pre-clinical and clinical work suggests that, given the nature of the targeted cancers, GSIs typically enter treatment regimens only for patients with more advanced disease. Based on
expert consensus, however, it is unlikely that these Notch-targeting drugs will be effective anticancer instruments if used as single agents. For instance, it has been said that “the next challenge for the field will be to find the most effective ‘doublet’ to maximize the potential of Notch inhibitors,” so if an ADC is used as just this – an effective ‘doublet agent’ – it may be possible to target Notch1 with greater effectiveness and achieve success in the clinic (Galluzzo et al., 2010).

References
Chen, Y., De Marco, M. A., Graziani, I., Gazdar, A. F., Strack, P. R., Miele, L., & Bocchetta, M. (2007). Oxygen Concentration Determines the Biological Effects of NOTCH-1 Signaling in Adenocarcinoma of the Lung. Cancer Research, 67(17), 7954–7959. https://doi.org/10.1158/0008-5472.CAN-07-1229
Christiansen, J. J., & Rajasekaran, A. K. (2006). Reassessing Epithelial to Mesenchymal Transition as a Prerequisite for Carcinoma Invasion and Metastasis. Cancer Research, 66(17), 8319–8326. https://doi.org/10.1158/0008-5472.CAN-06-0410
Egan. (1992). Activation of Notch signaling in human colon adenocarcinoma. International Journal of Oncology. https://doi.org/10.3892/ijo_00000112
Farnie, G., & Clarke, R. B. (2007). Mammary stem cells and breast cancer—Role of Notch signalling. Stem Cell Reviews, 3(2), 169–175. https://doi.org/10.1007/s12015-007-0023-5
Fleming, R. J. (1998). Structural conservation of Notch receptors and ligands. Seminars in Cell & Developmental Biology, 9(6), 599–607. https://doi.org/10.1006/scdb.1998.0260
Hori, K., Sen, A., & Artavanis-Tsakonas, S. (2013). Notch signaling at a glance. Journal of Cell Science, 126(10), 2135–2140. https://doi.org/10.1242/jcs.127308
Jemal, A., Siegel, R., Ward, E., Hao, Y., Xu, J., & Thun, M. J. (2009). Cancer statistics, 2009. CA: A Cancer Journal for Clinicians, 59(4), 225–249. https://doi.org/10.3322/caac.20006
Ji, Q., Hao, X., Zhang, M., Tang, W., Yang, M., Li, L., Xiang, D., DeSano, J. T., Bommer, G. T., Fan, D., Fearon, E. R., Lawrence, T. S., & Xu, L. (2009). MicroRNA miR-34 Inhibits Human Pancreatic Cancer Tumor-Initiating Cells. PLoS ONE, 4(8), e6816. https://doi.org/10.1371/journal.pone.0006816
Ji, X., Wang, Z., Geamanu, A., Sarkar, F. H., & Gupta, S. V. (2011). Inhibition of cell growth and induction of apoptosis in non-small cell lung cancer cells by delta-tocotrienol is associated with notch-1 down-regulation. Journal of Cellular Biochemistry, 112(10), 2773–2783. https://doi.org/10.1002/jcb.23184
Ji, X., Wang, Z., Geamanu, A., Goja, A., Sarkar, F. H., & Gupta, S. V. (2012). Delta-tocotrienol suppresses Notch-1 pathway by upregulating miR-34a in nonsmall cell lung cancer cells. International Journal of Cancer, 131(11), 2668–2677. https://doi.org/10.1002/ijc.27549
Lafkas, D., Shelton, A., Chiu, C., de Leon Boenig, G., Chen, Y., Stawicki, S. S., Siltanen, C., Reichelt, M., Zhou, M., Wu, X., Eastham-Anderson, J., Moore, H., Roose-Girma, M., Chinn, Y., Hang, J. Q., Warming, S., Egen, J., Lee, W. P., Austin, C., … Siebel, C. W. (2015). Therapeutic antibodies reveal Notch control of transdifferentiation in the adult lung. Nature, 528(7580), 127–131. https://doi.org/10.1038/nature15715
Li, D.-W., Wu, Q., Peng, Z.-H., Yang, Z.-R., & Wang, Y. (2007). [Expression and significance of Notch1 and PTEN in gastric cancer]. Ai Zheng = Aizheng = Chinese Journal of Cancer, 26(11), 1183–1187.
Liu, Y.-P., Yang, C.-J., Huang, M.-S., Yeh, C.-T., Wu, A. T. H., Lee, Y.-C., Lai, T.-C., Lee, C.-H., Hsiao, Y.-W., Lu, J., Shen, C.-N., Lu, P.-J., & Hsiao, M. (2013). Cisplatin Selects for Multidrug-Resistant CD133+ Cells in Lung Adenocarcinoma by Activating Notch Signaling. Cancer Research, 73(1), 406–416. https://doi.org/10.1158/0008-5472.CAN-12-1733
Maliekal, T. T., Bajaj, J., Giri, V., Subramanyam, D., & Krishna, S. (2008). The role of Notch signaling in human cervical cancer: Implications for solid tumors. Oncogene, 27(38), 5110–5114. https://doi.org/10.1038/onc.2008.224
Matrix metalloproteinase 7 controls pancreatic acinar cell transdifferentiation by activating the Notch signaling pathway. (n.d.). Retrieved May 17, 2020, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2148289/
Meng, R. D., Shelton, C. C., Li, Y.-M., Qin, L.-X., Notterman, D., Paty, P. B., & Schwartz, G. K. (2009). γ-Secretase Inhibitors Abrogate Oxaliplatin-Induced Activation of the Notch-1 Signaling Pathway in Colon Cancer Cells Resulting in Enhanced Chemosensitivity. Cancer Research, 69(2), 573–582. https://doi.org/10.1158/0008-5472.CAN-08-2088
Miyamoto, Y., Maitra, A., Ghosh, B., Zechner, U., Argani, P., Iacobuzio-Donahue, C. A., Sriuranpong, V., Iso, T., Meszoely, I. M., Wolfe, M. S., Hruban, R. H., Ball, D. W., Schmid, R. M., & Leach, S. D. (2003). Notch mediates TGF alpha-induced changes in epithelial differentiation during pancreatic tumorigenesis. Cancer Cell, 3(6), 565–576. https://doi.org/10.1016/s1535-6108(03)00140-5
Moellering, R. E., Cornejo, M., Davis, T. N., Del Bianco, C., Aster, J. C., Blacklow, S. C., Kung, A. L., Gilliland, D. G., Verdine, G. L., & Bradner, J. E. (2009). Direct inhibition of the NOTCH transcription factor complex. Nature, 462(7270), 182–188. https://doi.org/10.1038/nature08543
Morello, V., Cabodi, S., Sigismund, S., Camacho-Leal, M. P., Repetto, D., Volante, M., Papotti, M., Turco, E., & Defilippi, P. (2011). β1 integrin controls EGFR signaling and tumorigenic properties of lung cancer cells. Oncogene, 30(39), 4087–4096. https://doi.org/10.1038/onc.2011.107
Morgan, K. M., Fischer, B. S., Lee, F. Y., Shah, J. J., Bertino, J. R., Rosenfeld, J., Singh, A., Khiabanian, H., & Pine, S. R. (2017). Gamma Secretase Inhibition by BMS-906024 Enhances Efficacy of Paclitaxel in Lung Adenocarcinoma. Molecular Cancer Therapeutics, 16(12), 2759–2769. https://doi.org/10.1158/1535-7163.MCT-17-0439
Mungamuri, S. K., Yang, X., Thor, A. D., & Somasundaram, K. (2006). Survival signaling by Notch1: Mammalian target of rapamycin (mTOR)-dependent inhibition of p53. Cancer Research, 66(9), 4715–4724. https://doi.org/10.1158/0008-5472.CAN-05-3830
Rizzo, P., Osipo, C., Foreman, K., Golde, T., Osborne, B., & Miele, L. (2008). Rational targeting of Notch signaling in cancer. Oncogene, 27(38), 5124–5131. https://doi.org/10.1038/onc.2008.226
Rose, S. L., Kunnimalaiyaan, M., Drenzek, J., & Seiler, N. (2010). Notch 1 signaling is active in ovarian cancer. Gynecologic Oncology, 117(1), 130–133. https://doi.org/10.1016/j.ygyno.2009.12.003
Schiller, J. H., Harrington, D., Belani, C. P., Langer, C., Sandler, A., Krook, J., Zhu, J., & Johnson, D. H. (2002). Comparison of Four Chemotherapy Regimens for Advanced Non–Small-Cell Lung Cancer. New England Journal of Medicine, 346(2), 92–98. https://doi.org/10.1056/NEJMoa011954
Sharma, A., Gadkari, R. A., Ramakanth, S. V., Padmanabhan, K., Madhumathi, D. S., Devi, L., Appaji, L., Aster, J. C., Rangarajan, A., & Dighe, R. R. (2015). A novel Monoclonal Antibody against Notch1 Targets Leukemia-associated Mutant Notch1 and Depletes Therapy Resistant Cancer Stem Cells in Solid Tumors. Scientific Reports, 5(1), 11012. https://doi.org/10.1038/srep11012
Stylianou, S., Clarke, R. B., & Brennan, K. (2006). Aberrant Activation of Notch Signaling in Human Breast Cancer. Cancer Research, 66(3), 1517–1525. https://doi.org/10.1158/0008-5472.CAN-05-3054
Sun, X., Wen, H., Chen, C., & Liao, Q. (2009). [Expression of Notch intracellular domain in cervical cancer and effect of DAPT on cervical cancer cell]. Zhonghua Fu Chan Ke Za Zhi, 44(5), 369–373.
The Notch-Hes pathway in mammalian neural development | Cell Research. (n.d.). Retrieved May 17, 2020, from https://www-nature-com.dartmouth.idm.oclc.org/articles/7290016
Trang, V. H., Zhang, X., Yumul, R. C., Zeng, W., Stone, I. J., Wo, S. W., Dominguez, M. M., Cochran, J. H., Simmons, J. K., Ryan, M. C., Lyon, R. P., Senter, P. D., & Levengood, M. R. (2019). A coiled-coil masking domain for selective activation of therapeutic antibodies. Nature Biotechnology, 37(7), 761–765. https://doi.org/10.1038/s41587-019-0135-x
Wang, Z., Li, Y., Kong, D., Banerjee, S., Ahmad, A., Azmi, A. S., Ali, S., Abbruzzese, J. L., Gallick, G. E., & Sarkar, F. H. (2009). Acquisition of Epithelial-Mesenchymal Transition Phenotype of Gemcitabine-Resistant Pancreatic Cancer Cells Is Linked with Activation of the Notch Signaling Pathway. Cancer Research, 69(6), 2400–2407. https://doi.org/10.1158/0008-5472.CAN-08-4312
Wu, Y., Cain-Hom, C., Choy, L., Hagenbeek, T. J., de Leon, G. P., Chen, Y., Finkle, D., Venook, R., Wu, X., Ridgway, J., Schahin-Reed, D., Dow, G. J., Shelton, A., Stawicki, S., Watts, R. J., Zhang, J., Choy, R., Howard, P., Kadyk, L., … Siebel, C. W. (2010). Therapeutic antibody targeting of individual Notch receptors. Nature, 464(7291), 1052–1057. https://doi.org/10.1038/nature08878
Yuan, X., Wu, H., Han, N., Xu, H., Chu, Q., Yu, S., Chen, Y., & Wu, K. (2014). Notch signaling and EMT in non-small cell lung cancer: Biological significance and therapeutic application. Journal of Hematology & Oncology, 7, 87. https://doi.org/10.1186/s13045-014-0087-z
Yuan, X., Wu, H., Xu, H., Han, N., Chu, Q., Yu, S., Chen, Y., & Wu, K. (2015). Meta-analysis reveals the correlation of Notch signaling with non-small cell lung cancer progression and prognosis. Scientific Reports, 5, 10338. https://doi.org/10.1038/srep10338
Zhao, Y., Qiao, X., Tan, T. K., Zhao, H., Zhang, Y., Liu, L., Zhang, J., Wang, L., Cao, Q., Wang, Y., Wang, Y., Wang, Y. M., Lee, V. W. S., Alexander, S. I., Harris, D. C. H., & Zheng, G. (2017). Matrix metalloproteinase 9-dependent Notch signaling contributes to kidney fibrosis through peritubular
Psychological Defense Mechanism Subtypes Indicate Levels of Social Support Quality and Quantity BY RACHEL TIERSKY '23
Cover: Our subconscious responses to stress may have wide reaching implications for how we view our social relationships Source: Unsplash; Creator: Aarón Blanco Tejedor
“...maladaptive defenses (immature and neurotic) were found to negatively correlate with social support quality and quantity, though not to the same degree.”
Abstract
The present study explores the relationship between psychological defense style and the quality and quantity of social support, using statistical data gathered from patients seeking assessment services in the Northeast United States. Participants completed the Defense Style Questionnaire (Andrews et al., 1993) and the Social Support Questionnaire (Sarason et al., 1983) as part of a larger panel of tests, and the data were then compiled and examined for correlations. Using data from these studies, it was concluded that individuals with a higher inclination toward immature defenses such as projection were more likely to experience both lower quantity and lower quality of social support. Individuals with more mature defenses such as humor were found to have greater quantity of support, though not greater quality. Individuals with neurotic defense styles such as idealization were found to have lower quality of support, but not lower quantity. Thus,
maladaptive defenses (immature and neurotic) were found to negatively correlate with social support quality and quantity, though not to the same degree. Further investigation into the relationship between psychopathology, defense mechanisms, and the development of social relationships may suggest additional routes for intervention.
Introduction
Defense mechanisms arise as natural responses to discomfort and stress. They were first defined by Anna Freud as subconscious protective measures in "The Ego and the Mechanisms of Defense," and were then elaborated upon by psychoanalysts trying to reconcile the relationship between the personally identifiable ego and the infantile id. Defense mechanisms are present at a very early age and continue to develop over time as a result of natural predisposition, early learning, and ego maturation. Defense style can be
categorized into one of 20 accepted defenses using the Defense Style Questionnaire (DSQ), and then further characterized as mature (more adaptable), neurotic, or immature (less adaptable). Identified defenses include the mature defenses suppression and humor; the neurotic defenses idealization and reaction formation; and the immature defenses projection (incorrectly placing blame), passive aggression, isolation, denial, and displacement (Andrews et al., 1993). Adaptability of defense style has had implications for the identification and treatment of psychopathology – mental or behavioral disorders outlined in the DSM-5 (Bond, 2004; Zanarini et al., 2011). Individuals with immature defenses demonstrate greater levels of psychopathology than those with more adaptive defenses (Zanarini et al., 2013). In addition, individuals who experience adverse childhood events are more likely to report immature defense styles (Nickel & Egle, 2006). Thus, there appears to be a link between adversity and maladaptive defense style. Furthermore, psychopathology has been found to be related to social support. Populations with greater rates of psychopathology tend to demonstrate decreased quality and quantity of social support (Wang et al., 2014; Grav et
al., 2012). Social support is believed to directly correlate with quality of life (Helgeson, 2003) and physical health (Reblin & Uchino, 2008), making it ever more important to study and intervene upon. The aim of the present investigation is to uncover whether adaptability of defense mechanism is correlated with social support quality and quantity. If a relationship between defense mechanism and social support is found, it may provide evidence for a new route of therapeutic intervention for improving physical health and quality of life – one that focuses on support networks rather than traditional therapies. In addition, the results of this study may provide insight into the mechanisms by which psychopathology impacts social support as a whole, while also noting that there may be underlying genetic or environmental influences in both spheres. Given the relationship between adversity and maladaptive defense style, and noting that individuals with more severe psychopathology show decreased social support, it is hypothesized that the adaptability of defense mechanisms will correlate positively with social support quality and quantity: individuals with immature defenses would demonstrate decreased quality and quantity of social support, while individuals with more mature defenses would demonstrate comparatively increased quantity and quality of social support.
Figure 1. Isolation is a common immature defense wherein individuals separate, or isolate, a traumatic event from the rest of their thoughts and feelings. In doing so, the brain is able to compartmentalize and avoid responding to a specific event Source: Unsplash; Creator: Tim Gouw
“it is hypothesized that the adaptability of defense mechanisms would correlate positively with social support quality and quantity.”
Data compiled over 8 years by resident psychotherapists at a clinic in Hackensack, New Jersey was used to investigate the link between defense style and social support.
Methods
Participants and Procedure
229 individuals aged 18-66 years (mean =
Table 1: Coefficients of Adaptability of Defense Style and Support. ** Correlation is significant at p ≤ 0.01; * Correlation is significant at p ≤ 0.05.
Figure 2. Development of a healthy social life is essential for physical, emotional, and mental wellbeing Source: Useproof; Creator: Austin Distel
30.93, SD = 11.23), educated 1-22 years (mean = 14.42, SD = 2.217), and both male (36%) and female (64%), seeking assessment services in the northeast US between 2000 and 2008, participated in the study.

Measures
Participants seeking neuropsychological testing at the Fairleigh Dickinson University Center for Psychological Services in Hackensack, New Jersey completed the Defense Style Questionnaire (Andrews et al., 1993) and the Social Support Questionnaire (Sarason et al., 1983) as part of a larger testing battery that also included other psychological and neuropsychological measures. Pearson correlation coefficients were calculated to describe the association between defense styles and social support.
“The Social Support Questionnaire (SSQ) is used by treatment centers as a means of establishing a patient's perceived levels of social support quality and quantity.”
Figure 3. By recognizing points of intervention early in life, physicians may be able to encourage the development of healthy defense mechanisms and in turn help create positive social environments Source: Unsplash; Creator: Vonecia Carswell
Questionnaires
The Defense Style Questionnaire (DSQ) is used by treatment centers as a means of characterizing a patient’s ability to respond to stressful stimuli. The assessment evaluates patient responses by rating the intensity of each of 20 different defense mechanisms on a 9-point scale, all of which are organized into the larger framework of mature, neurotic, or immature. Mature defenses include humor, suppression, sublimation, and anticipation; neurotic defenses include reaction formation, idealization, pseudo-altruism, and undoing; immature defenses include rationalization, autistic fantasy, displacement, isolation, dissociation, devaluation, splitting, denial, passive-aggression, somatization, acting out, and projection. The Social Support Questionnaire (SSQ) is used by treatment centers as a means of establishing a patient’s perceived levels of social support quality and quantity. The 27-item questionnaire extrapolates an overall support score (SSQN) and an overall satisfaction score, represented by a number from 1 (very unsatisfied) to 6 (very
satisfied) (Sarason et al., 1983).
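As a rough illustration of how instruments like these yield the factor and support scores analyzed below, here is a minimal Python sketch. The item responses, groupings, and variable names are invented for illustration and are deliberately simplified relative to the published scoring keys of the DSQ and SSQ.

# Simplified, hypothetical scoring sketch -- not the published DSQ/SSQ keys.
from statistics import mean

# DSQ: each defense is rated on a 9-point scale; defenses group into factors.
dsq_ratings = {
    "humor": 7, "suppression": 6, "sublimation": 5, "anticipation": 6,
    "reaction_formation": 4, "idealization": 3, "pseudo_altruism": 4, "undoing": 2,
    "projection": 2, "denial": 3, "displacement": 2, "isolation": 4,
}
factors = {
    "mature": ["humor", "suppression", "sublimation", "anticipation"],
    "neurotic": ["reaction_formation", "idealization", "pseudo_altruism", "undoing"],
    "immature": ["projection", "denial", "displacement", "isolation"],  # subset shown
}
factor_scores = {f: mean(dsq_ratings[d] for d in ds) for f, ds in factors.items()}

# SSQ: each of 27 items yields a count of supporters (quantity) and a
# satisfaction rating from 1 (very unsatisfied) to 6 (very satisfied).
ssq_items = [(3, 5), (1, 4), (4, 6)]          # (n_supporters, satisfaction) per item
ssq_quantity = mean(n for n, _ in ssq_items)  # overall support score (SSQN)
ssq_quality = mean(s for _, s in ssq_items)   # overall satisfaction score

print(factor_scores, ssq_quantity, ssq_quality)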
Data
The Pearson correlation coefficient is a statistical measure between +1 and -1 which indicates the extent to which two variables are linearly related, with +1 being a perfect positive linear correlation (all points fit on a line with positive slope), -1 being a perfect negative correlation (all points fit on a line with negative slope), and 0 representing two variables that are not correlated. Pearson correlation coefficients were computed to examine the relationship between defense style and measures of social support. Significance at a 0.05 level (p ≤ 0.05) means that the data are 95% likely to not be a result of random chance; significance at a 0.01 level (p ≤ 0.01) means that the data are 99% likely to not be a result of random chance. Immature defense style was found to be related to social support quality (r = -0.327, p < 0.01) and quantity (r = -0.183, p < 0.01). Mature defense style was found not to be related to quality (r = 0.071, p > 0.05) but related to quantity (r = 0.145, p < 0.05) of social support. Neurotic defense style was found to be related to quality (r = -0.152, p < 0.05) but not quantity (r = 0.014, p > 0.05) of social support. The calculated values of Pearson’s correlation coefficient for mature, neurotic, and immature defense mechanisms against quantity and quality of social support are shown in Table 1.
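For readers unfamiliar with the computation, a minimal sketch of such an analysis in Python follows; the arrays here are randomly generated stand-ins constructed to correlate negatively, not the study's records.

# Minimal sketch of a Pearson correlation analysis on invented data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
immature = rng.normal(5.0, 1.5, size=229)                     # hypothetical DSQ factor scores
quality = 4.5 - 0.3 * immature + rng.normal(0, 1, size=229)   # built to correlate negatively

r, p = pearsonr(immature, quality)  # r: strength of linear association; p: chance probability
print(f"r = {r:.3f}, p = {p:.4f}")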
Discussion
It was hypothesized that individuals with mature defense styles would have high quantity and quality social support and individuals with immature or neurotic defense styles would have low quantity and quality of social support. The results of this investigation partially support this hypothesis. Individuals exhibiting
more immature defenses were found to have a lower quality of social support, as well as less social support altogether. Similarly, those with more mature defenses showed higher levels of social support but not necessarily greater quality. Those with more neurotic defenses exhibited lower-quality social support, but not necessarily a lower quantity. These results provide evidence for the relationship between coping mechanisms, especially as an indicator of psychopathology, and perceived social support, further suggesting that maladaptive response styles have a negative impact on relationships. Because Freudian theory holds defense style to be fixed by the age at which respondents completed the survey, no conclusions can be drawn at this point regarding the impact of social support on defense mechanisms. As such correlations become more widely understood, it is important to note the impact of social support on health and quality of life. The relationship between perceived social support and self-esteem is believed to be bidirectional, with low levels of each indicating a pull toward depressive tendencies (Lee et al., 2014). Additionally, low levels of social support are seen to correlate negatively with cardiac health (Barth et al., 2010). As it stands, high levels of perceived social support quality and quantity are essential for a psychologically and physically healthy lifestyle. Given the relationship between maladaptive defense style and psychopathology, this becomes an important interventional route for healthcare workers looking to diagnose and treat mental illness. Further, this study may help identify points in development wherein beliefs about social relationships are formed. If the tendency to favor a certain defense mechanism forms early in life, and defense mechanisms are closely linked to perceptions of support, the studied correlation may provide insight into the early childhood social development of those with varying degrees of adaptive defenses. It may be
beneficial to investigate points of therapeutic intervention for defense mechanism development, noting that support for children in their home, academic, or social life may have wide-reaching implications for their wellbeing. Additionally, it may be useful to further investigate the relationship between defense style and the presentation of certain disorders. Associations between borderline personality disorder and specific maladaptive defense styles (acting out, undoing) have already been found (Zanarini et al., 2011). There exists the potential that presentation of specific defense mechanisms may aid in the diagnosis and recognition of psychopathology. In addition, it may be useful to note whether childhood intervention positively impacts the development of social relationships, and whether there is a window of time in which perceptions of social support form and are solidified. Overall, however, the limitations of drawing correlational relationships such as the above must be examined. Correlational research can aid in establishing a relationship between two variables, but fails to determine whether change in one variable instigates a change in the other. Thus, this study would not be useful in determining whether patients who improve their defense mechanisms through treatment experience increased social support, nor can it suggest the opposite. Additionally, defense mechanism may be one piece of a much broader set of criteria that influence perceptions of social support; while linked, it may not be the largest contributing factor (consider the impact of genes or environment, for example). Further study may help draw a causal relationship between the two by studying treatment outcomes and perceptions of social support in patients of different ages, locations, levels of psychopathology, and adaptivity of defense style.
Figure 4. It is important to recognize the wide variety of factors that can influence quality and quantity of social support, while also taking into account the limitations of correlational research Creator: Liam Locke
“The relationship between perceived social support and self-esteem is believed to be bidirectional, with low levels of each indicating a pull towards depressive tendencies.”
References
Andrews, G., Singh, M., & Bond, M. (1993). The Defense Style Questionnaire. The Journal of Nervous and Mental Disease, 181(4), 246–256. doi:10.1097/00005053-199304000-00006
Barth, J., Schneider, S., & von Känel, R. (2010). Lack of social support in the etiology and the prognosis of coronary heart disease: A systematic review and meta-analysis. Psychosomatic Medicine, 72(3), 229–238. doi:10.1097/PSY.0b013e3181d01611
Bond, M. (2004). Empirical studies of defense style: Relationships with psychopathology and change. Harvard Review of Psychiatry.
doi:10.1080/10673220490886167
Grav, S., Hellzèn, O., Romild, U., & Stordal, E. (2011). Association between social support and affective disorders in the general population: The HUNT study, a cross-sectional survey. Journal of Clinical Nursing, 21(1-2), 111–120. doi:10.1111/j.1365-2702.2011.03868.x
Helgeson, V. S. (2003). Social support and quality of life. Quality of Life Research, 12.
Lee, C., Dickson, D. A., Conley, C. S., & Holmbeck, G. N. (2014). A closer look at self-esteem, perceived social support, and coping strategy: A prospective study of depressive symptomatology across the transition to college. Journal of Social and Clinical Psychology, 33(6), 560–585. doi:10.1521/jscp.2014.33.6.560
Nickel, R., & Egle, U. T. (2006). Psychological defense styles, childhood adversities and psychopathology in adulthood. Child Abuse & Neglect, 30, 157–70. doi:10.1016/j.chiabu.2005.08.016
Reblin, M., & Uchino, B. N. (2008). Social and emotional support and its implication for health. Current Opinion in Psychiatry, 21(2), 201–205. https://doi.org/10.1097/YCO.0b013e3282f3ad89
Sarason, I. G., Levine, H. M., Basham, R. B., et al. (1983). Assessing social support: The Social Support Questionnaire. Journal of Personality and Social Psychology, 44, 127–139.
Wang, X., Cai, L., Peng, J., & Qian, J. (2014). Social support moderates stress effects on affective disorders. International Journal of Mental Health Systems. doi:10.1186/1752-4458-8-41
Zanarini, M. C., Weingeroff, J. L., & Frankenburg, F. R. (2011). Defense Mechanisms Associated With Borderline Personality Disorder. Journal of Personality Disorders, 23(2), 113–121. doi:10.1521/pedi.2009.23.2.113
Zanarini, M. C., Frankenburg, F. R., Wedig, M. M., & Fitzmaurice, G. M. (2013). Cognitive Experiences Reported by Patients With Borderline Personality Disorder and Axis II Comparison Subjects: A 16-Year Prospective Follow-Up Study. American Journal of Psychiatry, 170(6), 671–679. doi:10.1176/appi.ajp.2013.13010055
Straight for the Heart – Providing Resilience Before an Attack
BY SAM NEFF '21
Cover: The heart is a complicated and necessary organ. Its atria collect blood from the systemic (right atrium) and pulmonary (left atrium) circulation, while its ventricles receive blood from the atria and pump blood out to the lungs (right ventricle) or the rest of the body (left ventricle). The right atrium and right ventricle are shown in this image. Source: Wikimedia Commons
Introduction
In the United States, heart disease is far and away the leading cause of death – a CDC report on deaths in 2017 showed that heart disease killed 650,000 people (“Leading Causes of Death,” 2012). It has been labeled a ‘disease of affluence’ due to its association with problems that afflict wealthy countries most severely: poor nutrition owing to a diet high in fatty foods (and the associated risk of being overweight or obese), tobacco use and alcohol consumption, or a sedentary lifestyle – days spent occupying a chair at the office and nights spent hunkered down on the couch at home. Of course, not all afflicted individuals fall into this pattern of living, and although more prevalent, these behaviors are not limited to wealthy countries. The truth is that the etiology of heart disease is more complicated – lifestyle factors are a major causative factor, but the disease is also shaped significantly by genetic predisposition. Furthermore, heart disease seems to be a
natural feature of the human condition – not just for those who possess certain gene variants or engage in harmful behaviors. With age, the heart weakens, and the prevalence of heart disease increases dramatically: among people aged 60-79 years old in the United States, the prevalence of cardiovascular disease is 73% (roughly the same for both men and women). For people 80 years or older, it is 79% for men and 86% for women (Mozaffarian et al., 2015). It is clear that improving heart health is a national (and international) imperative. First and foremost, heart disease treatment comes at a significant cost for each patient. Also, Medicare accounts for a significant portion of the US national budget, and given that the elderly are an established at-risk population for heart disease, their treatment imposes significant costs on the working-age population. Even military preparedness is put at risk – a country with a higher prevalence of individuals who suffer from heart disease
Figure 1: Atherosclerosis is marked by the narrowing of the arteries due to accumulation of cholesterol plaques. This is particularly dangerous in the coronary arteries, which feed into the heart. Rupture of the plaque can cause clotting and blockage of these arteries, leading to a heart attack. Source: Wikimedia Commons
and associated risk factors like obesity is a population less fit for combat if the need for a major military engagement arises (“Obesity is a National Security Issue,” 2012). This paper includes a discussion of the biology of heart disease as well as the treatments that already exist, and that may yet be developed, to fight it. It will address the following questions: What are the characteristic physiological changes associated with the disease? What is a heart attack, and how is it linked to high blood pressure and congestive heart failure? What can be done to mitigate the risk of these ailments? Which medications and technologies are currently in use, what options are already on the table, and what potential solutions are on the horizon? A discussion of the economic, political, and ethical challenges inherent in the treatment of heart disease will have to follow, but the current moment requires a look at the biological realities of heart disease, and the medical solutions that are possible.
Physiology of Heart Disease
The heart’s primary function as an organ is to pump blood throughout the body – sending oxygenated blood to all the body’s tissues and receiving deoxygenated blood in return. The heart is well synced with the lungs – from which
it receives oxygenated blood and to which it sends deoxygenated blood for reoxygenation. Generally, veins are the vessels that carry deoxygenated blood to the heart while arteries are the vessels that carry oxygenated blood away (although the pulmonary veins and pulmonary arteries are one exception, the former carrying oxygenated blood and the latter carrying deoxygenated blood) (Bryson, 2019). To supply all the body’s tissues – from the brain, which is in constant need of oxygen to fuel glucose metabolism, to the skin and skeletal muscles – the heart must pump ceaselessly with high force. This is a tiring task, and for elderly individuals, the heart simply does not have the vigor it possessed in its youth. Decline in heart function is a normal consequence of aging – as are wrinkling of the skin and brittling of the bones. In particular, the left ventricle of the heart, which is responsible for pumping blood from the heart to all non-pulmonary tissues, thickens and scars. The blood vessels leading to and from the heart also stiffen. Furthermore, the molecular processes necessary for repairing injury to cardiac muscle tissue become defective over time. These changes do not mean that the heart is destined to fail, but rather that it is more likely to stop working in conjunction with other risk factors
“To supply all the body's tissues – from the brain, which is in constant need of oxygen to fuel glucose metabolism, to the skin and skeletal muscles – the heart must pump ceaselessly with high force."
Figure 2: A few bottles of warfarin, the famous blood thinner that was first approved by the FDA in 1954 and marketed under the brand names Coumadin and Jantoven. Source: Wikimedia Commons
“Even in the absence of a heart attack, the stiffening of blood vessels and weakening of the heart muscles can lead to arrhythmia and heart failure.”
(Strait and Lakatta, 2012). Heart disease is a complicated illness that evolves slowly over time, and its symptoms are much broader than the sudden heart attacks that cut lives tragically short and require life-long monitoring and medical treatment for survivors. A defining marker of the illness is atherosclerosis: the buildup of cholesterol plaques in the walls of arteries and consequent narrowing of the blood vessels (Ross, 1999). The narrowing of the arteries itself causes a host of problems, and high blood pressure (the heart must pump harder to get the same amount of blood through narrow arteries) is the most ubiquitous. But atherosclerosis also causes heart attacks. Rupturing of plaques in the coronary arteries (through which blood pumped out of the heart feeds directly back to the heart tissue) results in blood clots. The heart operates by a series of coordinated contractions, which requires that all the muscle tissue of the heart works in concert. If blood flow to a single region of the heart is blocked by a clot in a coronary artery, the whole heart is prone to catastrophic arrest (“Heart Attack,” n.d.) [Figure 1]. Even in the absence of a heart attack, the stiffening of blood vessels and weakening of the heart muscles can lead to arrhythmia (irregular heartbeat) and heart failure: a condition where the heart is unable to sufficiently supply all the body’s tissues with oxygen (Strait and Lakatta, 2012). Perhaps not surprisingly, atherosclerosis and heart failure also lead to stroke – the rupture or blockage of a blood vessel in the brain. People with coronary heart disease are at twice the risk (“How Cardiovascular Stroke Risks Relate,” n.d.).
Existing Treatment Options
Despite this grim danger of heart disease, there is still room for optimism. A broad slate of medications exists to counteract heart disease and even more are in various phases of clinical development. Some drugs are effective for certain patients while different drugs are effective for others – each patient and their doctor must choose a treatment regimen that is effective, affordable, and that minimizes complications. Despite their wide variety, these medications can still be divided into different classes of heart disease drugs based on their respective physiological effects. The existing classes of medication treat heart disease at various levels. Some work to prevent blood clotting – an event involving the aggregation of platelets and formation of a fibrin mesh around the break site in a damaged
blood vessel. This process is crucial in normal circumstances, such as to stop bleeding from a laceration of the skin. But in narrow, plaque-filled arteries, clotting can cause a blockage; and in the coronary arteries, this can mean a deadly heart attack (Bryson, 2019). Both anticoagulants (blood thinners, like warfarin) and antiplatelet agents work by preventing platelets and other clotting factors (like fibrin) from associating (“Cardiac Medications,” n.d.) [Figure 2]. Other medications, like ACE inhibitors, work to reduce blood pressure. They suppress the activity of angiotensin-converting enzyme (ACE), which normally produces a molecule called angiotensin II that raises blood pressure. Angiotensin II receptor blockers have the same effect. Diuretics lower blood pressure by causing the body to expel more fluid through urination, and vasodilators do the same by increasing the diameter of blood vessels (“Cardiac Medications,” n.d.). Digitalis is useful in advanced stages of heart disease because it increases the strength with which the heart contracts. For patients suffering heart failure, digitalis can help the heart pump out a sufficient blood supply to the body's distant peripheral tissues (Hauptman and Kelly, 1999). If all of these drugs fail – and the heart is in such a dire state that it can no longer work on its own – transplant may be the only option. Transplanted hearts are typically taken from a recently-deceased human donor and surgically inserted into the recipient. The recipient must wait for a suitable organ donor (with the proper blood type, heart size, etc.) to pass away and be ready at a moment's notice to undergo the procedure (Delmo Walter and Hetzer, 2013). After transplantation, patients must take immunosuppressant drugs for the rest of their lives. Luckily, scientists have recently
started to see promise in the transplantation of artificial hearts. Not only would there be no shortage of hearts, but they could also theoretically last the rest of a patient's life, with only periodic tune-ups. The possibilities of mechanical heart transplantation will be discussed in a subsequent section.
Future Interventions
The aforementioned medications keep heart disease mostly in check, but they are not enough to dethrone heart disease as the leading cause of death in the US and other Western countries. These medications and procedures all treat the symptoms of heart disease, but none prevent its onset, and some have serious side effects. Warfarin, for example, can cause uncontrolled bleeding from the gums, from the nose, in the urine, and even around the brain (“A Patient's Guide to Taking Warfarin,” n.d.). The goal of this section is to outline a plan for preventive cardiovascular care – one that involves monitoring heart health stringently and in fine detail. Such monitoring will provide an understanding of the biomarkers that define heart disease, an essential first step in determining who needs medical intervention so that it can be provided at the earliest possible point. Assuming an extensive monitoring regime is in place, this article then considers two possible courses of action: one that involves mechanical enhancement of the heart via invasive surgery, and another, molecular solution that would require no invasive procedure.
Step 1: Monitoring
Data is paramount to gaining a better understanding of the physiological markers of heart disease. Given the current trend
of increasing technology in public health, technological applications, if implemented more extensively (among both healthy and diseased individuals), could provide this data. Firstly, wearable devices like the Apple Watch already measure surface-level information like heart rate and even include an EKG measurement app. But cardiovascular monitoring devices can be more sophisticated. Clairways, a biotechnology company headed by two recent graduates of the Thayer School of Engineering and the Tuck School of Business, has developed a wearable acoustic sensor that listens to the sounds of the lungs. The sensor is paired with machine learning software that performs pattern recognition, determining which breathing patterns are associated with adverse events like coughing and wheezing. The data is collected in a database that records the user's lung health statistics over time, and it may be used to inform personal health decisions or by scientists conducting clinical trials (“Smart Lung Monitoring,” n.d.). For example, in a trial for a new COPD medication, a patient's lung health could be monitored constantly, not just in the clinic. The same approach is valuable for monitoring heart health in general. Mobile EKG devices already exist (“EKG,” n.d.). If the data they produce is tracked over time and analyzed, scientists could gain insight into particular EKG patterns that are predictive of heart disease in its early stages [Figure 3]. For patients already suffering from heart disease, it would be much easier to detect heart dysfunction and assess the efficacy of prescribed medication. Monitoring technologies like these become most useful when they proliferate widely throughout the population, allowing researchers to track individuals as they progress from healthy heart function to heart dysfunction.
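As an illustration of the kind of analysis this would enable, the sketch below pulls two simple features – heart rate and beat-to-beat variability – out of an EKG trace by detecting its R-peaks. The trace here is synthetic, and a real pipeline would need filtering and artifact rejection before any clinical inference:

```python
# Sketch: deriving heart rate and beat-to-beat variability from an
# EKG trace. The trace is synthetic; real wearable data would first
# need baseline correction and noise filtering.
import numpy as np
from scipy.signal import find_peaks

FS = 250  # sampling rate in Hz (an assumed, typical wearable value)

# Build a toy 10-second trace: slow baseline wander plus a sharp
# "R-peak" roughly every 0.8 s (75 beats per minute).
t = np.arange(0, 10, 1 / FS)
ecg = 0.1 * np.sin(2 * np.pi * 0.3 * t)          # baseline drift
for beat in np.arange(0.4, 10, 0.8):              # one spike per beat
    ecg += np.exp(-((t - beat) ** 2) / (2 * 0.01 ** 2))

# R-peaks are tall and at least 0.4 s apart (a refractory period).
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * FS))

rr_intervals = np.diff(peaks) / FS                # seconds between beats
heart_rate = 60.0 / rr_intervals.mean()           # beats per minute
hrv_sdnn = rr_intervals.std() * 1000              # variability in ms

print(f"{len(peaks)} beats, {heart_rate:.0f} bpm, SDNN ~ {hrv_sdnn:.1f} ms")
```

Tracked longitudinally, features like these are exactly the kind of signal that could be correlated with the early progression of heart disease.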
“Mobile EKG devices already exist. If the data they produce is tracked over time and analyzed, scientists could gain insight into particular EKG patterns that are predictive of heart disease in its early stages.”
Figure 3: Image of an EKG at top – the sharp peak indicates systole, which represents the contraction of the heart’s two ventricles as they pump blood out. The subsequent smaller bump represents diastole, the relaxation of the heart and refilling with blood. The bottom graph shows the central venous pressure (CVP), which is the blood pressure in the vena cava—the vein that deposits deoxygenated blood directly into the heart. Source: Wikimedia Commons
Figure 4: Heart surgery procedure performed by Polish heart surgeon Jan Witold Moll in 1969, the first procedure of its kind in Poland, and just thirteen months after the famous procedure performed by South African doctor Christiaan Barnard in 1967 (the first in which the patient regained consciousness). Heart transplantation is, understandably, a relatively dangerous and labor-intensive procedure. If successful, a life-long regimen of immunosuppressive drugs is required (Lysiak, 2000). Source: Wikimedia Commons
“Thus far, mechanical hearts have not proven as long-lasting as their organic counterparts. They are currently in use only as a bridge for patients that can't wait for a donor...”
With longitudinal data from a large population of patients, it would then be possible to determine how subtle changes in the shape of an EKG signal correlate with the early progression of heart disease.
Step 2: Strengthening the Heart
2.1: The Mechanical Solution
With a monitoring system to detect heart dysfunction in place, what else can be done to slow (and stop) the progression of disease? Of course, existing therapies could be applied sooner, and patients could be advised to make lifestyle changes that may halt the decline of heart function. Another alternative would be to replace older cells of the heart with new ones derived from a patient's own stem cells, an approach that is already being explored (Delmo Walter and Hetzer, 2013). But going even further, one might imagine a future where the human heart is entirely replaced by a synthetic one. This solution seems, from our current vantage point, suitable only as a last-ditch effort – to restore heart function when all else has failed. But when the technology exists to make mechanical hearts viable in the long term (with necessary tune-ups to fix any damage that occurs), it becomes the ultimate preventive measure. Mechanical hearts wouldn't have heart attacks – they pump blood throughout the body but don't require a blood supply of their own. Engineering a synthetic heart requires taking into account certain physical principles. The organ may be thought of as two coordinated pumps – one that pushes out blood to the lungs for reoxygenation and another that supplies oxygenated blood to all the body's tissues – and these forces are generated by the contractile action of cardiac muscles.
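To put rough numbers on what such a device must deliver: cardiac output is simply stroke volume multiplied by heart rate. A minimal sketch using typical textbook resting values (illustrative figures, not a device specification):

```python
# Sketch: the baseline pumping requirement a synthetic heart must
# meet. Cardiac output = stroke volume x heart rate; values are
# typical resting figures from physiology textbooks.

STROKE_VOLUME_ML = 70   # mL ejected per beat (typical resting adult)
HEART_RATE_BPM = 72     # beats per minute at rest

cardiac_output_l_per_min = STROKE_VOLUME_ML * HEART_RATE_BPM / 1000
beats_per_year = HEART_RATE_BPM * 60 * 24 * 365

print(f"resting output: {cardiac_output_l_per_min:.1f} L/min")
print(f"beats per year a pump must survive: {beats_per_year:,}")

# During exercise, demand can rise several-fold; a control system
# (e.g., one listening to neural or chemical signals, as discussed
# below) must scale the pump output accordingly.
for multiplier in (1, 2, 4):
    print(f"demand x{multiplier}: "
          f"{cardiac_output_l_per_min * multiplier:.1f} L/min")
```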
The physics of fluid flow must also be acknowledged – the shape of the heart is constructed such that blood is funneled smoothly between chambers (Uehara and Sakane, 2003). The heart's valves also ensure that when blood is pumped from the atria to the ventricles, there is no backflow (Bryson, 2019). The amount of pressure applied when the heart contracts is carefully calibrated to push just the right amount of blood out to the body, and it is precisely coordinated by a network of neurons that is responsive to conditions like breathing rate or the blood concentration of carbon dioxide (Bryson, 2019). Clearly, this is a daunting task for the biomedical engineer. Thus far, mechanical hearts have not proven as long-lasting as their organic counterparts. They are currently in use only as a bridge for patients that can't wait for a donor, or for patients with complicating conditions who could not withstand the immunosuppressant regime that accompanies regular heart transplantation (“7 Things You Should Know About Artificial Hearts,” n.d.) [Figure 4]. But the combination of sophisticated imaging technologies with 3D printing makes them a future possibility. A heart can be scanned and reproduced with a 3D printer to exactly the same shape – such that fluid flow between the heart's chambers is optimized and the heart valves are properly formed. A team of scientists led by Dr. Nicholas Cohrs at ETH Zurich is studying this, imaging hearts and replicating them precisely in silicone (Cohrs et al., 2017).
The next challenge will be to utilize materials that mimic the physical properties of the heart's tissues most precisely. Another potential approach is growing a new heart from the patient's own stem cells and then transplanting this new heart into the body. The main advantage of this is that it would certainly mitigate the issue of immune rejection. But the precise cocktail of growth factors necessary to stimulate development of a stem-cell-derived heart remains a mystery. A further advantage of the hypothetical mechanical heart is that it could be controlled more precisely. A typical transplanted heart needs to link up to the neurons that interface with the heart, but an artificial heart could be programmed to 'listen' to the neural signals supplied by the brain. This is an extension of technology that already exists in the BrainGate device, which allows its users to sync their patterns of brain activity with the movement of a cursor on a computer monitor or the action of a robotic arm (“Neurotherapeutics,” n.d.). Existing neural connections would not need to be reformed after transplantation – the artificial heart could act on brain signals remotely.
2.2: The Molecular Solution
The former solution is admittedly a radical one. Despite its benefits, it would be costly to implement and presents the same dangers as any invasive transplantation procedure. Another possibility is minimally-invasive molecular therapy. Scientists have recently begun to explore how and why other species are not prone to heart attacks the way humans are (LaFee, 2019). Researchers at the University of California San Diego are investigating a molecule called Neu5Gc that is produced in many other organisms – fish, birds, and even other primates – but not in humans. It appears that a mutation event occurred early in the course of human evolution, in which the gene encoding Neu5Ac hydroxylase, the enzyme that produces Neu5Gc, was inactivated. The molecule has been associated with the prevention of atherosclerosis. Scientists silenced the gene encoding Neu5Ac hydroxylase in mice (called the CMAH gene) and found these CMAH-knockout mice to have significantly more plaque built up in their arteries (Kawanishi et al., 2019). Despite this finding, the Neu5Gc molecule
cannot be administered to humans as a solution. Humans actually do consume a fair amount of Neu5Gc, as it is found in red meat. The problem is that dietary Neu5Gc consumption is associated with inflammation and the progression of cancer. Furthermore, its effect is actually the opposite of that seen in other animals – Neu5Gc consumption enhances atherosclerosis in humans. The scientists at UC San Diego explain these ill effects by stating that Neu5Gc acts as a “xeno-autoantigen” (Kawanishi et al., 2019). Essentially, a person's immune system recognizes it as a foreign molecule and mobilizes immune cells to attack it, causing chronic inflammation. How could the negative effects of Neu5Gc consumption be circumvented? The first step would be to develop a detailed understanding of the molecule's mechanistic role in other animals – how exactly does it prevent atherosclerosis? What cellular proteins does it interact with directly? Given that the structure of Neu5Gc is known, scientists could design pharmaceutical drugs of similar structure that mimic the Neu5Gc–receptor interaction, once they knew how the molecule interacted with cellular receptors. The drug validation process would involve screening numerous drug candidates in mice, with each candidate tested for its ability to decrease the deposition of plaque in the arteries. Other chemical properties, like bioavailability, rate of excretion, and toxicity, would subsequently need to be tested.
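If such candidates reached animal testing, the readout described above – plaque deposition in treated versus untreated mice – lends itself to a simple statistical comparison. A minimal sketch with invented numbers, purely to show the shape of the analysis (a real study would add dose groups, randomization, blinding, and multiple-comparison correction):

```python
# Sketch: comparing plaque burden between control mice and mice
# given a hypothetical Neu5Gc-mimetic candidate. All values are
# invented for illustration only.
from scipy import stats

# Plaque area as percent of aortic surface, one value per mouse.
control_mice = [18.2, 21.5, 19.8, 23.1, 20.4, 17.9, 22.6, 19.1]
treated_mice = [14.1, 15.8, 12.9, 16.4, 13.7, 15.2, 11.8, 14.9]

t_stat, p_value = stats.ttest_ind(control_mice, treated_mice)

mean_control = sum(control_mice) / len(control_mice)
mean_treated = sum(treated_mice) / len(treated_mice)
reduction = 100 * (mean_control - mean_treated) / mean_control

print(f"mean plaque area: control {mean_control:.1f}%, "
      f"treated {mean_treated:.1f}%")
print(f"relative reduction: {reduction:.0f}%, p = {p_value:.4f}")
```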
“It appears that a mutation event occurred early on in the course of human evolution, in which the gene encoding Neu5Ac hydroxylase, the enzyme that produces Neu5Gc, was inactivated. The molecule has been associated with the prevention of atherosclerosis.”
Another alternative would be to enhance the activity of proteins that are already expressed in the human body. Scientists are currently investigating the efficacy of activating alternative chloride channels (e.g., the protein TMEM16A) in patients with cystic fibrosis. The genetic disease, which results in the clogging of the lungs with thick, sticky mucus (as well as numerous defects in other organs), is the result of a mutation in the gene encoding the protein CFTR – a channel that pumps chloride (and bicarbonate) ions out of epithelial cells. The idea is that increased activity of other chloride channels (like TMEM16A) might compensate for CFTR dysfunction, specifically for patients with null mutations who are unable to benefit from other recent therapies (Danahay et al., 2020). A similar approach may be applicable to the prevention of atherosclerosis – enhancing the activity of proteins that halt the deposition of plaque. One such protein is perilipin-1 (Plin1), which is essential for fat metabolism and storage (Langlois et al., 2011).
“...if the proposed changes are made, societal benefits like increased quality of life, extended working age, and reduced burden on the health care system would be immense - and they may well compensate for the costs.”
Perhaps studies in mice could be done to determine whether enhancing Plin1 activity above normal levels would increase the protein's capacity to prevent atherosclerosis, although it is certainly possible that such a change would have no effect or would cause unexpected problems. Nonetheless, it is worth testing – and the same goes for other genes implicated in the development of atherosclerosis.
Conclusion
A heart attack is a traumatic specter; one that haunts individuals afflicted by heart disease with a feeling of grim foreboding. But if the right medical intervention is provided at an early point in time, it doesn't have to be. To whom should this medical intervention be provided? Only people who already have atherosclerosis and high blood pressure, or also those at genetic risk of heart disease? What about people with no known risk factors – would the benefit of medical intervention outweigh its costs and potential complications? At the very least, better monitoring of heart health certainly would. In order to enact the proposals presented in this paper, a number of hurdles must be surmounted – the ethical issues of collecting health data from wearable devices; the cost of producing artificial hearts and manufacturing molecular therapies; and the extensive further research that must be done to demonstrate the efficacy of these new treatments. Yet if the proposed changes are made, societal benefits like increased quality of life, extended working age, and reduced burden on the health care system would be immense – and they may well compensate for the costs.
References
A Patient's Guide to Taking Warfarin. (n.d.). American Heart Association. https://www.heart.org/en/health-topics/arrhythmia/prevention--treatment-of-arrhythmia/a-patients-guide-to-taking-warfarin
Bryson, B. (2019). The body: A guide for occupants (First edition). Doubleday.
Cardiac Medications. (n.d.). American Heart Association. https://www.heart.org/en/health-topics/heart-attack/treatment-of-a-heart-attack/cardiac-medications
Cohrs, N. H., Petrou, A., Loepfe, M., Yliruka, M., Schumacher, C. M., Kohll, A. X., Starck, C. T., Schmid Daners, M., Meboldt, M., Falk, V., & Stark, W. J. (2017). A soft total artificial heart – first concept evaluation on a hybrid mock circulation. Artificial Organs, 41(10), 948–958. https://doi.org/10.1111/aor.12956
Czepluch, F. S., Wolnik, B., & Hasenfuß, G. (2018). Genetic determinants of heart failure: Facts and numbers. ESC Heart Failure, 5(3), 211–217. https://doi.org/10.1002/ehf2.12267
Danahay, H. L., Lilley, S., Fox, R., Charlton, H., Sabater, J., Button, B., McCarthy, C., Collingwood, S. P., & Gosling, M. (2020). TMEM16A potentiation: A novel therapeutic approach for the treatment of cystic fibrosis. American Journal of Respiratory and Critical Care Medicine, 201(8), 946–954. https://doi.org/10.1164/rccm.201908-1641OC
Delmo Walter, E. M., & Hetzer, R. (2013). Surgical treatment concepts for heart failure. HSR Proceedings in Intensive Care & Cardiovascular Anesthesia, 5(2), 69–75.
EKG. (n.d.). Omron Health Care. https://omronhealthcare.com/ekg/
Hauptman, P. J., & Kelly, R. A. (1999). Digitalis. Circulation, 99(9), 1265–1270. https://doi.org/10.1161/01.CIR.99.9.1265
Heart Attack. (n.d.). National Heart, Lung, and Blood Institute. https://www.nhlbi.nih.gov/health-topics/heart-attack
How Cardiovascular Stroke Risks Relate. (n.d.). American Heart Association. https://www.stroke.org/en/about-stroke/stroke-risk-factors/how-cardiovascular-stroke-risks-relate
Kawanishi, K., Dhar, C., Do, R., Varki, N., Gordts, P. L. S. M., & Varki, A. (2019). Human species-specific loss of CMP-N-acetylneuraminic acid hydroxylase enhances atherosclerosis via intrinsic and extrinsic mechanisms. Proceedings of the National Academy of Sciences, 116(32), 16036–16045. https://doi.org/10.1073/pnas.1902902116
LaFee, S. (2019, July 23). Why are humans the only species prone to heart attacks? University of California. https://www.universityofcalifornia.edu/news/why-are-humans-only-species-prone-heart-attacks
Langlois, D., Forcheron, F., Li, J.-Y., del Carmine, P., Neggazi, S., & Beylot, M. (2011). Increased atherosclerosis in mice deficient in perilipin1. Lipids in Health and Disease, 10, 169. https://doi.org/10.1186/1476-511X-10-169
Leading Causes of Death. (2012). Centers for Disease Control and Prevention. https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm
Lysiak, M. (2000). [Professor Jan Witold Moll (1912-1990): A creator of Poznań thoracosurgery, a pioneer of cardiosurgery in Poland, on the 10th anniversary of his death]. Arch Hist Filoz Med, 63(3-4), 47–51. PMID: 11766694
Moyes, C. D., & Schulte, P. M. (2016). Principles of animal physiology (Third edition). Pearson.
Mozaffarian, D., Benjamin, E. J., Go, A. S., Arnett, D. K., Blaha, M. J., Cushman, M., de Ferranti, S., Després, J.-P., Fullerton, H. J., Howard, V. J., Huffman, M. D., Judd, S. E., Kissela, B. M., Lackland, D. T., Lichtman, J. H., Lisabeth, L. D., Liu, S., Mackey, R. H., Matchar, D. B., … Turner, M. B. (2015). Heart disease and stroke statistics—2015 update: A report from the American Heart Association. Circulation, 131(4). https://doi.org/10.1161/CIR.0000000000000152
Neurotherapeutics. (n.d.). BrainGate. https://www.braingate.org/research-areas/neurotherapeutics/
Obesity is a National Security Issue: Lieutenant General Mark Hertling at TEDxMidAtlantic 2012. (2012, December 6). https://www.youtube.com/watch?v=sWN13pKVp9s
Ross, R. (1999). Atherosclerosis—an inflammatory disease. New England Journal of Medicine, 340(2), 115–126. https://doi.org/10.1056/NEJM199901143400207
Smart Lung Monitoring. (n.d.). Clairways. https://www.clairways.com
Strait, J. B., & Lakatta, E. G. (2012). Aging-associated cardiovascular changes and their relationship to heart failure. Heart Failure Clinics, 8(1), 143–164. https://doi.org/10.1016/j.hfc.2011.08.011
The Federal Budget in 2018: An Infographic. (2019). Congressional Budget Office. https://www.cbo.gov/publication/55342
Uehara, M., & Sakane, K. K. (2003). Physics of the cardiovascular system: An intrinsic control mechanism of the human heart. American Journal of Physics, 71(4), 338–344. https://doi.org/10.1119/1.1533053
Yazdanyar, A., & Newman, A. B. (2009). The burden of cardiovascular disease in the elderly: Morbidity, mortality, and costs. Clinics in Geriatric Medicine, 25(4), 563–577. https://doi.org/10.1016/j.cger.2009.07.007
7 Things You Should Know About Artificial Hearts. (n.d.). Syncardia. https://syncardia.com/patients/media/blog/2018/08/seven-things-about-artificial-hearts/
Lung Analysis of Biomarkers (LAB): The Future of Biomarker Detection
BY SAM NEFF '21 AND DINA RABADI '22
Cover: The lungs – the aim of this paper is to explore the possibility of biomarker detection for the identification and management of lung-related illnesses, namely cystic fibrosis and lung cancer. Source: Wikimedia Commons
“From one perspective, the story of medicine in the past century has been the evolution of an ever-more sophisticated understanding of human physiology.”
Introduction
How does a physician understand the functioning of the human body? In the simplest manner, a patient's condition can be assessed from the outside with a quick examination of physical appearance and consideration of a patient's symptoms. A physician may reference a large body of medical literature and the internal record of their own personal experience to ascertain what has gone wrong. This is a familiar story – it sounds a lot like the yearly check-up performed by a primary care physician, and indeed is oftentimes a reasonably effective assessment of patient health. That is, unless a patient is born with some genetic affliction, or develops a progressive illness like cancer or heart disease. Then, a more sophisticated understanding of the body's function (and dysfunction) must be acquired – through medical imaging, a panel of blood tests, and perhaps a bacterial culture.
From one perspective, the story of medicine in the past century has been the evolution of an ever-more sophisticated understanding of human physiology. It has seen the development of sophisticated imaging technologies that more capably reveal the body's internal workings without requiring invasive surgical procedures. It has witnessed an increased capacity to gauge the levels of biomolecules in the blood – enzymes like creatine kinase, which signals muscle damage (“Creatine Kinase,” n.p.). And building from the understanding of infectious disease developed in the 19th century by scientists such as Pasteur, Lister, and Koch (among others), techniques for culturing bacteria (and other micro-organisms) have grown increasingly sophisticated. And in light of the sophisticated genomic sequencing technologies that have emerged in the past 50 years (if we take the development of Sanger's method as the starting point for the era of genomic sequencing), scientists can now gather an intricate picture of the interplay between
human and microbial gene expression in the human host (Heather and Chain, 2016). The purpose of this report is to uncover the historical trends that have brought the scientific community to its current understanding of the human body and, given this historical trajectory, to carve out a reasonable path for the future. We aim to build upon the research that has already been done, encouraging its implementation and highlighting the remaining questions that ought to be investigated. Our focus will be directed at the detection of biomarkers. One definition of a biomarker is that provided by the National Institutes of Health (NIH) in 1998: a biomarker is “a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention” (Strimbu and Tavel, 2010). This definition encompasses a broad array of diagnostic tests – everything from MRI scans to the quantity of certain molecules in blood serum or other bodily fluids – and may even be extended to cover the written word in the form of the doctors' notes contained within a massive collection of patient health records. All of this medical information comments on the patient's normal biology, the effects of disease (infectious or otherwise), and the effects of the therapy prescribed to treat it.
Meet the Biomarkers
The Electronic Health Record System
At the beginning of the twentieth century, physicians relied on their notebooks to track patients and their symptoms, while records were kept in massive volumes; both were hard to obtain and frequently incomplete (Gillum, 2013). One of the first organized patient records was developed by Henry S. Plummer at St. Mary's Hospital and the Mayo Clinic (Gillum, 2013). Plummer's model assigned each patient a number, under which all of that patient's data, including symptoms and diagnoses, could be filed (Gillum, 2013). This was one of the first standardized health records in a clinical setting, and the world has come a long way since then. The major strides in medical record-keeping have undoubtedly shaped today's care; however, it is clear that the future of medicine requires a type of health record that goes beyond what is currently used. The electronic health record (EHR) is defined as “a digital version of a patient's paper
chart… contain[ing] a patient's medical history, diagnoses, medications, treatment plans, immunization dates, allergies, radiology images, and laboratory and test results” (“healthIT.gov”, n.d.). Furthermore, the EHR ideally allows for automation of provider workflow and access to all information about a patient (Menachemi, 2011). In 2009, only about 50% of office-based physicians used any form of EHR, so the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 was signed to incentivize widespread use of electronic health records in the United States (“healthIT.gov”, n.d.; Menachemi, 2011). As of 2017, 86% of office-based physicians have adopted some form of an EHR (“healthIT.gov”, n.d.) [Figure 1]. There have been various demonstrated benefits of EHRs. Some studies find that clinical outcomes improve significantly with EHR use. For example, influenza and pneumococcal vaccination rates increase by 15-50% because physicians are reminded of frequently overlooked or forgotten clinical guidelines by EHR systems (Menachemi, 2011). Increased vaccination rates lead to a healthier community less prone to outbreaks. Furthermore, the EHR reduces unnecessary repetitive testing and larger-than-necessary antibiotic doses. However, these positive findings are countered by other studies that find little benefit of the EHR. In fact, several studies report that the EHR actually contributes to physician burnout (Adler-Milstein, 2020). Many EHR interfaces are challenging to use, causing the physician to become distracted while meeting with a patient. So, while the EHR is supposed to benefit both patient and physician, it may actually be detrimental to both of the parties it is intended to assist. How can this be improved, and how can we push the limits of the EHR to expand diagnosis and prevention in the clinic?
“The electronic health record (EHR) is defined as a 'digital version of a patient's paper chart... contain[ing] a patient's medical history, diagnoses, medications, treatment plans, immunization dates, allergies, radiology images, and laboratory and test results.'”
We propose a three-pronged approach to the EHR – standardization, accessibility, and expansion. Many different EHR software styles are currently in use, so a centralized system would be beneficial in several ways. First, it would help shrink the learning curve for physicians. Second, a centralized system for recording patient data would make it significantly easier for records to be shared between a patient's different points of care, which also allows for increased accessibility. A patient care team would not have to wait and play catch-up
to obtain a patient's medical records, so care would be optimized, especially in time-sensitive situations. The next step would be expansion, during which a single EHR software program would be used in all clinics and hospitals, with an interface friendly to both patients and physicians. While challenging, the implementation of such an EHR is certainly possible, considering that incentivization via the HITECH Act worked well in convincing clinics and hospitals to adopt an EHR in the first place.
“The genesis of medical imaging lay in the waning years of the 19th century, when German physicist Wilhelm Roentgen discovered the X-ray.”
Furthermore, we propose “meaningful use” of the EHR. In this context, meaningfulness is defined as “use by providers to achieve significant improvements in care” (Blumenthal, 2010). While this, in addition to the adoption of the EHR, was present in the HITECH Act, we strive to bring meaningfulness to an entirely new level. To achieve this, we propose that all EHR data be recorded in a large, centralized database, which research groups may access after obtaining permission from the government. With access to tens of millions of data points, biomarkers could be discovered at a rate heretofore unseen. Furthermore, the data from a simple blood test, once put into the new and improved EHR, could be scanned against the database for signs of potential disease. This has been done successfully on a small scale by a group at Vanderbilt: by using a methodical strategy of examining the EHR, patient samples, and proteomics, two robust biomarkers for heart failure were identified (Wells, 2019). Models such as this one prove that the meaningful use of the EHR to discover biomarkers is feasible. Identification of such widespread biomarkers would be ground-breaking for the future of medicine, shifting care from a reactive to a preventative model.
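As a toy illustration of what mining a centralized EHR database for biomarkers might look like, the sketch below compares a routine lab value between patients who later developed a disease and those who did not. The table and column names are hypothetical stand-ins, not any real EHR schema, and a genuine analysis would also require de-identification, consent, and adjustment for confounders:

```python
# Sketch: screening a routine lab value as a candidate biomarker
# using pooled EHR data. The DataFrame is a stand-in for a real,
# de-identified EHR extract; all names and values are hypothetical.
import pandas as pd
from scipy import stats

records = pd.DataFrame({
    # baseline lab value from a routine blood test (arbitrary units)
    "lab_value": [4.1, 5.9, 3.8, 6.4, 4.5, 6.1, 3.9, 5.7, 4.2, 6.8],
    # whether the patient was diagnosed with the disease within 5 years
    "developed_disease": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})

healthy = records.loc[records["developed_disease"] == 0, "lab_value"]
diseased = records.loc[records["developed_disease"] == 1, "lab_value"]

# A nonparametric test avoids assuming the lab value is normally
# distributed across the patient population.
u_stat, p_value = stats.mannwhitneyu(healthy, diseased)

print(f"median (no disease): {healthy.median():.1f}")
print(f"median (disease):    {diseased.median():.1f}")
print(f"Mann-Whitney p = {p_value:.4f}")
```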
Medical Imaging and Visual Monitoring
Medical imaging is not a feature of every medical appointment, although most people have experienced some sort of imaging procedure in their lifetime – perhaps the yearly X-ray at the dentist's office, the ultrasound during a pregnancy, or, more seriously, as part of a cancer diagnosis. Some individuals undergo imaging procedures more often – say, as part of a checkup for chronic respiratory illness. But regardless of the purpose, these procedures are extraordinarily useful as a marker of patient health, allowing physicians to peer into the inner structure and function of the body's internal organs. They are just one facet of a patient's medical history, but an invaluable one, nonetheless. The genesis of medical imaging lay in the waning years of the 19th century, when German physicist Wilhelm Roentgen discovered the X-ray. In the first year after his discovery alone, over 1,000 scientific articles touting the use of x-rays for the diagnosis of bone fractures had been published, and medical imaging soon became a mainstay of health care practice. A series of other imaging innovations followed in the ensuing century – computed tomography (CT), positron emission tomography (PET), magnetic resonance imaging (MRI), ultrasound (US), and other variations allowed for a diverse array of imaging applications in many different fields (Bercovich and Javitt, 2018).
Figure 1: Example of an electronic health record (EHR) portal, which would be used by a doctor to view and input patient medical data. Source: Wikimedia Commons
Figure 2: (Left) An MRI image of a brain tumor; (Right) An ultrasound image of the pancreas. Source: Wikimedia Commons
These technologies work in different ways. An ultrasound scanner, for example, employs a sound-producing probe that bounces high-frequency sound waves off of internal organs and records the resulting echoes to determine organ position, much like an echo-locating bat finds its way through the forest at night. MRI machines use magnetic fields and radio waves to image internal organs. And PET scanners visualize organs by detecting the presence of radiotracers in the body, ingested by the patient prior to the procedure (Umar and Atabo, 2019) [Figure 2]. Yet despite their diverse modes of operation, these imaging techniques all hold a similar purpose and benefit – to provide a visual picture of the inside of the body without the requirement of an incision. These techniques have all been extremely valuable, up to the current point, for diagnostic purposes in a hospital setting, but they don't have to stop there. Currently, a portable ultrasound device called the Butterfly™ is being used to image the lungs of patients that have tested positive for COVID-19. The device is relatively cheap, and thus can be used by rural hospitals around the world that lack the capacity to purchase expensive hospital-room-based imaging devices. For now, its use is limited to designated medical professionals, but its role can certainly be expanded (“Butterfly iQ”, n.d.). Going even further, medical imaging devices are being developed that have therapeutic uses in addition to diagnostic applications (a concept called 'theranostics'). One example of this is the use of radioiodine to treat thyroid cancer. The thyroid gland naturally consumes iodine and uses it to produce the thyroid hormones T3 and T4. A small dose of radiotracer can be used to diagnose a tumor of the thyroid via PET scan, and then a larger dose of radioiodine
can be given to kill the cancer (“Radioactive Iodine,” n.d.). Similar theranostic approaches are being used to treat prostate cancer and neuroendocrine tumors as well (Bercovich and Javitt, 2018). Of course, our visual understanding of the body doesn't have to stop at pictures of a patient's internal organs – graphical representations of patient health may be generated also. The EKG machine, which has long been used to record the function of the heart, is now being made in mobile varieties, so that patients don't need to take a trip to the doctor's office or hospital to get a sophisticated picture of their current heart health (“AliveCor,” n.d.). And wearable (non-invasive) devices are even being devised to listen in to the sounds of the lungs; analysis of this auditory data can yield valuable information about a patient's pulmonary health (“Clairways,” n.d.). Theoretically, this physiological monitoring can extend to any organ, if a suitable device is devised for recording sound, electrical current, or physical force, and placed properly within the body. And looking further toward the future, it seems a natural ambition to make implantable devices capable of both recording physiological function and acting to treat dysfunction.
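The dosing logic behind theranostic agents like radioiodine rests on simple decay arithmetic. Assuming iodine-131, whose physical half-life is roughly eight days, the activity remaining after a given time follows the standard exponential decay law (biological clearance, ignored here, makes the real decline even faster):

```python
# Sketch: exponential decay of a radiotracer dose. Iodine-131 has a
# physical half-life of about 8 days; the dose figure is illustrative,
# not a clinical recommendation.
import math

HALF_LIFE_DAYS = 8.0

def remaining_activity(initial_mci, days):
    """Activity (millicuries) left after `days`, via A = A0 * 2^(-t/T)."""
    return initial_mci * 2 ** (-days / HALF_LIFE_DAYS)

therapeutic_dose = 100.0  # mCi, an illustrative therapy-scale dose

for day in (0, 8, 16, 32):
    print(f"day {day:2d}: {remaining_activity(therapeutic_dose, day):6.1f} mCi")
```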
“...despite their diverse modes of operation, these imaging techniques all hold a similar purpose and benefit - to provide a visual picture of the inside of the body without the requirement of an incision.”
Sampling the Blood (And Other Bodily Fluids)
Laboratory tests and results play critical roles in patient care, often serving as the central diagnostic factor. Some of the most tested bodily fluids are blood, saliva, and urine, all of which contain key molecules that provide insight into a patient's current health. All these methods are relatively simple and noninvasive. Furthermore, there is unique value in each of them, as some bodily fluids are more adept at detecting certain indicators. Genomic, proteomic, and metabolic methods are used in
the analysis of bodily indicators of disease. Blood tests are often the first bodily fluid test considered. Their uses vary, ranging from a yearly checkup to the examination of more serious conditions. For example, rather than waiting for graft function to deteriorate in organ transplant patients, studies show that there are key biomarkers in the blood that reflect the health of the graft and whether any changes have occurred (Heidt, 2011). Antigens can be detected in blood, providing evidence for a host of diseases, including cancer, autoimmunity, immune rejection, and Alzheimer's disease, among others. One type of immune cell that may be examined in blood is the myeloid-derived suppressor cell (MDSC), a suppressive cell type that typically develops as a result of inflammation or infection in the body. Levels of circulating MDSCs in the blood can indicate how effective an immunotherapy may be for a specific patient, predicting outcomes for melanoma patients (Martens, 2016).
“While saliva certainly is not the first thing that comes to mind when discussing biomarkers, its unique physiological composition of proteins and genetic molecules is a useful and untapped preventative tool.”
Figure 3: The logo of the Human Microbiome Project, a program directed by the National Institutes of Health. The project started in 2008 and went on through 2016. Source: Wikimedia Commons
Saliva is useful for detecting hormone levels, cancer, and periodontal diseases. While saliva certainly is not the first thing that comes to mind when discussing biomarkers, its unique physiological composition of proteins and genetic molecules is a useful and untapped preventative tool. There is growing interest in examining saliva as a diagnostic test, as it could contain indicators before a disease even has the opportunity to manifest itself in the body. Furthermore, molecules in saliva may also have the potential to be biomarkers for Alzheimer’s, obesity, autoimmune disease, and head and neck cancers (Bermejo-pareja, 2010; Salazar, 2014). Saliva is an attractive alternative to blood because it is completely non-invasive, often collected via a cotton swab placed in the patient’s mouth. Urine tests are a highly useful way of gaining insight into the health of patients who have endocrine issues or kidney disease and injury
and can even provide insight into reproductive function in young men. In acute kidney injury, creatinine levels serve as an indicative biomarker, and creatinine levels in the urine change more quickly than those in the blood, meaning that urine can provide an earlier diagnosis (Han, 2008). Discoveries such as this one demonstrate the importance of examining all possible sources of biomarkers in the body, as different sources provide certain advantages. Furthermore, there are different ways in which biomarkers can be tested. The use of genomic, proteomic, and metabolic approaches to examining blood, saliva, urine, breath condensate, and other samples opens the door to new preventative sources and methods.
Understanding the Microbiome
A crucial objective of modern medicine has been to understand the human microbiome – the communities of bacteria and other microscopic organisms that reside within different human organs. The Human Microbiome Project (HMP) was initiated by the NIH in 2008. Among its accomplishments have been the isolation of 2,200 bacterial reference strains to provide for future diagnosis and the identification of a 'core microbiome' at each site sampled (from 300 healthy adult participants) to represent the healthy condition of the microbiome. The research was accomplished at four separate sequencing centers in the United States, and a Data Analysis and Coordination Center (DACC) was established to handle the massive amounts of genomic sequencing data these efforts generated (“NIH Human Microbiome Project,” n.d.). The Microbiome Project brought significant advances to our understanding of the bacteria that live within the human body, specifically the gut, oral, nasal, skin, and urogenital microbiomes (“NIH Human Microbiome Project,” n.d.). But it did not say much, if anything at all, about the lung microbiome, which is extremely complicated in its own right and home to various ecological niches. This is especially true in obstructed lungs, which have extensive anaerobic (low-oxygen) pockets that are suitable for bacteria that require (or hold a preference for) metabolism by anaerobic fermentation as opposed to aerobic respiration (Caverly and LiPuma, 2018). And the Microbiome Project has not even scratched the surface of the diverse microbiological community – the vast number of viral, archaeal, and parasitic species that are thought to inhabit the body's cavities are yet to be sufficiently
catalogued [Figure 3]. As a supplement to the research done on the HMP, a number of subsequent experiments have been done to characterize the microbiome of individuals with various diseases. For example, researchers have demonstrated that the gut microbiome has a strong impact on the gut-brain axis, the neural pathways that connect intestinal function with regions of the brain that handle emotional and cognitive processing. Gut dysbiosis – a significant difference between a patient's microbiome and the aforementioned 'core microbiome' – has been established as a hallmark feature of neurological disorders from autism to depression (Carabotti et al., 2015). Efforts to characterize the cystic fibrosis (CF) lung and gut microbiome are other key examples (to be discussed thoroughly in the subsequent section), for even in infancy there are significant differences between the microbiome of CF patients and that of healthy individuals, and the composition of the microbial community changes significantly as the disease progresses (Hayden et al., 2020; Zhao et al., 2012).
Biomarkers for CF
A paradigm shift has occurred in the treatment of cystic fibrosis in the past decade. Over the last ten years, drugs called 'CF modulators' have been developed that operate to renew the function of the protein CFTR, which is
naturally misfolded at the epithelial cell surface or trapped within the epithelial cells of CF patients. The physiological consequence of this misfolded protein is clogging of the lungs with sticky mucus – impeding air flow and producing a permissive environment for pathogenic bacterial growth (alongside a broad slate of extra-pulmonary symptoms). The modulators work to remedy these problems by helping to chaperone the CFTR protein to the cell surface and refold it to its proper conformation. These drugs are the result of several decades of work on behalf of the CF Foundation and drug companies – first Aurora Biosciences, a San Diego-based biotech firm, and since 2001, Vertex Pharmaceuticals. The latest of the Vertex modulator drugs was approved by the FDA this past December, and it is permitted for use in roughly 90% of CF patients (Neff, 2020).
“...researchers have demonstrated that the gut microbiome has a strong impact on the gut-brain axis, the neural pathways that connect intestinal function with regions of the brain that handle emotional and cognitive processing.”
The effect of these drugs on eligible patients has thus far been a marked and stable increase in lung function and a significant reduction in the frequency of hospitalization. Even common comorbidities like CF-related diabetes (CFRD) seem to be decreasing in incidence (Granados et al., 2019). Yet paradoxically, the need for close biological monitoring of CF patients is greater than ever. CF patients face a new problem, analogous to that faced by diabetics upon the first use of insulin as a treatment in 1922 (“First Use of Insulin,” n.d.). Patients are bound to live
Figure 4: A thorough overview of the clinical manifestations of cystic fibrosis. Source: Wikimedia Commons
much longer, and consequently, they will face a whole new series of CF-related complications – ones that are already known to affect CF patients will affect them for a longer duration of time, and complications yet unreported within the patient population are sure to be found as average lifespan within the CF community (currently around 40 years) approaches that of the general population. To address the need for continued and more widespread biomarker testing for CF patients, the four-part framework outlined in the previous section will be utilized, the collection of patient health records being a natural place to start [Figure 4].
“In the CF research space, there has long been a solid infrastructure for collection and analysis of patient health records.”
In the CF research space, there has long been a solid infrastructure for the collection and analysis of patient health records. The CF Foundation Patient Registry (CFFPR) was founded in 1966 with the aim of understanding the disease progression of CF from an objective statistical perspective. Going back to 1986, the registry contains detailed data from patient check-ups and clinical trials on over 300 unique variables, including microbial culture data, lung function, nutritional status (and vitamin levels), and a record of all hospitalizations and medications prescribed. As of 2012, the registry held data from over 27,000 participants (Schechter et al., 2014). This data is currently collected in an electronic database called PortCF, which houses patient data from 11 individual countries and the European Community (“United States Cystic Fibrosis Patient Registry – PortCF,” n.d.). Looking to the ways that patient health records may facilitate research discoveries and improve patient outcomes, the CF patient registry provides ample examples. A 2019 study of the European Cystic Fibrosis Registry demonstrated that the epidemiology of its patient community has changed significantly over time – with the prevalence of certain relatively rarer microbes (e.g., non-tuberculous mycobacteria and S. maltophilia) appearing to increase. The study also revealed that the microbiome composition differs substantially between countries of different socioeconomic statuses – a revelation that should prompt the re-evaluation of precautionary infection control measures in clinics to reduce the spread of pathogens between patients (Hatziagorou et al., 2019). Another recent report, recounting the results of a combined analysis of CF patients and carriers (people with one mutant copy of the CFTR gene), has revealed that CF carriers are at much higher risk of CF-related complications, albeit to a lesser degree than actual CF
patients (Fisman, 2020). This too will prompt a new look at medical treatment and prevention measures for the population of CF carriers – that is, roughly 1 in 27 Caucasian Americans, 1 in 48 Hispanic Americans, and 1 in 79 African Americans, according to a 2014 study (Zvereff et al., 2014). And the usefulness of CF registry data goes beyond the findings of studies such as these. In the 1990s, utilizing CFFPR data, Dr. Gerry O'Connor embarked on an effort to examine variations in CF patient outcomes between nationwide CF centers. As principal investigator for the National Quality Program of the CFF, he headed a committee in 2005-2006 that encouraged CF centers to publicly report patient outcomes. This allowed an objective comparison of the varied CF care approaches taken across the country, such that one center could learn from the successes and failures of all its peers (Schechter et al., 2014). CF treatment is multidisciplinary – requiring the help of pulmonologists, nutritionists, endocrinologists, and social workers, among others – and centers naturally lead or lag behind the nationwide average with respect to all of these aspects of care. O'Connor's efforts were crucial to standardizing CF care across the country from all of these angles and to informing CF caregivers across the world. Now that the system for organizing CF patient data has been discussed, we turn to the individual markers of CF patient health that are gathered and deposited in the database. In line with the established framework, this section will conclude with an assessment of lung imaging and other forms of visual monitoring, biomarkers in the blood and other bodily fluids, and measures of microbiome composition. Lung imaging in CF is, at the least, a yearly affair – although patients go into the clinic more frequently for meetings with the clinical team, and may also be hospitalized for pulmonary exacerbation (an acute and significant drop in lung function), it is recommended by the CF Foundation and other international CF organizations that some form of lung imaging be conducted at least once a year (Cystic Fibrosis Trust, 2011; Castellani et al., 2018). The current 'gold standard' method of lung imaging for CF patients is high-resolution computed tomography (HRCT) (Kolodziej et al., 2017). A CT scan requires the patient to lie in the supine position within a donut-shaped machine.
As the x-ray tube and detectors rotate 360 degrees around the patient, a high-resolution 3D image of the lungs is produced (Umar and Atabo, 2019). The amount of radiation used is not a significant concern given that imaging is conducted only once a year – but more frequent monitoring would require a careful consideration of radiation risks. The same goes for X-ray phase contrast imaging, another technique in use that employs an even higher dose of radiation than HRCT. There is a lot of interest in the use of MRI (which uses non-ionizing radiation) for lung imaging in CF patients – however, current MRI imaging still suffers from poor spatial resolution and will need to be improved in this regard before it can gauge lung structural health as well as the existing x-ray methods in use (Kolodziej et al., 2017). Medical imaging and monitoring do not stop at the lungs for CF patients. As previously mentioned, CF patients are prone to a host of other physiological complications, one of which is bone disease. This owes, in large part, to the fact that intestinal absorption of nutrients is hindered by CFTR dysfunction (Putman et al., 2019). This includes the absorption of calcium, which is a necessary component of the hydroxyapatite matrix of bone, and of dietary vitamin D (D2 or D3), which further serves to enhance calcium absorption in the intestines (Ross et al., 2011). CF patients are therefore screened on a yearly basis for bone health – a DEXA scan is conducted to measure bone density, the key metric being the Z-score (“DEXA Scan,” n.d.).
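The Z-score reported by a DEXA scan is ordinary standardization arithmetic – the patient's bone mineral density expressed in standard deviations from the mean of an age- and sex-matched reference population. A minimal sketch with illustrative numbers (real scores rely on published reference datasets):

```python
# Sketch: computing a DEXA Z-score. Reference mean and standard
# deviation are illustrative; real scores use population reference
# data matched to the patient's age and sex.

def z_score(patient_bmd, reference_mean, reference_sd):
    """Standard deviations between the patient's bone mineral
    density (g/cm^2) and the matched reference mean."""
    return (patient_bmd - reference_mean) / reference_sd

bmd = 0.85                      # g/cm^2, patient's measured BMD
ref_mean, ref_sd = 1.00, 0.12   # illustrative matched reference values

z = z_score(bmd, ref_mean, ref_sd)
print(f"Z-score: {z:.2f}")  # scores well below -2 typically prompt follow-up
```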
Per the discussion in section two of this paper, lung monitoring extends beyond the visual picture of lung structure to a graphical depiction of lung function. The pulmonary function test (PFT) is a key part of every visit to the clinic. It amounts to a graph of the volume of air pushed out of the lungs over time – its key metrics being the FEV1 (forced expiratory volume in 1 second), the amount of air that the patient can blow into the PFT machine in one second, and the FVC (forced vital capacity), the total amount of air that can be exhaled before the lungs are entirely empty. The FEV1, or its derivative, the percent-predicted FEV1 (ppFEV1), has long been considered the gold-standard marker of lung health: a suitable indicator of whether or not a patient needs to be hospitalized, and a marker of lung function decline over time (Yankaskas et al., 2004). It is something for which the utmost effort must be expended to keep stable, often requiring hours of daily medical treatment. That being said, there may be more to glean from the visual pattern produced by the PFT (the graph of air flow vs. time), as well as from the other metrics – FVC, FEF (a measure of small airway function) – that are produced during the test [Figure 5].
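To make these metrics concrete, the sketch below computes the FEV1/FVC ratio and percent-predicted FEV1 from invented spirometry values. The 0.70 ratio cutoff is a widely used rule of thumb for airflow obstruction; in practice, the predicted FEV1 comes from reference equations based on age, sex, and height:

```python
# Sketch: interpreting basic spirometry numbers. Values are invented;
# a real predicted FEV1 comes from population reference equations,
# not the constant used here.

fev1_liters = 2.1      # air exhaled in the first second
fvc_liters = 3.4       # total air exhaled
predicted_fev1 = 3.5   # illustrative predicted value for this patient

ratio = fev1_liters / fvc_liters
pp_fev1 = 100 * fev1_liters / predicted_fev1

print(f"FEV1/FVC = {ratio:.2f}")   # < 0.70 is a common obstruction cutoff
print(f"ppFEV1   = {pp_fev1:.0f}%")

if ratio < 0.70:
    print("pattern consistent with airflow obstruction")
```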
“[lung function] is something for which the utmost effort must be expended to keep stable, often requiring hours of daily medical treatment."
Figure 5: A Graphic showing the normal levels of FEV1, FVC, and FEF 25-75 (small airway function) in men and women. Source: Wikimedia Commons
The previous paragraphs are a fitting prelude to a discussion of the litany of biomarkers collected from the blood and other biological fluids of CF patients. Blood tests are another aspect of CF treatment that typically occurs on a yearly basis – although they may happen more frequently for patients participating in clinical trials, or for those who are hospitalized. Blood tests can be used to assess many aspects of the CF patient's health, from markers of nutritional quality to signs of CF comorbidities like CFRD and liver disease. Screening for liver function alone may involve testing for the following molecules: serum aspartate aminotransferase, alanine aminotransferase, alkaline phosphatase, gamma-glutamyl transferase (GGT), and bilirubin. Levels of the
vitamins A, D, E, and K are gauged as well, as CF patients have difficulty absorbing them due to dysfunctional intestinal absorption (Yankaskas et al., 2004). These results may inform the need for nutritional supplementation – or prompt discussion of new treatments for liver disease and diabetes (Jaffe, 2002). In the age of CF modulator therapy, where the focus has started to shift from CF lung disease (still the leading cause of mortality, but increasingly less so, and later on in life) to other CF complications, these comorbidities are becoming better understood.
“It is well known that CF patients have an abnormal lung and gut microbiome - one that is dominated by certain pathogenic species.”
Other bodily fluids or substances may also be screened for biomarkers. Fecal samples may be tested early in a patient's life to gauge exocrine pancreatic function, given that the production of pancreatic enzymes is most often minimal in CF patients due to blockage of the pancreatic duct (in particular, assays for fecal chymotrypsin, serum pancreatic trypsinogen, and fecal pancreatic elastase-I). Fecal occult blood screening may also be conducted to test for colorectal cancer, of which CF patients (and also CF carriers) are at a significantly higher risk (Yankaskas et al., 2004; Scott et al., 2020). Bronchoalveolar lavage (BAL) fluid is also commonly collected – either in the context of a clinical trial or during a hospitalization. It is gathered by a procedure called bronchoscopy, in which a thin tube and camera, collectively referred to as a bronchoscope, are inserted into the lungs via the nose or mouth (“Bronchoscopy,” n.d.). Fluid is flushed into the alveoli, then recollected and screened for inflammatory markers or the presence of microbes (“Bronchoalveolar Lavage – an overview,” n.d.). As mentioned in section two of this report, the microbiome is a topic of extraordinary interest in the field of CF research. It is well known that CF patients have an abnormal lung and gut microbiome – one that is dominated by certain pathogenic species. CF patients whose lungs are colonized by the bacterium P. aeruginosa have lower lung function on average than those without it, and those patients who present with both P. aeruginosa and its common counterpart S. aureus at the same time fare even worse (Ahlgren et al., 2015). Studies have demonstrated that over time, the CF lung microbiome decreases in diversity, such that P. aeruginosa comes to be the dominant species (Zhao et al., 2012). It is thought that the over-abundance of P. aeruginosa is a major factor contributing to the precipitous decline in FEV1, although this is somewhat controversial.
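One common way to quantify the loss of diversity described above is the Shannon index, computed over the relative abundances of the species detected in a sample. A minimal sketch with invented abundances (real values would come from 16S rRNA sequencing or quantitative culture):

```python
# Sketch: Shannon diversity of a lung microbiome sample. Abundances
# are invented for illustration, not measured data.
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln(p_i)) over species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

# Early-stage sample: several species at comparable abundance.
early = {"P. aeruginosa": 30, "S. aureus": 25, "H. influenzae": 20,
         "S. maltophilia": 15, "anaerobes": 10}

# Late-stage sample: P. aeruginosa dominates the community.
late = {"P. aeruginosa": 90, "S. aureus": 6, "anaerobes": 4}

print(f"early: H = {shannon_diversity(early.values()):.2f}")
print(f"late:  H = {shannon_diversity(late.values()):.2f}")  # lower = less diverse
```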
The gut microbiome, too, is a topic of great concern. In CF infants, studies of the fecal microbiome (seen as representative of the gut microbiome) have demonstrated that right at the start of life, patients present with an abnormally high population of the phylum Proteobacteria and abnormally low populations of the phylum Bacteroidetes. One study has demonstrated that this fecal dysbiosis is associated with reduced linear growth in the first year of life (Hayden et al., 2020). Gut dysbiosis has also been linked to the aforementioned high incidence of colorectal cancer in CF, as well as the high risk of liver disease (Scott et al., 2020; Al Sinani et al., 2019). And researchers have demonstrated that in the gut, 1,543 (host, not bacterial) genes are differentially expressed in CF patients vs. controls – meaning that the microbiome has a very strong effect on human gene expression (Dayama et al., 2020). Thus, a full analysis of the human microbiome must encompass the type and relative abundance of all the species present, as well as their effect on physiological function at the cellular level. Such an understanding will be crucial to the ongoing treatment of CF patients, especially given that these pathogens don't seem to be cleared after the initiation of modulator treatment, even though pathogen numbers may be depressed (Gifford and Heltshe, 2019).
Biomarkers for Lung Cancer Lung cancer is the second most common cancer in men and women, and it is the leading cause of cancer death, making up a fourth of all cancer deaths despite accounting for only 13% of all cancer diagnoses (“American Cancer Society”, n.d.). These statistics include both small cell (SCLC) and non-small cell lung cancer (NSCLC), with 85-90% of cases being NSCLC and 10-15% being SCLC (“American Cancer Society”, n.d.). Due to decreased smoking rates, the incidence of lung cancer is decreasing, but the disease remains a major public health burden. More preventative and diagnostic biomarker research must be done in order to increase survival rates [Figure 6]. First, a universal EHR can be utilized to obtain patient records for conducting large studies. While numerous databases exist, such as national cancer registries, claims-based datasets, national surveys, and hospital encounter data, some are incomplete – gaps that the EHR can bridge. The National Cancer Data Base (NCDB) is used for the study of radiation therapy in cancer, but its drawbacks include lack of information
Figure 6: A micrograph image of non-small cell lung carcinoma. Source: Wikimedia Commons
from other databases, as well as selection bias and lack of critical treatment data like complications, cause of death, and disease-free survival (Jairam and Park, 2019). The National Cancer Institute’s Surveillance, Epidemiology, and End Results Program (SEER) encompasses data on cancer incidence, staging, treatment, demographics, and survival, covering about 35% of the U.S. population (Jairam and Park, 2019). While certainly useful (this database is considered to be representative of the U.S. population), important data is underreported or incomplete, so any conclusions must be drawn carefully (Jairam and Park, 2019). Randomized controlled trials (RCTs) are the way in which questions regarding the efficacy, cost, and complications of various treatments are answered. However, while they shed light on important points, these studies are often small, and there are many challenges facing researchers who aim to conduct or advance them, including approval requirements, sufficient funding, and proper trial structure (Yang, 2016). Another problem with RCTs is that patients with comorbidities are infrequently studied because these issues may impact the efficacy of the drug (Yang, 2016). However, many patients present with such comorbidities, and it is critical to understand how that may factor into treatment side effects and outcomes. This is another scenario in which a future standardized EHR that includes complications, quality-of-life factors, and comparative effectiveness research can provide answers. The way to
bridge the gaps between RCTs and patient care is through large-scale studies that utilize high amounts of data and may provide more generalizable conclusions. Diagnostic technology in cancer has generally remained stagnant, albeit with some breakthroughs that are implemented on a wide scale in the clinic. Patients who may have lung cancer either present with symptoms, such as coughing or persistent chest pain, or with an unusual chest X-ray. The next step is the determination of whether or not the patient has cancer, what type the cancer is, and the progression of the cancer, via molecular and chemical indicators. Sputum cytology, the testing of phlegm for cancer cells, is a sometimes-specific method of establishing a diagnosis, though specificity varies depending on the location of the tumor (Rivera, 2013). There are other untapped methods of liquid biopsy, such as via the blood, saliva, or urine (among others), in which circulating tumor DNA and other molecules that indicate cancer could be identified (Rolfo, 2014). For example, studies have demonstrated proteomic changes in urine that correspond to lung cancer. Another example is midkine, a heparin-binding growth factor that shows immense potential as a noninvasive diagnostic and prognostic marker in both serum and urine for NSCLC (Xia, 2016). These are indicators of the under-utilized non-invasive techniques of detecting cancer. Typical tissue biopsies remain another way in which a tumor may be categorized and a treatment plan determined.
Conventional bronchoscopy technology used to obtain tissue biopsies can be limited, providing insufficient tumor samples. However, there are numerous other bronchoscopy methods for the screening and detection of early lung cancer that improve upon conventional bronchoscopy. White light bronchoscopy is one method by which a physician can directly examine a patient’s airways, but it can be nonspecific (Ingage, 45). An improvement on white light bronchoscopy is autofluorescence bronchoscopy. High-magnification bronchoscopy through the use of a bronchovideoscope is another method that increases specificity in detection (Ingage, 45). This method uniquely combines a videoscope with fiber observation, allowing for the detection of other lung issues, such as bronchitis and dysplasia, and is more sensitive and specific than the aforementioned techniques. Finally, narrow band imaging is a method that uses red, green, and blue light in a videoscope system to detect the microvascular network of the lung’s mucosa (Ingage, 45-46). Optical biopsy, a high-precision bronchoscopic imaging technology, may eventually replace the traditional tissue biopsy (Ingage, 45). There are even emerging technologies that utilize exhaled breath condensate to classify gene expression and serve as an early detector for lung cancer [Figure 7].
While the current methods of diagnosing lung cancer are improving, there is still much work to be done to enhance early discovery. Often the cancer is caught too late, especially in younger people and nonsmokers, as the patient is already experiencing symptoms. In fact, only 16% of lung cancers are diagnosed in the early stages, and more than half of all people diagnosed with lung cancer die within a year of diagnosis (“lung.org”, n.d.). Biomarkers in lung cancers are the focal point of many studies, but this work has yet to translate successfully into clinical practice, where it could improve patient outcomes. Genomic biomarkers are just one type of disease indicator. For example, identified genomic biomarkers in NSCLC tissue include epidermal growth factor receptor (EGFR), anaplastic lymphoma kinase (ALK), and Kirsten Rat Sarcoma Viral oncogene homolog (KRAS). While not all of these mutations can yet be targeted directly, trials of drugs acting against their downstream targets are showing immense promise (Vargas, 2016). Another possible genomic biomarker is p53 overexpression in NSCLC, which predicts poor prognosis and
resistance to chemotherapy (Mogi, 2011). Another use of biomarkers is the optimization of various treatments, such as immunotherapy. While lung cancer is not considered highly immunogenic, with the development of checkpoint inhibitors such as anti-PD-1 and anti-CTLA-4 (and others, such as anti-VISTA, in development), immunotherapy may still be useful in some tumors. One method of determining whether a patient with NSCLC will respond positively to anti-PD-1 is PD-L1 positivity by immunohistochemistry (Villalobos, 2017). While other studies may not consider this a robust enough biomarker, it is nonetheless promising and shows how further carefully designed studies on a larger scale could determine whether this and others are valid biomarkers, eventually standardizing biomarker testing protocols and subsequent treatment. Even the microbiome plays a role in the development of lung carcinogenesis and metastasis. Disruption of the lung microbiome by antibiotic treatment before and during immunotherapy has been found to increase progression and decrease survival (Ramírez-Labrada, 2020). Furthermore, dysbiosis, or microbial imbalance, can promote chronic inflammation and the development and activation of oncogenes, which increase the likelihood of developing cancer (Ramírez-Labrada, 2020). The lung microbiome is influenced by a host of factors, such as lifestyle, pollution, tobacco, and, as stated before, increased exposure to antibiotics. One example of a large initiative that examined the role of screening in survival outcomes in lung cancer patients is the National Lung Screening Trial (NLST), which tested over 50,000 patients by low-dose computed tomography (LDCT). Use of LDCT for screening decreased lung cancer mortality by 20% when compared to chest radiography, and decreased all-cause mortality by 6.7% (Nasim, 464). While there are radiation risks, as well as a high risk of discovering benign nodules, use of LDCT is promising for patients who fit the necessary criteria, considering the significantly decreased mortality rate. Studies such as this one could become significantly more common with the standardization of the EHR, providing researchers with much greater opportunity to gain perspective on a much larger scale than with a typical randomized clinical trial. However, it is critical to have well-designed studies that avoid selection bias, immortal time bias, and confounding variables
when conducting large-scale data analyses, to ensure real discoveries. Considering that only 0.1% of biomarkers are translated to the clinic successfully, it is time to transform our methods of biomarker discovery and implementation in the clinic (Goossens, 3). Biomarker discovery is often an afterthought of experiments, but this challenge can be overcome if the search for biomarkers becomes a national, and even international, goal. Carefully designed studies and large, high-quality sample sets enabled by a new and optimized standard EHR will allow for a significant increase in biomarker use in the clinic, allowing for diagnosis and prevention at unprecedented rates. After the U.S. refocuses its strategy, a worldwide effort in biomarker research can be undertaken. Different countries have different research focuses, and increased collaboration would lead to increased rates of biomarker discovery and implementation. Consequently, patient care globally will improve, and cancer mortality rates have the potential to decrease accordingly.
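As an aside on how figures like the NLST’s are derived, the sketch below works through the arithmetic of relative and absolute risk reduction. The counts are hypothetical, chosen only so that the relative risk reduction mirrors the roughly 20% figure cited above; they are not the trial’s actual data.

```python
# Hypothetical counts, chosen only to illustrate how a relative risk
# reduction like the NLST's ~20% figure is computed; not the trial's data.
ldct_deaths, ldct_n = 320, 25_000   # deaths among those screened by LDCT
xray_deaths, xray_n = 400, 25_000   # deaths among those screened by chest X-ray

risk_ldct = ldct_deaths / ldct_n
risk_xray = xray_deaths / xray_n

rrr = 1 - risk_ldct / risk_xray     # relative risk reduction
arr = risk_xray - risk_ldct         # absolute risk reduction
nns = 1 / arr                       # number needed to screen to avert one death

print(f"Relative risk reduction: {rrr:.1%}")   # 20.0%
print(f"Absolute risk reduction: {arr:.3%}")   # 0.320%
print(f"Number needed to screen: {nns:.0f}")   # ~313
```

A fixed relative risk reduction can correspond to very different absolute benefits, which is one reason large, well-designed screening studies must report both.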
LAB of the Future – A Tale of Two Clinics
The following section, in narrative form, will explore the characteristics of a patient visit of the future if current trends in biomarker research are driven out to their full extension, and if a series of additional research questions are pursued. The story of two patients – one calling in to a CF clinic and the other attending a cancer check-up – will be told from the vantage point of the year 2030.
CF Care of the Future On a bright autumn morning a CF patient is conferencing with his CF care team in the car on the way to work. The check-up starts with a review of the patient’s electronic health record.
In the year 2025, the Cystic Fibrosis Foundation, in concert with an international body of 25 countries (including the United States), agreed on a standardized panel of data points to be collected in the CF registry. This includes the results of all of the aforementioned tests to assess pancreatic function, liver function, and immune system markers; PFT data – the FEV1, FVC, FEF, and the full graph of air flow vs. time from the test; visual data from lung MRI scans; and the entire history of patient-doctor verbal exchange (and written doctor’s notes) from past appointments – for prior virtual appointments, the words of patient and doctor are preserved on tape and kept on file. Of course, all of this data is stored only with patient consent – and the patient can opt out of the tracking of certain data points at any time. This check-up was scheduled a week prior, after the patient completed his weekly lung function test. The test was done on a mobile PFT machine that the patient plugged into his computer – upon plugin, the required software automatically loaded and carefully directed the patient through the procedure. The results were automatically sent upon completion to the patient’s pulmonologist for review, in addition to being uploaded to the patient’s electronic health record.
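One way such a standardized registry entry might be represented in software is sketched below. The field names and types are purely illustrative assumptions; they are not the actual schema of the CF registry described in this scenario.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative sketch of a standardized CF registry entry; field names
# are hypothetical, not the registry's real schema.
@dataclass
class PFTResult:
    date: str
    fev1_pct_predicted: float           # ppFEV1
    fvc_liters: float
    fef25_75: float
    flow_curve: List[float]             # sampled air flow vs. time

@dataclass
class RegistryRecord:
    patient_id: str
    consented_fields: List[str]         # patient can opt out per data point
    pft_history: List[PFTResult] = field(default_factory=list)
    pancreatic_markers: Optional[dict] = None   # e.g., fecal elastase-1
    liver_panel: Optional[dict] = None
    visit_transcripts: List[str] = field(default_factory=list)

record = RegistryRecord(patient_id="CF-0001", consented_fields=["pft", "liver_panel"])
record.pft_history.append(PFTResult("2030-09-14", 85.0, 4.1, 3.2, [0.0, 2.4, 3.9]))
```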
Figure 7: A surgeon performing a bronchoscopy procedure. The bronchoscope is a flexible tube with an attached light that can be used to visualize and also collect samples from the bronchoalveolar space within the lung. Source: Wikimedia Commons
The patient’s ppFEV1 remains at a baseline of 85%, roughly the same as it had been for the past five years. Owing to the continued use and improvement of CF modulator therapy (with the development of more effective, next-generation modulators), the years of steady lung function decline, punctuated by rapid drops, are increasingly becoming a thing of the past. Yet the pattern found in the graph of air flow vs. time indicated that something was off. A recent worldwide analysis of graphical PFT data (air flow vs. time), now housed for every patient in the CF registry across 25 countries, had demonstrated that particular graphical PFT patterns were associated with infection by certain bacteria. This sort of study is conducted by employing sophisticated image-recognition machine learning software (the use of convolutional neural networks is a
common approach) (Sharma et al., 2018). The graphical PFT data of 10,000 patients who had cultured P. aeruginosa, 8,000 who had cultured S. aureus, and 2,000 who had cultured B. cepacia were provided to train the algorithm. Once it learned to distinguish the graphical data of these three patient sub-populations, it could be applied to patient graphs of unknown bacterial colonization status and predict colonization status solely from the shape of the graph. The larger the size of the training set (in this case, with a combined 20,000 patients, it is very large), the more powerful these sorts of algorithms tend to be at making predictions (Abroshan et al., 2020).
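A minimal sketch of such a network is given below in PyTorch, assuming the PFT curves have been rasterized into small grayscale images. The architecture, image size, and class labels are illustrative choices, not those of any actual registry study.

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN that classifies rasterized flow-vs.-time PFT
# graphs into three colonization classes; the architecture is illustrative.
class PFTClassifier(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),   # P. aeruginosa / S. aureus / B. cepacia
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = PFTClassifier()
batch = torch.randn(8, 1, 64, 64)         # 8 grayscale 64x64 PFT graph images
probs = torch.softmax(model(batch), dim=1)
print(probs.shape)                        # torch.Size([8, 3])
```

In practice, a model like this would be trained with a standard cross-entropy loss over the labeled registry graphs before being applied to patients of unknown colonization status.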
Analysis of this patient’s own PFT graph suggested that he may have been infected by the dangerous gram-negative bacterium B. cepacia – the algorithm flagged this with a confidence of 60%. Typically, B. cepacia causes dangerous, rapid drops in lung function and (from the perspective of the current day) drastically diminishes patient life expectancy (Lord et al., 2020). But it seemed that this infection had been caught early, such that it had not colonized the lungs to a significant degree. To confirm their suspicion, the doctors also turned to a number of other diagnostic tests – ultrasound of the lungs, which the patient completes on a monthly basis using a portable device, and tests for a series of inflammatory markers that signal B. cepacia infection – IgG antibodies to the bacteria, as well as molecules produced by the bacteria itself (Mott et al., 2010). The ultrasound results were analyzed by the pulmonologist, and the antibody results by an immunologist – both are present on the call, although one is working in an office in Boston while the other is working in San Francisco. At this moment in time, the patient’s microbiome is well characterized – and the diversity and molecular production of the CF microbiome in general is much better understood than it was in the year 2020. Early in the past decade (2020-2030), the practice of 16S sequencing was approved for the clinical setting, such that patient sputum could be screened for the DNA of various bacterial species at the clinic. In addition, in 2028 the FDA approved a portable “breath biopsy” device designed by Professor Jane Hill at the Thayer School of Engineering. By the year 2020, it had proven capable of differentiating influenza A from influenza B (it was initially used as a diagnostic tool during the COVID-19 pandemic) (“Making the ‘Breath Biopsy’ a Reality,” 2020). By 2025, it had proven capable of detecting the CF
pathogens P. aeruginosa and S. aureus. But it is not yet capable of detecting B. cepacia, so the doctors ordered 16S sequencing of the patient’s sputum for this pathogen – three days before the appointment, the patient stopped into the clinic so that the sputum could be collected (and the B. cepacia antibody tests could be done). By the appointment, the results were in, indicating that he had indeed cultured the pathogen, although to a very small degree. Considering these results, the pulmonologist wants to prescribe a series of antibiotics to treat the bacteria. The 16S sequencing demonstrated that the patient is colonized with both B. cepacia and P. aeruginosa at once. Recent Hd-DoE studies, conducted with the support of the company Trailhead Biosystems, have revealed that a certain antibiotic cocktail is highly effective at treating the two bacteria in co-culture (“Trailhead Biosystems,” n.d.). This sort of experiment involves a number of 100-well plates, seeded with human bronchial epithelial cells (stem-cell derived and more or less identical). The cells in each 100-well plate were inoculated with a different CF-related pathogenic bacterium, or several in combination. Each row of the (10 x 10) plate inoculated with B. cepacia and P. aeruginosa was treated with a different drug cocktail – and the three-drug cocktail of tobramycin, aztreonam, and ciprofloxacin was demonstrated to be the most effective at killing the pathogens. The tele-conference wraps up with the pulmonologist prescribing the antibiotic treatment to the patient, and the patient signing off. The script is immediately sent to the pharmacy nearest the patient’s office, and he is able to pick up the drugs before work.
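The analysis step of such a plate screen reduces to finding the cocktail with the lowest surviving fraction of bacteria. The sketch below illustrates this with invented readings and cocktail names; it is not Trailhead Biosystems’ actual pipeline, and the random “data” are seeded only so the example is reproducible.

```python
import numpy as np

# Hypothetical analysis of a 10x10 co-culture plate: each row is one drug
# cocktail, each column a replicate well, and each value is the fraction
# of bacteria surviving treatment. All names and readings are invented.
cocktails = [
    "tobramycin", "aztreonam", "ciprofloxacin",
    "tobramycin+aztreonam", "tobramycin+ciprofloxacin",
    "aztreonam+ciprofloxacin", "tobramycin+aztreonam+ciprofloxacin",
    "colistin", "meropenem", "vehicle control",
]
rng = np.random.default_rng(0)
viability = rng.uniform(0.2, 1.0, size=(10, 10))
viability[6] = rng.uniform(0.02, 0.10, size=10)  # triple cocktail rows lowest

mean_viability = viability.mean(axis=1)          # average across replicates
best = int(mean_viability.argmin())
print(f"Most effective cocktail: {cocktails[best]} "
      f"(mean surviving fraction {mean_viability[best]:.2f})")
```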
Cancer Care of the Future On a bright summer day in 2030, a young lawyer is in the waiting room at Dartmouth Hitchcock Medical Center. She was diagnosed with NSCLC just two years ago, which was shocking news for a twenty-eight-year-old woman at the beginning of her career. Today would be the day that she would be declared in remission, having beaten her cancer thanks to a paradigm shift in research towards prevention. The Human Cancer Biomarker Project, which was initiated in 2025, rapidly gained attention from researchers all over the world, who wanted to be a part of the collaborative project that would change the future of cancer care. And change the future of cancer care it did. The EHR system was reformed in the United States, such that it became a single, standardized software used across the country. The new EHR included a plethora of new features, including thousands of indicators of disease, so once lab results were uploaded, the EHR could immediately report back any issues that arose in a test, along with which diseases a patient might face in the near or far future. This gave preventative care an entirely new meaning, and diseases were being caught at earlier stages than ever before; by 2028, a complete panel of biomarkers could detect lung cancer before symptoms even appeared, with 95% accuracy. The patient’s cancer was detected before she even experienced symptoms, such as coughing and shortness of breath. Her annual blood test was screened for the typical panel of the most common blood biomarkers, testing for levels of MDSCs, circulating tumor DNA, and various other cancer indicators that have been verified by groups around the world. After the patient’s blood tests came back and were analyzed by the EHR, her physician recommended that she do an exhaled breath condensate test, which would analyze her breath to see if there were any molecular indicators of non-small cell lung cancer. This test, pioneered by undergraduates at Dartmouth’s Thayer School of Engineering, and marketed by students at the Tuck School of Business, was first used at Dartmouth Hitchcock Medical Center. By 2030, the students’ startup company had supplied hundreds of hospitals across the country, accelerating and improving diagnosis of lung diseases significantly. Not only was diagnosis happening earlier than ever thought possible, prognosis using the EHR and tests like the exhaled breath condensate was more precise than ever before. Patients were responding to immunotherapy at higher rates than ever, since the cancer was caught at an easily treatable, non-malignant stage, a time at which immunotherapy treatment is optimal anyway. The patient had a family history of lung cancer. Thirty years prior, she lost her grandmother to the disease, and in 2020, she lost her father to it as well. Though groundbreaking research in treatment options such as immunotherapy was done between 2010 and 2020, the success seen in the lab did not always translate to the clinic, and outcomes in those ten years were not significantly different. Because she was at higher risk and her physician
knew of her background, additional biomarkers could be run on her blood test, so she could be treated accordingly. Thankfully, since the patient’s cancer was caught so early, she was able to undergo low-dose oral immunotherapy, allowing her to continue living her life and thriving in her career, with few side effects and little reduction in quality of life. This section highlights the major impact that LAB—Lung Analysis of Biomarkers—would have on the future of disease prevention and care. Looking forward, even past 2030, LAB is a structured model that can be adapted and optimized for other conditions and diseases, positively changing the future of medical care across the world.
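The flag-on-upload behavior attributed to the reformed EHR can be pictured as a simple rule over reference ranges. The sketch below assumes hypothetical marker names and thresholds – none are drawn from a real EHR product or clinical guideline.

```python
# Sketch of the kind of rule the narrative's reformed EHR might apply when
# lab results are uploaded. Marker names and thresholds are hypothetical.
REFERENCE_RANGES = {
    "ctDNA_copies_per_ml": (0.0, 5.0),
    "MDSC_pct_of_PBMC": (0.0, 4.0),
    "midkine_ng_per_ml": (0.0, 0.6),
}

def flag_results(results: dict) -> list:
    """Return (marker, value, range) tuples for values outside the range."""
    flags = []
    for marker, value in results.items():
        low, high = REFERENCE_RANGES.get(marker, (float("-inf"), float("inf")))
        if not (low <= value <= high):
            flags.append((marker, value, (low, high)))
    return flags

uploaded = {"ctDNA_copies_per_ml": 12.3, "MDSC_pct_of_PBMC": 3.1}
for marker, value, ref in flag_results(uploaded):
    print(f"FLAG: {marker} = {value} outside reference range {ref}")
```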
Conclusion These future scenarios suggest significant improvements in the realm of biomarker detection – with real benefits to patient health and well-being. However, certain obstacles must be overcome in order for this future to become a reality. Not least of these is further research and development of the devices that have been mentioned. MRI scanning of the CF lung, for example, has not yet produced images at the level of visual acuity shown by X-ray images (Kolodziej et al., 2017). But in addition to the research that must be done, there are also ethical and economic challenges that need to be tackled. First and foremost is the problem of sharing patient data. In order for data to be added to the CF registry, or the National Cancer Database, patient consent is required – and permission is required for the usage of this data as well. Although each individual patient doesn’t need to be contacted, the researchers attempting to access the data must go through a rigorous request process. The same considerations will apply to the general population – the hope being that with the proliferation of wearable devices (from the already widespread Apple Watch to devices in development, like the Clairways™ lung acoustic sensor), it will be possible to gain a more sophisticated picture of the health of the whole population. With this knowledge, the medical system may shift from a palliative to a preventive posture. But in order for sensitive medical data to be collected and analyzed, patients need to be informed about each type of data they provide, and their data must be thoroughly blinded in the eyes of
researchers.
Another hurdle to address is the economic one – creating a reality in which patients track their health with a multitude of mobile and/or wearable devices is an expensive undertaking. In terms of producing wearable devices that are cost-effective, the Clairways™ acoustic sensor is a useful model. A low-power device that is wearable long-term and does not require any sort of surgical implantation or attachment – this is something that may be distributed to patients in rich and poor countries alike. And the device transmits the data it collects back to the clinic automatically, so that doctors can track a patient’s lung health and researchers can use the data for clinical trials (“Clairways,” n.d.). To accompany these devices, virtual transmission and storage of lung health data need to be enabled, and the capability for remote video conferencing between patient and doctor (as well as between doctors and specialists) ought to be provided too. The aforementioned virtual check-ups won’t simply materialize either – a strong telehealth infrastructure must be built to support them. Accomplishing this goal will require collaboration between companies (healthcare and telecommunications corporations, among others) as well as federal and state governments, which must work to strengthen (or relax) certain regulations in order to accommodate this telehealth system of the future. A final consideration, perhaps the most important of all, is the burden of care. In this future scenario, the patient’s physiological function is very well understood – and measures can be taken to treat bacterial infection and other disease comorbidities on a preventative basis. This is sure to save a lot of money (and enhance patient quality of life) over time. However, the patient’s healthcare burden is still high – they now must remember to complete a series of tests on a daily or weekly basis, and may spend at least as much time dealing with the disease as they did before, even if they are not making frequent trips to the clinic. Looking further into the future, the goal ought to be to design a device that will collect a significant amount of physiological data – say the existence of certain bacterial species, the expression of host genes, the force generated by contraction of the diaphragm, etc. – automatically. Such a device would necessarily be implantable. It would require surgery to implant, and it would raise the same problems of immune rejection and shifting tissue structure that plague existing implantable devices. This is
not to mention the research that must be done to equip the device with the technological capability to measure host gene expression and the presence of bacteria, among other things. Yet a multi-faceted tool such as this would provide an extremely sophisticated level of understanding of physiological function. This is a future worth striving for. References Abroshan, M., Alaa, A. M., Rayner, O., & van der Schaar, M. (2020). Opportunities for machine learning to transform care for people with cystic fibrosis. Journal of Cystic Fibrosis, 19(1), 6–8. https://doi.org/10.1016/j.jcf.2020.01.002 Adler-Milstein, J., Zhao, W., Willard-Grace, R., Knox, M., & Grumbach, K. (2020). Electronic health records and burnout: Time spent on the electronic health record after hours and message volume associated with exhaustion but not with cynicism among primary care clinicians. Journal of the American Medical Informatics Association, 27(4), 531–538. https://doi.org/10.1093/jamia/ocz220 Ahern, S., Dean, J., Liman, J., Ruseckaite, R., Burke, N., Gollan, M., Keatley, L., King, S., Kotsimbos, T., Middleton, P. G., Schultz, A., Wainwright, C., Wark, P., & Bell, S. (2020). Redesign of the Australian Cystic Fibrosis Data Registry: A multidisciplinary collaboration. Paediatric Respiratory Reviews. https://doi.org/10.1016/j.prrv.2020.03.001 Ahlgren, H. G., Benedetti, A., Landry, J. S., Bernier, J., Matouk, E., Radzioch, D., Lands, L. C., Rousseau, S., & Nguyen, D. (2015). Clinical outcomes associated with Staphylococcus aureus and Pseudomonas aeruginosa airway infections in adult cystic fibrosis patients. BMC Pulmonary Medicine, 15. https://doi.org/10.1186/s12890-015-0062-7 Al Sinani, S., Al-Mulaabed, S., Al Naamani, K., & Sultan, R. (2019). Cystic Fibrosis Liver Disease: Know More. Oman Medical Journal, 34(6), 482–489. https://doi.org/10.5001/omj.2019.90 Bercovich, E., & Javitt, M. C. (2018). Medical Imaging: From Roentgen to the Digital Revolution, and Beyond. Rambam Maimonides Medical Journal, 9(4). https://doi.org/10.5041/RMMJ.10355 Bermejo-Pareja, F., Antequera, D., Vargas, T., Molina, J. A., & Carro, E. (2010). Saliva levels of Abeta1-42 as potential biomarker of Alzheimer’s disease: A pilot study. BMC Neurology, 10(1), 1–7. https://doi.org/10.1186/1471-2377-10-108 Blumenthal, D., & Tavenner, M. (2010). The “Meaningful Use” Regulation for Electronic Health Records. New England Journal of Medicine, 363(6), 501–504. https://doi.org/10.1056/NEJMp1006114 Bonne, N. J., & Wong, D. T. (2012). Salivary biomarker development using genomic, proteomic and metabolomic approaches. Genome Medicine, 4(10), 1–12. https://doi.org/10.1186/gm383 Bronchoalveolar Lavage—An overview | ScienceDirect Topics. (n.d.). Retrieved May 17, 2020, from https://www.sciencedirect.com/topics/neuroscience/bronchoalveolar-lavage
Bronchoscopy | National Heart, Lung, and Blood Institute (NHLBI). (n.d.). Retrieved May 17, 2020, from https://www. nhlbi.nih.gov/health-topics/bronchoscopy Bruce Stanton Receives CFF Unsung Hero Award – DartCF. (n.d.). Retrieved May 17, 2020, from https://sites.dartmouth. edu/dartcf/2019/02/25/bruce-stanton-receives-cff-unsunghero-award/ Butterfly iQ - Ultrasound, ultra-simplified. (n.d.). Retrieved May 13, 2020, from https://www.butterflynetwork.com/ Caley, L., Smith, L., White, H., & Peckham, D. G. (2020). Average rate of lung function decline in adults with cystic fibrosis in the United Kingdom: Data from the UK CF registry. Journal of Cystic Fibrosis. https://doi.org/10.1016/j.jcf.2020.04.008 Carabotti, M., Scirocco, A., Maselli, M. A., & Severi, C. (2015). The gut-brain axis: Interactions between enteric microbiota, central and enteric nervous systems. Annals of Gastroenterology : Quarterly Publication of the Hellenic Society of Gastroenterology, 28(2), 203–209. Castellani, C., Duff, A. J. A., Bell, S. C., Heijerman, H. G. M., Munck, A., Ratjen, F., Sermet-Gaudelus, I., Southern, K. W., Barben, J., Flume, P. A., Hodková, P., Kashirskaya, N., Kirszenbaum, M. N., Madge, S., Oxley, H., Plant, B., Schwarzenberg, S. J., Smyth, A. R., Taccetti, G., … Drevinek, P. (2018). ECFS best practice guidelines: The 2018 revision. Journal of Cystic Fibrosis, 17(2), 153–178. https://doi. org/10.1016/j.jcf.2018.02.006 Caverly, L. J., & LiPuma, J. J. (2018). Good cop, bad cop: Anaerobes in cystic fibrosis airways. European Respiratory Journal, 52(1), 1801146. https://doi. org/10.1183/13993003.01146-2018 Clairways | Wearable Lung Function | Asthma, COPD, CF | United States. (n.d.). Clairways Staging V4. Retrieved May 13, 2020, from https://www.clairways.com Colquhoun, D. A., Shanks, A. M., Kapeles, S. R., Shah, N., Saager, L., Vaughn, M. T., Buehler, K., Burns, M. L., Tremper, K. K., Freundlich, R. E., Aziz, M., Kheterpal, S., & Mathis, M. R. (2020). Considerations for Integration of Perioperative Electronic Health Records Across Institutions for Research and Quality Improvement: The Approach Taken by the Multicenter Perioperative Outcomes Group. Anesthesia & Analgesia, 130(5), 1133–1146. https://doi.org/10.1213/ ANE.0000000000004489 Creatine Kinase (CK) | Lab Tests Online. (n.d.). Retrieved May 18, 2020, from https://labtestsonline.org/tests/creatinekinase-ck Dayama, G., Priya, S., Niccum, D. E., Khoruts, A., & Blekhman, R. (2020). Interactions between the gut microbiome and host gene regulation in cystic fibrosis. Genome Medicine, 12(1), 12. https://doi.org/10.1186/s13073-020-0710-2 DEXA scan: Purpose, procedure, and results. (n.d.). Retrieved May 17, 2020, from https://www.medicalnewstoday.com/ articles/324553 First use of insulin in treatment of diabetes on this day in 1922. (n.d.). Diabetes UK. Retrieved May 19, 2020, from https://www.diabetes.org.uk/about_us/news_landing_page/ first-use-of-insulin-in-treatment-of-diabetes-88-years-agotoday Fisman, D. (2020). Cystic fibrosis heterozygosity: Carrier state or haploinsufficiency? Proceedings of the National Academy
of Sciences, 117(6), 2740–2742. https://doi.org/10.1073/ pnas.1921730117 Fleischer, S., Kraus, M. S., Gatidis, S., Baden, W., Hector, A., Hartl, D., Tsiflikas, I., & Schaefer, J. F. (2019). New severity assessment in cystic fibrosis: Signal intensity and lung volume compared to LCI and FEV1: preliminary results. European Radiology. https://doi.org/10.1007/s00330-019-06462-8 Gerald T. O’Connor, PhD;ScD – Faculty Expertise Database – Geisel School of Medicine at Dartmouth. (n.d.). Retrieved May 17, 2020, from https://geiselmed.dartmouth.edu/faculty/ facultydb/view.php/?uid=218 Gifford, A. H., & Heltshe, S. L. (2019). Does Ivacaftor Taken Twice a Day Keep the Pseudomonas Away? Annals of the American Thoracic Society, 16(11), 1366–1367. https://doi. org/10.1513/AnnalsATS.201908-596ED Gillum, R. F. (2013). From Papyrus to the Electronic Tablet: A Brief History of the Clinical Medical Record with Lessons for the Digital Age. The American Journal of Medicine, 126(10), 853–857. https://doi.org/10.1016/j.amjmed.2013.03.024 Granados, A., Chan, C. L., Ode, K. L., Moheet, A., Moran, A., & Holl, R. (2019). Cystic fibrosis related diabetes: Pathophysiology, screening and diagnosis. Journal of Cystic Fibrosis, 18, S3–S9. https://doi.org/10.1016/j.jcf.2019.08.016 Han, W. K., Waikar, S. S., Johnson, A., Betensky, R. A., Dent, C. L., Devarajan, P., & Bonventre, J. V. (2008). Urinary biomarkers in the early diagnosis of acute kidney injury. Kidney International, 73(7), 863–869. https://doi.org/10.1038/ sj.ki.5002715 Hatziagorou, E., Orenti, A., Drevinek, P., Kashirskaya, N., Mei-Zahav, M., De Boeck, K., ECFSPR. Electronic address: ECFS-Patient.Registry@uz.kuleuven.ac.be, & ECFSPR. (2019). Changing epidemiology of the respiratory bacteriology of patients with cystic fibrosis-data from the European cystic fibrosis society patient registry. Journal of Cystic Fibrosis: Official Journal of the European Cystic Fibrosis Society. https://doi.org/10.1016/j.jcf.2019.08.006 Hayden, H. S., Eng, A., Pope, C. E., Brittnacher, M. J., Vo, A. T., Weiss, E. J., Hager, K. R., Martin, B. D., Leung, D. H., Heltshe, S. L., Borenstein, E., Miller, S. I., & Hoffman, L. R. (2020). Fecal dysbiosis in infants with cystic fibrosis is associated with early linear growth failure. Nature Medicine, 26(2), 215–221. https://doi.org/10.1038/s41591-019-0714-x Heather, J. M., & Chain, B. (2016). The sequence of sequencers: The history of sequencing DNA. Genomics, 107(1), 1–8. https://doi.org/10.1016/j.ygeno.2015.11.003 Heidt, S., San Segundo, D., Shankar, S., Mittal, S., Muthusamy, A. S. R., Friend, P. J., Fuggle, S. V., & Wood, K. J. (2011). Peripheral Blood Sampling for the Detection of Allograft Rejection: Biomarker Identification and Validation. Transplantation, 92(1), 1–9. https://doi.org/10.1097/TP.0b013e318218e978 Hong, G., Miller, H. B., Allgood, S., Lee, R., Lechtzin, N., & Zhang, S. X. (2017). Use of Selective Fungal Culture Media Increases Rates of Detection of Fungi in the Respiratory Tract of Cystic Fibrosis Patients. Journal of Clinical Microbiology, 55(4), 1122–1130. https://doi.org/10.1128/JCM.02182-16 Horck, M. van, Alonso, A., Wesseling, G., Groot, K. de W., Aalderen, W. van, Hendriks, H., Winkens, B., Rijkers, G., Jöbsis, Q., & Dompeling, E. (2016). Biomarkers in Exhaled Breath Condensate Are Not Predictive for Pulmonary Exacerbations in Children with Cystic Fibrosis: Results of a One-Year
Observational Study. PLOS ONE, 11(4), e0152156. https://doi.org/10.1371/journal.pone.0152156 Institute of Medicine (US) Committee to Review Dietary Reference Intakes for Vitamin D and Calcium. (2011). Dietary Reference Intakes for Calcium and Vitamin D (A. C. Ross, C. L. Taylor, A. L. Yaktine, & H. B. Del Valle, Eds.). National Academies Press (US). http://www.ncbi.nlm.nih.gov/books/NBK56070/ Jaffe, A. (2002). Are annual blood tests in preschool cystic fibrosis patients worthwhile? Archives of Disease in Childhood, 87(6), 518–520. https://doi.org/10.1136/adc.87.6.518 Jairam, V., & Park, H. S. (2019). Strengths and limitations of large databases in lung cancer radiation oncology research. Translational Lung Cancer Research, 0(0), S172–S183. Kołodziej, M., de Veer, M. J., Cholewa, M., Egan, G. F., & Thompson, B. R. (2017). Lung function imaging methods in Cystic Fibrosis pulmonary disease. Respiratory Research, 18. https://doi.org/10.1186/s12931-017-0578-x Lagier, J.-C., Edouard, S., Pagnier, I., Mediannikov, O., Drancourt, M., & Raoult, D. (2015). Current and Past Strategies for Bacterial Culture in Clinical Microbiology. Clinical Microbiology Reviews, 28(1), 208–236. https://doi.org/10.1128/CMR.00110-14 AliveCor. (n.d.). Retrieved May 19, 2020, from https://www.alivecor.com/ Lord, R., Jones, A. M., & Horsley, A. (2020). Antibiotic treatment for Burkholderia cepacia complex in people with cystic fibrosis experiencing a pulmonary exacerbation. Cochrane Database of Systematic Reviews. https://doi.org/10.1002/14651858.CD009529.pub4 Loukou, I., Moustaki, M., Plyta, M., & Douros, K. (2019). Longitudinal changes in lung function following initiation of lumacaftor/ivacaftor combination. Journal of Cystic Fibrosis: Official Journal of the European Cystic Fibrosis Society. https://doi.org/10.1016/j.jcf.2019.09.009 Making the “Breath Biopsy” a Reality | Thayer School of Engineering at Dartmouth. (n.d.). Retrieved May 13, 2020, from https://engineering.dartmouth.edu/news/making-the-breath-biopsy-a-reality Martens, A., Wistuba-Hamprecht, K., Foppen, M. G., Yuan, J., Postow, M. A., Wong, P., Romano, E., Khammari, A., Dreno, B., Capone, M., Ascierto, P. A., Giacomo, A. M. D., Maio, M., Schilling, B., Sucker, A., Schadendorf, D., Hassel, J. C., Eigentler, T. K., Martus, P., … Weide, B. (2016). Baseline Peripheral Blood Biomarkers Associated with Clinical Outcome of Advanced Melanoma Patients Treated with Ipilimumab. Clinical Cancer Research, 22(12), 2908–2918. https://doi.org/10.1158/1078-0432.CCR-15-2412 Menachemi, N., & Collum, T. H. (2011). Benefits and drawbacks of electronic health record systems. Risk Management and Healthcare Policy, 4, 47–55. https://doi.org/10.2147/RMHP.S12985 Mogi, A., & Kuwano, H. (2011). TP53 Mutations in Nonsmall Cell Lung Cancer. Journal of Biomedicine and Biotechnology, 2011. https://doi.org/10.1155/2011/583929 Mott, T., Soler, M., Grigsby, S., Medley, R., & Whitlock, G. C. (2010). Identification of Potential Diagnostic Markers among Burkholderia cenocepacia and B. multivorans Supernatants. Journal of Clinical Microbiology, 48(11), 4186–4192. https://doi.org/10.1128/JCM.00577-10
Neff, S. (2020). The Evolution of Cystic Fibrosis Therapy—A Triumph of Modern Medicine. Dartmouth Undergraduate Journal of Science, XXI(2), 90–99. NIH Human Microbiome Project—About the Human Microbiome. (n.d.). Retrieved May 13, 2020, from https:// www.hmpdacc.org/overview/ Office-based Physician Electronic Health Record Adoption. (n.d.). Retrieved May 19, 2020, from /quickstats/pages/ physician-ehr-adoption-trends.php Prentice, B. J., Ooi, C. Y., Verge, C. F., Hameed, S., & Widger, J. (2020). Glucose abnormalities detected by continuous glucose monitoring are common in young children with Cystic Fibrosis. Journal of Cystic Fibrosis: Official Journal of the European Cystic Fibrosis Society. https://doi.org/10.1016/j. jcf.2020.02.009 Putman, M. S., Anabtawi, A., Le, T., Tangpricha, V., & SermetGaudelus, I. (2019). Cystic fibrosis bone disease treatment: Current knowledge and future directions. Journal of Cystic Fibrosis, 18, S56–S65. https://doi.org/10.1016/j.jcf.2019.08.017 Radioactive Iodine (Radioiodine) Therapy for Thyroid Cancer. (n.d.). Retrieved May 16, 2020, from https://www.cancer.org/ cancer/thyroid-cancer/treating/radioactive-iodine.html Ramírez-Labrada, A. G., Isla, D., Artal, A., Arias, M., Rezusta, A., Pardo, J., & Gálvez, E. M. (2020). The Influence of Lung Microbiota on Lung Carcinogenesis, Immunity, and Immunotherapy. Trends in Cancer, 6(2), 86–97. https://doi. org/10.1016/j.trecan.2019.12.007 Rivera, M. P., Mehta, A. C., & Wahidi, M. M. (2013). Establishing the Diagnosis of Lung Cancer: Diagnosis and Management of Lung Cancer, 3rd ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest, 143(5, Supplement), e142S-e165S. https://doi.org/10.1378/ chest.12-2353 Rolfo, C., Castiglia, M., Hong, D., Alessandro, R., Mertens, I., Baggerman, G., Zwaenepoel, K., Gil-Bazo, I., Passiglia, F., Carreca, A. P., Taverna, S., Vento, R., Peeters, M., Russo, A., & Pauwels, P. (2014). Liquid biopsies in lung cancer: The new ambrosia of researchers. Biochimica et Biophysica Acta (BBA) - Reviews on Cancer, 1846(2), 539–546. https://doi. org/10.1016/j.bbcan.2014.10.001 Schechter, M. S., Fink, A. K., Homa, K., & Goss, C. H. (2014). The Cystic Fibrosis Foundation Patient Registry as a tool for use in quality improvement. BMJ Quality & Safety, 23(Suppl 1), i9–i14. https://doi.org/10.1136/bmjqs-2013-002378 Scott, P., Anderson, K., Singhania, M., & Cormier, R. (2020). Cystic Fibrosis, CFTR, and Colorectal Cancer. International Journal of Molecular Sciences, 21(8), 2891. https://doi. org/10.3390/ijms21082891 Shah, P., Kay, D., Benninger, L., Lascano, J., & Kirst, M. (2019). ANALYSIS OF EXHALED BREATH CONDENSATE AT BASELINE AND DURING EXACERBATIONS IN SUBJECTS WITH CYSTIC FIBROSIS. CHEST, 156(4), A1640. https://doi.org/10.1016/j. chest.2019.08.1439 Sharma, N., Jain, V., & Mishra, A. (2018). An Analysis Of Convolutional Neural Networks For Image Classification. Procedia Computer Science, 132, 377–384. https://doi. org/10.1016/j.procs.2018.05.198 Staff, R. T. (2019, July 5). Products 2019: Spirometers & PFT | RT. RT: For Decision Makers in Respiratory Care. https://www.
rtmagazine.com/products-treatment/diagnostics-testing/ diagnostics/products-2019-spirometers-pft/ Standards for the Clinical Care of Children and Adults with cystic fibrosis in the UK. (2011). Cystic Fibrosis Trust. Strimbu, K., & Tavel, J. A. (2010). What are Biomarkers? Current Opinion in HIV and AIDS, 5(6), 463–466. https://doi. org/10.1097/COH.0b013e32833ed177 Szczesniak, R. D., Su, W., Brokamp, C., Keogh, R. H., Pestian, J. P., Seid, M., Diggle, P. J., & Clancy, J. P. (2020). Dynamic predictive probabilities to monitor rapid cystic fibrosis disease progression. Statistics in Medicine, 39(6), 740–756. https://doi. org/10.1002/sim.8443 Trailhead Biosystems. (n.d.). Trailhead Biosystems. Retrieved May 13, 2020, from https://www.trailbiosystems.com Umar, A., & Atabo, S. (2019). A review of imaging techniques in scientific research/clinical diagnosis. MOJ Anatomy & Physiology, 6(5), 175–183. https://doi.org/10.15406/ mojap.2019.06.00269 United States Cystic Fibrosis Patient Registry—PortCF. (n.d.). Retrieved May 17, 2020, from http://www.cysticfibrosisdata. org/data-registry/united-states-cystic-fibrosis-patientregistry-portcf Vargas, A. J., & Harris, C. C. (2016). Biomarker development in the precision medicine era: Lung cancer as a case study. Nature Reviews. Cancer, 16(8), 525–537. https://doi. org/10.1038/nrc.2016.56 Villalobos, P., & Wistuba, I. I. (2017). Lung Cancer Biomarkers. Hematology/Oncology Clinics of North America, 31(1), 13–29. https://doi.org/10.1016/j.hoc.2016.08.006 Viral Culture—Clinical Virology—Stanford University School of Medicine. (n.d.). Retrieved May 13, 2020, from http:// clinicalvirology.stanford.edu/culture.html Wang, W., Wang, S., & Zhang, M. (2017). Identification of urine biomarkers associated with lung adenocarcinoma. Oncotarget, 8(24), 38517–38529. https://doi.org/10.18632/ oncotarget.15870
Wells, Q. S., Gupta, D. K., Smith, J. G., Collins, S. P., Storrow, A. B., Ferguson, J., Smith, M. L., Pulley, J. M., Collier, S., Wang, X., Roden, D. M., Gerszten, R. E., & Wang, T. J. (2019). Accelerating Biomarker Discovery Through Electronic Health Records, Automated Biobanking, and Proteomics. Journal of the American College of Cardiology, 73(17), 2195–2205. https://doi.org/10.1016/j.jacc.2019.01.074 What is an electronic health record (EHR)? | HealthIT.gov. (n.d.). Retrieved May 19, 2020, from https://www.healthit.gov/faq/what-electronic-health-record-ehr What Is Lung Cancer? | Types of Lung Cancer. (n.d.). Retrieved May 19, 2020, from https://www.cancer.org/cancer/lung-cancer/about/what-is.html Willsey, G. G., Eckstrom, K., LaBauve, A. E., Hinkel, L. A., Schutz, K., Meagher, R. J., LiPuma, J. J., & Wargo, M. J. (2019). Stenotrophomonas maltophilia Differential Gene Expression in Synthetic Cystic Fibrosis Sputum Reveals Shared and Cystic Fibrosis Strain-Specific Responses to the Sputum Environment. Journal of Bacteriology, 201(15). https://doi.org/10.1128/JB.00074-19 Woods, J. C., Wild, J. M., Wielpütz, M. O., Clancy, J. P., Hatabu, H., Kauczor, H.-U., van Beek, E. J. R., & Altes, T. A. (2019). Current state of the art MRI for the longitudinal assessment of cystic fibrosis. Journal of Magnetic Resonance Imaging: JMRI. https://doi.org/10.1002/jmri.27030
Xia, X., Lu, J.-J., Zhang, S.-S., Su, C.-H., & Luo, H.-H. (2016). Midkine is a serum and urinary biomarker for the detection and prognosis of non-small cell lung cancer. Oncotarget, 7(52), 87462–87472. https://doi.org/10.18632/oncotarget.13865 Xing, F., & Yang, L. (2016). Chapter 4—Machine learning and its application in microscopic image analysis. In G. Wu, D. Shen, & M. R. Sabuncu (Eds.), Machine Learning and Medical Imaging (pp. 97–127). Academic Press. https://doi.org/10.1016/B978-0-12-804076-8.00004-9 Yang, C.-F. J., Hartwig, M., D’Amico, T. A., & Berry, M. F. (2016). Large Clinical Databases for the Study of Lung Cancer: Making Up for the Failure of Randomized Trials. The Journal of Thoracic and Cardiovascular Surgery, 151(3), 626–628. https://doi.org/10.1016/j.jtcvs.2015.08.110 Yankaskas, J. R., Marshall, B. C., Sufian, B., Simon, R. H., & Rodman, D. (2004). Cystic Fibrosis Adult Care. Chest, 125(1), 1S-39S. https://doi.org/10.1378/chest.125.1_suppl.1S Zhao, J., Schloss, P. D., Kalikin, L. M., Carmody, L. A., Foster, B. K., Petrosino, J. F., Cavalcoli, J. D., VanDevanter, D. R., Murray, S., Li, J. Z., Young, V. B., & LiPuma, J. J. (2012). Decade-long bacterial community dynamics in cystic fibrosis airways. Proceedings of the National Academy of Sciences of the United States of America, 109(15), 5809–5814. https://doi.org/10.1073/pnas.1120577109 Zvereff, V. V., Faruki, H., Edwards, M., & Friedman, K. J. (2014). Cystic fibrosis carrier screening in a North American population. Genetics in Medicine: Official Journal of the American College of Medical Genetics, 16(7), 539–546. https://doi.org/10.1038/gim.2013.188
A Homing Missile for Cancer: Antibody Drug Conjugates as a New Targeted Therapy STAFF WRITERS: DINA RABADI, ALLAN RUBIO, LOVE TSAI BOARD WRITERS: ANNA BRINKS, NISHI JAIN Cover: A microscopic view of cancerous cells. Source: Wikimedia Commons
Diversity of Cancers Human history tells a story of violence, desperate survival, and immense growth. Since our emergence approximately 300,000 years ago, Homo sapiens have embarked on countless wars: against other species of Homo, against extreme elements during our migration across the far reaches of the globe, and against each other as nations grew and struggled for power. In 1971, President Nixon declared war on a new enemy, one brimming with violence and embedded in our very DNA: cancer. Despite the optimism fostered by an era that put a man on the moon, the war on cancer would prove to be a battle not easily won. Cancer today continues to have a monumental impact on human life: one in three women and one in two men will develop cancer during their lifetime. The death toll is also significant: one quarter of American deaths and 15 percent of all deaths worldwide are attributable to cancer (Mukherjee, 2010).
Cancer arises from mutations in genes involved in cell growth and replication — proto-oncogenes, tumor suppressor genes, and DNA repair genes. These mutations allow the cell to grow uncontrollably and invade other tissues and sites of the body, a process called metastasis. While the specific mutations in a particular cancer may be incredibly diverse and operate through an array of pathways, most can be linked to the six hallmarks of cancer: self-sufficiency in growth signals, insensitivity to anti-growth signals, evading apoptosis, sustained angiogenesis, tissue invasion and metastasis, and limitless replicative potential (figure 1) (Hanahan and Weinberg, 2000). As oncologist Siddhartha Mukherjee writes in his book The Emperor of All Maladies, “Cancer is an expansionist disease; it invades through tissues, sets up colonies in hostile landscapes, seeking ‘sanctuary’ in one organ and then immigrating to another. It lives desperately, inventively, fiercely, territorially, cannily, and defensively — at times, as if teaching us how
to survive. To confront cancer is to encounter a parallel species, one perhaps more adapted to survival than even we are.” Unfortunately, there are hundreds of types of cancers, each uniquely adapted to survival and constantly evolving into new forms. The heterogeneity and complexity of the tumor microenvironment (TME) make it challenging to overcome, due to its varying cell types, genetics, and epigenetics. There are two types of immune cells present in the TME – “effector” cells and “protector” cells (Tesi, 2018). “Effector” cells are immune cells that respond to the tumor in response to cytokines and chemokines, the inflammatory signals that notify the immune system to attack. “Effector” cells include T cells, macrophages, neutrophils, dendritic cells, and more. However, “effector” cells can become tumor-associated, meaning that they are converted to tumor-protecting cells. These tumor-protecting cells are known as “protector” cells, and they comprise the tumor’s micro-immune system. “Protector” cells are one way in which resistance to anticancer drugs is acquired. Another mechanism by which this resistance is obtained is via gene mutations, which impact cellular factors such as drug uptake, metabolism, and export (Tredan et al., 2007). This is a sort of unintended natural selection imposed by anticancer drugs – the weaker cells are killed, while the cells strengthened through mutations survive. Antibody drug conjugates (ADCs) are a valuable treatment for cancer due to their inherent specificity – they attach a cytotoxin to a monoclonal antibody (an antibody descended from a single parent cell, such that all copies are identical) with a linker and selectively target cancer cells
in which there are overexpressed receptors. When an ADC is in the same TME as these overexpressed receptors, its monoclonal antibody portion latches onto the overexpressed antigen and is internalized by the cancer cell. This internalization occurs either during receptor degradation and recycling or in response to binding a particular ligand (delta-serrate ligands of the Notch pathway, for instance, are inherently internalized as part of their receptor’s mechanism). Upon internalization, these molecules enter the endosomal pathway to either head back to the cell surface for recycling or into the lysosomes for degradation (Beck 2017). Either way, while in the endosomes within the cell, the cytotoxin is released to kill the cell. Existing ADCs include brentuximab vedotin (2011) and trastuzumab emtansine (2013), whose potencies are phenomenal, but whose drug to antibody ratio is four – something that has been increased with ADCs currently in clinical trials. Drug to antibody ratio (or DAR) describes the inherent potency of an ADC and is measured by the number of cytotoxic molecules that can be conjugated to the central antibody (Perez 2014). Optimized ADCs result in high DARs, but with each added payload bonded to the central antibody, the molecule becomes increasingly unstable. This instability can lead to the premature release of the payload into the bloodstream, which can cause unintended side effects for the patient. The effectiveness of ADCs is apparent in published clinical trial data – for instance, a recent report on brentuximab vedotin states a complete remission rate of 65% for lymphoma patients who were administered the maximum tolerated dose of 1.8 mg/kg, and with a slightly smaller dose, 50% of patients were shown to have a similar response. Overall tumor regression was observed in 86% of patients (Younes et al., 2010). The patients enrolled in this particular clinical trial had previously undergone other treatment regimens, compared to which brentuximab vedotin proved far more efficacious.
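Since DAR figures recur throughout this discussion, a brief worked example may help: the average DAR of a conjugate batch is the weighted mean over its species distribution (the fraction of antibodies carrying k payload molecules). The fractions below are hypothetical, not measurements of any marketed ADC.

```python
# Average DAR as the weighted mean of a hypothetical species distribution:
# keys are payloads per antibody (k), values are the fraction of antibodies
# carrying that many payloads. These numbers are invented for illustration.
species_fraction = {0: 0.05, 2: 0.30, 4: 0.45, 6: 0.15, 8: 0.05}

assert abs(sum(species_fraction.values()) - 1.0) < 1e-9  # fractions sum to 1
average_dar = sum(k * f for k, f in species_fraction.items())

print(f"Average DAR: {average_dar:.2f}")  # 0 + 0.6 + 1.8 + 0.9 + 0.4 = 3.70
```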
ADC Action Inside the Cell Cancer is the proliferation of cancerous cells which results in tumors (Iqbal & Iqbal, 2014). What are receptors, and how does their overexpression play into cancer? Receptors are proteins which pick up signals from the body to carry out some sort of response; in cancer, this response may be growth or reproduction. This “signal” is often a corresponding molecule
Figure 1: The hallmarks of cancer, as described by Hanahan and Weinberg. These abilities provide a unifying framework to describe and understand cancers despite their staggering diversity, laying critical groundwork for potential therapeutic developments. Each symbol represents a unique hallmark: Green — sustaining proliferative signals. Brown — insensitivity to anti-growth signals. Pink — evading immune destruction. Blue — limitless replicative potential. Orange — tumor promoting inflammation. Black — tissue invasion and metastasis. Red — sustained angiogenesis. Blue — genome instability and cell mutations. Grey — evading apoptosis. Purple — deregulating cellular energetics. Source: Wikimedia Commons
Figure 2: Antibody drug conjugates bind to a receptor on the outside of the cell; when that receptor is internalized, the cytotoxin is released and serves to kill the tumor cell. Source: Wikimedia Commons
known as a ligand, and after the binding of the ligand to the receptor, the receptor goes through a conformational change (Editors, 2020). When a cancerous cell is classified as “receptor positive,” such signals promote cell proliferation, and the receptors are overexpressed, resulting in tumors. As such, receptor overexpression is an important component in understanding the mechanisms of cancer and how they function. For example, two thirds of breast cancers test positive for hormone receptors ("Hormone Receptor Status: Breast Cancer Pathology Report", 2020). Certain treatments already take advantage of this fact, such as those involving monoclonal antibodies ("Monoclonal Antibodies and Their Side Effects", 2020). Naked antibodies (antibodies without any drug attached to them) are actually the most common type of monoclonal antibody, and work in a variety of ways: by boosting a person’s immune response to cancerous cells, targeting biological checkpoints, or attaching to cells and blocking antigens that aid in cell proliferation ("Monoclonal Antibodies and Their Side Effects", 2020). One of the three components that make up the ADC, the monoclonal antibody binds to the antigen and is therefore essential for the specificity of ADCs. Monoclonal antibodies are immune system proteins that are made by identical immune cells and are exact replicates of the antibody of a single parent cell specific for one receptor. The laboratory production of a monoclonal antibody is extremely expensive and time consuming, since the antibody must first be raised in mice using hybridoma technology, then be humanized
(adjusted for the human body) for optimum efficacy (Mullard, 2013). In addition to binding to the surface of tumor cells, there has been significant evidence to suggest that certain monoclonal antibodies (mAbs) of ADCs can also bind to tumor vasculature (Sondergeld et al., 2015). The receptor must also be exposed on the exterior of the cell such that the ADC can bind to it. For instance, HER2 expression is significantly higher on the surface of breast cancer cells – trastuzumab is able to recognize the receptor and bind it with ease, thereby facilitating good results in the clinic. Additionally, some naked antibodies (monoclonal or otherwise) have shown an ability to retain extracellular mechanisms of interest such as antibody dependent cell mediated cytotoxicity (ADCC), in which a cell is coated with antibodies and is killed by white blood cells (Rodriguez-Aller et al., 2016). They may also retain antibody dependent cell mediated phagocytosis (ADCP), in which the cell can be attacked and killed by phagocytes in the bloodstream. Retention of these properties is useful as it provides alternate ways to kill the cell in the case that a naked antibody is used for treatment in combination with the ADC. The second of the three components that make up the ADC, the cytotoxic payloads (frequently referred to as warheads) serve to kill the tumor cell; as important as they are, only a few families of them are currently in clinical trials. The majority of these drugs target DNA and are thereby toxic to both proliferating tumor cells and non-proliferating
healthy cells, while others target microtubules, which are more specific to proliferating tumor cells (Verma et al., 2015). Payloads are designed to optimize potency and efficacy, both of which are reflected in the DAR. The current industry standard settles around 3.5 to 4 as a DAR – in other words, there are about 3-4 warhead molecules per antibody. This is one of the primary limitations behind the limited clinical success of conventional chemotherapies as cytotoxic payloads in ADCs – not only do they not deliver enough drug per antibody (DAR less than 4), they are additionally limited by the number of antigens on the surface of cells (Elgersma et al., 2015). Common chemotherapies such as methotrexate and the taxoids also have low stability when conjugated to the central antibody. Therefore, a high dosage of the ADC is required to be effective, traditionally resulting in severe toxic side effects. In addition to these difficulties in conjugation, the cytotoxic payloads have to maintain their efficacy after conjugation and while in aqueous forms for intravenous administration, have to be optimized to allow for a longer shelf life, and must be able to be synthesized in massive quantities with relative cost efficiency (Nakada et al., 2016). The two primary approved ADCs – brentuximab vedotin and trastuzumab emtansine – achieve the above criteria through the use of auristatins and maytansinoids, both of which are inhibitors of tubulin (a protein that forms part of the cytoskeleton). Auristatins are currently the largest class of ADC payloads and are primarily comprised of monomethyl auristatin E (MMAE) and
monomethyl auristatin F (MMAF). Both MMAE and MMAF are characterized by high potency (allowing for multiple drug molecules to be conjugated to the same central antibody), water solubility (allowing for the drug to be delivered through IV administration), and stability within the body and within the TME. Maytansinoids, a second class of payloads in clinical trials, are composed of the cytotoxins DM1 and DM4. Maytansinoids were first tested as independent anticancer agents but failed in clinical trials due to toxicity; when they were conjugated to antibodies, it was found that the antibodies provided the specificity that the drug required to reduce side effects. Other kinds of cytotoxic payloads include tubulysins (which inhibit the polymerization of microtubules during mitosis to induce apoptosis), calicheamicins (which bind to the minor groove in DNA and cleave the strand), duocarmycins (which bind to the minor groove in DNA and alkylate it), pyrrolobenzodiazepines (which also bind to the minor groove in a sequence-specific manner to inhibit further cellular growth), camptothecin analogues (topoisomerase I inhibitors that block the religation step of DNA cleavage), and doxorubicin (which is a DNA intercalating agent causing mutations) (Beck et al., 2017).
“This is one of the primary limitations behind the limited clinical success of conventional chemotherapies as cytotoxic payloads in ADCs - not only do they not deliver enough drug per antibody, they are additionally limited by the number of antigens on the surface of cells”
Equally important to the antibodies and the drugs themselves is the linker that connects them (the third of three components making up the ADC). If the linker is unstable and prematurely releases the drug into the
Figure 3: Two researchers examine slides displaying cultures of cells that make monoclonal antibodies. These are grown in the lab and the researchers are analyzing the products to select the most promising options. As scientists work towards perfecting drug design and identifying new targets, the treatment options and quality of life for cancer patients continue to improve. Source: National Cancer Institute; Creator: Linda Bartlett
Figure 4: As a cancer grows, smaller tumor cell clusters are able to break off and escape across the blood-brain barrier, where larger therapeutic molecules cannot follow them. This circulation of tumor cells to the brain is called metastasis and is a dangerous progression of the disease; nanoparticles currently being developed are capable of carrying drugs through the blood-brain barrier to target and eliminate these small escaping tumor cell clusters. In the image, blood vessels (red), nuclei (blue), and breast cancer cells (green) are seen, as well as the intravenously administered experimental nanoparticles that are able to cross the blood-brain barrier. Source: NIH
“Not only is oral therapy significantly less invasive and time-consuming than intravenous methods, it is also significantly more cost effective, both for patients and hospitals.”
bloodstream, this will lead to a dangerous level of toxicity that can cause severe side effects and can, in some cases, be fatal. Additionally, even well-designed linkers can lead to a lower therapeutic index if not enough drug molecules can be bound to the antibody in a stable manner (Patterson et al., 2014). There are several strategies employed in ADC design to help with stable binding, one of which is the choice between cleavable and non-cleavable linkers. Cleavable linkers are hydrolytically sensitive to the acidic pH associated with lysosomes, resulting in the release of the drug payload once the ADC begins to be degraded intracellularly. Another common strategy is the use of disulfide bridges that are degraded by glutathione (a physiological antioxidant that prevents intracellular damage); disulfide bridges allow for steric hindrance of conjugated payloads, which limits premature degradation (Ekholm et al., 2016). Non-cleavable linkers are instead sensitive to proteases within the cell, resulting in the release of the payload only after the drug is internalized. In addition to cleavability concerns, linkers can increase the potency of a drug through enhancement of the bystander effect. The bystander effect takes place when a single cytotoxic molecule can kill not only the tumor cell that the ADC is bound to, but also other tumor cells in the vicinity; tumor cells are differentiated from healthy cells on the basis of expression of the antigen that the antibody in the ADC is targeting (Kline et al., 2015). Linkers encompassing a reducible disulfide bond exhibit the bystander effect, while linkers that encompass a non-reducible thioether do not. Finally, another common consideration in the design of linkers is polarity, as polar linkers tend to have improved solubility (thereby allowing them to be used in IV medications) and reduce multidrug resistance (MDR). MDR is determined by the expression of MDR1, which can transport hydrophobic compounds more easily than hydrophilic ones – by producing polar molecules, the linker and the ADC at large have improved potency against MDR1+ cells (Zimmerman, 2014).
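As a rough way to see why linker stability matters, the sketch below models premature payload release in circulation as a simple first-order process; the half-lives and circulation time are hypothetical round numbers, not measured values for any real linker:

import math

# Minimal sketch: premature payload release modeled as first-order loss.
# Half-lives and circulation time are hypothetical, for illustration only.

def fraction_released(t_days, half_life_days):
    """Fraction of payload released after t_days in circulation."""
    k = math.log(2) / half_life_days  # first-order rate constant (1/day)
    return 1.0 - math.exp(-k * t_days)

circulation_days = 4.0  # ADCs circulate for days, given their long half-life
for label, t_half in [("unstable linker (t1/2 = 2 d)", 2.0),
                      ("stable linker (t1/2 = 30 d)", 30.0)]:
    released = fraction_released(circulation_days, t_half)
    print(f"{label}: {released:.0%} released prematurely")

Under these assumptions, the unstable linker dumps roughly three quarters of its payload into the bloodstream before reaching the tumor, which is exactly the systemic-toxicity scenario described above, while the stable linker loses under ten percent.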
ADC Delivery
Cancer treatments are often administered intravenously or orally, depending on both treatment plan and patient preference. The main advantage of oral therapy is patient autonomy, as oral therapy allows a patient to administer the drug themselves in the comfort of their own home. Intravenous therapy visits can take hours of a patient’s time and can be painful, depending on the vein size and pain tolerance of the patient.
Not only is oral therapy significantly less invasive and time-consuming than intravenous methods, it is also significantly more cost effective, both for patients and hospitals. The medical advantage of oral therapy is increased exposure to the drug, which is particularly important for schedule-dependent therapies. Yet, there are several disadvantages of oral therapies. Oral therapeutics may interact with food and other drugs, prescription or nonprescription, while intravenous therapies are more direct and less likely to cause such interactions. Another potential issue that arises with oral treatment is adherence to the drug schedule, due to a patient’s confusion or misunderstanding (Bhattacharyya, 2010). ADC developers are striving to curb the disadvantages associated with both treatment methods. Both pharmacokinetics (PK) and pharmacodynamics (PD) are critical to understanding drug delivery and function. PK is the study of the disposition of a drug after its delivery to a patient, or “what the body does to a drug” (Prendergast et al., 2007). Conversely, PD is the study of how a drug affects the body. PD investigates how a drug affects its targets in a dose- and time-dependent fashion, and includes receptor binding and sensitivity, post-receptor effects, and chemical interactions (Schwab, 2011). The PD of ADCs is discussed in depth in other sections; this section will focus on PK. The key components of PK are captured by the acronym ADME: absorption, distribution, metabolism, and excretion. Absorption describes the route used to administer the drug, such as intravenously or orally. Distribution describes the movement of the drug through systemic circulation within the body and has implications for the efficacy, metabolism, and toxicity of the drug.
Metabolism is the processing of the drug by the body and usually generates products with greater aqueous solubility; this change occurs by phase I reactions that add functional polar groups to the drug molecules or phase II conjugation reactions that add large polar moieties via high-energy cofactors or a chemically reactive substrate. Finally, excretion is the removal of the drug from the body, usually through the liver and kidney (Prendergast et al., 2007). Multiple analytes need to be examined to accurately evaluate an ADC’s PK: both the antibody and the drug can be measured either in the conjugated form (bound to its target) or the unconjugated form. Metabolizing the drug can add additional complexity by creating new species to detect, such as drug complexes that can form by interacting with endogenous molecules in the body (Kamath et al., 2015). The pharmacokinetics of ADCs are greatly influenced by the properties of the antibody backbone. ADCs exhibit similar ADME profiles to unconjugated antibodies, including a low volume of distribution, a slow clearance, a long half-life, and proteolysis-mediated catabolism (Hedrich et al., 2018). Elimination pathways may involve deconjugation followed by degradation, or catabolism through nonspecific or target-mediated proteolysis (Kamath et al., 2015). Beyond PK and PD, focusing on the circulatory system itself can be an invaluable way to improve therapies. Exploiting abnormalities in tumor vasculature provides an opportunity to enhance drug delivery (Greish, 2010). First described by Matsumura and Maeda in 1986, the enhanced permeability and retention (EPR) effect relies on the specific pathophysiological differences between tumors and healthy tissues. The majority of solid tumors have a chaotic vasculature and microenvironment as a result of the production of an abnormally large amount of vascular growth factors and vascular permeability factors like nitric oxide and prostaglandins. The extent of the EPR effect can also be determined by a lack of functional lymphatic drainage, an elevated interstitial fluid pressure, or a dense and deregulated stromal compartment (Golombek et al., 2018). The density of the extracellular matrix (part of the stromal compartment) can play an important role in EPR-mediated tumor accumulation. For example, collagen may form a barrier that prevents the penetration of medicine. Different types of cells in this area can also have an impact: the accumulation of macrophages may promote the delivery of drugs to the
tumor. Additionally, due to the invasive and rapid growth of tumors, surrounding tissue can be subjected to solid stress causing the compression of angiogenic blood vessels, which in turn contributes to hypoxia and increased production of pro-angiogenic factors such as vascular endothelial growth factor (VEGF). VEGF induces endothelial cell survival, sprouting, and vascular leakiness – the key components of EPR-mediated tumor targeting. Ultimately, these characteristics of the unique tumor environment allow drug delivery to take place, using abnormally wide fenestrations in tumor blood vessels combined with the decreased lymphatic drainage to effectively and selectively accumulate drugs in tumor tissue. Packaging ADCs into micelles or other nanomedicine vehicles may increase these EPR effects. However, EPR can be highly heterogeneous, changing over the timeline of tumor development, among tumor types of the same origin, and within tumors and metastases within the same patient (Golombek et al., 2018). Therefore, despite the EPR effect’s promising ability to promote drug delivery to tumors, ADCs may not be effective in every patient, and an individualized approach is required.
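To illustrate the slow-clearance, long-half-life ADME profile described above, here is a minimal one-compartment sketch with first-order elimination; all parameter values are hypothetical (the dose merely echoes the 1.8 mg/kg brentuximab vedotin figure cited earlier) and are not drawn from any specific ADC’s PK data:

import math

# Minimal one-compartment PK sketch with first-order elimination.
# All parameters are hypothetical, for illustration only.

dose_mg_per_kg = 1.8      # echoes the brentuximab vedotin dose cited earlier
volume_l_per_kg = 0.06    # low volume of distribution, typical of antibodies
half_life_days = 14.0     # long half-life, hypothetical value

c0 = dose_mg_per_kg / volume_l_per_kg   # initial plasma concentration (mg/L)
k_elim = math.log(2) / half_life_days   # elimination rate constant (1/day)

for day in (0, 7, 14, 28):
    print(f"day {day:2d}: {c0 * math.exp(-k_elim * day):5.1f} mg/L")

With these assumed values, more than half of the initial concentration is still circulating two weeks after dosing, which is why linker stability over days in the bloodstream is such a central design constraint.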
“...due to the invasive and rapid growth of tumors, surrounding tissue can be subjected to solid stress causing compression of angiogenic blood vessels, which in turn contributes to hypoxia and increased production of pro-angiogenic factors...”
Figure 5: Image of a patient receiving radiation therapy for Hodgkin's lymphoma in a Versa HD. Source: Wikimedia Commons, Jakembradford
Review of Other Existing Therapies
The immune system is the body’s most critical defense against invaders such as bacteria and viruses. However, the immune system also plays an important role in cancer development, as an estimated 15% of all human cancers worldwide may be attributed to viruses (Liao, 2006). A prominent example is human papillomavirus, a prevalent cause of cervical cancer. Additionally, the immune system can protect the body against itself: it polices the body’s tissues and recognizes abnormal cells. These capabilities can be capitalized on by immunotherapies, which enhance the immune system’s ability to fight cancer. There are numerous types of immunotherapy treatments, and ADCs are included among them due to their reliance on antibodies. Other examples include targeted antibodies, cancer vaccines (which elicit an immune response against specific cancer antigens), oncolytic virus therapy (which uses modified viruses to infect and destroy tumor cells), immunomodulators (which intervene in the immune system’s regulatory pathways), and adoptive cell therapy (which transfers immune cells into the patient to fight the cancer) (CRI Staff, 2019). There are many other therapies currently being developed in research labs or in clinical trials. The immune system’s precision, ability to adapt continuously and dynamically, and memory of past threats make it a promising opponent against cancer.
“...radiation and surgery are often coupled together to create a more comprehensive treatment for cancer, as radiation is an 'indirect' means to target cancer whereas surgery can more directly focus on a tumor while leaving healthy tissues unharmed.”
Radiation is currently the primary method of treatment for cancer alongside surgery (Baskar et al., 2012). In fact, radiation and surgery are often coupled together to create a more comprehensive treatment for cancer, as radiation is an “indirect” means to target cancer whereas surgery can more directly focus on a tumor while leaving healthy tissues unharmed. Radiation is also known as a “physical” agent, in that it works by killing cells with ionizing energy. Radiation kills cells by causing damage to cellular DNA, limiting their ability to reproduce, as well as by generating free radicals that can indirectly damage cellular DNA. This form of treatment works broadly, killing cells indiscriminately within an area, healthy ones alongside cancerous ones. In the past, researchers and physicians have made this trade-off because normal cells can heal quicker than mutated, cancerous ones (Baskar et al., 2012). There are two ways to deliver radiation therapy: an external beam of high-energy radiation (the most common approach) or an internal system with radioactive components sealed to protect the rest of the body (Baskar et al., 2012). When used in conjunction with surgery, radiation can shrink
tumors before an operation or kill any leftover cells after the operation. Over the years, various technological advances have made radiation much more effective: killing more cancerous cells while sparing healthy ones, avoiding critical regions during the procedure, and utilizing more precise methods of radiation administration. In response to the issues ADCs were facing as a treatment mechanism, nanoparticles were considered not only to increase the bioavailability but also to reduce the toxicity profiles of the conjugates (Johnston and Scott, 2018). In this context, the employed nanoparticles are typically liposomes (membrane-bound structures with an aqueous center) or polymeric/metal nanoparticles (Johnston and Scott, 2018). This nascent technology, dubbed antibody-conjugated nanoparticles (ACNPs), works in the same way that the previously mentioned ADCs do, effectively building on an existing technology. The antibodies are conjugated to the nanoparticle, which brings the cargo-loaded structure to the target cancer cell. However, the cargo is not found on the antibody this time — it is inside the nanoparticle itself (Johnston and Scott, 2018). With this method, the toxicity-inducing payload is enclosed within a structure rather than exposed, increasing potency while also improving safety (Johnston and Scott, 2018). While this alternative might sound promising, nanoparticles require extremely careful selection to be compatible with host bodies and antigens. In fact, in one study, only 0.7% of administered nanoparticles successfully reached the tested tumor site in vivo (Wilhelm et al., 2016). As such, until this challenge is solved, nanoparticles will remain an exciting challenge for ADC development.
Outlook for the Future
Antibody-drug conjugates walk the fine line between a greatly beneficial medical advancement and a potential major source of toxicity for cancer patients. With every advancement in potency and efficacy for this class of drugs, researchers must take into account the damage that could occur in the case of a misfire. Fortunately, however, most safety issues that this technology poses are relatively “manageable” (Wolska-Washer and Robak, 2019). The future of ADCs can split many ways to improve performance and reduce toxicity. The most obvious mode of improvement is increasing specificity through better
Figure 6: Wells filled with liquid for a research test. This test involves the preparation of cultures in which hybrids are grown in large quantities to produce desired antibodies. This is achieved by fusing myeloma cells and mouse lymphocytes to form hybrid cells. As scientists work towards perfecting drug design and identifying new targets, the treatment options and quality of life for cancer patients continue to improve. Source: National Cancer Institute; Creator: Linda Bartlett
conjugation and improved linkers. A better way for antibodies to find their target cell will no doubt be highly favored, as this is directly related to delivery efficiency. Permeability of the cargo is also a focus for researchers as a way to improve efficacy without increasing toxicity (Johnston and Scott, 2018). Another such method is alteration of the cancer microenvironment, which would effectively make drug delivery more favorable (Wolska-Washer and Robak, 2019). Not only would the alteration of the cancer microenvironment reduce the chance of unwanted bindings and interactions – this change could allow for more efficient delivery techniques and lay paths for a different kind of research in the field. There are currently eight FDA-approved antibody drug conjugates on the market, each of which targets a specific condition and is only used in that instance — for example, the drug trastuzumab deruxtecan, sold under the name Enhertu, is used for the treatment of adults with unresectable or metastatic HER2-positive breast cancer. Additionally, in the case of metastatic cancer, the patients must have previously received two or more anti-HER2-based treatments to be approved for ADC therapy ("FDA approves new treatment option for patients with HER2-positive breast cancer who have progressed on available therapies", 2020). These eight drugs, and all future FDA-approved ADCs, can be used in conjunction with other therapies to more effectively help a patient enter remission. Much like how surgery and radiation are used together to “fill in the gaps” of the other treatment, when physicians
combine ADCs with other therapies, this creates a compounded attack that is more efficient in the long run. Since ADCs are targeted therapies and match with specific conditions, they work well alongside other treatments. One such pairing is an ADC combined with immune checkpoint blockades, which block the stimulation of immune checkpoint targets, thus allowing the body to attack cancerous cells. Employing an ADC treatment plan after surgery is another option, in which the ADC can seek out cells left over from the operation, ensuring that they cannot proliferate to create another tumor or metastasize.
“There are currently eight FDA-approved antibody drug conjugates on the market, each of which targets a specific condition and is only used in that instance.”
Safety and efficacy are two important concerns for any treatment or drug. While ADCs certainly have great potential, there are still hurdles to ensuring patient safety and maximizing the drug’s efficacy. These include balancing cytotoxicity and efficacy, optimizing antibody affinity, and promoting internalization and tumor penetration. One issue is that many cancer targets are also expressed on normal cells, even if at lower levels. Highly cytotoxic payloads may therefore impact normal, healthy cells, resulting in side effects that may range from discomfort to lethality for a patient. To enhance the therapeutic potential of ADCs, the strength of the cytotoxic payload needs to be counterbalanced by the specificity of the antibody; to do so, either the potency of the cytotoxic agent must be improved to lower the minimum effective dose, or the tumor selectivity must be improved to increase the maximum tolerated dose (Beck et al., 2017). Cytotoxic drugs with lower potencies
“...the modularity and tunability of ADCs offer many opportunities to optimize the drug's efficacy against cancer while reducing side effects and maintaining patient safety.”
such as topoisomerase inhibitors have been investigated to help address this problem (Mysliwy, 2020). Additionally, even with a highly specific antibody, there can be barriers to effective tumor penetration and internalization of ADCs. Tumor and antigen accessibility are critical components of effective ADC delivery. It has been reported that only 0.001-0.01% of an injected unmodified tumor-specific antibody (and by extension, a tumor-specific ADC) actually binds to the tumor cells; this then requires a highly potent cytotoxic payload to make an impact on the cancer (Beck et al., 2017). Conjugate stability also plays a role in many of these hurdles and is crucial for ADC function. As already discussed, there are several steps that must occur before ADCs can exert their anticancer effects: they must circulate in the blood, bind the cancer antigen, and subsequently be internalized (Mysliwy, 2020). To limit off-target toxicity, it is vital that the ADC remains stable while circulating in the blood, as early cleavage of the linker can lead to systemic toxicity (Beck et al., 2017; Hedrich et al., 2018). Effective linker design, however, has to balance the need for stability during several days in circulation with the need for efficient cleavage upon delivery into the target cell. Extensive research has gone into novel conjugation techniques that may improve stability, including the introduction of non-natural amino acids containing azide handles for drug attachment or ring-opened maleimides (Mysliwy, 2020). Despite the obstacles that still need to be overcome, the modularity and tunability of ADCs offer many opportunities to optimize the drug’s efficacy against cancer while reducing side effects and maintaining patient safety.
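The 0.001-0.01% binding figure is easier to appreciate with a back-of-the-envelope sketch; everything here except that fraction (taken from Beck et al., 2017) is a hypothetical round number:

# Back-of-the-envelope sketch of how little ADC actually reaches tumor cells.
# Only the binding fraction comes from the text (Beck et al., 2017); the
# dose and body weight are hypothetical round numbers.

dose_mg = 1.8 * 70        # 1.8 mg/kg for a 70 kg patient
binding_fraction = 1e-4   # 0.01%, the upper end of the cited range
bound_ug = dose_mg * binding_fraction * 1000

print(f"of a {dose_mg:.0f} mg dose, roughly {bound_ug:.0f} ug binds tumor cells")
# Even at a DAR of 4, the delivered payload mass is tiny -- hence the need
# for payloads that remain potent at very low concentrations.

Under these assumptions, only on the order of ten micrograms of a multi-hundred-milligram dose ever binds tumor cells, which is why payload potency dominates ADC design.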
References
Baskar, R., Lee, K., Yeo, R., & Yeoh, K. (2012). Cancer and radiation therapy: Current advances and future directions. International Journal of Medical Sciences, 9(3), 193-199. https://doi.org/10.7150/ijms.3635
Beck, A., Goetsch, L., Dumontet, C., & Corvaia, N. (2017). Strategies and challenges for the next generation of antibody drug conjugates. Nature Reviews Drug Discovery, 16(5), 315–337. https://doi.org/10.1038/nrd.2016.268
Bhattacharyya, G. S. (2010). Oral systemic therapy: Not all “win-win”. Indian Journal of Medical and Paediatric Oncology, 2010, 1–3. https://doi.org/10.4103/0971-5851.68844
CRI Staff. (2019). Immunotherapy. Cancer Research Institute.
Editors, B. (2020). Receptor - Definition, types and examples | Biology Dictionary. Retrieved from https://biologydictionary.net/receptor/.
Ekholm, F. S., Pynnönen, H., Vilkman, A., Pitkänen, V., Helin, J., Saarinen, J., & Satomaa, T. (2016). Introducing glycolinkers for the functionalization of cytotoxic drugs and applications in antibody–drug conjugation chemistry. ChemMedChem, 11(22), 2501–2505. https://doi.org/10.1002/cmdc.201600372
FDA approves new treatment option for patients with HER2-positive breast cancer who have progressed on available therapies. U.S. Food and Drug Administration. (2020). Retrieved 15 May 2020, from https://www.fda.gov/news-events/press-announcements/fda-approves-new-treatment-option-patients-her2-positive-breast-cancer-who-have-progressed-available.
Golombek, S. K., May, J.-N., Theek, B., Appold, L., Drude, N., Kiessling, F., & Lammers, T. (2018). Tumor targeting via EPR: Strategies to enhance patient responses. Advanced Drug Delivery Reviews, 130, 17–38. https://doi.org/10.1016/j.addr.2018.07.007
Greish, K. (2010). Enhanced permeability and retention (EPR) effect for anticancer nanomedicine drug targeting. In S. R. Grobmyer & B. M. Moudgil (Eds.), Cancer Nanotechnology (Vol. 624, pp. 25–37). Humana Press. https://doi.org/10.1007/978-1-60761-609-2_3
Hanahan, D., & Weinberg, R. A. (2000). The hallmarks of cancer. Cell, 100(1), 57–70. https://doi.org/10.1016/S0092-8674(00)81683-9
Hedrich, W. D., Fandy, T. E., Ashour, H. M., Wang, H., & Hassan, H. E. (2018). Antibody-drug conjugates: Pharmacokinetic/pharmacodynamic modeling, preclinical characterization, clinical studies, and lessons learned. Clinical Pharmacokinetics, 57(6), 687–703. https://doi.org/10.1007/s40262-017-0619-0
Hormone receptor status: Breast cancer pathology report. Breastcancer.org. (2020). Retrieved 10 May 2020, from https://www.breastcancer.org/symptoms/diagnosis/hormone_status.
Improving the serum stability of site-specific antibody conjugates with sulfone linkers | Bioconjugate Chemistry. (n.d.). Retrieved May 29, 2020, from https://pubs.acs.org/doi/10.1021/bc500276m
Iqbal, N., & Iqbal, N. (2014). Human epidermal growth factor receptor 2 (HER2) in cancers: Overexpression and therapeutic implications. Molecular Biology International, 2014, 1-9. https://doi.org/10.1155/2014/852748
Johnston, M. C., & Scott, C. J. (2018). Antibody conjugated nanoparticles as a novel form of antibody drug conjugate chemotherapy. Drug Discovery Today: Technologies, 30, 63–69. https://doi.org/10.1016/j.ddtec.2018.10.003
Kamath, A. V., & Iyer, S. (2015). Preclinical pharmacokinetic considerations for the development of antibody drug conjugates. Pharmaceutical Research, 32(11), 3470–3479. https://doi.org/10.1007/s11095-014-1584-z
Liao, J. B. (2006). Viruses and human cancer. The Yale Journal of Biology and Medicine, 79(3–4), 115–122.
Methods to make homogenous antibody drug conjugates | SpringerLink. (n.d.). Retrieved May 29, 2020, from https://link.springer.com/article/10.1007%2Fs11095-014-1596-8
Monoclonal antibodies and their side effects. Cancer.org. (2020). Retrieved 10 May 2020, from https://www.cancer.org/treatment/treatments-and-side-effects/treatment-types/immunotherapy/monoclonal-antibodies.html.
Mukherjee, S. (2011). The emperor of all maladies: A biography of cancer (1st Scribner trade paperback ed.). Scribner.
Mullard, A. (2013). Maturing antibody–drug conjugate pipeline hits 30. Nature Reviews Drug Discovery, 12(5), 329–332. https://doi.org/10.1038/nrd4009
Nakada, T., Masuda, T., Naito, H., Yoshida, M., Ashida, S., Morita, K., Miyazaki, H., Kasuya, Y., Ogitani, Y., Yamaguchi, J., Abe, Y., & Honda, T. (2016). Novel antibody drug conjugates containing exatecan derivative-based cytotoxic payloads. Bioorganic & Medicinal Chemistry Letters, 26(6), 1542–1545. https://doi.org/10.1016/j.bmcl.2016.02.020
Perez, H. L., Cardarelli, P. M., Deshpande, S., Gangwar, S., Schroeder, G. M., Vite, G. D., & Borzilleri, R. M. (2014). Antibody–drug conjugates: Current status and future directions. Drug Discovery Today, 19(7), 869–881.
Prendergast, G. C., & Jaffee, E. M. (2011). Cancer immunotherapy: Immune suppression and tumor growth.
Production of site-specific antibody–drug conjugates using optimized non-natural amino acids in a cell-free expression system | Bioconjugate Chemistry. (n.d.). Retrieved May 29, 2020, from https://pubs.acs.org/doi/10.1021/bc400490z
Rodriguez-Aller, M., Guillarme, D., Beck, A., & Fekete, S. (2016). Practical method development for the separation of monoclonal antibodies and antibody-drug-conjugate species in hydrophobic interaction chromatography, part 1: Optimization of the mobile phase. Journal of Pharmaceutical and Biomedical Analysis, 118, 393–403. https://doi.org/10.1016/j.jpba.2015.11.011
Schwab, M. (Ed.). (2011). Pharmacodynamics. In Encyclopedia of Cancer (pp. 2840–2840). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-16483-5_4495
Sondergeld, P., van de Donk, N. W. C. J., Richardson, P. G., & Plesner, T. (2015). Monoclonal antibodies in myeloma. Clinical Advances in Hematology & Oncology: H&O, 13(9), 599–609.
Tesi, R. J. (2018). MDSC; the most important cell you have never heard of. Trends in Pharmacological Sciences, 2019, 4-7. https://doi.org/10.1016/j.tips.2018.10.008
Tredan, O., et al. (2007). Drug resistance and the solid tumor microenvironment. JNCI: Journal of the National Cancer Institute, 2007, 1441-1454. https://doi.org/10.1093/jnci/djm135
Verma, V. A., Pillow, T. H., DePalatis, L., Li, G., Phillips, G. L., Polson, A. G., Raab, H. E., Spencer, S., & Zheng, B. (2015). The cryptophycins as potent payloads for antibody drug conjugates. Bioorganic & Medicinal Chemistry Letters, 25(4), 864–868. https://doi.org/10.1016/j.bmcl.2014.12.070
Wilhelm, S., Tavares, A. J., Dai, Q., Ohta, S., Audet, J., Dvorak, H. F., & Chan, W. C. W. (2016). Analysis of nanoparticle delivery to tumours. Nature Reviews Materials, 1(5), 1–12. https://doi.org/10.1038/natrevmats.2016.14
Younes, A., Bartlett, N. L., Leonard, J. P., Kennedy, D. A., Lynch, C. M., Sievers, E. L., & Forero-Torres, A. (2010). Brentuximab vedotin (SGN-35) for relapsed CD30-positive lymphomas. New England Journal of Medicine, 363(19), 1812–1821. https://doi.org/10.1056/NEJMoa1002965
Coronavirus: The Story of a Modern Pandemic
STAFF WRITERS: NINA KLEE, TEDDY PRESS, BRYN WILLIAMS, MAANASI SHYNO, ALLAN RUBIO
BOARD WRITERS: SAM NEFF, MEGAN ZHOU
Cover: COVID-19 viral particles. Source: Felipe Esquivel Reed via Wikimedia Commons
“In December 2019, health officials in Wuhan, China recorded an unexpected rise in severe pneumonia cases above the endemic level for the city.”
Introduction
In December 2019, health officials in Wuhan, China recorded an unexpected rise in severe pneumonia cases above the endemic level for the city. A significant number of patients had been exposed to the Huanan wholesale seafood market, prompting the activation of a surveillance system put into place after the 2003 SARS outbreak (Singhal, 2020). The World Health Organization (WHO) was notified on December 31st, and the Huanan market, later confirmed to be the source of this coronavirus through environmental samples, was closed the next day. However, epidemiologists noted an exponential increase in cases in which the patients had not been exposed to the initial source, indicating human-to-human transmission. This finding came around Chinese New Year, and the heavy travel typical of this holiday allowed the virus to spread to other provinces and countries. By January 7th, 2020, testing indicated the agent to be a
novel coronavirus with high homology to bat coronaviruses and SARS-CoV; the disease it causes was later dubbed COVID-19. By January 23rd, 2020, Wuhan was put on lockdown and airports began to implement screening methods, but asymptomatic people and those traveling before the onset of symptoms limited the effectiveness of these prevention measures [Figure 1]. The exponential global spread of COVID-19 led the WHO to declare a public health emergency of international concern on January 30th, 2020 and later a pandemic on March 11th, 2020 [Figure 2]. Since this declaration and the implementation of safety measures, the pandemic has impacted all parts of life. Social distancing precautions have altered the way people can interact, study, shop, and work. The closing of non-essential businesses and travel in areas with higher prevalence has had negative consequences for the global economy. Amidst panic and confusion, scientists are trying to understand the nature of the virus, especially with regard to creating an effective vaccine (Singhal, 2020).
The illness COVID-19 is caused by the coronavirus named SARS-CoV-2, which has a single-stranded positive-sense RNA (+ssRNA) genome of around 30 kilobases (Chen et al., 2020). The RNA genome contains at least six open reading frames (ORFs) encoding both structural and nonstructural proteins. The ORFs near the 3’ terminus code for four main structural proteins: the spike, membrane, envelope, and nucleocapsid. Sequencing shows that SARS-CoV-2 possesses a typical coronavirus (CoV) structure with a 54% identity across the whole genome structure (Chen et al., 2020). Coronaviruses also have the largest known viral RNA genomes: approximately 30 kb, whereas a typical RNA virus genome is only about 10 kb. Also, SARS-CoV-2 is a betacoronavirus, one of four coronavirus subgroups [Figure 3]. It is most closely related to bat CoVs (Bat-SL ZC45 and Bat-SL ZXC21) and is distantly related to SARS-CoV, another CoV that infects humans (Chen et al., 2020). While the direct origins of SARS-CoV-2 remain unknown, it is unlikely that it originated in a lab setting, since genetic data reveals a novel viral backbone not derived from a pre-existing virus (Andersen et al., 2020). The two most
widespread theories of the virus’ origin are: 1) natural selection in an animal host before transmission to humans and 2) natural selection in humans after zoonotic transfer (Andersen et al., 2020). Many early cases were connected to the Huanan market in Wuhan, so many possible animal sources were present, such as the pangolin, a scaly anteater-like mammal. Pangolins are known to carry a CoV almost identical to SARS-CoV-2 (Andersen et al., 2020). The pangolin CoV contains the six receptor binding domains known to bind to the ACE2 receptor in the novel CoV. However, the pangolin virus is missing the insertion of a polybasic cleavage site, a key feature of SARS-CoV-2, which leads to the theory that this feature arose during human-to-human transmission (Andersen et al., 2020). The ongoing pandemic has changed and continues to change lives, challenge social structures, and demand prompt and innovative interventions. This paper aims to break down the various aspects of the coronavirus outbreak. Starting with a background on the virus and its spread through an epidemiological and pathological lens, the paper then shifts to public health implications. Such implications include the current state of the pandemic response through social distancing measures, the impact on the health care system, and the availability and allocation of medical supplies and services. In addition to analyzing and evaluating the US response to the pandemic, there is also a commentary on international responses and their effectiveness. Moreover, this includes a discussion of potential systems that can help mitigate the spread of the virus and improve responses to future pandemics. Finally, the paper concludes with a glimpse into the future: what could happen next?
Figure 1. Map of Confirmed Cases in Mainland China and Taiwan as of January 22, 2020. This was the day before Wuhan was put on lockdown and airports began to screen for the coronavirus, and a week before the WHO officially declared COVID-19 an outbreak. Source: Wikimedia Commons
“The two most widespread theories of the virus' origin are 1) natural selection in an animal host before transmission to humans and 2) natural selection in humans after zoonotic transfer.”
Figure 2. General timeline of the COVID-19 pandemic. Information Source: World Health Organization
Figure 3. A genomic structure and phylogenetic tree of coronaviruses. The novel COVID-19 is highlighted in red. Permission to use granted by Chen et al., 2020
“Once the virus reaches an individual's mouth, it has access to both the respiratory and the digestive tract - and SARS-CoV-2 has a high affinity for both lung and intestinal tissues.”
Figure 4. Graphic of the COVID-19 virus created by the Centers for Disease Control and Prevention. Source: Wikimedia Commons
Understanding the Coronavirus: Epidemiology and Biology of Transmission
SARS-CoV-2: Modes of Transmission
The virus is currently believed to have originated in bats in the region of Wuhan, China and to have spread to humans through an unknown intermediary animal. Currently known methods of transmission include inhalation of or direct contact with infected droplets (CDC, 2020). Respiratory droplets include those created by sneezing, coughing, and talking. One of the most alarming contributors to the spread of COVID-19 has been its asymptomatic carriers, who can spread the virus to others without showing any symptoms of the disease themselves. Recent findings have shown that respiratory droplets can act as aerosols (particulates that stay in the air for an extended period of time and enter the body); this coronavirus can remain in the air in aerosol form for up to three hours (Li et al., 2020). For these reasons, the Centers for Disease Control and Prevention (CDC) and the WHO recommend wearing masks when going outside. In addition, COVID-19 can spread when a person touches a surface which has been contaminated with the virus and then touches their eyes, nose, or mouth. In some cases, the virus has even been found in stool samples of infected patients. It remains unclear whether COVID-19 can be transmitted through food; however, cooked food is not currently of concern because extreme heat would kill any possible viruses on the food (CDC, 2020). Studies by the National Academies of Sciences, Engineering, and Medicine have used analytical models to predict that the spread of coronavirus will likely slow in the warmer months, similarly to influenza (Li et al., 2020).
Physiology of Human SARS-CoV-2 Infection
Viruses are deadly entities—they are not quite living species, given their incapacity to reproduce autonomously, but they are nonetheless successful at infiltrating the human body and causing harm [Figure 4]. Once the virus reaches an individual’s mouth, it has access to both the respiratory and the digestive tract—and SARS-CoV-2 has a high affinity for both lung and intestinal tissues because both tissues express the protein ACE-2 in high quantities (Letko et al. 2020; Zhang and Penninger et al., 2020). ACE-2 is normally involved in the renin-angiotensin system, a pathway that regulates blood pressure, and it is also the functional receptor for SARS-CoV-2. The virus contacts ACE-2 with its so-called spike proteins and is subsequently brought into the cell interior (Zheng et al., 2020). Once the viral particle has entered the cell, it takes advantage of the host cell machinery to reproduce itself. In addition to the genome, certain viral proteins in the capsid coat are also deposited into the cell. These proteins help
establish infection and begin the replication cycle. Unlike retroviruses, coronaviruses such as SARS-CoV-2 do not convert their RNA into DNA or integrate into the human genome; instead, the positive-sense RNA genome is directly translated by the host’s ribosomes into viral proteins, including an RNA-dependent RNA polymerase that copies the viral genome. Viral enzymes called proteases cleave the large viral polyproteins into smaller functional components, some of which are built into a new viral coat. By the action of these proteins, the virus is able to reassemble within the human cell into many more identical viral particles. Ultimately, the proliferation of the viral particles places enough stress on the cell that it bursts, allowing the virus to roam the body and infect additional cells [Figure 5]. At this point, the immune system is activated: neutrophils and monocytes will engulf the viral particles as other immune cells tag them with antibodies, which are produced by B cells. Other cells also help kill foreign invaders, and there are still more that have other immune functions. In some cases, this complex immune response is effective in clearing the virus in a matter of days or weeks, with no long-term consequences. However, certain patients experience a more severe infection, such that they need to be hospitalized and, in some cases, intubated. These unfortunate individuals have a much lower survival rate. One potential explanation is the “cytokine storm” phenomenon, in which the activation of the immune system contributes to a hyper-active immune response. In attempting to kill the virus, the body can destroy itself.
Modeling the Global Spread of the Pandemic
Since its origin in Wuhan, the virus has grown exponentially, traversing borders and traveling over vast distances in human vectors to become the pandemic it is known as today. Due to improper and late policies, infections grew to a total of 28,276 confirmed cases with over 20 countries affected around the world as of February 6th, 2020—just weeks after the first known cases outside of China according to the WHO (Wu et al., 2020). Therefore, it has been paramount to develop and understand metrics that characterize the novel coronavirus so that the spread can be properly modelled. A standard measure of infection is R0; it quantifies the expected number of additional infections generated by each infected individual and can give valuable insight into the immediate impacts of novel exposure. It is calculated using individual-level contact tracing: the average number of secondary cases per infected individual is the R0. Although its value is not always agreed upon, as it varies across countries, Zhang and colleagues approximated the R0 value for South Korea and Italy to be 3.2 and 3.3 respectively in February 2020. In a March 2020 report, the Imperial College London modelled the virus to have an overall R0 value of 2.4, signifying that every infected person is likely to infect more than two other individuals. In addition to the R0 value, scientists also model the dynamics of coronavirus through its incubation period, communicability, and latent period. The incubation period is the most commonly measured metric; it is the amount of time between identifiable exposure to the disease and the onset of symptoms. This tells researchers and health professionals not only how long patients should remain in a hospital, but also how long the general public should be in self-quarantine after suspected exposure. For this coronavirus, the median incubation period is around 5.1 days, while it takes 11.5 days for 95% of patients to develop symptoms, which is the basis of the two-week-long expected quarantine period (Lauer et al., 2020). Communicability refers to how long a person can spread the disease, and the latent period is defined as the time between exposure and the start of the communicable period.
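To see why the exact value of R0 matters so much, consider a minimal branching-process sketch in which the expected number of new cases in generation n is simply R0 raised to the power n; this assumes homogeneous mixing with no immunity or interventions, so it only illustrates early-phase growth:

# Minimal branching-process sketch: expected new cases in generation n
# are R0 ** n. Assumes homogeneous mixing, no immunity, no interventions,
# so this only illustrates early-phase growth.

def expected_new_cases(r0, generation):
    return r0 ** generation

for r0 in (2.4, 1.0):  # 2.4 per the Imperial College report; 1.0 is critical
    print(f"R0 = {r0}: generation 10 -> "
          f"{expected_new_cases(r0, 10):,.0f} expected new cases")

With a generation time on the scale of the roughly five-day incubation period, ten generations pass in under two months – the gap between roughly 6,340 expected new cases per generation at R0 = 2.4 and a constant trickle of one at R0 = 1.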
Figure 5. Replication cycle of the coronavirus. Source: Wikimedia Commons
“...researchers have been working on creating mathematical models that can predict both where people who are infected are traveling and how likely they are to spread their disease.”
Beyond characterizing the coronavirus with these important epidemiologic metrics, researchers have been working on creating mathematical models that can predict both where people who are infected are traveling and how likely they are to spread their disease
Figure 6. This is a sample of the CDC’s COVID-19 diagnostic RT-PCR panel for testing. Source: Wikimedia Commons
“The first COVID-19 case in the US was an unnamed 35-year-old man, who, by the time he went to an urgent care clinic on January 19, 2020, had a four-day history of cough and fever.”
(Hsu, 2020). While there is some definite data on the network of flight routes and international travel patterns, computational methods still rely on making some assumptions, as any individual’s specific travel pattern is still unknown. The uncertainty around COVID-19’s R0, however, is a concerning problem: if this number is off, the estimate of the coronavirus’ spread can be off by orders of magnitude. These models also incorporate the effects of public health measures, such as social distancing, wearing face masks, school closures, or the quarantining of entire cities (Hsu, 2020). These mathematical models attempt to discern the relationship between individuals who are susceptible to the virus (S), are infected (I), and then either recover (R) or die (Adam, 2020). Advanced versions of this SIR model also group people based on characteristics such as age, sex, health status, employment, and number of contacts, to better gauge where and when people are interacting (Adam, 2020). These models are extremely useful, but ultimately, doctors and researchers have acknowledged that the data available on the coronavirus is incomplete and that the current projections and models are limited.
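A minimal version of the SIR model described above can be written in a few lines; the sketch below uses simple Euler integration with hypothetical rates chosen so that R0 = beta/gamma is approximately 2.4, matching the estimate cited earlier (real models layer on age structure, contact patterns, and interventions):

# Minimal SIR sketch via Euler integration. beta and gamma are hypothetical,
# chosen so that R0 = beta / gamma is roughly 2.4, matching the estimate
# cited above; real models add age structure, contacts, and interventions.

beta, gamma = 0.34, 0.14     # transmission and recovery rates (per day)
n = 1_000_000                # closed population
s, i, r = n - 1.0, 1.0, 0.0  # one initial infection
dt = 0.1                     # time step (days)

peak_i, peak_day = 0.0, 0.0
for step in range(int(365 / dt)):
    new_infections = beta * s * i / n * dt
    new_recoveries = gamma * i * dt
    s = s - new_infections
    i = i + new_infections - new_recoveries
    r = r + new_recoveries
    if i > peak_i:
        peak_i, peak_day = i, step * dt

print(f"peak infections: {peak_i:,.0f} around day {peak_day:.0f}")

Interventions such as distancing enter this framework by lowering beta, which both flattens and delays the infection peak – the mathematical content of “flattening the curve.”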
Tracking the US Response
The goal of this section is to tell the story of the US response to the coronavirus with emphasis laid on three broad aspects: (1) the retooling of the healthcare system to expand medical capacity and to ensure that all patients who need treatment, for COVID-19 or any other medical affliction, are able to receive it, (2) the institution of social distancing guidelines by the federal and state governments and the efforts taken by businesses and individual Americans to meet them, and (3) the economic stimulus measures that have been proposed and enacted to ameliorate some of the symptoms of the deep economic crisis caused by the lockdown of the country and the world. The first COVID-19 case in the US was an unnamed 35-year-old man, who, by the time he went to an urgent care clinic on January 19, 2020, had a four-day history of cough and fever (Schumaker, 2020). He stated that he had returned to Washington state on January 15, 2020, after visiting family in Wuhan, China. Washington state also had the first reported COVID-19 death in the US on February 28 at Evergreen Health Medical Center. However, after an autopsy in April, the first COVID-19 fatality in the US was determined to actually be a death in Santa Clara, California (Schuchat, 2020). It
was not until March 13th, 2020, however, that President Trump declared it to be a US national emergency, which finally opened up $50 billion in federal funding. On March 15, 2020, the CDC warned against gatherings larger than 50 people, and two days after this warning, the coronavirus had been reported in all fifty states. That same day, the first ‘shelter in place’ order was issued in Northern California; other states soon followed suit (Schumaker, 2020). By March 20th, with more than 15,000 confirmed cases, New York City accounted for roughly half of the infections in the United States and was declared the epicenter. After the US became the country with the most confirmed coronavirus cases on the 26th, Trump signed a $2 trillion stimulus bill on March 27th. As of July 5th, the US has had over 2.8 million confirmed cases with more than 120,000 deaths (CDC, 2020).
Provision of Medical Supplies and PPE
In order to contain the rapid spread of coronavirus, governments need to implement fast and aggressive testing sufficiently early. Death rates mostly depend on the extent of testing, the capacity of a country’s healthcare system, its demographics, and the availability of drugs that reduce the severity of COVID-19 upon infection. Even with mass testing, however, the case fatality rate of the virus is likely several times that of seasonal flu, which has a case fatality rate of 0.1% in the US. Nevertheless, testing is essential for mitigating the spread of coronavirus. The most common test for the viral RNA of SARS-CoV-2 uses the polymerase chain reaction (PCR) (“Humanity tested,” 2020). Nucleic acid real-time PCR (RT-PCR) tests have become the standard method for diagnosing coronavirus infections (Li et al., 2020). Medical device companies and government and research laboratories around the globe have scaled up nucleic acid tests, mostly by PCR, to detect the RNA of the virus [Figure 6]. Government agencies assess these
tests through emergency routes, such as the Emergency Use Authorization program by the US Food and Drug Administration. Antibody tests in the form of lateral flow immunoassays and enzyme-linked immunosorbent assays are also being rapidly developed to detect earlystage infection (“Humanity tested,” 2020). Tests were difficult to obtain for almost two months after this pandemic reached the US. The US detected its first infected citizen on January 12th, but by February 25th, only 426 tests had been conducted in the entire country. By March 9th, 8554 tests had been conducted in total, which was fewer than South Korea was then conducting each day (Dyer, 2020). Until March, all diagnostic testing was conducted by the CDC only on patients who had travel history to China or had clinical symptoms (Adalja et al., 2020). The US government enforced strict approval requirements preventing private and foreign providers from producing their own tests even though the CDC alone is not able to achieve the high-level testing capacity required in the midst of a pandemic (Adalja et al., 2020). Furthermore, a design flaw made 160,000 tests that the CDC had sent out unusable (Dyer, 2020). In late March, various providers began testing and the first drive-through sites became operational (Dyer, 2020). By March 22nd, the number of tests exceeded 235,000 and more than 80% of them had been conducted in the 7 days prior (Dyer, 2020). Even in April, however, only about 120,000 samples were tested each day, while millions of people would need to be tested per day to control the infection rate (Mangan, 2020). By April 14th, ten states had opened drive-through testing facilities and some commercial companies were able to test samples received from doctors and hospitals (Maragakis, 2020). By May 11th, 97 public health laboratories offered testing and CDC
labs had tested 6,275 samples while US public health labs had tested 818,682 respiratory specimens with viral tests (“U.S. Viral Testing Data,” 2020). Despite the fact that testing has since ramped up, the CDC’s earlier lack of development of testing kits and the restrictions on the production of testing kits by other providers resulted in a slow US response to the pandemic. Due to this delay in testing, the US had conducted only 125 tests per million people by mid-March, in contrast to 5,567 tests per million people in South Korea and 2,514 per million in Italy (McCarthy, 2020). By May 27th, a reported 45.9 tests per thousand people had been conducted in the US (15,192,482 in total), reflecting a rapid surge in testing from mid-March to May (“Coronavirus (COVID-19) Testing,” 2020). Coronavirus tests need to be ordered by a doctor who considers risk factors like age, general health, where the patient lives and works, travel history, and symptoms. Due to different testing criteria and limited supplies, not everyone can receive a test, especially not those with no symptoms or with minimal risk factors. An individual is most likely to get a test if they are experiencing severe symptoms (like very high fever and breathing difficulties), have certain health conditions (like diabetes, heart disease, or chronic lung disease), or if they work in an environment involving direct care to patients. Also, the time to process tests varies and can take between an hour and several days to yield results. The CDC provides guidelines for who to test but leaves decisions for testing up to state and local health departments as well as individual doctors (Maragakis, 2020). In addition to the necessary and ongoing increase in testing capacity, the country also faces a severe shortage of personal protective equipment (PPE) and medical supplies like ventilators that are vital to the treatment and containment of the virus (Dyer, 2020). A survey of 978 institutions covering 47 states revealed that 36% had no remaining face shields, 34% had no thermometers, and 19% had no remaining gowns. Nearly every institution surveyed had no supplies remaining for at least one type of PPE (Schlanger, 2020). The nation also faces a significant shortage of ventilators with an estimated 60,000 available at the beginning of the outbreak and a predicted need ranging from several hundred thousand to a million by the peak of infections (Ranney et al., 2020). To address a severe nationwide shortage of PPE and necessary medical equipment, the federal
Figure 7. Donation of Personal Protective Equipment Supplies (PPE) to the Philippines Office of Civil Defense. Healthcare workers are in dire need of PPE as they battle against COVID-19, and around the world, individuals and organizations are banding together to contribute and donate to fighting the novel coronavirus. Source: Wikimedia Commons
“By May 11th, 97 public health laboratories offered testing and CDC labs had tested 6,275 samples while US public health labs had tested 818,682 respiratory specimens with viral tests.”
“...states across the country have loosened licensing rules, allowing out-of-state doctors to practice immediately and allowing retired physicians to volunteer.”
government has provided or plans to provide a total of 11,706,105 respirators, 26,533,466 masks, 5,310,654 face shields, 4,408,861 surgical gowns, 145,790 coveralls, 22,609,519 gloves, 50 PAPRs, 7,920 ventilators, and 8,450 beds (House Oversight Committee, 2020). In response to the shortage of equipment, businesses and individuals in the private sector have started producing PPE [Figure 7]. Many companies like Ford and GM are producing various kinds of medical supplies ranging from hand sanitizer to respirators (Tognini, 2020). President Trump used the Defense Production Act on multiple occasions beginning on March 18th to strongly encourage other companies to help the effort and increase production of PPE and ventilators (The White House, 2020). As cases of COVID-19 continue to rise, both the public and private sectors are working to increase available healthcare personnel and resources. Experts have expressed concern over an inevitable shortage of healthcare workers (Simmons-Duffin, 2020). To mitigate these shortages, states across the country have loosened licensing rules, allowing out-of-state doctors to practice immediately and allowing retired physicians to volunteer. Additionally, the Federation of State Medical Boards is offering free access to their physician database, so hospitals can quickly verify the credentials of incoming doctors (Simmons-Duffin, 2020). In addition to state and private responses, the federal government, as of April 11th, has created fifteen Army Urban Augmentation Medical Task Forces, eight of which are already deployed along the east coast. These task forces consist of 85 medical military personnel and are capable of providing the same service as a 250-bed hospital (Lacdan, 2020). In addition to these task forces, 25,000 national guard troops have been mobilized across the country to help test civilians and relieve overflowing hospitals (Lacdan, 2020). Hospitals are currently facing a capacity crisis with a nationwide estimate of 2.77 hospital beds per 1,000 people (Blumenberg, 2020). According to the American Hospital Association, there are currently 6,146 hospitals with 792,417 non-federally staffed beds of which 107,276 are ICU beds (AHA, 2020). To increase surge capacity, makeshift centers have popped up across the country. At the end of March, President Trump announced the immediate conversion of outpatient surgery centers, hotels, and dormitories into makeshift hospitals, healthcare centers, and quarantine sites (Szabo
et al., 2020). Under this plan, over 5,000 outpatient surgery centers in the US could be converted into temporary hospitals, significantly increasing surge capacity in certain regions (Szabo et al., 2020). Given the overwhelming strain currently placed on hospitals, elective surgeries have come to a halt across the United States after the American College of Surgeons recommended on March 17th that all elective surgeries be postponed indefinitely (Shapiro, 2020).

The pandemic has triggered the development of coronavirus vaccines by pharmaceutical companies and research organizations like the National Institutes of Health, and various vaccines are currently in different stages of development. For example, the pharmaceutical company Altimmune is working on a single-dose intranasal vaccine for COVID-19 called AdCOVID. Its design and synthesis have already been completed, and the company plans to conduct preclinical animal studies and phase one clinical trials in the third quarter of 2020. The private biotechnology company OyaGen is assessing the applicability of its drug OYA1 for COVID-19; OYA1 was initially investigated for treating cancer but has since shown strong antiviral activity against the coronavirus in laboratory assays.

Coronavirus research is a priority around the world and has allowed for unprecedented collaboration between countries. For example, Inovio Pharmaceuticals, an American company, is working with Beijing Advaccine Biotechnology Company to develop a novel coronavirus vaccine (INO-4800). Like many other companies and research groups, Inovio is following an accelerated development timeline: preclinical trials are ongoing, human clinical trials have been designed, and large-scale manufacturing is being planned. The company has also prepared 3,000 doses for human clinical trials to be conducted across the US, China, and South Korea. Phase I human clinical trials in 40 healthy volunteers began in April 2020 in the US, and results are expected to become available in September 2020. Besides pharmaceutical companies, universities are also searching for a coronavirus vaccine; Columbia University, for instance, is using a $2.1 million grant to try several approaches. Bringing a vaccine to market can take 12 to 18 months, and there is still a long way to go. However, many clinical trials are already underway, and the process of developing a coronavirus vaccine has been accelerated
immensely (Duddu, 2020).

Enactment of Social Distancing and Contact Tracing Measures

A key measure of success in the efforts to contain the COVID-19 pandemic is the extent to which contact tracing is conducted. Contact tracing warns individuals about potential exposure to the coronavirus and is one of the most useful tools available to combat its spread; it produced promising results in slowing the spread of SARS in 2003 and Ebola in 2014 (Garza, 2020; WHO, 2017). The process consists of public health workers helping infected patients recall the people they had close contact with during the timeframe when they may have been infectious (NCIRD, 2020). Close contact is typically defined as being within 6 feet of an infected person for more than 10 minutes (Garza, 2020). These potentially exposed people are then notified of the potential exposure as quickly as possible and given resources to educate them on the virus and their own risk (NCIRD, 2020). They are asked to social distance for two weeks in case they become sick (NCIRD, 2020).

Even though contact tracing measures are far from perfect, early efforts to trace contacts and mitigate the spread of the disease can lower the ultimate extent of the crisis. For example, if contact tracing measures are put in place after 50 diagnosed cases of COVID-19 are identified in a country, the R0 value decreases from 3 to 1, meaning the growth of the pandemic is linear as opposed to exponential (Hellewell et al., 2020). Qualitatively, this is the difference between successfully “flattening the curve” and overwhelming hospital capacity.
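To see why this matters, consider a toy generational model of spread in which each current case produces R0 new infections per generation. The minimal Python sketch below (with an arbitrary starting point of 50 cases and a horizon chosen purely for illustration) shows cumulative cases exploding at R0 = 3 but growing linearly at R0 = 1.

```python
# Toy generational model: each current case causes r0 new infections per
# "generation" (serial interval). Starting count and horizon are arbitrary.
def cumulative_cases(r0: float, generations: int, initial: float = 50) -> list:
    total, current = initial, initial
    history = []
    for _ in range(generations):
        current *= r0            # new infections produced this generation
        total += current
        history.append(round(total))
    return history

print("R0 = 3:", cumulative_cases(3.0, 6))  # [200, 650, 2000, 6050, 18200, 54650]
print("R0 = 1:", cumulative_cases(1.0, 6))  # [100, 150, 200, 250, 300, 350] (linear)
```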
Many countries have been able to use contact tracing to slow the spread. South Korea, which used contact tracing to stop MERS in 2015, is using online patient interviews, cell phone GPS data, credit card transaction data, and surveillance footage to track and identify exposures (Garza, 2020). People in Singapore are using a mobile app that employs Bluetooth to identify when people are close together; if someone becomes infected, app users who were nearby are alerted. In China, over 9,000 contact tracers have been employed in Wuhan alone to identify and inform people who have been exposed (Garza, 2020). Technological resources are essential to making this process as effective and efficient as possible, and Google and Apple are working together on an optional smartphone program that would aid in the tracing process (Garza, 2020). Contact tracing is an important tool and may be one of the more effective measures to stop the spread of infectious diseases in the future [Figure 8].

A CDC report released on March 6, 2020 reveals details about contact tracing measures enacted in the US. By February 26th, 12 travel-related cases, 3 cases with no travel history, and 46 repatriated US citizens (infected in other countries and brought home to the United States) had been diagnosed as COVID-positive. For 10 of the travel-related cases, a total of 445 close contacts (from 1 to 201 people per case) were identified who had been in touch with the infected person on or after the date when symptoms started. All of the contacts were monitored for fever and other symptoms with daily phone calls, text messages, or in-person conversations. Twelve percent of the 445 contacts developed symptoms, were deemed persons under investigation, and were tested for coronavirus; only two tested positive. Later, 146 contacts related to these two additional cases were identified, and 18 developed symptoms and were tested, but all results came back negative (Burke et al., 2020).

Furthermore, some states in the US have initiated contact tracing programs (Garza, 2020). In California, for example, a pilot program with a tech company trained 250 workers in contact tracing (Garza, 2020). Massachusetts is bringing in 1,000 contact tracers to work across the state, and similar efforts are spreading across the country (Garza, 2020). Some estimate the country will need as many as 100,000 contact tracers employed to effectively track the spread of infection (Garza, 2020).
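The Bluetooth-based apps described above ultimately reduce to a simple rule: flag an encounter as a close contact when two devices remain near each other for long enough. The sketch below is a hypothetical simplification of that rule; real systems, such as the Apple-Google framework, use rotating anonymous identifiers and calibrated signal attenuation rather than the invented RSSI cutoff used here.

```python
# Hypothetical sketch of the core rule behind Bluetooth contact tracing apps:
# flag an encounter as a "close contact" when two phones stay near each other
# (approximated by signal strength) for longer than a threshold. The
# 6-feet/10-minute rule mirrors the definition cited above; the -65 dBm
# cutoff is an illustrative assumption, not a real calibration.
from dataclasses import dataclass

@dataclass
class Encounter:
    other_device: str    # a rotating anonymous ID in real systems
    rssi_dbm: float      # received signal strength, a rough proxy for distance
    duration_min: float

def is_close_contact(e: Encounter,
                     rssi_cutoff: float = -65.0,
                     min_duration: float = 10.0) -> bool:
    """Signal stronger than the cutoff (i.e., nearer) for long enough."""
    return e.rssi_dbm >= rssi_cutoff and e.duration_min > min_duration

log = [
    Encounter("id_a41f", rssi_dbm=-58.0, duration_min=22.0),  # nearby, long: flag
    Encounter("id_9c02", rssi_dbm=-80.0, duration_min=45.0),  # too far away
    Encounter("id_77be", rssi_dbm=-60.0, duration_min=3.0),   # too brief
]
print([e.other_device for e in log if is_close_contact(e)])  # ['id_a41f']
```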
Figure 8. This infographic illustrates the importance of contact tracing and how it can reduce spread of the coronavirus. Source: Wikimedia Commons
"Despite these early contact tracing measures, cases in the United States began to balloon over March and April.”
Despite these early contact tracing measures, cases in the United States began to balloon over March and April. A CDC report released on April 29th outlines an updated policy for controlling the virus. For each new case, patients are asked to recall potential contacts
141
Figure 9: President Trump signed H.R. 748, the CARES Act on March 27, 2020. Source: Wikimedia Commons (Official White House Photo by Shealah Craighead)
“Many countries have banned international travel or, at the very least, are mandating tests and a twoweek quarantine for travelers entering the country.”
which now includes all contacts that they met since they (likely) became infectious. Public health workers then reach out to contacts and ask them to stay home and maintain a distance of at least six feet from other individuals for 14 days after the date of potential. It is not currently possible to know the exact origins of most cases, but as individuals travel to testing sites or are diagnosed at workplaces that are still open, contact tracing measures still apply (‘Contact Tracing,’ 2020). The CDC has guidelines on “Social Distancing, Quarantine, and Isolation” given the spread of COVID-19. The CDC originally recommended that gatherings be limited to less than 1,000 people, then revised that number to no more than 250 people, and then continued to revise the recommended number to no greater than 10 people (CDC, 2020). Since then, it has discouraged mass gatherings altogether. Specific practices recommended include remaining at least six feet away from others, washing hands often, and wearing a mask in public settings if outside travel is necessary. The CDC has also advised working from home whenever possible (CDC, 2020). Although the CDC has promoted its social distancing guidelines since the beginning of the pandemic, the CDC does not have any legal power to enforce its guidelines. President Trump has deemed the CDC guidelines as merely recommendations. There is no law enforcement of social distancing on a national level, but certain states have garnered attention for their individual pandemic responses. Although the White House had been promoting social distancing in their own set of guidelines, those expired on April 30th (Chiu, 2020). However, on May 5th, the White House ordered the CDC to revise its guidelines on reopening the country, claiming they counteracted White House policy on leaving COVID-19 responses up to individual states (Frayer and Suliman, 2020). Though White House officials have left most recommendations to state leaders, Dr. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases, played a key early role in dispensing guidance on disease response to the general public (Chiu 2020). Travel around the world has come to a near standstill as a result of this pandemic. Many countries have banned international travel or, at the very least, are mandating tests and a two-week quarantine for travelers entering the country. The United States initially mandated quarantine for two weeks for all passengers who had traveled from Hubei province early
February, and by March, travelers from a number of countries were denied entry to the US, with travel advisories issued for Iran, Italy, and China (Silverman, 2020). As part of the contingency plan to tackle this ongoing crisis, President Trump extended the travel ban to 26 European nations as well (‘Coronavirus: US travel ban on 26 European countries comes into force,’ 2020). As of early June, restrictions on travel are still mostly in place, though some countries have begun to lift social distancing regulations and want to “open up” their economies after closing down almost all businesses.

Implementation of Economic Measures to Blunt the Impact of the Pandemic

As the US economy has shut down over the past few months, there has been a great need for economic stimulus for Americans, both those who have retained the capacity to work (either physically on the job or remotely) and those who have been temporarily or permanently furloughed. An $8.3 billion aid package, largely geared towards the public health sector, was enacted on March 6th. The Families First Coronavirus Response Act (signed March 18th) provided free virus testing for uninsured individuals alongside food aid, emergency paid sick leave, and an expansion of federal Medicaid funding (CFPB, 2020). The CARES Act (H.R. 748), passed on March 27th, offered a broad slate of relief measures, including compensation for lost wages, provisions to address shortages of drugs and medical supplies, health care coverage, support for businesses, and assistance for the healthcare workforce [Figure 9]. This act provided roughly $2.2 trillion for COVID-19 relief (‘State Fiscal Responses,’ 2020). Specifically, the CARES Act provides an additional $600 per week to those currently receiving unemployment insurance. Tax relief measures include recovery rebates for
taxpayers, payroll tax credits for businesses, and the revocation of certain excise taxes. The act allocates $350 billion to the Paycheck Protection Program, which provides loans to cover payroll costs for small businesses with fewer than 500 employees. It also provides $454 billion in emergency lending to states, cities, and individual businesses, and it expands various measures aimed at boosting the healthcare industry, such as addressing supply chain shortages, improving access to telehealth services, and strengthening Medicare and Medicaid provisions. A $150 billion Coronavirus Relief Fund was also set up to support state and city expenses related to COVID-19 (Watson et al., 2020). Since the CARES Act, the Paycheck Protection Program and Health Care Enhancement Act was signed into law on April 24th; it included $310 billion in small business funding, $75 billion for hospitals, $25 billion to support testing, and $60 billion in additional emergency disaster relief (‘State Fiscal Responses’, 2020).

Alongside federal intervention, individual states have formed their own policies regarding economic action during the pandemic. Thousands of bills have been introduced in state legislatures, from bills urging individuals to fist bump rather than shake hands in Alabama to bills giving public schools more time to submit five-year budgets in Ohio (‘State Action,’ 2020). In general, states have taken action to provide additional stimulus beyond what the federal government has offered.

Policymakers and public health officials are also concerned about safely operating essential businesses, especially as several states have started to relax social distancing measures. Small businesses are particularly vulnerable in terms of profit and production given their smaller clientele bases and occasional lack of flexibility. To support struggling businesses during this time, most states have issued instructions for stay-at-home practices and business closures. States are also beginning to implement the CDC’s behavioral adjustments to prevent transmission within stores. These practices go beyond wearing masks in public to include wiping public surfaces before and after touching them, leaving doors open so that handles need not be touched, and removing items from common areas (CDC, 2020). However, several of these measures will not be mandated at the state level given the differing priorities between states. Because people are still making trips to these essential businesses, it is
important that these spaces are continuously disinfected. The Environmental Protection Agency has provided a list of disinfectants that are effective in slowing the spread of viruses. In addition to using these disinfectants, businesses have been instructed to routinely clean surfaces and objects with soap and water. The CDC has also suggested creating alternative disinfectants when others are not available and has issued guidance about the proper care and storage of these disinfectants. It is also important to protect the custodial staff performing the disinfecting, as they are often the most exposed to the virus and to toxic disinfectant chemicals; this can be done by ensuring custodial staff receive sufficient training and wear gloves and masks at all times (‘CDC Reopening Guidance,’ 2020). Additionally, the US Chamber of Commerce has suggested that companies decide whether to keep work hours limited and make sure to clearly address staff safety concerns. The Chamber of Commerce also notes that employees should be informed of reopening plans prior to agreeing to work hours, and customers should be informed of expectations for visiting business spaces. It further recommends that requirements be publicized in multiple ways, such as with both signage and an FAQ document (Fallon, 2020).
International Response to the Pandemic

This section outlines the international response from a variety of perspectives. It focuses on efforts to ramp up medical capacity, guidelines for physical distancing, and government measures to address the economic impact of the crisis. The synopsis of each country’s response includes a timeline of the events that have taken place over these last few months and a description of specific measures taken within each country [Figure 10] [Figure 11] [Figure 12] [Figure 13].
China

Although the earliest symptoms of COVID-19 were first observed in patients on December 1st, the virus is believed to have begun spreading in early-to-mid November in the Hubei province of China (“Early outbreak,” 2020). It then took about three weeks for Chinese officials to receive word of SARS-like viral symptoms in Hubei. By December 31st, over 27 cases of pneumonia, later attributed to COVID-19, had been confirmed. Over the course of the next few weeks, the virus
Figure 10. A heat map of the COVID-19 outbreak as of May 29, 2020. The map draws its case information from the WHO’s situation reports, but as this is a fluid situation, new cases may not be immediately represented. Source: Wikimedia Commons
spread rapidly. On January 1st, a Chinese hospital posted on the social media platform WeChat that it was fighting a mysterious illness; it was charged by Chinese officials with spreading rumors. The same day, the Huanan Wholesale Seafood Market was shut down and the WHO was informed of the possible disease (“Early outbreak,” 2020). By January 7th and 8th, Chinese officials had confirmed the emergence of a new coronavirus, whose genome was released to the public on the 9th for the purpose of developing detection tests. When the WHO found evidence of human-to-human transmission, the entire city of Wuhan was locked down on January 23rd. This timeline coincided with the Lunar New Year, whose festivities led hundreds of thousands of Chinese citizens to travel across the country, presumably contributing to a quick spread of the disease.

Although many of China’s lockdown policies have been deemed effective at curtailing the spread of the virus, many health experts have criticized China’s delayed response and dispersal of information regarding the disease (“Pivotal 6 Days,” 2020). One report found that China had known of the possibility of a pandemic for about six days before informing the rest of the world, a delay that occurred at a critical early stage of the spread of the virus. China’s first official public comments arrived on January 20th, warning the public of a new disease (Cyranoski, 2020). Others have been less critical of China’s cautious approach, noting that raising an alarm too early can have the unintended consequence of crying wolf, thereby dampening public
caution when the situation grows more dire (Beusekom, 2020).

Social distancing was instituted at the same time as Wuhan’s lockdown and continued in Wuhan until April 8th. In terms of travel restrictions, Wuhan was locked down in mid-January, and 760 million people were confined to their homes and pressured to stay in place (Li et al., 2020). Officials later determined that travel bans do not necessarily stop the spread of a virus but instead slow it down, thus aiding in ‘flattening the curve.’ The WHO also congratulated China on its successful social distancing policy. One study found that the R0 value dropped from 2 to 1.05 due to social distancing measures (Beusekom, 2020). There is evidence that “cities that suspended public transport, closed entertainment venues and banned public gatherings before their first COVID-19 case had 37% fewer cases than cities that didn’t implement such measures” (Cyranoski, 2020). One mathematical model shows that over 60% of cases could have been prevented had social distancing been implemented one week earlier (“Shortcomings and Deficiencies,” 2020).

China’s early medical response exposed deficiencies in the Chinese healthcare system. Towards the beginning of the virus’s spread, the National Health Commission distributed a 63-page guide on identifying cases, opening fever clinics, and recommending protective gear (“Pivotal 6 Days,” 2020). These documents were classified as internal, and some claim that officials downplayed the initial severity of the threat. According to Time Magazine, China
has only 1.8 doctors for every 1,000 people, compared with 2.4 in the US and 2.8 in the UK. Furthermore, “China’s doctors are unevenly weighted towards specialties to the detriment of primary care,” which makes it more difficult to transition to quick frontline response and testing (Campbell, 2020). However, over 1,000 telehealth companies have seen a boost in usage due to the coronavirus, claiming to have amassed over 300 million users in the country; one company grew from 10,000 to 150,000 daily consultations over a period of a few months (Beusekom, 2020).

Many of China’s hospitals were inadequately equipped to deal with the large influx of cases, facing a severe shortage of hospital and ICU beds and other healthcare resources (“Shortcomings and Deficiencies,” 2020). In response to the virus, more than 18,000 healthcare workers from other parts of China arrived in Wuhan to aid hospitals. Two makeshift hospitals and dozens of quarantine centers with more than 39,000 beds in total were reserved for COVID patients (Campbell, 2020). During the outbreak’s peak in Wuhan alone, over 19,000 patients were hospitalized each day, 9,689 of whom were in serious condition and 2,087 of whom required intensive care (Li et al., 2020). Guangzhou, which had started contact tracing early, had a maximum of 271 patients hospitalized on any given day. The low availability of ventilators correlated with a sharp increase in fatality rates, which were 4.5% in Wuhan compared to 0.8% elsewhere in China; such numbers show the need for adequate medical supply provisions (“Shortcomings and Deficiencies”, 2020). From this data, scientists conclude that social distancing and “flattening the curve” lower the fatality rate, as they spread the need for vital medical supplies and hospital space out over time.

Unfortunately, new clusters of the coronavirus have emerged around Wuhan since the end of its lockdown (Pasley, 2020). Chinese officials have responded swiftly, particularly in the northeastern provinces where new cases have appeared (Hernandez, 2020). For example, the city of Shulan was locked down in May in a Wuhan-style manner after at least 34 people in the surrounding province experienced symptoms relating to COVID-19 (Davidson, 2020). New hotspots are expected to emerge, which leaves officials with questions about when it is safe to relax social distancing orders while minimizing the spread of the disease.
South Korea

The 2015 MERS outbreak clearly influenced South Korea’s response to the novel coronavirus. Because the country came to a standstill during MERS, South Korean officials have acted more aggressively than those in countries without a similar experience (Beaubien, 2020). The first case in South Korea was confirmed on January 20, 2020, and only one or two new cases on average were confirmed in the subsequent days (Shim et al., 2020). By mid-February, however, the number of confirmed cases started to rapidly increase following the 31st case: a woman in the Daegu area who developed atypical pneumonia without any history of travel to an area with a COVID-19 outbreak. After this woman was identified, the rapid spread was traced back to Daegu’s Shincheonji Church of Jesus, where a positive case has been linked to a superspreading event leading to more than 3,900 secondary cases (Shim et al., 2020). Other clusters include the Chungdo Daenam hospital, a Cheonan gym, and a pilgrimage tour to Israel. By March 6th, there was a total of 6,284 cases with 42 deaths according to the Korea Centers for Disease Control and Prevention.

South Korea’s foreign minister, Kang Kyung-wha, said the key to the country’s healthcare response was that it started to develop testing for the novel coronavirus even before it had a significant number of cases (Beaubien, 2020). Beyond creating an accurate test, South Korea has established more than 600 COVID-19 screening sites, including public healthcare clinics, drive-through centers, and walk-in screening sites (Her, 2020). All sites are capable of performing COVID-19 screening and taking swab samples for reverse transcription-PCR tests; this setup has allowed more than 15,000 screening tests to be performed in a single day (Her, 2020). Additionally, in order to track exposures and enforce social distancing, the South Korean government has used data from surveillance cameras, cell phone GPS records, and credit card transactions to generate a movement map documenting the social connections of suspected cases (Beaubien, 2020). This map is displayed on the Internet, and notifications are sent to inhabitants of the relevant neighborhoods so they can take additional precautions; thus, anyone who has interacted with an infected person can be traced and placed into quarantine (Her, 2020). The legal foundation for this comprehensive strategy lies mostly in legislation passed in the immediate aftermath of the MERS outbreak, and it is unlikely to be implemented in
Figure 11. Rates of COVID-19 testing (per 1,000 people, rolling three-day average) for the various countries discussed in this report. This data was derived from the website ‘Our World in Data’ and repurposed into this new graphic with RStudio. Data Source: https://ourworldindata.org/coronavirus-
countries like the US due to privacy concerns and protection of personal information.
The estimated R0 in South Korea was 1.5 ± 0.1 as of March 2020, and the nationwide preventative measures are expected to reduce community transmission and ultimately bring the reproduction number below one (Shim et al., 2020). Despite these aggressive measures to limit lives lost to the coronavirus, the economic impact has been vast. The Korea Composite Stock Price Index initially dropped before seeing a small rise after government support was announced (Nicola et al., 2020). The Korea Development Institute (KDI) has also directly cited the negative impacts of COVID-19 on the rapidly contracting economy as consumption and exports decrease, and its report comments on the grim decline in global industrial production and trade volume.

Singapore

Having learned from the SARS outbreak in 2003, Singapore was already allocating resources towards future infectious disease prevention and response when some of the first cases were reported in Wuhan (Lim, 2020). Given Singapore’s proximity to the outbreak, the government started to screen travelers from Wuhan as early as January 2nd (Yong, 2020). The first case of COVID-19 in Singapore occurred on January 23rd, and restrictions banning travel from Wuhan were put in place that day (Lim, 2020; Pung et al., 2020). About a week later, enhanced government surveillance was authorized for COVID-19 patients and those possibly infected (Pung et al., 2020). Singapore raised the Disease
Outbreak Response System alert to level orange, indicating that COVID-19 was a severe respiratory disease that spreads easily from person to person (Lim, 2020). Though the government was quick to react, by February 17th Singapore had the largest number of cases outside of mainland China (Liew et al., 2020). By the middle of February, strict 14-day stay-at-home orders had been issued to any person entering Singapore from China (Yong, 2020). Additional restrictions banning travel from Iran, Italy, and South Korea were announced in early March. By March 24th, all entertainment venues were required to close, and some restrictions were placed on malls, museums, and other gathering places. These measures were taken in response to what many called a second wave of the virus in Singapore, which occurred due to the large influx of Singaporeans returning home when they believed it to be safe (Lim, 2020). By April 17th the number of cases in Singapore had nearly peaked with over 5,000 cases confirmed, and by the end of April the rate of new infections was steadily declining (Yong, 2020).

In response to SARS in 2003, a national center for infectious diseases focused solely on research and prevention had been created with help from the WHO and the CDC (Lim, 2020). Additionally, the government had enacted the mission ‘Sparrowhawk,’ a plan to help train doctors in infectious disease awareness and testing (Lim, 2020). These measures were already in place when COVID-19 hit Singapore. After the virus emerged, additional actions were taken, such as the creation of a multi-
agency task force chaired by two ministers from the health and national development departments and staffed by public servants from a variety of backgrounds (Lim, 2020). To track and attempt to slow the spread of the disease, the government used contact tracing, which proved extremely effective and identified 40% of cases in Singapore (Lim, 2020). Manufacturing centers across the country moved quickly to produce diagnostic kits and serological tests while working on new drugs and possible vaccines (Lim, 2020). Additionally, the government released anonymized, detailed reports from doctors of patients with the virus to debunk false information and dispel fears (Lim, 2020).

While the country strove to keep business as normal as possible, many businesses were shut down or partially restricted. Starting May 12th, select businesses, like salons and restaurants, were allowed to reopen under various restrictions such as decreased capacity and social distancing (South China Morning Post, 2020). As a result of the recent decrease in new cases, the government is beginning a phased reopening of the economy (South China Morning Post, 2020; Yong, 2020). At the beginning of the pandemic, the Singaporean government announced that all COVID-19 medical expenses would be paid for by the government. Additionally, if a resident could not attend work because of COVID-19 and social distancing measures, the government would distribute around $70 USD per day missed (Lim, 2020). The government also passed a bill protecting businesses for up to six months from legal action over economic disturbances resulting from the virus (Choudury, 2020).

Japan

The first coronavirus case in Japan, reported on January 16th, was a Chinese national who had traveled from Wuhan to Japan (“Coronavirus: timeline of outbreak,” 2020; Margolis, 2020). A month later, the infection rate in Japan was as low as one case per million people (“Coronavirus: timeline of outbreak,” 2020). On February 3rd, Japan blocked entry to those with a history of traveling to Hubei province and to Chinese nationals with a Hubei province-issued passport. A month later, these entry restrictions were expanded to regions in South Korea, Italy, and Iran. Japan also instituted self-enforced self-quarantine regulations for visitors from Europe and the US. Following the travel restrictions, on February 27th,
public schools shut down and the government requested the suspension of community gatherings and the closure of tourist attractions, sporting events, concerts, and festivals. On March 5th, 55 new cases were reported, and only 98 new cases were reported on March 25th, which aligns with a linear growth model (Margolis, 2020). By April 9th, there had been 5,002 cases, 116 deaths, and 1,270 recovered cases in Japan (Looi, 2020). While the governor of Hokkaido proclaimed a state of emergency on February 28th, asking people to stay inside, the prime minister declared a state of emergency in Tokyo and its neighboring cities on April 7th, lasting until May 6th (Margolis, 2020; Looi, 2020).

Under Japanese law established after World War II, the government cannot force private companies to close, nor can it force its citizens to stay inside or penalize them for not doing so (Looi, 2020). State of emergency measures are requests and instructions; the government cannot legally enforce lockdowns. Emergency declarations, however, enable prefectural leaders to ask residents to stay home, to request closures of schools, nonessential stores, and businesses, and to advise organizers to cancel or postpone events (Margolis, 2020). Some stores were temporarily closed and working from home was encouraged, but not forced; supermarkets, public transport, and pharmacies remained open (Looi, 2020). Rather than widespread private and public closures, the government asked people to avoid environments with poor ventilation and dense crowds. This has been difficult to enforce, as demonstrated by a 6,500-person gathering for a martial arts event in Saitama in late March. Hence, social distancing measures have been moderate, and rush-hour traffic was down only 10% in late March compared to mid-January. Of the big corporations in Japan, 55% have shifted to working remotely, but many white-collar workers still work in the office. Nonetheless, modest social distancing has made an impact: nationwide event cancellations and social distancing measures starting at the end of February have cut the infection rate to a calculated 50% of what it would have been without these measures (Margolis, 2020).
Moreover, Japan has relied heavily on the isolation of cases and contact tracing. People with symptoms were first tested around February 12th, and a specialized team of public health and medical experts was created to identify and isolate infection clusters. Whenever a hospital confirmed a new case, the government dispatched teams of medical and data experts to work with local governments in an effort to locate and test anyone who had come in contact with the infected patient. Often, local facilities associated with an infection cluster were closed down. Both effective cluster tracking and early social distancing measures may therefore have contributed to the linear rather than exponential growth of infections (Margolis, 2020).

Figure 12. COVID-19 case fatality rate, or CFR (%), for the various countries discussed in this report. This data was derived from the website ‘Our World in Data’ and repurposed into this new graphic with RStudio. Data Source: https://ourworldindata.org/covid-cases
"Japan fell into a recession as the economy declined by an annualized rate of 3.4% in the first three months of 2020.”
It is common in Japan to practice good hygiene, and it is rather uncommon to shake hands, hug, or kiss when greeting. These habits have likely lowered the spread of the coronavirus, as handwashing alone reduces respiratory infections by 16%. A survey from January 2018 also revealed that 53% of Japanese people wear masks regularly; hence, the common practices of mask wearing and good hygiene may have helped Japan mitigate the spread of the virus from the very start (Margolis, 2020).

Medical attention has been focused on the severely ill to prevent the healthcare infrastructure from being overwhelmed (Margolis, 2020). People with mild symptoms are asked to stay at home or in a designated hotel rather than being automatically admitted to a hospital. Because hospitals have only about 5 beds per 100,000 people, 10,000 hotel rooms in Tokyo and 3,000 in the western Kansai region have been allocated to patients with mild symptoms, and a further 800 patients can be accommodated at the Olympic Village in Tokyo. Meanwhile, the Tokyo 2020 Olympics have been postponed (Looi, 2020). Even though Japan has
a strong national healthcare system (with more than four times as many hospital beds per 1,000 people as the US), there is still a shortage of medical supplies like masks and disinfectants (Margolis, 2020).

Japan has conducted fewer tests than other countries and uses only about 15% of its testing capacity of 7,500 tests a day. In an effort to preserve limited medical supplies, Japan has instituted strict testing criteria, only allowing patients to be tested who have had a fever greater than 99.5 degrees Fahrenheit for more than 4 days, who have underlying health conditions, or who are connected to a previously confirmed case. For comparison, by March 20th the US had conducted 313 tests per million people and South Korea more than 6,000 per million, while Japan had conducted only 118 tests per million. This small amount of testing could explain why Japan has not recorded exponential growth in infection rates despite only minimal social distancing; the undertesting may have masked the true extent of infection in Japan (Margolis, 2020).
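Japan's testing criteria amount to a small eligibility rule. The sketch below encodes the three conditions described above; the function and field names are invented here for illustration and do not come from any official source.

```python
# Sketch of Japan's strict testing criteria as described above: a patient
# qualifies for a test only with a sustained fever, an underlying health
# condition, or a link to a previously confirmed case.
def eligible_for_test(fever_f: float, fever_days: int,
                      underlying_condition: bool,
                      linked_to_confirmed_case: bool) -> bool:
    sustained_fever = fever_f > 99.5 and fever_days > 4
    return sustained_fever or underlying_condition or linked_to_confirmed_case

print(eligible_for_test(100.4, 5, False, False))  # True: fever sustained > 4 days
print(eligible_for_test(100.4, 2, False, False))  # False: fever too recent
```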
Furthermore, Japan has felt the impact of the coronavirus outbreak on its already weakened economy, the world’s third largest. Japan fell into a recession as the economy declined at an annualized rate of 3.4% in the first three months of 2020. Consumer spending had already dropped after the government increased the consumption tax by 2% in October 2019, and shortly afterwards a typhoon hit Japan’s main island, further dragging down the economy. The number of Japanese exports fell in 2019 due to decreasing global demand and the US-China trade war, so the pandemic has worsened an already difficult economic situation, inflicting a reduction in exports and tourism, the postponement of the Olympics (which would have boosted economic activity), and a decrease in consumption due to requested lockdown measures (Dooley, 2020).

Italy

Italy has been one of the nations hardest hit by COVID-19, second only to China in the early stages of the pandemic, with more than 200,000 cases and approximately 30,000 deaths as of May 8th. After Italy declared a national emergency on January 31st, small towns affected by the outbreak were placed under quarantine. The first case of atypical pneumonia was recorded on February 20th, 24 hours before 36 new cases from patient zero’s contact list were diagnosed (Palaniappan et al., 2020). As the situation worsened, schools and universities were closed and lockdowns were instituted in several northern provinces on March 8th, followed by a nationwide lockdown two days later (Palaniappan et al., 2020). Foreign travel and foreign visitors, while not restricted, were highly discouraged, and Italy chose to participate in the EU’s entry ban from March 17th to May 17th to help slow the spread of the virus (Palaniappan et al., 2020).

Despite the creation of a COVID-19 task force in Lombardy on February 21st, Italy had difficulty preventing transmission of the virus (Giacomo et al., 2020). Hospitals were instructed to create cohort ICUs for COVID-19 patients, organize triage areas to administer mechanical ventilation, establish local procedures for respiratory triage testing and diagnosis, stock up on PPE, and set up a mechanism for reporting the critically ill to the regional coordinating center (Giacomo et al., 2020). Since this was still in the early days of the outbreak, before it became a pandemic, little was known about the virus’s modes of transmission, and it spread from Lombardy to other regions (Giacomo et al., 2020). The rapid spread of the virus has put immense pressure on the healthcare system and on physicians, who have been working extended hours. Hospitals running low on ICU space have been forced to design triaging systems to determine the resource distribution that best benefits the population (Giacomo et al., 2020). The Italian government has also been criticized for creating red zones in a manner inconsistent with the spread of the virus (Pisano et al., 2020). Implementing gradual lockdown decrees may
have worked against prevention interests, as they led to more movement, especially southward as people moved away from the quarantines in the north (Pisano et al., 2020). As in other countries, Italy’s economy has suffered as a result of social distancing measures. According to the International Trade Administration, Italy’s small and medium enterprises sector, which depends on loans to operate, corresponds to one-third of the economy and half of all employment (“2020 Study of Italy's Economy,” 2020). This sector is especially vulnerable to economic fluctuation, and the pandemic has dampened production and caused a substantial economic downturn. Italy’s economy is also highly dependent on automobile and food production, aviation, retail, and tourism, operations which have been halted by the implementation of distancing strategies (“2020 Study of Italy's Economy,” 2020). Italy has moved past phase two of its recovery, which entailed reopening nonessential businesses and allowing people to return to work, and into phase three (Morrison, 2020). The Italian government has advised businesses to require masks, gloves, distancing pathways, customer limits, and sanitizer stations (Morrison, 2020).
Germany

When Germany recorded its first case on January 27th (a 22-year-old asymptomatic man working at a school), labs across the country already had a stock of test kits. When he tested positive, the school closed, all children and staff were ordered to stay at home in isolation for two weeks, and 235 people were tracked and tested (Bennhold, 2020). Shortly thereafter, the German government set up a COVID-19 crisis team. By March 9th, the first COVID-19 death had been recorded. On March 13th, a day after the WHO declared the coronavirus a pandemic, the first restrictions were imposed on public life, including school and university closures and mass event cancellations (“Coronavirus timeline in Germany,” 2020; Hartl et al., 2020). On March 16th, the government decided to close non-essential shops as well as public spaces, and on March 22nd, gatherings of more than two people were banned (“Germany: COVID-19 measures,” 2020). A worldwide travel ban was also imposed and later extended (Marcus, 2020). The number of cases exceeded 50,000 on March 29th and 100,000 on April 6th (“Coronavirus timeline in Germany,” 2020). In early April, the time taken for the number of infections to double slowed
Figure 13. Timeline of the international response to the COVID-19 pandemic with dates in which countries imposed lockdowns or declared a national state of emergency, as well as the dates of key pronouncements by the WHO. Created by staff writer Nina Klee
to around 9 days (Bennhold, 2020). On April 27th, wearing protective masks became mandatory (“Coronavirus timeline in Germany,” 2020). In mid-April, Germany had also started Europe’s first large-scale antibody testing to track infection rates and limit the spread of the virus, with the aim of examining around 5,000 blood samples every 14 days and determining levels of immunity from the presence of antibodies in the blood (Schmitz, 2020). On May 4th, as the lockdown was slowly lifted, groups of 5 people from different households were allowed to gather, and shops, restaurants, hotels, and more began to reopen. The number of coronavirus cases had reached 165,664 by May 4th, while the number of new infections per day was below 1,000, making cases more traceable (Marcus, 2020). If the virus begins spreading faster again, some restrictions may need to be reimposed. However, the daily death toll in Germany was the lowest it had been in over a month, with fewer than 40 people dying within 24 hours by May 10th (“Germany infection rate rises,” 2020).

Furthermore, the average age of those infected has been lower in Germany than in other countries, largely because many early patients caught the virus in ski resorts in Austria and Italy. As infections spread, more older people were affected, and the death rate increased from 0.2% to 1.6%. Still, the average age of those infected in Germany has been only around 49 (Bennhold, 2020). Germany is also testing more people than most countries, which has contributed to its comparatively low fatality rate of 1.6%. With this high level of testing, Germany has detected more asymptomatic people than other countries. Early and extensive testing slowed the spread and severity of the outbreak by enabling public health officials to isolate known cases and treat patients earlier, which significantly increases their chances of survival. Medical staff are also tested regularly, and some hospitals conduct block tests in which swabs from ten people are analyzed at once and individual tests are conducted only if the result of the block test is positive. This increases the efficiency of testing by limiting the number of tests conducted while still identifying infected individuals. Meanwhile, social distancing measures have flattened the curve, which has kept the healthcare system from being overwhelmed and prevented a scarcity of lifesaving equipment like ventilators. Chancellor Angela Merkel has communicated clearly and regularly, even whilst imposing stricter social distancing measures (Bennhold, 2020).
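The arithmetic behind block (pooled) testing is worth making explicit. Under the simplifying assumption that infections are independent with prevalence p, the expected number of tests per person for pools of ten can be computed as below; real pooling protocols involve additional considerations such as sample dilution, so this is only an idealized sketch.

```python
# Expected number of tests per person under block (pooled) testing, as
# described above: swabs from a pool of 10 are tested together, and
# individual follow-up tests are run only if the pooled result is positive.
# Assumes independent infections at prevalence p (an idealization).
def expected_tests_per_person(prevalence: float, pool_size: int = 10) -> float:
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    # One pooled test, plus pool_size follow-up tests when the pool is
    # positive, averaged over everyone in the pool.
    return (1 + p_pool_positive * pool_size) / pool_size

for p in (0.001, 0.01, 0.05):
    print(f"prevalence {p:.1%}: {expected_tests_per_person(p):.2f} tests/person")
# At low prevalence this stays far below the 1.00 tests/person required by
# individual testing, which is why pooling stretches limited capacity.
```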
Furthermore, Germany’s public healthcare system was well prepared, with high capacity, especially in intensive care. Hospitals across the country, which had high capacity to begin with, expanded their intensive care units further. One university hospital in Giessen, which already had 173 intensive care beds with ventilators, created an additional 40 beds and increased the number of staff on standby to work in intensive care units by 50%. In January, Germany had around 28,000 intensive care beds with ventilators, and by early April it had 40,000 available ICU beds. The effective measures in Germany have therefore been early mass testing and treatment, social distancing guidelines, and the preparedness and capacity of the healthcare system (Bennhold, 2020).
Despite its successful response to the coronavirus outbreak from a health perspective, the pandemic has still caused Germany to fall into a recession, with the economy shrinking by 2.2% in the first three months of 2020, the largest quarterly fall since the global financial crisis in 2009. In the last three months of 2019, the German economy had contracted by 0.1% due to effects of the US-China trade war, making this second quarter of negative growth a recession. As in most other countries, the negative growth was triggered by reduced exports, consumer spending, and investment resulting from lockdown measures and restrictions on commercial activity. That Germany’s economy has been less affected than those of neighboring countries may be the result of the 16 German states allowing factories and construction sites to remain open during lockdown (“German economy in recession,” 2020), as well as the government providing a $146 billion stimulus package (Sardana, 2020). However, economists expect a steeper decline in the second quarter of 2020, as the consequences of the lockdown come fully into effect (“German economy in recession,” 2020).

Great Britain

On February 10th, the COVID-19 crisis was labeled a “serious and imminent threat to public health” by the UK government, 11 days after it had been labeled a “Public Health Emergency of International Concern” by the WHO (Sohrabi et al., 2020). At that time, eight people had tested positive for SARS-CoV-2 in the United Kingdom (Mahase, 2020). By March 4th, the number of cases had risen to 85, at which point a testing containment strategy was put in place. By that time, 16,659 tests had been conducted, and anyone who tested positive was transferred to a designated High Consequence Infectious Disease center. The UK National Health Service and its ministers have been tasked with coordinating the pandemic response throughout the crisis, receiving direction from Prime Minister Boris Johnson (Sohrabi et al., 2020).

After the WHO officially declared the COVID crisis a pandemic, UK officials commenced plans to alter the structure of British society. Non-elective surgeries were suspended on March 17th, allowing for the expansion of testing capabilities. The “Five-Pillar Plan,” announced by Health Secretary Matt Hancock on April 2nd, aimed to allow 100,000 tests per day to be administered in England by the end of the month by involving the private sector in test production (Bradley,
2020). Further plans to shut down non-essential businesses and public events were put in place throughout March and April (Iacobucci, March 2020; Iacobucci, April 2020). By the end of May, much of Britain was still locked down, although plans for reopening were starting to be set in motion. On May 11th, Prime Minister Boris Johnson gave a speech from his official residence outlining a “conditional plan” for reopening, proclaiming the earliest date (June 1st) at which primary students could return to school and at which shops could reopen, so long as they took special precautions (“PM Unveils,” 2020). In a May 26th address, Johnson gave the go-ahead for nonessential shops to open on June 15th, following the social-distancing guidance provided in a UK report on retail reopening (“Non-essential Shops,” 2020).

As COVID-19 became a more serious threat to public health, the UK government began to ramp up its healthcare system to ensure it had sufficient capacity to respond to the pandemic. The National Health Service (NHS) provided guidance on social distancing, encouraging individual actions such as frequent hand washing, avoiding crowded places, and canceling social events. In addition to informing the public, the government took measures to scale up hospital and testing capacity. The aforementioned five-pillar plan increased the rate of swab testing, especially for those at high risk, healthcare workers, and others in essential industries. It also called for the development of antibody tests, enhanced surveillance measures to detect viral spread, and a public-private partnership to enhance population-wide testing capacity (Iacobucci, March 2020). The NHS and industrial partners worked together to provision hospitals with the key supplies (ventilators, face masks, and hospital beds) that would keep the spread of the virus from exceeding hospital capacity.
The response to COVID-19 also required an emphasis on the pandemic’s economic consequences. On March 25th, the British government enacted the ‘Coronavirus Act 2020’ (HC Bill 122). The law includes measures to increase the health care workforce (by calling in retired professionals and current medical students), mental health resources and other measures of support for frontline workers, a statutory sick pay system for unemployed workers, and a number of provisions for business relief. This includes government-
backed loans, special support for industries like aviation, and guidelines for businesses to operate in a modified capacity. Life sciences research also received a funding boost: the NHS will receive an additional 34 billion pounds per year (compared to the 2018-19 fiscal year) until the fiscal year 2024-25, an increase in the R&D tax credit, and an increase in public R&D investment by 22 billion pounds up to 2024-25 (‘Sha,’ 2020).
COVID-19 in Developing Countries

As COVID-19 spread across the world, it reached developing nations at a delayed pace. Once it arrived, however, it spread rapidly due to poor infrastructure and a lack of resources (Brahma et al., 2020; WHO, 2020). With 75% of people in the least developed countries lacking soap and water and living in overcrowded and underserved areas, the disease cannot be easily contained (United Nations, 2020). As of April 22nd, there are over 15,000 COVID-19 cases in Africa, with the majority in South Africa, Algeria, and Cameroon (WHO, 2020). In South America, countries like Ecuador and Brazil began to see the virus’s impact in early February (WHO, 2020). Brazil and Ecuador are the two most heavily impacted countries in South America; by the end of May, Brazil had over 1,500 cases and Ecuador had over 780 (Hallo et al., 2020).
As the virus spreads across developing countries and disrupts pre-existing healthcare measures, many aid groups have stepped in to help with the response. At the beginning of the pandemic, only two African countries possessed testing capabilities, but now more than 44 can effectively test for COVID-19 (WHO, 2020). As SARS-CoV-2 disrupts the world, other medical efforts in developing countries have come to a halt. The WHO and UNAIDS predict that a six-month disruption of antiretroviral treatments in Africa could result in over 500,000 additional deaths from AIDS-related diseases (Ravelo et al., 2020). Additionally, due to the decrease in medical care from foreign aid groups, HIV mother-to-child transmission rates are expected to increase by as much as 104% in certain African countries (Ravelo et al., 2020). Furthermore, it is estimated that developing countries will experience a devastating income loss exceeding $220 billion USD (United Nations, 2020). In response, on May 7th, the United Nations increased the funds of the Global Humanitarian Response Plan to almost $7 billion. This includes additional funds for the WHO’s Preparedness and Response Plan, which will help developing countries respond to the virus both medically and financially (Ravelo et al., 2020). Tackling COVID-19 in developing countries is essential because in these most susceptible areas, the scars will be deep and long lasting, not just in the economy but also in the fight against other diseases prevalent in the communities (United Nations, 2020).
The Pandemic Response of the Future

This pandemic has challenged the world to rethink the sense of normality it once had and to instead embrace a sense of abnormality and uncertainty. Innovative thinking, collaboration, and fast action have been important components of a critical response to the ever-changing circumstances of this pandemic. Some implemented measures have been effective in mitigating the spread of the virus while others have not; some countries have been successful in controlling the severity of the virus and its transmission while others have failed to do so. So, how can nations be better prepared for future pandemics or recurring waves of coronavirus outbreaks? A comparison of the various international responses suggests that the key measures for successfully mitigating the spread of the virus have been widespread testing, early and rigorous contact tracing, and close adherence to social distancing measures. In the face of a future pandemic, rapid virus detection, effective contact tracing systems, improved testing capabilities, and mitigation of economic consequences will be key. This section discusses how the pandemic has transformed aspects of everyone’s lives in the long term and what to expect in the future.
Rapid Virus Detection

If another pandemic event like COVID-19 arises in the future, how can the operative pathogen be detected sooner? One approach would involve the proliferation of wearable devices that can detect signs of physiological distress, such as fever, abnormal heart rate, or breathing difficulties. The idea is that abnormal data from a number of people in close proximity could alert medical professionals to a potential outbreak of a virus or other pathogen in a certain area. The medical professionals could then contact these individuals, test them for a slate of known pathogens (or identify novel ones), and initiate, if necessary, quarantine and contact tracing protocols. Several useful devices have already been designed.
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Figure 14. Telemedicine devices, such as this one used by emergency medical teams in Spain to perform an ultrasound, have become even more crucial in order to reduce the number of healthcare providers directly exposed to the COVID-19 environment. Source: Wikimedia Commons
monitoring of heart rate could provide useful information about health status. Other devices collect more complicated medical information. The Butterfly device, a mobile ultrasound machine, is already being used to image the lungs of patients with COVID-19. It is currently only authorized for use by medical professionals, but it may feasibly be approved for public use (“Butterfly iQ,” n.d.). Another useful device is the Clairways Technology’s low-powered acoustic sensor, which listens in to lung sounds. It is worn on the chest and transmits its data to an external database, where machine learning algorithms trained to detect respiratory events like coughing, wheezing, or even heart rate variability characterize lung health. This allows researchers conducting clinical trials to monitor patients' lung health continuously throughout the trial period and not only when they stop by the clinic (“Clairways,” n.d.). There are also a number of Mobile EKG’s on the market capable of gauging heart health. A test that previously required a visit to the hospital or doctor’s office and the placement of 12 electrodes on the patient’s skin can now be done at home in a much less cumbersome fashion (“12-Leed ECG Placement,” n.d.). AliveCor’s KardiaMobile Device is a pocket-sized device that is synced up to an iPhone. In order to collect data, a patient just has to place two fingers from each hand on the device (“AliveCor,” n.d.). That data may be stored, analyzed, and utilized by doctors and researchers if it is approved for that purpose. For this wearable device approach to be an effective mode of pandemic detection, the wearable devices would need to be widespread throughout the human population on a global
scale. If just one region of the world goes without these technologies, a pandemic could spread to the point where these new viral detection strategies are no longer very useful, although they may still be helpful for isolating early cases. It is thus imperative that these devices be inexpensive, and a debate is needed about how they should be paid for. If the devices are paid for by individuals, should they be covered by insurance? If they are instead paid for publicly, legislators and their constituents would need to be convinced that the health benefits, and all the economic benefits that a healthier public may entail, outweigh the cost of these devices. At the same time, designers should focus on building devices that are highly effective at detecting some aspect of human health without being overly complicated (both to save money and to make them easier for the user to operate). In addition, there are ethical hurdles that must be overcome regarding the collection and use of data from wearable devices. In order for doctors and researchers to recognize an outbreak and stop it in time, real-time data from these wearable devices would need to be instantly available for analysis. There are several conceivable routes for data sharing. One would be for the data to be shared only with a patient's approved personal physician. But, in order for a broader outbreak to be detected, doctors throughout the country would need to be in close communication about their patients' physiological health. A more efficient approach would be for the data to be transmitted to a central research organization or directly to an international health organization
“...in order for a broader outbreak to be detected, doctors throughout the country would need to be in close communication about their patients' physiological health.”
like the WHO. Any of these options will likely spark strong distaste among advocates for data privacy. Although thorough efforts could be taken to blind researchers to the individuals who generated the data, patients' identities must ultimately be revealed so that they can be tested. Nonetheless, the benefits of these devices for early pandemic detection are such that these challenges are worth tackling.
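To make the aggregation idea concrete, the following is a minimal sketch, in Python, of how pooled wearable readings might flag a region for follow-up testing. Every field name, threshold, and the notion of a per-user baseline here is an illustrative assumption rather than a feature of any real device or API.

```python
# Hypothetical sketch: flagging possible outbreak clusters from wearable data.
# Field names and thresholds are illustrative assumptions, not a real device API.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Reading:
    user_id: str
    region: str          # coarse location, e.g. a postal district
    resting_hr: float    # today's resting heart rate (bpm)
    baseline_hr: float   # the user's own long-run average (bpm)

def flag_regions(readings, hr_elevation=8.0, anomaly_share=0.15, min_users=50):
    """Return regions where an unusually large share of users show
    resting heart rates well above their personal baselines."""
    by_region = defaultdict(list)
    for r in readings:
        by_region[r.region].append(r)
    flagged = []
    for region, rs in by_region.items():
        if len(rs) < min_users:
            continue  # too few users for a meaningful signal
        anomalous = sum(1 for r in rs if r.resting_hr - r.baseline_hr > hr_elevation)
        if anomalous / len(rs) > anomaly_share:
            flagged.append(region)
    return flagged
```

In practice the thresholds would need calibration against historical data, and the regional grouping would have to be coarse enough to preserve individual privacy.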
“The benefit of telehealth technology is simple but powerful: It allows clinicians to see patients who are at home.”
Improved Testing Capability

As previously stated, China was quite slow in releasing the original genome sequence of the identified coronavirus. In the future, it is crucial that such information be shared publicly and widely so that detection tests can be developed as soon as possible. Improving testing capability includes faster creation of tests, improving the accuracy of tests, and widespread distribution of tests. Perhaps the best example of testing expansion and results is that of South Korea, which at one point during the pandemic had the second-highest case count in the world, after China. Through a combination of directed testing and contact tracing, health officials were able to curb the spread of cases. According to an article by The Atlantic, South Korea based its testing practices on three pillars: “fast and free testing, expansive tracing technology, and mandatory isolation of the most severe cases” (Thompson, 2020). Important details of South Korea's policies included the authorized use of unverified tests during global health crises and the recognition that early testing and mandatory isolation were necessities. South Korea knew, and put into practice, what most health officials now recommend for future pandemic responses: hospitals stockpiling supplies and distributing testing kits as quickly as possible to get a grasp on how and where a virus is spreading (Yong, 2020). In addition to conducting a large number of tests, South Korea quickly constructed over 600 testing centers and pioneered the drive-thru testing center (Thompson, 2020). Many of these practices require a great deal of trust between citizens and government, and they have proven effective in curbing the spread of disease.

Shifting Medical Care to the Home

In addition to patients infected with the novel coronavirus, hospitals and other institutions are still treating other patients. Due to concerns about the spread of COVID-19, providers and patients have had to adapt to a new normal of telemedicine [Figure 14]. Medical practices have rightly limited access to in-person care, but maintaining high-quality patient care is still a priority for all people involved. Prior to this
pandemic, telemedicine had been associated with reduced mortality and hospital admission rates, as well as improved quality of life, when used for communication, counseling, and disease monitoring (Humphreys et al., 2020). Now, well into the pandemic, telemedicine is rapidly being implemented at a larger scale, and it will be needed specifically for symptom management, early goals-of-care conversations, and care at the end of life (Humphreys et al., 2020). Given the global scale of the novel coronavirus, obstacles to telehealth implementation, such as limited reimbursement, HIPAA compliance, and interstate licensing restrictions, have been temporarily loosened (Gutierrez, 2020). The benefit of telehealth technology is simple but powerful: it allows clinicians to see patients who are at home. Healthcare systems lacking existing programs can outsource to services provided by Teladoc Health or American Well, for example (Hollander and Carr, 2020). Nonetheless, telemedicine is far from perfect, and complications are prevalent on the providers' side. For example, providers may experience stress and grief from being separated from patients as well as from their other team members (Humphreys et al., 2020). They may also experience guilt for not being part of the frontline team in the hospital, which can affect their ability to work from home during their virtual appointments. It is thus equally important to take care of the healthcare workers themselves. From electronic tablets to electronic ICU monitoring programs to this growing system of telemedicine, it is evident that the pandemic has been a major push toward recognizing the benefits of increased technology in healthcare.

Vaccine Production

Many reports regarding potential vaccines have been released in the last few months, and infectious disease expert Dr. Anthony Fauci is hopeful that a vaccine will come out in 12 to 18 months. However, there are many challenges: vaccine development typically takes at least four years, and there has never been an approved mRNA vaccine, much less a coronavirus one. Drug production normally involves extensive processes including securing funding, obtaining approvals, and conducting research, with less than 10% of drugs entering clinical trials being approved by the FDA. Additionally, it is very difficult to ethically test potential vaccines because vaccinated participants cannot be deliberately infected with COVID-19 and must contract it naturally.
Figure 15. Working From Home Leaves Empty Corridors and Hallways. As a measure to reduce and prevent the risk of COVID-19 transmission, working from home has become the norm. Halls and corridors that are normally full of people are quiet and empty as people stay home to stay safe. Source: Wikimedia Commons
This lengthens vaccine development, especially if infection rates fall as a result of other public health measures. FDA approval will also be strenuous considering that issues with any released vaccine would fuel conspiracy theories, potentially undoing the efforts of current public health measures (Thompson, 2020). Nonetheless, there is reason to believe that there is a quicker, albeit riskier, path to a vaccine. Fortunately, society has some background and experience with coronaviruses like SARS and MERS. SARS-CoV-2 does not mutate as rapidly as HIV, and it is roughly 80% identical to the original SARS coronavirus, making it slightly less challenging to move into the testing phase. To counteract the low drug success rate, several potential candidates can be tested at once: there are at least 254 therapies and 95 vaccines related to COVID-19 in the pipeline. Typical wait times between testing processes could also be reduced by combining testing phases with multiple groups of people. If successful in early trials, emergency-use provisions could be given to essential workers as early as September this year. Additionally, building factories for mass production ahead of time could save valuable time. For this reason, the Bill and Melinda Gates Foundation is already building factories for seven different vaccines, despite knowing that at most two will be used. These measures might help quicken the pace of vaccine development (Thompson, 2020). However, the year-long timeline does not mean that everyone will have access to the vaccine as soon as it comes out. Those with higher risk and essential workers will be given treatments
earlier than the rest of the population, and it isn't clear what distribution would look like. If another country were to produce the vaccine first, it might take much longer for it to reach the US. Probability and data do not stand behind the optimist's short timeline, but the economy and public may not be able to sustain social distancing much longer. Researchers, experts, and elected officials have decided that it is better to take financial risks than to wait. Ultimately, therapeutic drugs and contact tracing may be what enables us to bring back any “normalcy” (Thompson, 2020).

Minimizing the Economic Consequences of a Pandemic

As this report has made clear, the economic consequences of the COVID-19 pandemic are dire. Although the market has since somewhat recovered from the rout, the global stock market decline in the week from February 24th to 28th was the largest since the Great Recession (Smith, 2020). Additionally, by early May the US unemployment rate had soared to its worst level since the Great Depression, reported at 14.7% on May 8th (Long and Van Dam, 2020; Hartman, 2020). There is no way to tell at this juncture whether or not the economy will bounce back elastically to the vibrant high it reached prior to the onset of this crisis. Given that the next few years may be economically troubled, it is worth considering the measures that could be taken to relieve current economic pressures and mitigate the economic effects of any future pandemic.
“Although it has now somewhat recovered from the rout in late February, the global stock market decline in the week from February 24th to 28th was the largest since the Great Recession.”
The problem may be approached from several angles. First, from the supply side: due to the
pandemic, global business supply lines were significantly disrupted (Schmalz, 2020). A clear objective is making supply chains more resilient, and this may be accomplished in several ways. One approach would be to permanently concentrate a company's supply lines within a single country. This would do the most to minimize the disruptions that come with international trade but would introduce significant inefficiencies. For one, there is the difficulty of finding the necessary in-country suppliers, who may not offer supplies at prices as low as those found abroad, especially in the absence of international competition. A modified, and perhaps more viable, approach would concentrate operations within a single economic bloc of geographically close countries. A third approach would be to keep a flexible list of suppliers, both domestic and international, that could be exchanged quickly in the case of a pandemic event (a simple illustration follows below). There are significant challenges here too: a supplier may be less willing to contract with a company that may switch to another supplier at a moment's notice. Likely, certain economic incentives would need to be offered to the supplier by the company in question to ameliorate this concern, or the company may simply conceal its intention to keep flexible supply lines.
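As a toy illustration of the third approach, the sketch below keeps a flexible supplier list and fails over to the cheapest supply line that remains open. The supplier names, prices, and availability flags are hypothetical.

```python
# Illustrative sketch of a flexible supplier list with failover, as described
# above; supplier names, prices, and the availability flag are hypothetical.
from dataclasses import dataclass

@dataclass
class Supplier:
    name: str
    country: str
    unit_price: float
    available: bool  # False if pandemic disruption has cut this supply line

def pick_supplier(suppliers):
    """Choose the cheapest currently available supplier, domestic or foreign."""
    candidates = [s for s in suppliers if s.available]
    if not candidates:
        raise RuntimeError("all supply lines disrupted")
    return min(candidates, key=lambda s: s.unit_price)

suppliers = [
    Supplier("A", "domestic", 12.0, True),
    Supplier("B", "abroad", 9.5, False),   # disrupted by border closures
    Supplier("C", "abroad", 10.2, True),
]
print(pick_supplier(suppliers).name)  # -> "C"
```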
“People have realized that inefficient, time-consuming meetings can essentially be replaced by a simple email. Business trips can, to an extent, be replaced by a video conference.”
Hopefully, the shift in business operations to remote work will keep consumer expenditure close to normal in the next crisis, or at least significantly closer than it has been during the COVID-19 pandemic. It is possible that consumers, feeling uncertain about future economic conditions as they do in the current moment, will desire to save more of their money. As explained by the “paradox of savings,” if a large portion of the population acts in this way, consumer spending goes down and so does income (Chen, 2019). The now-natural way to address this problem is for the government to increase fiscal spending. This approach, a hallmark of Keynesian economics, has marked the response to economic crises, in the US and across the world, over the past century. However, there are legitimate questions about the efficacy of this approach in the long term, given the risk of fueling inflation and increasing the public debt. Over the next decade, the long-term impact of the CARES Act and other recent measures will need to be thoroughly considered (Eitelman, 2020; Forsyth, 2020). This economic analysis should inform the pandemic response of the future.
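To make the mechanism concrete, the standard Keynesian spending multiplier (a textbook relation, not drawn from the cited sources) shows why falling consumption depresses income and why fiscal spending is the conventional counterweight:

```latex
% If households consume a fraction c (the marginal propensity to consume)
% of each extra dollar of income, an increase Delta G in government spending
% raises equilibrium income by
\[ \Delta Y = \frac{\Delta G}{1 - c} \]
% The paradox of savings follows from the same relation: if c falls from
% 0.9 to 0.8 as households save more, the multiplier falls from 10 to 5,
% and aggregate income falls even though each household saves a larger share.
```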
The Workplace of the Future

Before COVID-19, “working from home” (WFH) was not the norm. Now, many industry professionals have become accustomed to WFH, raising questions about the future of the workforce. In addition to the many articles published on how best to be productive in one's makeshift home office, there has been discussion regarding the culture of the ‘workplace’ and whether work from home will continue after the coronavirus lockdown (Jain, 2020). As people have now been doing this for weeks and months, firms have an opportunity to reevaluate how their workplaces function and how they should change post-pandemic, especially as flexible work, both in hours and in location, is likely to become more prevalent (Goodman, 2020). There are pros and cons to a workplace of the future that is primarily based at home. People have realized that inefficient, time-consuming meetings can essentially be replaced by a simple email. Business trips can, to an extent, be replaced by a video conference. After this crisis, companies will likely build up their remote working infrastructure (Asia News Monitor, 2020). Workers can benefit from doing their individual tasks at home; however, going to the office is still invaluable when people need to work on projects together. Additionally, the arrangement of not returning to the office is really only feasible for white-collar workers, as management, professional, and administrative positions can easily take their work out of the office (TCA Regional News, 2020). Blue-collar work, on the other hand, is less likely to be able to shift in the same way. Either way, it is clear that the COVID-19 pandemic has made workers and management realize the inefficiencies of the workplace as people have transitioned to remote work, and the virtual workspace, no matter how small its role, will undoubtedly be used more in the future [Figure 15].

Conclusions

So, what happens next? How can this pandemic “end?” What will the near future look like? This section describes some possible scenarios, though it is difficult to make an accurate prediction of what will happen. The first potential outcome is that every nation manages to control the virus simultaneously, which is how the SARS outbreak of 2003 came to an end (Yong, 2020). Traditional public health prevention methods including testing, isolation, and airport screening were used to
prevent transmission in almost all of the 29 countries with SARS outbreaks (Karlamangla, 2020). Cutting off transmission of the virus to healthy people allowed it to die out within eight months (Karlamangla, 2020). However, the chance of worldwide simultaneous control of COVID-19 is very small due to varying global policies, and its high transmissibility means that it takes just one infected person to trigger another outbreak (Yong, 2020). The COVID-19 pandemic has been challenging to contain due to its alarmingly large number of cases, more than 10 times that of SARS (Karlamangla, 2020). SARS also did not hit any developing countries that lacked the infrastructure to contain it, and it had a relatively low transmission rate. It was also deadlier, which counterintuitively made it easier to contain: infected individuals became visibly and severely ill and could be identified and isolated before spreading the virus widely (Karlamangla, 2020). Another scenario is that of herd immunity: past flu pandemics have raged until the virus could no longer find viable hosts due to widespread immunity. However, milder coronaviruses that cause cold-like symptoms have conferred immunity for less than a year, while the original SARS virus provided much longer immunity, so immunity to SARS-CoV-2 may lie somewhere in between (potentially lasting a few years). To verify immunity, serological antibody tests must be used to confirm that antibodies actually stop people from catching and transmitting the virus. If they do, this might be an opportunity to let immune citizens return to work during social distancing periods. However, while acquiring herd immunity may be the fastest scenario, it also comes at a high cost. COVID-19 is more transmissible and fatal than the flu, so pursuing herd immunity could contribute to an even higher number of deaths and an overwhelmed healthcare system (Yong, 2020); the standard threshold relation given below conveys the scale of immunity required. The coronavirus may also become more contained in the near future through seasonality. Coronaviruses have a history of being winter infections (Yong, 2020). A study of the four most common human coronaviruses revealed them to be highly seasonal, with only 2.5% of coronavirus respiratory infections occurring between June and September. The number of infections was found to increase in December and peak in January or February before falling in March (Huzar, 2020). There may be declines in transmission in the summer due to warmer and wetter climates in addition to public health measures (Lipsitch, 2020). For influenza, an increased quantity of water vapor,
fewer children in school, and stronger immune systems disrupt contagion in the summer (Lipsitch, 2020). However, seasonal effects on COVID-19 are unknown, and seasonal variation may not be sufficient to slow its spread (Yong, 2020). Finally, the most realistic yet also the longest scenario would be controlling recurring outbreaks as they arise until a vaccine is produced. One difficulty of producing a vaccine against coronavirus is that, unlike for the flu, there are no existing coronavirus vaccines that could be used as a starting point to develop a vaccine against SARS-CoV-2 (Yong, 2020). Therefore, the process can take over a year, and that is only if clinical trials succeed (Thompson, 2020). Until a vaccine is found, coronavirus will remain a topic of concern, most likely for at least another year or longer. If the current phase of social distancing is effective, the pandemic may be controlled enough for people to return to relative normalcy, returning to schools and offices. While the world might have to undergo multiple periods of social distancing, it does not necessarily need to be under continuous lockdown for the next few years (Yong, 2020).
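For a rough sense of the scale involved in the herd-immunity scenario above, the standard threshold relation from the SIR model (a textbook result, not taken from the cited sources) is:

```latex
% Critical immune fraction p_c as a function of the basic reproduction
% number R_0 (standard SIR-model result):
\[ p_c = 1 - \frac{1}{R_0} \]
% With commonly cited early estimates of R_0 between 2 and 3 for SARS-CoV-2,
% p_c lies between 0.50 and about 0.67, i.e., roughly half to two-thirds of
% the population would need immunity before transmission declines on its own.
```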
“One day, COVID-19 may be viewed like the flu is viewed today—a treatable disease that recurs every winter and one that everyone can get an annual vaccination for.”
Whether by herd immunity or by the arrival of a vaccine, the virus will spread less in the future than it does now. Nations will be better prepared to face an outbreak: there is improved technology for diagnosis and contact tracing, facilities for isolating and treating people, resources like testing kits and PPE, policies allowing for prompt action, and awareness of the severity of diseases like COVID-19. Nations are now aware of the shortcomings of their initial responses in 2020 and should be better equipped to address a problem of this scale. Models suggest that the virus may persist and cause an epidemic every few years, but the hope and expectation is that a better-prepared society will face less severe outbreaks. One day, COVID-19 may be viewed like the flu is viewed today—a treatable disease that recurs every winter and one that everyone can get an annual vaccination for.

References

Adalja, A. A., Toner, E., & Inglesby, T. V. (2020). Priorities for the US Health Community Responding to COVID-19. JAMA, 323(14), 1343–1344. https://doi.org/10.1001/jama.2020.3413
Andersen, K. G., Rambaut, A., Lipkin, W. I., Holmes, E. C., & Garry, R. F. (2020). The proximal origin of SARS-CoV-2. Nature Medicine, 26(4), 450–452. https://doi.org/10.1038/s41591-020-0820-9
Army Deploys Medical Task Forces to Help Hard-Hit Communities. (n.d.). U.S. DEPARTMENT OF DEFENSE. Retrieved May 29, 2020, from https://www.defense.gov/Explore/News/ Article/Article/2146685/army-deploys-medical-task-forces-tohelp-hard-hit-communities/ Atchison, C. J., Bowman, L., Vrinten, C., Redd, R., Pristera, P., Eaton, J. W., & Ward, H. (2020). Perceptions and behavioural responses of the general public during the COVID-19 pandemic: A cross-sectional survey of UK Adults [Preprint]. Public and Global Health. https://doi. org/10.1101/2020.04.01.20050039 Begging for Thermometers, Body Bags, and Gowns: U.S. Health Care Workers Are Dangerously Ill-Equipped to Fight COVID-19. (n.d.). Time. Retrieved May 29, 2020, from https://time. com/5823983/coronavirus-ppe-shortage/ Bennhold, K. (2020, April 4). A German Exception? Why the Country’s Coronavirus Death Rate Is Low. The New York Times. https://www.nytimes.com/2020/04/04/world/europe/ germany-coronavirus-death-rate.html Burke, R. M., Midgley, C. M., Dratch, A., Fenstersheib, M., Haupt, T., Holshue, M., Ghinai, I., Jarashow, M. C., Lo, J., McPherson, T. D., Rudman, S., Scott, S., Hall, A. J., Fry, A. M., & Rolfes, M. A. (2020). Active Monitoring of Persons Exposed to Patients with Confirmed COVID-19—United States, January–February 2020. MMWR. Morbidity and Mortality Weekly Report, 69(9), 245–246. https://doi.org/10.15585/mmwr.mm6909e1 Business Reporter: Changing How We Work, From Home and Away, After The Coronavirus - ProQuest. (n.d.). Retrieved May 29, 2020, from http://search.proquest.com/docview/23923273 23?accountid=10422&rfr_id=info%3Axri%2Fsid%3Aprimo Butterfly iQ - Ultrasound, ultra-simplified. (n.d.). Retrieved May 13, 2020, from https://www.butterflynetwork.com/ CDC. (2020, February 11). Coronavirus Disease 2019 (COVID-19). Centers for Disease Control and Prevention. https://www.cdc.gov/coronavirus/2019-ncov/php/principlescontact-tracing.html Chen, Y., Liu, Q., & Guo, D. (2020). Emerging coronaviruses: Genome structure, replication, and pathogenesis. Journal of Medical Virology, 92(4), 418–423. https://doi.org/10.1002/ jmv.25681
Coronavirus Aid, Relief, and Economic Security (CARES) Act, no. H.R. 748 (2020). Coronavirus (COVID-19). (n.d.). WHO | Regional Office for Africa. Retrieved May 29, 2020, from https://www.afro.who. int/health-topics/coronavirus-covid-19 Coronavirus (COVID-19) Testing—Statistics and Research. (n.d.). Our World in Data. Retrieved May 15, 2020, from https:// ourworldindata.org/coronavirus-testing Coronavirus outbreak: Top coronavirus drugs and vaccines in development. (2020, April 16). Clinical Trials Arena. https:// www.clinicaltrialsarena.com/analysis/coronavirus-mers-covdrugs/ Coronavirus Test: What You Need to Know. (n.d.). Retrieved May 15, 2020, from https://www.hopkinsmedicine.org/ health/conditions-and-diseases/coronavirus/coronavirustest-what-you-need-to-know Counsel, P. (2020, March 31). Coronavirus Bill (UK; United Kingdom). Published by Authority of the House of Commons. https://publications.parliament.uk/pa/bills/cbill/58-01/0122/ cbill_2019-20210122_en_1.htm COVID-19: Looming crisis in developing countries threatens to devastate economies and ramp up inequality. (n.d.). UNDP. Retrieved May 29, 2020, from https://www.undp.org/content/ undp/en/home/news-centre/news/2020/COVID19_Crisis_in_ developing_countries_threatens_devastate_economies.html COVID-19—A timeline of the coronavirus outbreak. (n.d.). Devex. Retrieved May 29, 2020, from https://www. devex.com/news/sponsored/covid-19-a-timeline-of-thecoronavirus-outbreak-96396 Dyer, O. (2020). Covid-19: US testing ramps up as early response draws harsh criticism. BMJ, 368. https://doi. org/10.1136/bmj.m1167
Choudhury, S. R. (2020, April 8). The battle against coronavirus will last a “very long time,” says Singapore minister. CNBC. https://www.cnbc.com/2020/04/08/economic-impact-ofcoronavirus-will-last-a-long-time-singapore-minister-says.html
Executive Order on Prioritizing and Allocating Health and Medical Resources to Respond to the Spread of Covid-19. (n.d.). The White House. Retrieved May 29, 2020, from https:// www.whitehouse.gov/presidential-actions/executive-orderprioritizing-allocating-health-medical-resources-respondspread-covid-19/
Chronology: Germany and the Coronavirus. (2020, May 1). The Berlin Spectator. https://berlinspectator.com/2020/05/01/ chronology-germany-and-the-coronavirus-2/
Fast Facts on U.S. Hospitals, 2020 | AHA. (n.d.). Retrieved May 29, 2020, from https://www.aha.org/statistics/fast-facts-ushospitals
Clairways | Wearable Lung Function | Asthma, COPD, CF | United States. (n.d.). Retrieved May 13, 2020, from https://www.clairways.com
Germany Is Conducting Nationwide COVID-19 Antibody Testing. (n.d.). NPR.Org. Retrieved May 15, 2020, from https://www.npr.org/sections/coronavirus-liveupdates/2020/04/21/839594202/germany-is-conductingnationwide-covid-19-antibody-testing
Contact tracing. (n.d.). Retrieved May 29, 2020, from https:// www.who.int/news-room/q-a-detail/contact-tracing Contact Tracing: Part of a Multipronged Approach to Fight the COVID-19 Pandemic. (2020). Centers for Disease Control and Prevention. Coronavirus: A timeline of how the deadly outbreak is evolving. (n.d.). Retrieved May 15, 2020, from https://www. pharmaceutical-technology.com/news/coronavirus-a-timelineof-how-the-deadly-outbreak-evolved/
Coronavirus Act 2020—UK Parliament. (n.d.). Retrieved May 15, 2020, from https://services.parliament.uk/bills/2019-21/ coronavirus.html
Gutierrez, J., Kuperman, E., & Kaboli, P. J. (n.d.). Using Telehealth as a Tool for Rural Hospitals in the COVID-19 Pandemic Response. The Journal of Rural Health. https://doi.org/10.1111/jrh.12443 Hallo, A., Rojas, A., & Hallo, C. (2020). Perspective from Ecuador, the Second Country with More Confirmed Cases of Coronavirus Disease 2019 in South
America: A Review. Cureus Journal of Medical Science, 12(3). https://doi.org/10.7759/cureus.7452
Hellewell, J., Abbott, S., Gimma, A., Bosse, N. I., Jarvis, C. I., Russell, T. W., Munday, J. D., Kucharski, A. J., Edmunds, W. J., Funk, S., Eggo, R. M., Sun, F., Flasche, S., Quilty, B. J., Davies, N., Liu, Y., Clifford, S., Klepac, P., Jit, M., … van Zandvoort, K. (2020). Feasibility of controlling COVID-19 outbreaks by isolation of cases and contacts. The Lancet Global Health, 8(4), e488–e496. https://doi.org/10.1016/S2214-109X(20)30074-7
Her, M. (2020). How Is COVID-19 Affecting South Korea? What Is Our Current Strategy? Disaster Medicine and Public Health Preparedness, 1–3. https://doi.org/10.1017/dmp.2020.69
Here’s How Computer Models Simulate the Future Spread of New Coronavirus—Scientific American. (n.d.). Retrieved May 15, 2020, from https://www.scientificamerican.com/article/heres-how-computer-models-simulate-the-future-spread-of-new-coronavirus/
Hollander, J. E., & Carr, B. G. (2020). Virtually Perfect? Telemedicine for Covid-19. New England Journal of Medicine, 382(18), 1679–1681. https://doi.org/10.1056/NEJMp2003539
How Is COVID-19 Affecting South Korea? What Is Our Current Strategy? | Disaster Medicine and Public Health Preparedness | Cambridge Core. (n.d.). Retrieved May 15, 2020, from https://www.cambridge.org/core/journals/disaster-medicine-and-public-health-preparedness/article/how-is-covid19-affecting-south-korea-what-is-our-current-strategy/0EA2D469C3131FBDD0110922FCD4EF7E
How Singapore Is Taking On COVID-19. (2020, April 3). Asian Scientist Magazine | Science, Technology and Medical News Updates from Asia. https://www.asianscientist.com/2020/04/features/singapore-covid-19-response/
How South Korea Reined In Coronavirus Without Shutting Everything Down: Goats and Soda: NPR. (n.d.). Retrieved May 15, 2020, from https://www.npr.org/sections/goatsandsoda/2020/03/26/821688981/how-south-korea-reigned-in-the-outbreak-without-shutting-everything-down
Humanity tested. (2020). Nature Biomedical Engineering, 4(4), 355–356. https://doi.org/10.1038/s41551-020-0553-6
Iacobucci, G. (2020a). Covid-19: All non-urgent elective surgery is suspended for at least three months in England. BMJ, m1106. https://doi.org/10.1136/bmj.m1106
Iacobucci, G. (2020b). Covid-19: Government promises 100 000 tests per day in England by end of April. BMJ, m1392. https://doi.org/10.1136/bmj.m1392
Infection rate rises in Germany as lockdown eases. (2020, May 10). BBC News. https://www.bbc.com/news/world-europe-52604676
Infographic: COVID-19: Has The U.S. Closed The Testing Gap? (n.d.). Statista Infographics. Retrieved May 15, 2020, from https://www.statista.com/chart/21108/covid-19-tests-performed-per-million-of-the-population/
Letko, M., Marzi, A., & Munster, V. (2020). Functional assessment of cell entry and receptor usage for SARS-CoV-2 and other lineage B betacoronaviruses. Nature Microbiology, 5(4), 562–569. https://doi.org/10.1038/s41564-020-0688-y
Li, Z., Yi, Y., Luo, X., Xiong, N., Liu, Y., Li, S., Sun, R., Wang, Y., Hu, B., Chen, W., Zhang, Y., Wang, J., Huang, B., Lin, Y., Yang, J., Cai, W., Wang, X., Cheng, J., Chen, Z., … Ye, F. (n.d.). Development and clinical application of a rapid IgM-IgG combined antibody test for SARS-CoV-2 infection diagnosis. Journal of Medical Virology. https://doi.org/10.1002/jmv.25727
Lindley, D. (2020). Work Better From Home During the Coronavirus Quarantine. The Major Gifts Report, 22(6), 1–1. https://doi.org/10.1002/mgr.31483
AliveCor. (n.d.). KardiaMobile. Retrieved May 28, 2020, from https://www.alivecor.com/
Looi, M.-K. (2020). Covid-19: Japan declares state of emergency as Tokyo cases soar. BMJ, 369. https://doi.org/10.1136/bmj.m1447
Mahase, E. (2020). Coronavirus: NHS staff get power to keep patients in isolation as UK declares “serious threat.” BMJ, m550. https://doi.org/10.1136/bmj.m550
Mangan, D. (2020, April 16). The US economy can’t reopen without widespread coronavirus testing. Getting there will take a lot of work and money. CNBC. https://www.cnbc.com/2020/04/16/coronavirus-testing-needs-to-be-widely-done-before-economy-reopens.html
Margolis, E. (2020, March 28). “This may be the tip of the iceberg”: Why Japan’s coronavirus crisis may be just beginning. Vox. https://www.vox.com/covid-19-coronavirus-explainers/2020/3/28/21196382/japan-coronavirus-cases-covid-19-deaths-quarantine
Chakraborty, S., & Menokee, A. (2020, April 2). The early days of a global pandemic: A timeline of COVID-19 spread and government interventions. Brookings. https://www.brookings.edu/2020/04/02/the-early-days-of-a-global-pandemic-a-timeline-of-covid-19-spread-and-government-interventions/
ABC News. (n.d.). The Latest: Singapore loosens coronavirus restrictions. Retrieved May 29, 2020, from https://abcnews.go.com/Health/wireStory/latest-belgium-relaxes-virus-lockdown-opens-shops-70610764
Anthony, C. (Kaiser Health News). (n.d.). U.S. Clears More Than 5,000 Outpatient Centers as Makeshift Hospitals in COVID-19 Crisis. Scientific American. Retrieved May 29, 2020, from https://www.scientificamerican.com/article/u-s-clears-more-than-5-000-outpatient-centers-as-makeshift-hospitals-in-covid-19-crisis/
Non-essential shops to reopen from 15 June—PM. (2020, May 26). BBC News. https://www.bbc.com/news/uk-52801727
PM unveils “conditional plan” to reopen society. (2020, May 10). BBC News. https://www.bbc.com/news/uk-52609952
Pung, R., Chiew, C. J., Young, B. E., Chin, S., Chen, M. I.-C., Clapham, H. E., Cook, A. R., Maurer-Stroh, S., Toh, M. P. H. S., Poh, C., Low, M., Lum, J., Koh, V. T. J., Mak, T. M., Cui, L., Lin, R. V. T. P., Heng, D., Leo, Y.-S., Lye, D. C., … Ang, L. W. (2020). Investigation of three clusters of COVID-19 in Singapore:
Implications for surveillance and response measures. The Lancet, 395(10229), 1039–1046. https://doi.org/10.1016/S0140-6736(20)30528-6
https://www.channelnewsasia.com/news/singapore/ singapore-covid-19-outbreak-evolved-coronavirus-deathstimeline-12639444
Ranney, M. L., Griffeth, V., & Jha, A. K. (2020). Critical Supply Shortages—The Need for Ventilators and Personal Protective Equipment during the Covid-19 Pandemic. New England Journal of Medicine, 382(18), e41. https://doi.org/10.1056/ NEJMp2006141
Tognini, G. (n.d.). Coronavirus Business Tracker: How The Private Sector Is Fighting The Covid-19 Pandemic. Forbes. Retrieved May 29, 2020, from https://www.forbes.com/sites/ giacomotognini/2020/04/01/coronavirus-business-trackerhow-the-private-sector-is-fighting-the-covid-19-pandemic/
Sensors, C. and. (n.d.). 12-Lead ECG Placement Guide with Illustrations. Cables and Sensors. Retrieved May 28, 2020, from https://www.cablesandsensors.com/pages/12-lead-ecgplacement-guide-with-illustrations
Watson, G., LaJoie, T., Li, H., & Bunn, D. (2020). Congress Approves Economic Relief Plan for Individuals and Businesses. Tax Foundation.
Shankl, S. A. L.-M., Manley, M. I., McCarthy, L., & Tomlin, J. (n.d.). COVID-19 Control Measures—UK Government Statutory Powers and Summary of Economic/Regulatory Interventions to Date | Lexology. Retrieved May 15, 2020, from https://www. lexology.com/library/detail.aspx?g=7107bd89-0950-490189f5-1e9d037670dc Shapiro, N. (n.d.). When ‘Elective’ Surgery Is Necessary: Operating During The COVID-19 Coronavirus Pandemic. Forbes. Retrieved May 29, 2020, from https://www.forbes. com/sites/ninashapiro/2020/05/04/when-elective-surgery-isnecessary-operating-during-coronavirus-covid-19/
What Is Contact Tracing? Here’s How It Could Fight Coronavirus. (n.d.). Time. Retrieved May 29, 2020, from https:// time.com/5825140/what-is-contact-tracing-coronavirus/ Will Work From Home Continue After Coronavirus Lockdown In India? - ProQuest. (n.d.). Retrieved May 29, 2020, from http://search.proquest.com/docview/2381448359?accountid =10422&rfr_id=info%3Axri%2Fsid%3Aprimo Working from home: Has coronavirus shown that remote work is the future? | Pro/Con - ProQuest. (n.d.). Retrieved May 29, 2020, from http://search.proquest.com/docview/2382041 007?accountid=10422&rfr_id=info%3Axri%2Fsid%3Aprimo
Shim, E., Tariq, A., Choi, W., Lee, Y., & Chowell, G. (2020). Transmission potential and severity of COVID-19 in South Korea. International Journal of Infectious Diseases, 93, 339–344. https://doi.org/10.1016/j.ijid.2020.03.031
World: Will the coronavirus change the way we work from home? - ProQuest. (n.d.). Retrieved May 29, 2020, from http:// search.proquest.com/docview/2379655101?accountid=1042 2&rfr_id=info%3Axri%2Fsid%3Aprimo
Singhal, T. (2020). A Review of Coronavirus Disease-2019 (COVID-19). INDIAN JOURNAL OF PEDIATRICS, 87(4), 281–286. https://doi.org/10.1007/s12098-020-03263-6
Yong, E. (n.d.). How the Pandemic Will End. The Atlantic. Retrieved May 29, 2020, from https://www.theatlantic.com/health/archive/2020/03/how-will-coronavirus-end/608719/
SNS PPE REPORT.pdf. (n.d.). Retrieved May 29, 2020, from https://oversight.house.gov/sites/democrats.oversight.house. gov/files/documents/SNS%20PPE%20REPORT.pdf
Zhang, H., Penninger, J. M., Li, Y., Zhong, N., & Slutsky, A. S. (2020). Angiotensin-converting enzyme 2 (ACE2) as a SARS-CoV-2 receptor: Molecular mechanisms and potential therapeutic target. Intensive Care Medicine, 46(4), 586–590. https://doi.org/10.1007/s00134-020-05985-9
Sohrabi, C., Alsafi, Z., O’Neill, N., Khan, M., Kerwan, A., Al-Jabir, A., Iosifidis, C., & Agha, R. (2020). World Health Organization declares global emergency: A review of the 2019 novel coronavirus (COVID-19). International Journal of Surgery, 76, 71–76. https://doi.org/10.1016/j.ijsu.2020.02.034
Zheng, Y.-Y., Ma, Y.-T., Zhang, J.-Y., & Xie, X. (n.d.). COVID-19 and the cardiovascular system. NATURE REVIEWS CARDIOLOGY. https://doi.org/10.1038/s41569-020-0360-5
Special report: The simulations driving the world’s response to COVID-19. (n.d.). Retrieved May 15, 2020, from https://www. nature.com/articles/d41586-020-01003-6 State Action on Coronavirus (COVID-19). (2020). National Conference of State Legislatures. State Fiscal Responses to Coronavirus (COVID-19). (2020). National Conference of State Legislatures. States Get Creative To Find And Deploy More Health Workers In COVID-19 Fight. (n.d.). NPR.Org. Retrieved May 29, 2020, from https://www.npr.org/sections/healthshots/2020/03/25/820706226/states-get-creative-to-find-anddeploy-more-health-workers-in-covid-19-fight The timeline regarding coronavirus in Germany. (2020, April 1). Deutschland.De. https://www.deutschland.de/en/thetimeline-corona-virus-germany Thompson, S. A. (2020, April 30). Opinion | How Long Will a Vaccine Really Take? The New York Times. https://www.nytimes. com/interactive/2020/04/30/opinion/coronavirus-covidvaccine.html Timeline: How the COVID-19 outbreak has evolved in Singapore so far. (n.d.). CNA. Retrieved May 29, 2020, from
The Economics of Drug Development
BY SAM NEFF '21 AND DINA RABADI '22
STAFF WRITERS: KRISTAL WONG, DEV KAPADIA, MAANASI SHYNO, LOVE TSAI, ANIKETH YALAMANCHILI
BOARD WRITERS: SAM NEFF, NISHI JAIN
Cover: Skyrocketing drug prices in recent years have put a spotlight on pharmaceutical firms' pricing and manufacturing processes as well as the regulations that govern them. Source: Pxhere
“Including the full process of discovery, development, regulatory testing, and marketing, costs can exceed $1.2 billion.”
Introduction: Challenges of Drug Development

Political controversy surrounding drug prices has existed for decades, but recent debate has illuminated the discrepancy between production costs and final product prices. Many point to the tendency of drug monopolies to price-gouge consumers, painting pharmaceutical companies as greedy actors. Pharmaceutical companies and favorable pundits counter this by claiming that current drug pricing reflects the extensive costs associated with drug development. Including the full process of discovery, development, regulatory testing, and marketing, costs can exceed $1.2 billion. Companies must have two to three drugs approved to remain competitive but typically receive only one approval due to low success rates, which range from 20-30% (Lakdawalla, 2018). These expenditures, in addition to those incurred pursuing patents, impact the ultimate price of
a drug brought to market. But the question is, to what extent? Considering that high costs impact the accessibility of drugs for the most vulnerable populations, it is important to hold companies accountable for fair pricing to prevent exploitation. Outlining the contributing factors associated with the costs of drug development, from production to marketing and pricing, will give insight into how well production costs and shelf price are correlated. By laying out in detail the drug approval process and the economic models used to determine pricing, we seek to determine the ways in which drug costs may be reduced, both from the perspective of drug production and from the perspective of pricing [Figure 1].
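As a back-of-the-envelope illustration of those figures, one can treat each drug candidate as an independent trial with the cited 20-30% approval rate. The pipeline sizes below are hypothetical inputs, not numbers from this article.

```python
# Rough illustration: expected approvals from a drug pipeline, treating each
# candidate as an independent Bernoulli trial with the 20-30% success rate
# cited above. Pipeline sizes here are hypothetical.
from math import comb

def expected_approvals(n_candidates: int, p_success: float) -> float:
    return n_candidates * p_success

def prob_at_least(k: int, n: int, p: float) -> float:
    """P(at least k approvals) for a binomial(n, p) pipeline."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# To expect the 2-3 approvals needed to stay competitive at a 25% success
# rate, a firm needs roughly 8-12 candidates in development:
print(expected_approvals(10, 0.25))          # -> 2.5
print(round(prob_at_least(2, 10, 0.25), 2))  # -> ~0.76
```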
Figure 1: The cost of drugs is also driven by the stakeholders who help assemble, market, and sell them. Source: Flickr
The Process of Drug Design (Part I): Molecular Design and the Clinical Trial Process

The average drug takes over 10 years to pass from initial discovery to market availability (de Vrueh 2017). Following a drug's discovery, scientists conduct animal testing and then submit an Investigational New Drug application (IND) to the FDA, which includes previous data from animal and molecular drug studies, plans for manufacturing, and an outline for human testing (Lakdawalla, 2018). There are four phases of the clinical trial process. The focus of Phase I is drug safety. Here, 20-80 patients are enrolled, and data collection focuses on side effects and the body's internal processing and excretion of the drug. In Phase II, trials increase enrollment to hundreds of patients, and the primary goal is to assess both positive and negative effects on people with the condition being targeted. These trials are controlled with a randomly assigned placebo group. Patients are also monitored for short-term side effects and the safety of the drug. After successful completion of Phase II, the FDA and sponsoring pharmaceutical firms meet about Phase III clinical trials, where typically thousands of patients are enrolled. The goals of this phase are to study the drug in different populations and its interactions with other drugs, and to establish proper dosages. Again, researchers monitor the side effects, safety, and effectiveness of the drug.
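The attrition across these phases compounds quickly. The sketch below multiplies per-phase survival rates into a cumulative probability; the Phase 1 and Phase 2 rates are the ones cited later in this article, while the Phase 3 rate is an assumed placeholder for illustration.

```python
# Illustrative phase-attrition arithmetic. The Phase 1 and Phase 2 survival
# rates are those cited later in this article (70% and 30-40%); the Phase 3
# rate is an assumed figure for illustration, not taken from the article.
phase_survival = {
    "Phase 1 (safety, healthy volunteers)":      0.70,
    "Phase 2 (efficacy, hundreds of patients)":  0.35,
    "Phase 3 (confirmation, 1000s of patients)": 0.60,  # assumption
}

p_reach_nda = 1.0
for phase, p in phase_survival.items():
    p_reach_nda *= p
    print(f"{phase}: cumulative survival {p_reach_nda:.1%}")
# -> roughly 15% of drugs entering Phase 1 reach an NDA filing under these
#    assumptions, in line with the low success rates cited in this article.
```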
If all goes well, the FDA will meet again with sponsors in preparation for submitting a New Drug Application (NDA) to the FDA for drug marketing approval. A complete NDA discloses animal and human data from Phases I-III and includes analysis of the human trials, including in vivo interactions and manufacturing (FDA, 2016). Within 60 days, the FDA decides whether to send a review team to evaluate the study and the drug's safety and effectiveness. Once the drug is approved by the Center for Drug Evaluation and Research (CDER), the FDA confirms the accuracy of the drug's labeling and concludes with an inspection of the manufacturing plant. The last step of the clinical trial process, Phase IV, does not occur until after the drug goes onto the market. Here, the FDA's post-marketing safety system monitors public usage of the drug, including unforeseen complications, and intervenes if necessary (Lakdawalla 2018). Although it is ultimately the clinical trials that determine whether or not a drug enters the market, rigorous target validation and preclinical studies are essential to increase confidence in the end result of these investigations: they not only afford a more robust understanding of the biology of the disease but also allow scientists to recognize its variability. Preclinical studies frequently take about 7-10 years, beginning with target validation and compound characterization and then ultimately moving into in vitro and in vivo studies. The process of target validation utilizes a variety of techniques including computer modeling, immunohistochemistry, and protein expression studies, among others (Smith 2003). Target validation essentially aims to confirm
“Target validation essentially aims to confirm that the receptor protein or pathway affected by the drug is a worthy investment of time and resources.”
Figure 2: Steps of the Drug Development Process. Source: Wikimedia Commons.
“...healthcare economists are using decision modeling to take in the current information that has been collected through tests and trials to calculate what the next steps for healthcare providers should be.”
that the receptor protein or pathway affected by the drug is a worthy investment of time and resources. For example, in the case of oncology treatments, this means comparing receptor expression in normal tissue relative to tumorous tissue, such that the chemotherapy targets the cancerous cells only and does not cause significant toxic side effects (DiMasi 2007). Compound characterization frequently encompasses a marriage between business development and chemistry, because once the target receptor or pathway has been identified, the scientists must synthesize or acquire the drug that will affect the receptor in the intended way. If a company is rather large, it is commonplace for it to acquire the drug from smaller companies, either by purchasing the patent and rights to the drug or, if the drug is especially promising, by acquiring and/or merging with the other firm (Vernon 2008). If, on the other hand, the synthesis is done in house, the company will attempt to create a cost- and time-efficient method of synthesizing the correct molecule and of ensuring that it remains stable once it enters the body. In vitro studies confirm drug stability on a cellular level, as well as demonstrate pharmacodynamics and toxicity. Building on in vitro cell studies, animal models provide a more realistic understanding of what the drug does in the body. If in vitro and in vivo studies show the drug to be safe and efficacious, then there is promise for clinical trials (DiMasi 2007). In the drug development process, however, creating a functional drug is not the only consideration at pharmaceutical companies. The extensive research and development needs of new drugs are often cited as contributing
factors to the steep costs that patients and healthcare payers face when these drugs are used. Consequently, healthcare providers and payers alike are now putting a higher emphasis on determining the true cost-effectiveness of developing, introducing, and producing a drug. Healthcare reimbursement is a system by which a company reimburses an individual for medical expenses incurred while receiving treatment. Because reimbursement can cost companies large sums, these companies want to ensure that they reimburse effective treatments only. But along with research into the costs of drugs, reimbursement agencies also want to see information on patients' long-term survival, resource use, quality of life, and so on, in order to come to a holistic decision about the value of these drugs and the price this value corresponds to. Without these estimates, reimbursement agencies might be hesitant to agree to pay for a drug or treatment at a certain price when the actual production costs could be much higher or much lower than expected (Hall et al., 2010). For this reason, healthcare economists are using decision modeling to take in the current information that has been collected through tests and trials and calculate what the next steps for healthcare providers should be. One popular application of decision modeling is a framework known as “Value of Information” (VoI). With VoI, researchers determine the societal payoff that would be derived from zero uncertainty in an aspect of the drug, whether it be the efficacy, cost, toxicity, or any other factor (Hall et al., 2010). This value is known as the Expected Value of Perfect Information (EVPI) and can be used to determine how trials could be run to uncover information that would correspond to
the highest benefit to consumers if there were no uncertainty for that category of information. For instance, if researchers were 95% confident that a drug was effective at treating patients but were only 30% sure of the costs that producing the drug would entail, then it would be best for researchers to run more trials investigating the costs of producing the drug (Hall et al., 2010). However, EVPI assumes societal benefit under zero uncertainty for the category of information, which is not a practical expectation. It is infrequent that researchers can make definitive statements on how effective a drug will be or its level of toxicity for different people. Therefore, the benefit based on a given sample set of information can be determined instead; this is known as the Expected Value of Sample Information (EVSI). Subtracting the cost of a trial from the EVSI gives a good estimate of the net benefit that would be gained from running a trial focused on determining further information on costs and resource use as opposed to, for example, efficacy and toxicity (Hall et al., 2010). While some countries prefer to use a risk-sharing model for decision modeling, such models are not optimal, as they look to lower average risk instead of total risk. Instead, many countries are making VoI frameworks the norm, and much research is going into refining the mathematics that determines EVPI and EVSI given the currently known information (Hall et al., 2010). Decisions on what types of trials to run, and when, are important considerations for pharmaceutical companies to keep in mind when developing a product. For instance, any decision that lessens the cost of running a trial can be a beneficial choice. And often the questions arise: would there be greater benefits if this decision were made earlier? Should trials even continue at this point? To determine the frequency and location of decision points in a drug development process, healthcare economists can use option models commonly found in finance to represent these decisions. In financial markets, options are often traded and bought at a premium to the underlying asset's value because buyers find additional value in the right, without the obligation, to buy the asset at a predetermined price. For instance, a value can be put on a student's decision to go to college, because holding that option lets the student avoid committing to a choice that would make them worse off in the future.
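The following is a minimal Monte Carlo sketch of the EVPI calculation for a hypothetical two-option reimbursement decision. The net-benefit distributions are invented for illustration; a real analysis would estimate them from trial data and scale the per-patient result by the affected population.

```python
# Minimal Monte Carlo sketch of EVPI for a two-option reimbursement decision,
# in the spirit of the VoI framework described above. The net-benefit
# distributions are entirely hypothetical.
import random

random.seed(0)
N = 100_000

def net_benefit_new_drug():
    # Uncertain effectiveness: net monetary benefit in $ per patient.
    return random.gauss(mu=1_000, sigma=2_000)

def net_benefit_standard_care():
    return random.gauss(mu=500, sigma=500)

samples = [(net_benefit_new_drug(), net_benefit_standard_care())
           for _ in range(N)]

# Decision under current uncertainty: pick the option with the best mean.
mean_new = sum(a for a, _ in samples) / N
mean_std = sum(b for _, b in samples) / N
value_current = max(mean_new, mean_std)

# With perfect information we could pick the best option in every scenario.
value_perfect = sum(max(a, b) for a, b in samples) / N

evpi = value_perfect - value_current  # per-patient ceiling on research value
print(f"EVPI per patient: ${evpi:,.0f}")
```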
In drug development, researchers want to add decision points at which to analyze and interpret data in order to make the best decisions for the project. There are some key takeaways, with a few exceptions, that can be derived from analyzing the general drug development process. Firstly, decision points should be added as long as the benefit derived from having each decision outweighs the costs (whether in time, resources, or manpower) of adding it. Secondly, decision points should be placed earlier rather than later: because there is more uncertainty at the beginning, initial tests resolve more of it than later trials, so earlier decision points allow researchers to identify large information gaps that future trials should address. Lastly, the greater the uncertainty a trial targets, the more valuable the decision points after that trial are. This is simply because trials studying aspects of the drug with more uncertainty have the potential to eliminate more uncertainty than trials on aspects where nearly everything is known. Because of the large amount of uncertainty eliminated by such trials, it is beneficial for researchers to make future decisions after the trial rather than before (Burman & Senn, 2003). It is important to keep in mind, though, that the current framework for decision modeling is a simplified version of reality. In reality, there are a variety of restrictions surrounding information, including the time needed to process and analyze data, confidentiality agreements, conflicts of interest, and more, that complicate how decision modeling can be applied. In the future, refining this framework to better fit reality will be a heavy focus, along with a sizable challenge, for healthcare economists. After target validation, compound characterization, in vivo studies, and in vitro studies, the company submits an IND to the FDA, which, if accepted, allows the drug to enter clinical trials. Clinical trials are among the most expensive elements of the drug development project, in part because extensive marketing and advertising is needed to incentivize doctors to participate and give their patients the medication. There is no database or systematic method for acquiring patients for clinical trials, so companies frequently have to recruit patients and doctors willing to participate using the preclinical data and the IND (Lakdawalla 2018, DiMasi 2016). Phase 1 clinical trials are completed on healthy volunteers in order to establish the safety of the
“After target validation, compound characterization, in vivo studies, and in vitro studies, the company submits an IND to the FDA, which if accepted, allows the drug to enter clinical trials.”
“The path to a drug's FDA approval culminates in the company filing a New Drug Application (NDA), which serves as the company's formal proposal to the FDA to allow the production, advertisement, and sale of their new pharmaceutical product in the U.S. drug market.”
drug. Phase 1 takes about 1-2 years and uses about 20-100 healthy volunteers; 70% of drugs that enter Phase 1 survive this trial. Phase 2 clinical trials are completed on 100-500 patient volunteers in order to do an initial evaluation of the effectiveness of the medication by giving alternative doses to sick patients. About 30-40% of drugs that survived Phase 1 survive Phase 2, and the phase lasts about 2 years. Phase 3 is the longest and most expansive phase, taking at least 3-4 years and enrolling about 1000-5000 patients. Phase 3 seeks to confirm both the safety and the efficacy of the medication and specifically monitors long-term use. The new drug is also compared directly against existing treatments, placebos, or active comparators (Vernon 2008). This phase heightens the risk that clinicians and patients will not see the trial as sufficient reason to abandon the use of their current medication (Lakdawalla 2018, Hall 2010). Despite the large enrollment numbers and cost investment in clinical trials, they have commonly been cited as being too relaxed on patient numbers, thereby failing to recognize potential adverse reactions arising from heterogeneous disease biology between patients (DiMasi 2016). The path to a drug's FDA approval culminates in the company filing a New Drug Application (NDA), which serves as the company's formal proposal to the FDA to allow the production, advertisement, and sale of their new pharmaceutical product in the U.S. drug market (“New Drug Application (NDA),” 2020). The application packet submitted to the FDA tells the drug's entire life story, including its ingredients, how it affects the body, its manufacturing, processing, and packaging information, and all of the research done up to this point, such as the methods and results of the animal studies and human clinical trials of the IND. Once the NDA is submitted, the FDA has 60 days to conduct a preliminary review, after which they either scrap the entire application or allow it to proceed to further review. If the drug has been moved to expedited review, a final decision can be expected within 6 months (“Cadence Pharmaceuticals,” 2020). Expedited review is further divided into four categories: fast track, breakthrough therapy, accelerated approval, and priority review (“Fast Track, Breakthrough Therapy, Accelerated Approval, Priority Review,” 2020). Fast track is for drugs that treat serious conditions for which there is an unmet medical need (such as drugs that treat coronavirus). Accelerated approval is the same idea, except it allows for the drug to be approved upon a “surrogate endpoint,” an endpoint that acts as a marker or predictor of clinical benefits but is not
The path to a drug's FDA approval culminates in the company filing a New Drug Application (NDA), which serves as the company's formal proposal to the FDA to allow the production, advertisement, and sale of their new pharmaceutical product in the U.S. drug market ("New Drug Application (NDA)," 2020). The application packet submitted to the FDA tells the drug's entire life story, including its ingredients, how it affects the body, its manufacturing, processing, and packaging information, and all of the research done before this point, such as the methods and results of the animal studies and the human clinical trials of the IND. Once submitted, the FDA has 60 days to conduct a preliminary review, after which they either reject the application entirely or allow it to proceed to further review. If the drug has been moved to accelerated/priority review, a final decision can be expected within 6 months ("Cadence Pharmaceuticals," 2020). Expedited review is further divided into four categories: fast track, breakthrough therapy, accelerated approval, and priority review ("Fast Track, Breakthrough Therapy, Accelerated Approval, Priority Review," 2020). Fast track is for drugs that treat serious conditions with unmet medical need (such as drugs that treat coronavirus). Accelerated approval is the same idea, except it allows the drug to be approved upon a "surrogate endpoint," an endpoint that acts as a marker or predictor of clinical benefit but is not itself a clinical benefit. Breakthrough therapy is a category meant for a drug that significantly improves outcomes over currently available drugs, and the last category, priority review, is a broad term for any drug that the FDA aims to decide on within the next six months. If the drug was moved to standard review, this time is lengthened to 10 months. If the drug is a biological product instead of a chemical one, such as a vaccine, a Biologics License Application (BLA) must be filed instead ("What Are 'Biologics' Questions and Answers," 2020). This application contains many of the same components that an NDA does, with additional or revised sections to account for the differing properties of biologics (such as their less certain makeup, greater sensitivity, and greater vulnerability to contamination). After approval of either drug type (biological or chemical), the product can be released and marketed. All companies must submit information every three months for three years regarding any side effects that patients on the medication experience. Severe side effects must be reported within 15 days of the company being notified. When the three-year mark has passed, pharmaceutical companies must then submit an annual report in lieu of the trimonthly one. Additional studies are known as Phase IV or "post-marketing" studies and can continue for years after the initial FDA approval ("Postmarketing Clinical Trials," 2020). These studies evaluate the long-term safety of a drug and often involve thousands of patients. Post-marketing requirements (PMRs) are studies mandated by the FDA, whereas post-marketing commitments (PMCs) are done of a company's own volition. PMRs are often mandated for one of three reasons: the drug was approved on an accelerated path (COVID-19 drugs or vaccines may be subject to this clause), the drug is used on children (in which case pediatric studies are required by PREA), or the drug was initially approved on animal efficacy data and must now be proven effective in humans ("Postmarketing Requirements and Commitments: Introduction," 2020). The procedures for litigation by the FDA after a company has committed fraud (such as within their research methods) or withheld information (such as not submitting severe side effects in post-approval reporting) are not readily available online.
Figure 3: A drug manufacturing plant where active ingredients are mixed with excipients and compounded into an edible tablet. The drug is subsequently packaged and shipped out to research centers for clinical trials or, after approval, to patients. Source: Wikimedia Commons
The Process of Drug Design (Part II): Business model and manufacturing of drugs
In addition to the onerous process of screening drug candidates and performing the rigorous scientific testing necessary to prove their safety and efficacy, a significant amount of work goes into the manufacturing of drugs and the formation of a business model for drug production and marketing. The following paragraphs outline the key aspects involved in producing a pharmaceutical drug for the market. A drug receives the distinction of "best in class" or "first in class" when it is in the last stages of clinical trials and near entering the market. "Best in class" indicates that there is an existing active comparator that addresses the particular disease indication; the new drug is compared against this existing treatment in Phase III clinical trials to see whether efficacy, overall survival, and progression-free survival are better with the new drug. If the drug is the first one for a particular disease, then it is called "first in class" because there are no other treatments for the disease (DiMasi 2007). Usually, if a medication falls into either of these categories, it is informally eligible for off-label use: if it targets a receptor or pathway that is also expressed or present in another indication, it can be prescribed for that indication on a patient-by-patient basis (March 2017). If these new indications prove helpful, then companies begin to seek
approval for these additional indications. This is a goal for all drugs that move through the market because expanding into multiple disease indications increases the market size of the drug. Upon entering the market, drug companies frequently hold a patent on the product and seek to sell it to a wide patient population. However, first-in-class medications with the first-mover economic advantage are rare, so new drugs are frequently still competing with existing drugs that have already been approved (Meek 2013). This stage is known as Phase IV: even after FDA approval and market entrance, drug companies are trying to convince doctors (who in turn have to sell it to patients) that the new drug is a worthy shift from the existing medications in use. Part of Phase IV also establishes that the drug is safe and effective outside of the artificial clinical setting, and accompanying marketing and business development help extend the reach of the medication in order to establish its position among existing medications (Lakdawalla 2018). Even before the clinical trial process is complete, pharmaceutical companies need to develop a process for manufacturing their drugs. A key phrase in the world of pharmaceutical manufacturing is 'Quality by Design' (QbD). It essentially means that the drug production process is closely vetted before any product is made to ensure that there is minimal risk of batch inconsistency. To accomplish this task, pharmaceutical companies build new plants and design chemical synthesis processes in
conversation with national and international regulatory agencies. One place where standards are set is the International Conference on Harmonisation (ICH), where expert working groups work in consultation with representatives of regulatory bodies from across the world, developing a thick packet of standards for pharmaceutical companies to follow (ICH, 2009). The key point of these guidelines is to encourage a controlled process – the production of drug batches at a consistently high standard.
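As a loose illustration of what a "controlled process" means statistically, the sketch below (with invented assay values) derives Shewhart-style three-sigma control limits from a set of qualification batches and flags production batches that fall outside them. This is a simplification of real QbD practice, not a description of any particular company's procedure.

```python
# A minimal sketch of the statistical idea behind a "controlled process":
# derive control limits from qualification batches, then flag production
# batches that drift outside them. All assay values are hypothetical.

qualification = [99.8, 100.2, 100.1, 99.7, 100.0, 100.3, 99.9, 100.1]  # % of label claim
production = [100.0, 99.8, 101.9, 100.2]

n = len(qualification)
mean = sum(qualification) / n
std = (sum((x - mean) ** 2 for x in qualification) / (n - 1)) ** 0.5

# Shewhart-style three-sigma control limits.
ucl, lcl = mean + 3 * std, mean - 3 * std
print(f"Control limits: {lcl:.2f}% to {ucl:.2f}%")

for i, assay in enumerate(production, start=1):
    status = "within limits" if lcl <= assay <= ucl else "OUT OF CONTROL"
    print(f"Production batch {i}: {assay:5.1f}% -> {status}")
```

With these numbers, the third production batch (101.9%) falls above the upper control limit and would trigger an investigation before release.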
A quick glance at the table of contents of the ICH guidelines should be enough to convince the reader that these guidelines are highly complex. They require detailed attention to the physicochemical properties of the active pharmaceutical ingredient (API) and excipients (non-active substances), the containers used to store ingredients and finished product, and the potential risk of microbiological contamination – just to name a few of the considerations involved (ICH, 2009). Even once these guidelines are met, they are not a fixed target – the regulatory standards affecting the industry evolve over time, following political responses to the actions of pharmaceutical companies or broader shifts in the political climate and mentality toward government regulation. And these guidelines are not the only ones that must be met. Companies must comply with the rules laid out in the Code of Federal Regulations set out by the FDA: CFR 21 applies to the production of food and drugs, with CFR 21, Pt. 211 applying to pharmaceutical manufacturing in particular (FDA, 2020). Additionally, companies operating abroad must meet the criteria set forth in the regulations of each individual country, or those set by the EU for European countries. Of course, these standards apply specifically to the pharmaceutical industry – manufacturers of biologics or generics must follow a somewhat different set of rules. In fact, many modern pharmaceutical companies produce both pharmaceuticals and biologics. Vertex Pharmaceuticals, a company that recently cracked the top 10 of Fortune Magazine's Future 50 list, yet is still considered small in comparison to industry giants like Merck and Pfizer, has recently acquired two biologics companies: Semma Therapeutics, a company developing beta cell transplants for Type 1 diabetes patients, and Exonics Therapeutics, a company developing gene therapy for patients
with Duchenne muscular dystrophy ("Future 50," n.d.; "Vertex to Acquire Semma," n.d.; "Vertex Expands," n.d.). To get these biologic products to market, Vertex will need to file a Biologics License Application (BLA) rather than a New Drug Application (NDA), and the subsequent steps of the regulatory approval process will differ too ("What Are 'Biologics' Questions and Answers," 2020). Today, pharmaceutical production is a global enterprise – at least for those companies big enough to be major players in the industry. Vertex has been around for roughly 30 years, initially developing immunosuppressant drugs for transplant patients at its inception in 1989 (the company worked on an alternative to the highly successful, yet significantly nephrotoxic, immunosuppressant cyclosporine) and moving on to produce antiviral medications at the height of the AIDS epidemic in the 1990s. In the new century, Vertex turned to tackle Hepatitis C, gaining approval for its blockbuster drug Incivek in 2011 (which was quickly toppled from its high market position by the more effective drugs of competitors) (Reuters, 2014). Concurrent efforts to develop drugs for the genetic disease cystic fibrosis (CF), however, have now borne fruit. Vertex now has a series of drugs that are approved for use in about 90% of the CF patient population – with its latest drug, Trikafta, granted approval in the fall of 2019 (FDA Press Release, 2019). Developing drugs in all these disease areas has required, and further stimulated, the development of a massive infrastructure for drug research and production. Vertex operations are conducted out of several major research centers, with company headquarters located in Boston, Massachusetts. The company has another campus in San Diego, which currently works, and has historically worked, on CF drugs. Another campus, in Oxford, is devoted to developing small-molecule pain medications. And across the world, Vertex works with a global array of contract manufacturing organizations (CMOs) and contract research organizations (CROs) to manufacture its various approved drug products and test those still in the clinical pipeline. This is not to mention the numerous vendors who supply the company with manufacturing equipment and laboratory devices, or the companies that assist in drug packaging and shipping.
Figure 4: 2018 Generic Drug Approvals by the US FDA. Source: FDA
The story of Vertex brings to light many important points about the current state of the modern pharmaceutical industry. The first is its global reach: even a relatively young company like Vertex has contract research organizations and manufacturing sites throughout Europe, Asia, and the Americas, and its business – even more so for a larger company like Merck or Pfizer – is deeply affected by fluctuations in global supply chains. Second, pharmaceutical companies deal in highly competitive markets. The story of Incivek demonstrates that even a successful drug may quickly be eclipsed by competitors tackling the same disease, implying that FDA approval does not assure a profitable drug. And once a patent is filed (long before a drug is actually approved), the clock starts ticking toward the day when the window of patent exclusivity expires and the drug can be reengineered by generic companies; a patent may also be challenged even before then. Third is the relationship between drug companies, academic institutions, and patient foundations. In building its blockbuster CF drugs, Vertex derived significant financial benefit (and moral support) from its relationship with the Cystic Fibrosis Foundation. And the effort to develop laboratory assays to test the CF drugs was supported by a Scientific Advisory Board (SAB) composed of scientists from various academic institutions. With drug production in mind, good relationships between companies and the FDA, their networks of global suppliers, and their partner academic institutions are essential to success.
Problems with drug pricing
Despite the large workload required to produce drugs, the exorbitant costs that consumers are expected to pay today appear excessive to much of the population. Pricing discrepancies can arise at any step of the processes described above. Drug approval, with its Phase I, II, III, and IV trials, is a lengthy and costly process for drug companies to undertake – these trials can cumulatively take over a decade to complete, not to mention the IND, NDA, and other data that companies are required to collect and submit to the FDA for approval. Nor does this take into account the target validation, potential patent purchasing or company acquisition, and various studies that must go into verifying the safety and efficacy of the drug, let alone actually producing it. Lastly, the production process (constructed with QbD guidelines in mind) developed by companies in the pharmaceutical industry, not only in their facilities within the United States but also by their properties and partners throughout the world, requires careful attention and a good deal of time to properly execute. All of that being said, to understand whether pharmaceutical prices are justified, we now turn to the factors that may blur the line between fair and inflated pricing in the pharmaceutical industry.
For example, hiking prices in order to funnel more funding into drug discovery is a common practice. This is made possible by American patent law, which was initially introduced in order to drive production but has since created monopolies on certain drugs. Price increases occur in other markets as well when a product becomes popular and demand increases faster than supply, but there prices are increased
incrementally so that the product doesn't fall out of favor. Pharmaceutical companies that hold monopolies on vital drug treatments do not need to consider this when changing prices, allowing them to price-gouge customers who need these drugs with few repercussions, depending on the state.
Another important economic aspect is the influence of the drug market and the actions of other pharmaceutical companies. The Shkreli effect, named after the former owner of Turing Pharmaceuticals, is a prime example. In 2015, Martin Shkreli increased the price of Daraprim from $13.50 to $750 per tablet after acquiring the company, despite not changing production methods. This not only forced hospitals and healthcare workers to consider riskier, less vetted options, but also created a domino effect of increased pricing for these alternatives. Other market-control tactics include blocking generic drugs from hitting the market (Copeland 2019). Pay-for-delay deals are often made with generic brands to allow companies to maximize profit before sales are reduced by the introduction of a cheaper alternative. Product hopping, which exploits the Risk Evaluation and Mitigation Strategies issued by the FDA, occurs when companies slightly modify a drug nearing patent expiration and release it as a new or improved product, forcing generic brands to restart development of alternatives to the "new" drug. Blocking generics, which increase accessibility through lower pricing, allows high prices to persist longer. Ultimately, it is the company that decides on the high price tags associated with its drugs. And while governmental tax exemptions and policies may factor into the decision, it is more likely that these companies use the government's lack of regulation to generate these high prices. This lack of federal involvement in drug pricing makes the US government unique, as many European governments are heavily involved in regulating drug prices (Golec & Vernon, 2010). In Australia, for example, a drug must be submitted to the Pharmaceutical Benefit Advisory Committee, which evaluates a drug strictly on the basis of whether its health impact is worth the price tag. This type of system prevents staggering prices in these countries, which arguably enact such regulations because they see medicine as a public utility, one their people can't live without (Kliff, 2018). In addition to the lack of governmental drug pricing regulation in the US, part of the problem
lies in patent laws, which allow inventing firms to establish a monopoly price on the market for about five to seven years. This phenomenon has allowed drugs facing limited price ceilings, such as insulin and epinephrine, to become extremely costly (DiMasi 2007). Additionally, there are methodological concerns in price comparisons, such as the limitation of analyses to brand-name drugs, which have achieved higher market prevalence than the generic brands, and to drugs that are not covered by insurance (Wagner et al., 2004). Moreover, it has also been shown that pharmaceutical firms engage in the practice of price discrimination. American firms have noticed that, since drug purchasers in the United States tend to be of higher income, high prices can be maintained, while firms operating in the lower-income segments of the world have realized that prices must fall to maximize demand. This results in lower prices for lower-income countries (correlating with GDP) and higher prices for countries such as the United States (Wagner et al., 2004). This hypothesis is consistent with the fact that the United States has a higher GDP than the next three countries combined – thereby potentially justifying the higher drug prices (DiMasi 2016). However, this allegation of price raising has not been found across the board. As mentioned before, there are discrepancies in the communication of drug prices: although sources supported by the US federal government consistently show drug prices comparable with the rest of the world, independent journals and think tanks clearly show US prices far ahead of the rest (DiMasi, 2016). Moreover, the fact that drug prices in the United States are, by some sources, almost two times greater than prices in other countries continues to be a common source of controversy (Kesselheim and Avorn 2016). The US government has explained this by claiming that the products available in other countries do not align with the products available in the United States. Notably, there are manufacturing differences (i.e., equipment and formulations), differences in clinical indications, and differences in the needs of various populations across countries ("Comparison of US and International Prices for Top Medicare Part B Drugs by Total Expenditures").
Figure 5: A graph of the sales of drugs in relation to their patent expiration dates. The sales cliff indicates how, in the highly competitive drug market, drug prices are set high so as to reap a profit before patent expiration allows competitors to undercut them. Source: http://www.mdpi.com/1424-8247/5/12/1393
Therefore, a patient receiving a drug in Switzerland may pay a significantly lower price for a prescription medication than would a patient in the US (Kliff, 2018). While patients suffer high drug costs, there is some benefit to this system: the lack of regulation provides a monetary incentive for scientific innovation and interest in drug development, which is already risky and financially consuming. In fact, although regions such as the EU suffer a lower frequency of pharmaceutical drug price inflation, over a period of 19 years the EU had a reported 1,680 fewer research jobs and 46 fewer novel drug discoveries than did the US (Golec & Vernon, 2010). Thus, those siding with existing governmental drug regulation policies argue that limitations on profits, such as price ceilings, may prove more detrimental to society by hindering innovation and deterring scientists from investing in these drugs themselves. To ensure the US remains the premier innovator for drug discoveries, the government has enacted patent laws and IP rights along with several pieces of legislation and programs such as the Orphan Drug Act (1983), the Hatch-Waxman Act (1984), the
Prescription Drug User Fee Act (1992), the Food and Drug Administration Modernization Act (1997), the Biologics Price Competition and Innovation Act (2009), the Small Business Innovation Research program, the Jumpstart Our Business Startups Act (2012), and the Drug Competition Action Plan (DCAP) (2017) (Shouldn't the U.S. Government Do More to Regulate High Drug Prices?, 2016; Center for Drug Evaluation and Research, 2020). Along these lines, a key aspect of current pricing decisions in the US pharmaceutical industry is the consideration of patent law and the exclusivity window. Government policies regarding patent law and IP protection exacerbate the rising cost of drugs: long timelines for generic medicine production and approval after patent expiration hinder the release of lower-cost generic products (DiMasi 2016). Thus, patent laws raise drug prices within the patent window by excluding competitors, but they also support new drug innovation by guaranteeing profit for a company's heavy investments in the drug development process. While some generic brands follow the exact molecular structure and composition of the original, drugs produced from a living cell or organism present an added obstacle for generic manufacturers seeking to replicate them and supply them at a lower cost. This class of drugs, the biologics, is often priced higher (Pearl, 2018). This is part of the reason why insulin, a hormone and therefore a biologic, continues to carry a high price. If the government alters this window
between the initial release of biologic drugs and the time when generic companies can start developing alternate methods of producing them, perhaps lower-cost generic drugs will enter markets sooner. Current legislation prevents the FDA from approving generics for release for at least 7-10 years after the original drug's approval, even in the absence of patents (Engelberg et al., 2015).
The Hatch-Waxman Act was passed in 1984 to balance consumer criticism of high drug prices against the profits needed to fund and reward drug development. The act extended the patent life of pharmaceuticals for five years beyond the 20 years of patent life generally afforded to new products in the United States. This change takes into consideration the fact that new drug patents are filed and approved during the preclinical phase of drug testing, long before the actual approval of a drug (which may or may not actually occur); years of costs are accrued before any significant profits are made (Lakdawalla, 2018). But Hatch-Waxman also supports generic companies in their efforts to mimic the mechanism of action of "pioneer drugs." It established the ANDA process for generics, whereby generic manufacturers invest in tests that demonstrate the bioequivalence of their product to the existing pioneer drug. When the ANDA is approved, generic companies are able to sell their drugs as soon as the patent exclusivity window for the pioneer drug ends. The act also provides incentives for generic manufacturers to challenge a patent in court – either diminishing its breadth or invalidating it entirely. The generic manufacturer that makes the legal challenge is given six months of marketing exclusivity over other generics if it succeeds (Lakdawalla, 2018). The process of patent filing is highly complex, and there are other laws that govern patent exclusivity for drug companies. The Orphan Drug Act, for example, grants a drug manufacturer seven years of exclusive market access after drug approval. The aim here is to incentivize production of drugs for rare or terminal illnesses for which the patient market, and the potential profit, is limited. In addition to market exclusivity, data exclusivity measures prevent generic companies from using the safety and efficacy data generated in the clinical trials of a pioneer drug for a certain window of time; Hatch-Waxman provides five additional years of data exclusivity.
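Under the simplified numbers above (a 20-year patent term partly consumed by development, plus the five-year Hatch-Waxman extension), the effective post-approval exclusivity can be sketched as follows; the 12-year development timeline is a hypothetical input, not a figure from the cited sources.

```python
# A rough sketch of "effective" post-approval exclusivity under the simplified
# framing above: a 20-year patent term filed during preclinical testing, plus
# the five-year Hatch-Waxman extension. Real restoration rules are more
# complicated (the extension is capped and conditional).

PATENT_TERM = 20            # years of patent life from filing
HATCH_WAXMAN_EXTENSION = 5  # years of restoration described above

def effective_exclusivity(years_filing_to_approval: float) -> float:
    """Years of market exclusivity remaining once the drug is approved."""
    remaining_term = PATENT_TERM - years_filing_to_approval
    return remaining_term + HATCH_WAXMAN_EXTENSION

# A drug patented at the start of preclinical work that takes 12 years to
# reach approval retains 20 - 12 + 5 = 13 years of post-approval exclusivity.
print(effective_exclusivity(12))  # 13
```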
The Food and Drug Administration Modernization Act of 1997 (FDAMA) addressed pediatric drug applications by providing an additional six-month window of data exclusivity. Of course, the complexity of designing a drug production process in line with the guidelines established by the FDA and other international regulatory organizations, and of designing clinical trial processes, is itself a significant barrier for generic companies that wish to imitate a pioneer drug (Lakdawalla 2018). It is important to note that the US government is a major buyer of drugs, primarily for its social service programs: Medicaid, the Department of Veterans Affairs, and the Department of Defense (Frank, 2003). Therefore, if the government did play a role in drug pricing and regulation, there could be a conflict of interest – and what entity would be able to enforce these regulations and ensure the ethics of this new role? Drawing back to another difference between US and foreign governmental drug policies: when the US government does participate in limited drug pricing negotiations, the results are not published, unlike those of European governments. That transparency arguably allows other nations to understand and determine for themselves the reasonable price of a drug; the lack of it in the US arguably allows drug companies to continue charging high prices, as there is no national comparison (Pearl, 2018). Nor would publicizing healthcare pricing be new to the government, since Medicare already produces pricing standards for hospitals and physicians.
Solutions for the future of drug development
Given the complexity inherent in drug pricing and pharmaceutical regulation, and the sheer number of parties involved in the drug development process, it is inevitable that inefficiencies and problems arise. Thus far, this paper has outlined some of those issues. In this section, we propose – based on our findings – a series of suggestions that could serve to mitigate some of the problem areas in the drug development process, improve the relationship between pharmaceutical firms and regulators, and bring down drug prices. It should be emphasized that the following are conjectures informed by background research, shaped by the perspectives and opinions of the individual authors of this report.
Part I: Drug production costs – target validation and the clinical trial process
Figure 6: A diagram representing the process of structure-based drug design. Once researchers visualize the binding of a drug to its target protein (represented by the ribbon structure at top left), they can determine which of the drug's functional groups are essential to binding, and which can be altered in an attempt to make the drug less toxic or increase its bioavailability – the degree to which it enters the bloodstream. Source: Wikimedia Commons
Rational drug design: The concept of rational, or structure-based, drug discovery has captured the minds of players in the pharmaceutical industry for the past few decades. It can be described in opposition to the traditional mode of drug discovery – literally digging through the dirt (soil and other dirty surfaces known to harbor bacteria) to find bioactive molecules that may have some effect on human cells. This is the story of cyclosporine – the still industry-standard immunosuppressant drug that was discovered in Norway by Sandoz biologist Hans Peter Fry. Fry was on vacation at the time, and the company had encouraged its employees to collect soil in plastic bags when they went abroad. This time, the practice bore fruit (Meek, 2013). A pure form of the drug was synthesized in 1973, and its immunosuppressant effects were elucidated throughout that decade. Early clinical trials proceeded throughout the 1970s but presented mixed results in terms of safety and efficacy. The research work in the early 1980s of Dr. Thomas Starzl, a key figure in the history of liver transplantation, demonstrated the drug's efficacy with careful dosing and
combination with other drugs (like the steroid prednisone) (Meek, 2013; Werth, 2014). Turning to rational drug design, the story of Vertex Pharmaceuticals is again instructive. The first big project that Vertex pursued after its genesis in 1989 was immunosuppression. Cyclosporine, despite its miraculous capacity to facilitate organ transplantation, is dangerously nephrotoxic. The scientists at Vertex sought to produce a drug like cyclosporine that prevented the immune rejection of organs but did not damage the kidneys or cause other dangerous side effects. To do so, they latched on to a molecule called FK-506, which had been harvested in 1987 from a sample of Japanese soil. It was quickly found to be more potent than cyclosporine, but it still had significant side effects, including nephrotoxicity (Werth, 2014; InvivoGen, n.d.). Vertex scientists spent years crystallizing the binding protein for FK-506 in human cells and modeling the interaction of FK-506 and its binding protein with the help of advanced molecular imaging software (Werth, 2014). With knowledge of the details of this interaction, it became possible for Vertex scientists to understand exactly which portion of the drug molecule was essential for binding, and which portions could be altered. It followed
that by altering the superfluous functional groups, it would be possible to produce new variants of the drug that were less toxic, or more bioavailable (entered the bloodstream more readily), yet still as effective at interacting with the target protein.
These efforts to modify FK-506 ultimately came largely to naught, but other Vertex efforts at immunosuppression followed in their place, and some candidate molecules from the FK-506 efforts were found to have anti-cancer properties and were later used in clinical trials ("Coming Full Circle," 1998). But what about the idea of rational drug design in general? We are now almost three decades out from these efforts (as Barry Werth notes in his definitive account of the company and the pharmaceutical industry at large, Vertex was a pioneer in this field). Have we advanced any further toward the principles of rational drug design since then? One limit to the prospect of rational drug design that existed in the 1990s, but has improved since, is our computing capacity. Better molecular imaging software (as well as more efficient crystallization techniques) has allowed for faster resolution of drug-effector interactions – what took Vertex scientists several years in the 1990s can now be done in significantly less time (Pichler et al., 2008; Dauter and Wlodawer, 2016). And the prospects for machine learning (a field whose basic theoretical tenets and algorithms have been around for decades) to enhance drug discovery have increased in recent years due to drastically enhanced computing power. Yet there are still barriers that block the implementation of rational drug design principles. One is economic – the expense of sophisticated imaging technology and crystallization machinery (as well as the capacity to assay large volumes of potential drug candidates for their cellular effects) is a high hurdle, especially for young companies. And even for big pharmaceutical companies (perhaps especially for big ones), there is a barrier to change; it is hard to implement an entirely new approach to drug discovery and abandon the processes that have bred generations of successful drugs. What if the new approach were to fail? The economic consequences could be disastrous for the company and for the patients that currently benefit, or could benefit in the future, from its drugs. There are also technical difficulties – for example, the accurate calculation of the binding affinity between a drug molecule and its intracellular receptor is quite a difficult task (Lounnas et al., 2013).
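For reference, the quantity at stake here is the binding free energy, which relates to the experimentally measured dissociation constant through a standard thermodynamic identity (textbook background rather than a result from the sources above):

$$\Delta G_{\mathrm{bind}} = RT \ln K_d$$

where $R$ is the gas constant, $T$ the absolute temperature, and $K_d$ the dissociation constant. At 298 K, a nanomolar binder ($K_d = 10^{-9}$ M) corresponds to a binding free energy of roughly $-12$ kcal/mol. Computing this number from structure alone, rather than measuring it, is the hard part.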
Just knowing that a drug binds to a protein in the body is not enough to tell how effective it may be at remaining attached to that target and effecting intracellular change. The technical ability to gauge binding affinity has increased in recent years, however (Cournia et al., 2017). Furthermore, high-throughput screening methods to test large catalogues of potential drug molecules have improved significantly in the past decade, making the prospect of rational drug discovery ever more promising for those willing to tackle it (Lounnas et al., 2013; Brazil, 2018). Machine Learning Techniques: As previously discussed, it can take a decade or more before all federal filings have been approved and a drug can finally be introduced to the market. One of the main reasons why drug development takes so long is simply the complexity of biological systems and of the process of producing drugs to interact with them. Each organism possesses a countless number of cells, in many unique varieties, with a multitude of proteins and other molecules floating around within the cells and their surrounding space. It goes without saying that there is a lot of data scientists need to analyze in order to understand how a drug may affect the body. Fortunately, where humans struggle with processing such large amounts of data, machines may excel. Scientists have proposed using machine learning algorithms to analyze current sets of biological data, having computers propose optimal solutions at a variety of steps in the drug development process (Vamathevan et al., 2019). One major aspect of the drug development process where machine learning may be used is target identification and validation. Scientists put in countless hours of research to identify a certain problem, propose a therapeutic solution, and verify their proposed solution using ex vivo (outside of living organisms) and in vivo (inside of living organisms) models (Vamathevan et al., 2019). Machine learning has already been used to analyze protein sequences, metabolic interactions, gene associations, and more to support drug target identification and validation (Fine and Chopra, 2019).
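As a toy illustration of what score-based target prioritization can look like, the sketch below combines normalized evidence scores from several hypothetical data sources into a single ranking. The genes, scores, and weights are all invented, and real pipelines (such as those surveyed by Vamathevan et al.) use far richer features and learned models.

```python
# A minimal, hypothetical sketch of evidence-based target prioritization:
# combine normalized scores from several data sources (gene association,
# pathway involvement, tractability) into a single ranking.

candidate_targets = {
    # gene: (association score, pathway score, tractability score), each in [0, 1]
    "GENE_A": (0.9, 0.7, 0.4),
    "GENE_B": (0.6, 0.8, 0.9),
    "GENE_C": (0.3, 0.2, 0.8),
}

WEIGHTS = (0.5, 0.3, 0.2)  # relative trust in each evidence source (assumption)

def combined_score(scores: tuple) -> float:
    """Weighted sum of the normalized evidence scores."""
    return sum(w * s for w, s in zip(WEIGHTS, scores))

ranked = sorted(candidate_targets.items(),
                key=lambda kv: combined_score(kv[1]), reverse=True)
for gene, scores in ranked:
    print(f"{gene}: {combined_score(scores):.2f}")
```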
Because a great deal of useful data is already present within the existing body of scientific literature, bioinformatics researchers are also using Natural Language Processing (NLP), among other algorithmic methods, to analyze published text and tease out useful information. However, just because a drug has the potential to interact with a target protein in the body does not mean that the drug possesses high specificity for that protein. Machine learning algorithms can be used to study the physicochemical, structural, and geometric features of drug binding sites on target proteins and to assess drug-protein affinity based on past binding data (Vamathevan et al., 2019). For predictive power in drug development, Deep Neural Networks (DNNs) are often used because of the large sets of data they can ingest. Another family of predictive sequence models is the hidden Markov model: such a model is trained on sequences of data whose states correspond to "hidden" values that would be unknown in an actual dataset; shown a completely new sequence, it can predict the hidden values at each position based on the current and previous states and observations. For instance, one common application of deep learning in chemical synthesis is predicting reaction sequences. Once favorable reactions and their characteristics from other chemical synthetic processes are fed into a DNN, the algorithm can build a protocol toward the most favorable reaction sequence, given the reactions it has already placed into the sequence (Vamathevan et al., 2019). However, machine learning can help with more than just the laboratory side of drug development. One big problem in drug development is assembling clinical trials. Researchers do not want to waste time, and risk their drug being labeled statistically ineffective, by giving the drug to someone it is not likely to help. There are various signs, or biomarkers, that can indicate whether a patient will benefit from a certain drug. Machine learning has recently been proposed as a tool to help solve this problem by learning and analyzing these biomarkers to select clinical trial participants who are better suited to the drug in question. This technology is currently extremely scarce because of low-quality trial results and a lack of proof that the software is efficacious. If strengthened, however, this technology may help mitigate the risk of drugs that are actually effective being denied entrance into the market (Kohannim et al., 2010).
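To make the hidden-Markov-model idea above concrete, here is a minimal sketch of Viterbi decoding, which recovers the most likely sequence of hidden states behind a new sequence of observations. The states, observations, and probabilities are invented for illustration and do not come from the cited work.

```python
# A minimal hidden-Markov-model sketch: Viterbi decoding recovers the most
# likely sequence of hidden states behind an observation sequence.

states = ("favorable", "unfavorable")                     # hidden reaction regimes
observations = ("high_yield", "low_yield", "high_yield")  # what we actually measure

start_p = {"favorable": 0.6, "unfavorable": 0.4}
trans_p = {  # P(next state | current state)
    "favorable":   {"favorable": 0.7, "unfavorable": 0.3},
    "unfavorable": {"favorable": 0.4, "unfavorable": 0.6},
}
emit_p = {  # P(observation | state)
    "favorable":   {"high_yield": 0.8, "low_yield": 0.2},
    "unfavorable": {"high_yield": 0.3, "low_yield": 0.7},
}

def viterbi(obs):
    """Return (probability, path) of the most probable hidden-state path."""
    # best[s] = (probability of the best path ending in state s, that path)
    best = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        new_best = {}
        for s in states:
            # Extend the best predecessor path into state s.
            new_best[s] = max(
                ((p * trans_p[prev][s] * emit_p[s][o], path + [s])
                 for prev, (p, path) in best.items()),
                key=lambda t: t[0],
            )
        best = new_best
    return max(best.values(), key=lambda t: t[0])

prob, path = viterbi(observations)
print(path, f"(p = {prob:.4f})")  # ['favorable', 'favorable', 'favorable'] (p = 0.0376)
```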
In summary, there are several ways in which drug development can be optimized with machine learning. Hopefully, by expediting the time it takes to bring drugs to market, pharmaceutical companies can lower the costs they incur in the drug development process, which would then result in lower prices for consumers.
Improving the Relationship Between Public, Private, Academia and NGOs: The cost-motivated process of drug development has forced academia into working with grant contracts and publication agreements incentivized by royalty paybacks and protection of intellectual property (Palmer and Chaguturu 2017). Many argue that these monetary incentives create a misalignment between scientific value and commercial value. Currently, academia does not play as large a role in new drug development, owing to the dominance of mergers and acquisitions, with their cost-reduction processes, and of large pharmaceutical companies, which have streamlined the drug production process and receive monetary support from sources such as pharmaceutical venture capital.
Thus emerges the public-private partnership (PPP), an arrangement in which the public sector and private sector decide to share their resources in the development of a drug. These collaborations may include, but are not limited to, governmental organizations, academia, patient organizations, and pharmaceutical companies. Many call for organizational restructuring around PPPs, with an increased role for academia in the process of drug discovery. The thought is that industry often lacks the culture essential to choosing and discovering novel drug targets, while academics often lack experience working with patents. The US government facilitated this shift with the Bayh-Dole Act of 1980, which instigated partnerships between academic researchers and pharmaceutical companies of all sizes (de Vrueh & Crommelin, 2017). These academic-industrial partnerships allow for the sharing of intellectual knowledge essential for new drug identification. However, this process is easier said than done. The idea that pure science and for-profit business ought to be separated must be put aside and replaced
with a united awareness of the mission of drug research and development. In recent years, partnerships with academia have produced positive outcomes, such as the collaboration of Emory University with GlaxoSmithKline (GSK) to produce the HIV drug Lamivudine (Frearson & Wyatt, 2010).
One advantage of PPPs is that the sharing of knowledge and research among multiple parties ensures that wasteful, repetitive development efforts are not made. A greater pool of knowledge also leads to faster drug development overall, as more minds are focused on the same objective. The pooling of financial resources between multiple entities likewise helps decrease drug development costs that might have been too large a burden for one single company, but not for two or more entities (de Vrueh & Crommelin, 2017). A parallel can be drawn between drug development and defense technology: it used to be that the US could bear the costs of defense technology R&D alone, but the technology has become so complex and so expensive to produce that it now takes an international conglomerate for progress to be made on that front. Similarly, drug development has become so expensive and complex that any one pharmaceutical company doing it on its own has had to increase drug prices to cover the costly process. Another approach is one that expands on the PPPs – the addition of non-profit organizations. Collaboratives such as the Drugs for Neglected Diseases initiative (DNDi) have emerged from joint efforts between public, private, academic, nonprofit, and philanthropic centers. DNDi's mission is to create new and improved treatments for globally neglected diseases by using an international collaborative R&D model and by forging relationships with global partners such as the non-profit Doctors Without Borders (MSF), the World Health Organization (WHO), pharmaceutical companies, and the governments of countries such as Malaysia and Brazil. The idea is that the unity and collaboration of different scientists and organizations create a pooling of resources and techniques not available to a single company or a single partnership and allow for quicker execution of projects. The vaccines and treatments discovered and produced by this partnership for often-overlooked diseases are not associated with a patent, which would create a monopoly on their discoveries. Rather, their goals lie in collaboration, a commitment to equitable access, and transparency in delivering effective
and accessible treatments for those afflicted with disease (About Us – DNDi, 2015; Belliveau et al., 2020). Once a drug has been discovered and is ready for human testing, clinical trials are conducted in remote and affected areas, and the drug is subsequently produced and disseminated to those who are infected with the disease – no matter where, and at an affordable cost (About Us – DNDi, 2015). This model has proven extremely efficient: within the last five years, DNDi alone has been responsible for creating eight new drugs and treatment options and successfully delivering them to populations in need. For example, the emergence of drug-resistant malaria strains in the late 20th century lacked the attention of the Western world despite persistent malaria-endemic areas in Southeast Asia, Southern Africa, and South America. DNDi, with the Fixed-dose Artesunate Combination Therapy (FACT) project, developed and tested artesunate-amodiaquine and artesunate-mefloquine, new ACTs using two existing antimalarials, and has since helped deliver 600 million doses of this treatment to two countries. In addition, DNDi has created fexinidazole, a new drug for African sleeping sickness, a disease that primarily affects central Africa. This transportable oral drug was a significant improvement over the previous treatment, which required painful spinal taps, intravenous fluid treatment, and highly equipped hospital and laboratory facilities, and which had a high level of toxicity, generating life-threatening adverse side effects in 1 in 20 patients (CDC Malaria Worldwide - How Can Malaria Cases and Deaths Be Reduced?, 2019; Belliveau et al., 2020). Evidently, efforts made by PPPs between academia and large pharmaceutical companies (Emory University and GSK, Lamivudine) and by multi-sector partnerships between academic, public, private, nonprofit, and philanthropic institutions (DNDi) significantly challenge the current, broken system of drug development and the healthcare inequities it fosters, and question the capitalistic norms plaguing the process today.
Part II: Drug production costs (global drug manufacturing + regulation compliance)
Assessing the Current State of Federal Regulation: Federal regulation has been both touted and condemned from different political
perspectives, but research suggests that regulation has been more ineffectual, for good or for ill, than politicians appear to suggest. Part of this commentary stems from the need to accumulate political capital – evidence shows that proposing regulation of the pharmaceutical industry allows politicians to claim that they fight for the common people, thereby incentivizing regulatory activity. Regulation within the realm of drug pricing falls into four segments: safety and efficacy, insurance, patents, and antitrust regulation. For the purposes of this paper, insurance was not a point of discussion, but the other three elements bear recognition. FDA regulations to ensure safety and efficacy are among the most stringent globally. While the requirements for safety and efficacy vary only slightly between the FDA and the European Medicines Agency (EMA), a significant amount of investment goes into getting approval from the EMA even when a drug has already passed the FDA, and vice versa. This added spending inevitably falls on consumers in the form of higher drug prices: although the expansion of the market would shift the demand curve and allow a similar equilibrium price as before, drug companies still seem to charge more, often citing the added investment required to enter European markets and the possibility of pricing regulation as justification. International standardization of pharmaceutical regulation is one possible solution to this problem, allowing for a one-stop shop for universal approval. This has been advised against given the heterogeneity of different global populations, but since many drugs end up seeking approval in international markets anyway, this argument breaks down. Another criticism concerns FDA clinical trials – as discussed before, much of the money required for drug development is actually funneled into clinical trials, and these trials often do not have enough participants to constitute a compelling cohort. One proposed solution is to internalize within the FDA the cost of phase I and phase II clinical trials, which help determine safety, leaving the proof of efficacy to the pharmaceutical firms. In this way, with more disposable income dedicated to clinical trials, firms will have the incentive to spend more on phase III trials to make a compelling case for their drug being a break from the standard
treatment. This may convince physicians to switch treatment regimens once the drug is completely approved. Antitrust regulation has not featured prominently in current political discussions of the pharmaceutical industry, but it is important and worth considering as pharmaceutical firms grow larger and larger. Antitrust regulation enters into consideration when firms attempt to aggregate market power through horizontal integration – either by merging with other firms or by acquiring smaller firms whose products may have competed with the large firm's product. The common argument for supporting such mergers and acquisitions is that the larger firm can realize economies of scale, meaning that certain drug products can be manufactured at a lower cost through the use of expensive equipment and a researched, optimized method. What this argument neglects, however, is that these firms have an incentive – if they have acquired enough market power – to charge monopoly pricing, thereby necessitating a strict eye toward antitrust regulation on the part of the government.
Suppose Firm A is an enormous multibillion-dollar firm that holds a monopoly on a therapy – say, for esophageal cancer – and faces a few smaller competitors (Firms B, C, and D) that are incentivized to innovate on the current method used by Firm A. Then Firm A, fearing the possibility of losing revenue to the smaller firms, will offer a significant amount of money to buy out Firms B, C, and D. If the firms initially refuse on principle, Firm A will continually offer more money until they accept; if they still do not accept, they might take the valuation posed by Firm A to a venture firm to get further funding. In the absence of antitrust regulation, it is possible that one of the smaller firms (B, C, or D) might go on to produce a product that rivals Firm A's and thereby reduce the monopoly pricing. However, this scenario is unlikely – it is much more likely that the smaller firms will choose to be bought out by Firm A. If Firm A buys out these firms, it can either stop their innovation, if Firm A's product is doing very well, or develop it alongside its current therapy and simply replace the market with the newer product, likely charging even more than before. If strict antitrust vigilance is employed by the government, then it is possible to stop Firm A
from acquiring any of the smaller firms, thereby allowing the smaller firms to independently develop their product without too much of an incentive to sell their technology to existing larger firms.
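The buyout logic in this scenario can be made concrete with a toy calculation: because a monopolist typically earns more alone than two competitors earn together, there is generally a buyout price that both firms prefer to competing. The figures below are invented purely for illustration.

```python
# A toy calculation, with invented figures ($M/year), of why buyouts can
# dominate competition: a monopolist earns more alone than two rivals earn
# together, so a mutually agreeable buyout price generally exists.

MONOPOLY_PROFIT = 100.0   # Firm A's profit with no rivals (assumption)
DUOPOLY_PROFIT_A = 35.0   # Firm A's profit if Firm B's rival drug launches
DUOPOLY_PROFIT_B = 30.0   # Firm B's profit if it enters and competes

a_gain_from_blocking_entry = MONOPOLY_PROFIT - DUOPOLY_PROFIT_A  # 65
b_value_of_competing = DUOPOLY_PROFIT_B                          # 30

# Any buyout price between 30 and 65 leaves both firms better off than
# competing, which is why Firm A can keep raising its offer until B accepts.
print(b_value_of_competing < a_gain_from_blocking_entry)  # True
```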
Patent law is perhaps the most controversial involvement of the government in the drug development pipeline, and some economic literature suggests that a removal of patents would reduce drug prices, enhance innovation, and maintain current levels of participation in the market (DiMasi 2016). The government grants patents that, on average, give firms 5-7 years of effective exclusivity as an incentive to create the drug – during this period, the firm that came up with the drug is welcome to set the price as high as it wants. In essence, these firms enjoy monopoly pricing over the course of these 5-7 years – much to the detriment of the patient population. The general argument for patents is that without patents there would be no drugs, but this reasoning is flawed and requires more critical analysis. Under current law, after a drug is approved, the chemical formula and associated safety and efficacy statistics are published and available to competitors essentially for free (the only cost is the potential cost of the journal article) – this is another argument, in addition to the incentive based in monopoly pricing, that promotes the idea that patents are required. However, there are a few other things worth recognizing. First of all, the generic firms that will market a drug must synthesize it and prove its equivalence first – something that does not happen overnight by any means. During the 5-7-year exclusivity period that the inventing firm enjoys, generic firms are retroactively determining the composition and the method for synthesizing the drug, setting up production lines, and studying patient populations and market dynamics. Therefore, although it appears that generic firms enter immediately at patent expiry, a great deal of overhead research goes into making this happen. If patents were abolished, companies would still enjoy monopoly profits for some time, so the profit incentive would not disappear. Additionally, much of drug development happens outside of the private sector, yet it is frequently only the private sector that must realize the costs of validating an existing drug target and putting it through clinical trials; if the government were to internalize some of the phase I and phase II costs, it could reduce this financial burden on pharmaceutical firms and potentially open up the patent-free avenue. Finally, patent law is
not an issue that is commonly discussed in US politics, so it is vulnerable to "regulatory capture," in which politicians act in the interests of the pharmaceutical firms rather than the public at large. As a result, excessive lobbying on behalf of pharmaceutical firms could lead politicians to extend patents or do other favors more in line with what pharmaceutical companies want than with the public interest. If we combine these measures – international standardization, internalization of Phase I and Phase II costs, vigilant antitrust regulation, and an abolition of patents – it is possible to create a healthy level of government oversight that will lead to a reduction in drug prices and an optimization of the system at large.
Making Supply Chains More Resilient: The challenge of supply chain disruptions is extremely relevant right now. In the midst of the COVID-19 pandemic, it has become highly challenging for pharmaceutical companies to transport drug materials – active ingredients, excipients, and finished products – across national borders. And even within one country, factory shutdowns and workspace limitations have made drug production and shipping more difficult. The costs of these supply chain disruptions are enormous – companies cannot make drugs and profit from them as easily, and certain treatments are becoming scarce for the patients who need them. Perhaps the solution, looking toward the future, is to make supply chains more resilient. To conceptualize the problem, it is useful to consider different models of global manufacturing. The first is to make everything in one place. An archetypal historical example would be the River Rouge plant of the Ford Motor Company (located in Dearborn, Michigan) – at its height, it included buildings for steel and glass production, engine construction, a tool and die shop, and a building for automobile assembly ("National Register," n.d.). Its first major assignment was the production of war boats for WWI, but it subsequently turned to the production of a number of Ford models, and it inspired other plants of its kind in the Soviet Union (GAZ) and South Korea (Hyundai). Of course, this is not the full picture of Ford Motor Company operations, certainly not now and not at the time either. Ford had plants throughout the United States, Europe, Asia, and elsewhere, and drew from suppliers of an
equally global extent. But the idea of centering the supply chain in one location, at least for particular product models, is still an interesting one to consider ("National Register," n.d.). There are certain advantages to this form of production. For one, as is probably apparent, supply chain issues are minimized: one doesn't need to worry about transporting parts or raw materials, only about distributing the final products. But the idea of harvesting materials and making a whole product in one place is highly impractical, especially given all the varied and sophisticated components of today's automobiles, from electronic user interfaces to leather seats. A similar statement can be made about the modern pharmaceutical industry. Modern pharmaceutical operations involve working in multiple disease areas at once, and the drugs in these separate pipelines may require hundreds of intermediates to synthesize. It would be just about impossible to do all of this in one place. For pharmaceutical companies, another advantage of international production lies in partnerships with international organizations: the vendors that make laboratory equipment and parts for the intricate machines on the factory floors of drug manufacturing centers, or the external CROs and CMOs that handle research and manufacturing not conducted in-house. On the pricing side, significant costs can be saved by shopping around for ingredient suppliers and vendors that provide materials at the lowest price. The global supply chain structure that characterizes the modern pharmaceutical industry is, however, highly prone to disruption, as are the auto industry and most other large-scale manufacturers. A system that yields high cost-efficiency in the long term can be quite costly in the short term. And even though this may sound like a problem that can be tolerated in the interest of long-term monetary gain, the nature of the pharmaceutical industry (the expense of developing a single drug and the exorbitant rate of failure) means that a massive short-term failure might spell the end of a company's life. There are a few solutions to consider. One would be to concentrate the entire chain of company operations within one country. In
the event of a pandemic such as the one we are witnessing now, operations would still be hindered by national economic disruptions, but not nearly as much as if operations were global. This change, however, would forfeit many of the benefits of global operations mentioned above, particularly the ability to select vendors that test drugs and produce parts or intermediates at the highest quality and the lowest cost. Another solution would be to take the opposite approach and develop a multipolar model of global manufacturing. A company may choose to increase its number of global headquarters: say, one in North America, another in South America, a third in Europe, and a fourth in Asia. Each headquarters might focus on the production of different products, perhaps those in highest demand in its region of the world, and these centers could be placed strategically so that the average transit time between associated vendors, CMOs, and CROs is minimized. A third approach, a compromise between the first two, would be to maintain a flexible list of vendors and contract organizations to which a company could quickly shift its contracts in a crisis such as the current one (or when a certain vendor has become too costly or difficult to work with). Such an approach would require sophisticated analysis of the productivity of current suppliers and producers within the supply chain, as well as forecasting to predict when supply chain disruptions, in one country or worldwide, are likely to occur. In any of these cases, there is a lot of inertia to overcome in rearranging existing operations; leaders willing to make big (and risky) moves are required.
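The third, flexible-vendor approach can be read as a simple scoring problem: rank vendors not by sticker price but by expected effective cost once disruption risk and the cost of an emergency switch are priced in. The sketch below is a minimal illustration; the vendors, prices, disruption probabilities, and switching penalty are all hypothetical.

```python
# Hypothetical vendor scoring for a flexible supplier list. A vendor's true
# cost is its unit price plus the expected cost of a mid-contract disruption.

vendors = [
    # (name, unit_cost, probability of a disruptive failure this year)
    ("VendorA", 100, 0.02),
    ("VendorB", 85, 0.30),
    ("VendorC", 92, 0.05),
]

SWITCHING_COST = 40  # assumed per-unit penalty of an emergency switch

def expected_cost(unit_cost, p_disrupt):
    # With probability p_disrupt, the emergency-switch penalty is also paid.
    return unit_cost + p_disrupt * SWITCHING_COST

for name, cost, p in sorted(vendors, key=lambda v: expected_cost(v[1], v[2])):
    print(f"{name}: expected effective cost {expected_cost(cost, p):.2f}")
# VendorB has the lowest sticker price but no longer ranks first once its
# disruption risk is priced in; forecasting supplies the probability estimates.
```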
Value-Based Decision Modeling:

One aspect of drug development that has often been overlooked by drug companies is the use of value-based decision modeling to reduce uncertainty in production investment. Value-based decision modeling takes the uncertainties in investment decisions and evaluates them from a product-facing and consumer-facing focus. For instance, how valuable is conducting another six-month trial to differentiate between helping 90% and 91% of patients, when another study could instead be conducted on the drug's sensitivity to cheaper or more environmentally friendly materials? Decision modeling can help determine the relative value of two such choices using the expected outcomes derived from each one. Further, decision points can be inserted into the process so that uncertainty is resolved at key junctures and decisions about future studies are made when their answers are most critical. These decision points allow for careful analysis of the data so that researchers can make the most informed decision (Burman & Senn, 2003). Pharmaceutical companies can then spend time researching only the items that will be of most benefit to society, which will decrease development time for many drugs, while also creating a more holistic research process that will be appreciated by reimbursement agencies and federal agencies that consider more than efficacy and toxicity alone (Hall et al., 2010).

Unfortunately, decision modeling has met with limited acceptance in the scientific community. Unlike more fluid, less-regulated sectors of the economy, the bioscience industry is so intensely regulated that it is highly resistant to change. This, however, is a case where regulatory bodies are actually pushing for the use of decision modeling: the shift could make drug development more transparent and more attuned to the needs of the public, taking a substantial burden off regulators and helping ensure that the public's best interests are taken into account. The major remaining barrier is the absence of a clear process for calculating these value-based results. The mathematics behind the models is extremely complex and not entirely accurate, which has led many to believe that decision modeling overpromises a reality that will never come to fruition. Once the models are better researched and fine-tuned, however, they could be of great use to pharmaceutical companies everywhere (Kimko & Pinheiro, 2015).
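As a minimal sketch of that comparison, the snippet below assigns each candidate study an invented probability of success, payoff, and cost, and ranks the options by expected net value. Real value-based models are multi-stage and far richer (see Burman & Senn, 2003, for the option-value framing), but the core arithmetic at each decision point looks like this.

```python
# Toy expected-value comparison of two follow-on studies. All probabilities,
# payoffs, and costs are invented purely for illustration (units: $M).

studies = {
    # Another six-month trial to move the label from 90% to 91% response:
    "extra_efficacy_trial": {"p_success": 0.30, "value_if_success": 20, "cost": 12},
    # Sensitivity study on cheaper / greener raw materials:
    "materials_study": {"p_success": 0.55, "value_if_success": 35, "cost": 10},
}

def expected_net_value(study):
    return study["p_success"] * study["value_if_success"] - study["cost"]

for name, study in studies.items():
    print(f"{name}: expected net value = {expected_net_value(study):+.2f}")
# Re-running this comparison at each decision point, as new data arrive,
# lets the less valuable study be dropped before its costs are sunk.
```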
References

Burman, C., & Senn, S. (2003). Examples of option values in drug development. Pharmaceutical Statistics, 2(2), 113–125. https://doi.org/10.1002/pst.41
Centers for Disease Control and Prevention. (2019a, January 28). How Can Malaria Cases and Deaths Be Reduced? - Drug resistance in the Malaria Endemic World. https://www.cdc.gov/malaria/malaria_worldwide/reduction/drug_resistance.html
Centers for Disease Control and Prevention. (2019b, June 7). CDC - African Trypanosomiasis - Resources for Health Professionals. https://www.cdc.gov/parasites/sleepingsickness/health_professionals/index.html
Center for Drug Evaluation and Research. (2020, January 27). Center for Drug Evaluation and Research (CDER). FDA. https://www.fda.gov/about-fda/fda-organization/center-drug-evaluation-and-research-cder
CFR - Code of Federal Regulations Title 21 (Electronic Code of Federal Regulations). (2020). US Food and Drug Administration (FDA).
Chen, J. (n.d.). Paradox of Thrift Definition. Investopedia. Retrieved May 29, 2020, from https://www.investopedia.com/terms/p/paradox-of-thrift.asp
Comparison of U.S. and International Prices for Top Medicare Part B Drugs by Total Expenditure. (2018). Office of the Assistant Secretary for Planning and Evaluation.
Cournia, Z., Allen, B., & Sherman, W. (2017). Relative Binding Free Energy Calculations in Drug Discovery: Recent Advances and Practical Considerations. Journal of Chemical Information and Modeling, 57(12), 2911–2937. https://doi.org/10.1021/acs.jcim.7b00564
Danzon, P. (n.d.). Competition and Antitrust Issues in the Pharmaceutical Industry. 57.
Dauter, Z., & Wlodawer, A. (2016). Progress in protein crystallography. Protein and Peptide Letters, 23(3), 201–210.
de Vrueh, R. L. A., & Crommelin, D. J. A. (2017). Reflections on the Future of Pharmaceutical Public-Private Partnerships: From Input to Impact. Pharmaceutical Research, 34(10), 1985–1999. https://doi.org/10.1007/s11095-017-2192-5
DiMasi, J. A., & Chakravarthy, R. (2016). Competitive Development in Pharmacologic Classes: Market Entry and the Timing of Development. Clinical Pharmacology & Therapeutics, 100(6), 754–760. https://doi.org/10.1002/cpt.502
DiMasi, J. A., & Grabowski, H. G. (2007). Economics of new oncology drug development. Journal of Clinical Oncology, 25(2), 209–216. https://doi.org/10.1200/JCO.2006.09.0803
Engelberg, A. (n.d.). How Government Policy Promotes High Drug Prices. Health Affairs. Retrieved May 29, 2020, from https://www.healthaffairs.org/do/10.1377/hblog20151029.051488/full/
Ensuring Global Access to Medicines and Vaccines for COVID-19. (2020, May 15). [Facebook livestream]. https://www.facebook.com/msf.english/
Fast Track, Breakthrough Therapy, Accelerated Approval, Priority Review. (2020). U.S. Food and Drug Administration. Retrieved 10 June 2020, from https://www.fda.gov/patients/learn-about-drug-and-device-approvals/fast-track-breakthrough-therapy-accelerated-approval-priority-review
FDA. (2016). FDA Drug Approval Process [PDF]. https://www.fda.gov/media/82381/download
FDA approves new breakthrough therapy for cystic fibrosis. (2019, October 21). FDA Press Release.
Fine, J., & Chopra, G. (2019). Lemon: A framework for rapidly
mining structural information from the Protein Data Bank. Bioinformatics, 35(20), 4165–4167. https://doi.org/10.1093/bioinformatics/btz178
FK506. (n.d.). InvivoGen.
Forsyth, R. W. (n.d.). Today's Stimulus Spending Will Lead to Tomorrow's Inflation. Retrieved May 29, 2020, from https://www.barrons.com/articles/the-trillions-of-dollars-in-stimulus-money-will-have-a-worse-long-term-economic-impact-than-the-financial-crisis-51587746207
Frank, R. G. (2003). Government Commitment And Regulation Of Prescription Drugs. Health Affairs, 22(3), 46–48. https://doi.org/10.1377/hlthaff.22.3.46
Frearson, J., & Wyatt, P. (2010). Drug Discovery in Academia: The third way? Expert Opinion on Drug Discovery, 5(10), 909–919. https://doi.org/10.1517/17460441.2010.506508
Future 50. (n.d.). Fortune. Retrieved May 26, 2020, from https://fortune.com/future-50/2019/
Golec, J., & Vernon, J. A. (2010). Financial effects of pharmaceutical price regulation on R&D spending by EU versus US firms. PharmacoEconomics, 28(8), 615–628. https://doi.org/10.2165/11535580-000000000-00000
Gross, D. J., Ratner, J., Perez, J., & Glavin, S. L. (1994). International Pharmaceutical Spending Controls: France, Germany, Sweden, and the United Kingdom. Health Care Financing Review, 15(3), 127–140.
Hall, P. S., McCabe, C., Brown, J. M., & Cameron, D. A. (2010). Health economics in drug development: Efficient research to inform healthcare funding decisions. European Journal of Cancer, 46(15), 2674–2680. https://doi.org/10.1016/j.ejca.2010.06.122
Horwitz, J. (2020, May 21). Facebook to Shift Permanently Toward More Remote Work After Coronavirus. Wall Street Journal. https://www.wsj.com/articles/facebook-to-shift-permanently-toward-more-remote-work-after-coronavirus-11590081300
ICH Harmonised Tripartite Guideline: Pharmaceutical Development Q8(R2). (2009). International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use.
Kimko, H., & Pinheiro, J. (2015). Model-based clinical drug development in the past, present and future: A commentary. British Journal of Clinical Pharmacology, 79(1), 108–116. https://doi.org/10.1111/bcp.12341
Kliff, S. (2016, November 30). The true story of America's sky-high prescription drug prices. Vox. https://www.vox.com/science-and-health/2016/11/30/12945756/prescription-drug-prices-explained
Kohannim, O., Hua, X., Hibar, D. P., Lee, S., Chou, Y.-Y., Toga, A. W., Jack, C. R., Weiner, M. W., & Thompson, P. M. (2010). Boosting power for clinical trials using classifiers based on multiple biomarkers. Neurobiology of Aging, 31(8), 1429–1442. https://doi.org/10.1016/j.neurobiolaging.2010.04.022
Lakdawalla, D. N. (2018a). Economics of the Pharmaceutical Industry. Journal of Economic Literature, 56(2), 397–449. https://doi.org/10.1257/jel.20161327
Lakdawalla, D. N. (2018b). Economics of the Pharmaceutical Industry. Journal of Economic Literature, 56(2), 397–449. https://doi.org/10.1257/jel.20161327
Long, H., & Van Dam, A. (n.d.). U.S. unemployment rate soars to 14.7 percent, the worst since the Depression era. Washington Post. Retrieved May 29, 2020, from https://www.washingtonpost.com/business/2020/05/08/april-2020-jobs-report/
Lounnas, V., Ritschel, T., Kelder, J., McGuire, R., Bywater, R. P., & Foloppe, N. (2013a). Current progress in Structure-Based Rational Drug Design marks a new mindset in drug discovery. Computational and Structural Biotechnology Journal, 5. https://doi.org/10.5936/csbj.201302011
Lounnas, V., Ritschel, T., Kelder, J., McGuire, R., Bywater, R. P., & Foloppe, N. (2013b). Current progress in Structure-Based Rational Drug Design marks a new mindset in drug discovery. Computational and Structural Biotechnology Journal, 5. https://doi.org/10.5936/csbj.201302011
March, R. J. (2017). Entrepreneurship in Off-Label Drug Prescription: Just What the Doctor Ordered! Journal of Private Enterprise, 32(3), 75–92.
Meek, T. (2013, March 25). This month in 1980: 33 years since cyclosporine demonstrated its potential as an immunosuppressant. PMLive; PMGroup Worldwide Limited. http://www.pmlive.com/pharma_news/33_years_since_cyclosporine_demonstrated_its_potential_as_an_immunosuppressant_468977
Moors, E. H. M., Cohen, A. F., & Schellekens, H. (2014). Towards a sustainable system of drug development. Drug Discovery Today, 19(11), 1711–1720. https://doi.org/10.1016/j.drudis.2014.03.004
National Register of Historic Places Inventory - Nomination Form: Ford River Rouge Complex. (n.d.). United States Department of the Interior, National Park Service.
Navigating Drug Discovery with High-Throughput Screening. (n.d.). Drug Discovery from Technology Networks. Retrieved May 15, 2020, from https://www.technologynetworks.com/drug-discovery/articles/navigating-drug-discovery-with-high-throughput-screening-297350
Palmer, M., & Chaguturu, R. (2017). Academia–pharma partnerships for novel drug discovery: Essential or nice to have? Expert Opinion on Drug Discovery, 12(6), 537–540. https://doi.org/10.1080/17460441.2017.1318124
Pearl, R. (n.d.). 4 Regulations That Would Terrify U.S. Drug Companies Ahead Of The 2018 Midterms. Forbes. Retrieved May 23, 2020, from https://www.forbes.com/sites/robertpearl/2018/07/16/drug-companies/
Pichler, B. J., Wehrl, H. F., & Judenhofer, M. S. (2008). Latest Advances in Molecular Imaging Instrumentation. Journal of Nuclear Medicine, 49(Suppl 2), 5S–23S. https://doi.org/10.2967/jnumed.108.045880
Rouillard, A. D., Hurle, M. R., & Agarwal, P. (2018). Systematic interrogation of diverse Omic data reveals interpretable, robust, and generalizable transcriptomic features of clinically successful therapeutic targets. PLoS Computational Biology, 14(5), e1006142. https://doi.org/10.1371/journal.pcbi.1006142
Schmalz, F. (n.d.). The coronavirus outbreak is disrupting supply chains around the world—Here's how companies can adjust and prepare. Business Insider. Retrieved May 29, 2020, from https://www.businessinsider.com/covid-19-disrupting-global-supply-chains-how-companies-can-react-2020-3
Seife, C. (2015). Research Misconduct Identified by the US Food and Drug Administration. JAMA Internal Medicine, 175(4), 567. https://doi.org/10.1001/jamainternmed.2014.7774
Shouldn't the U.S. Government do more to regulate high drug prices? (2016, July 20). BIO / MRP. https://www.drugcostfacts.org/drug-pricing-regulations
Smith, C. (2003). Drug target validation: Hitting the target. Nature, 422(6929), 342–345. https://doi.org/10.1038/422341a
Smith, E. (2020, February 28). Global stocks head for worst week since the financial crisis amid fears of a possible pandemic. CNBC. https://www.cnbc.com/2020/02/28/global-stocks-head-for-worst-week-since-financial-crisis-on-coronavirus-fears.html
U.S. shed 20.5 million jobs in April, unemployment at 14.7%. (2020, May 8). Marketplace. https://www.marketplace.org/2020/05/08/covid-19-april-unemployment-rate/
Vamathevan, J., Clark, D., Czodrowski, P., Dunham, I., Ferran, E., Lee, G., Li, B., Madabhushi, A., Shah, P., Spitzer, M., & Zhao, S. (2019). Applications of machine learning in drug discovery and development. Nature Reviews Drug Discovery, 18(6), 463–477. https://doi.org/10.1038/s41573-019-0024-5
Vernon, J. A., Hughen, K., & Golec, J. H. (2008). Future of drug development: The economics of pharmacogenomics. Expert Review of Clinical Pharmacology, 1(1), 49–59. https://doi.org/10.1586/17512433.1.1.49
Vertex Expands into New Disease Areas and Enhances Gene Editing Capabilities Through Expanded Collaboration with CRISPR Therapeutics and Acquisition of Exonics Therapeutics. (n.d.). Vertex Pharmaceuticals. Retrieved May 26, 2020, from https://investors.vrtx.com/news-releases/news-release-details/vertex-expands-new-disease-areas-and-enhances-gene-editing
Vertex Pharmaceuticals to Acquire Aurora Biosciences for $592 Million in Stock. (2001, April 30). GenomeWeb.
Vertex to Acquire Semma Therapeutics With a Goal of Developing Curative Cell-Based Treatments for Type 1 Diabetes. (n.d.). Vertex Pharmaceuticals. Retrieved May 26, 2020, from https://investors.vrtx.com/news-releases/news-release-details/vertex-acquire-semma-therapeutics-goal-developing-curative-cell
Vertex to end sales of hepatitis C drug Incivek. (2014, August 13). Reuters.
Wagner, J. L., & McCarthy, E. (2004). International differences in drug prices. Annual Review of Public Health, 25, 475–495. https://doi.org/10.1146/annurev.publhealth.25.101802.123042
Wells, W. (1998). Coming Full Circle: Vertex Pharmaceuticals, Inc. Chemistry & Biology, 5, R267–R268.
Werth, B. (2014). The billion-dollar molecule: The quest for the perfect drug.
Will All Of This Stimulus Cause Runaway Inflation? Not So Fast. (n.d.). Russell Investments. Retrieved May 29, 2020, from https://russellinvestments.com/us/blog/will-all-of-this-stimulus-cause-runaway-inflation-not-so-fast
The Neurobiology of Eating Disorders

STAFF WRITERS: ANAHITA KODALI, DANIEL CHO, AUDREY HERRALD, SOPHIA ARANA
BOARD WRITERS: LIAM LOCKE, MEGAN ZHOU

Cover image source: original figure created with Visme
Introduction

Many individuals have healthy eating patterns: they eat when they are hungry, stop when they are full, and consume a balanced diet full of nutrients. However, some individuals do not eat when they are hungry, and others do not stop when they are full. If the eating patterns of these individuals begin to interfere with their physical or mental well-being, they may be diagnosed with an eating disorder. The American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, currently in its fifth edition (DSM-V), characterizes the three most commonly diagnosed eating disorders: anorexia nervosa, binge eating disorder, and bulimia nervosa (American, 2013). Although there are other ways in which eating can become disordered, these are the most prevalent and will be the focus of this article.

Anorexia nervosa (AN) is an eating disorder
characterized by food-restrictive behaviors, an intense fear of gaining weight, a strong emphasis on body image in self-evaluation, and a harm-avoidant personality (Nettersheim et al., 2018). Patients with AN are usually severely underweight, and many have comorbid psychopathologies including depression, obsessive-compulsive disorder, and substance use disorders. Furthermore, AN is very difficult to treat because patients often value their food-restrictive behaviors (Basu & Chakraborty, 2010). Due to this resistance to treatment and the serious health complications that occur during starvation, anorexia has the highest mortality rate of any psychiatric disorder, including depression (Smink et al., 2012).

Binge eating disorder (BED) is characterized by recurrent episodes of overeating, known as binges, in which patients experience a loss of control over their food intake (American, 2013). While patients with AN avoid eating because it causes dysphoria, patients with BED experience a temporary improvement in mood following a binge eating episode. Some researchers have classified BED as a kind of 'food addiction' due to the similar neurobiological adaptations that occur in response to calorically dense foods and drugs of abuse (Volkow et al., 2017).

Patients diagnosed with bulimia nervosa (BN) must report episodes of binge eating followed by compensatory purging at least once a week for three months to fit diagnostic criteria. The binge-purge cycle is originally a goal-directed behavior (i.e., purging as a volitional act), but the cycle eventually becomes a habit in which patients report a loss of control during purging episodes. Purging takes many forms, including voluntary emesis (vomiting), enemas, laxatives, diuretics, abuse of prescription medications, and even excessive exercise (Nettersheim et al., 2018). Notably, the prevalence of AN and BN is approximately four times higher in women than in men.

The etiology and maintenance of an eating disorder involve a complicated interaction between biological and social factors. The diathesis-stress model (Figure 1), initially used to describe the development of schizophrenia, may be used to understand the development of an eating disorder (Joiner et al., 2013). Individuals with a genetic predisposition to disordered eating will only develop the disorder if the appropriate environmental stressors are encountered (e.g., childhood trauma, social pressure, poor self-esteem). The primary focus of this article is the biological mechanisms that lead to the development and maintenance of disordered eating, although environmental factors are discussed where appropriate. A ground-up approach will be employed to develop an understanding of the three disorders introduced above. First, genes associated with the disease will be identified. Second, the effects of these genes and gene products on neuronal activity will be analyzed. Third, a review of clinical neuroimaging studies will examine whether the molecular research agrees with its clinical manifestation. Finally, a graphical model for the development and maintenance of each disorder will be proposed. Current treatment options will be introduced after the discussion of each disorder.

Figure 1: The diathesis-stress model of psychiatric disorders suggests that genetic predisposition and environmental triggers are both required for the disorder to develop. Source: Wikimedia Commons
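Read quantitatively, the diathesis-stress model of Figure 1 amounts to a threshold on combined liability: onset occurs only when predisposition and stress together cross some line. The toy sketch below makes that reading literal; the additive liability scale and the threshold value are arbitrary illustrative choices, not quantities estimated from data.

```python
# Toy threshold version of the diathesis-stress model (illustrative only).

THRESHOLD = 1.0  # arbitrary onset threshold on a made-up liability scale

def crosses_threshold(genetic_load, stress):
    """True when combined liability exceeds the onset threshold."""
    return genetic_load + stress > THRESHOLD

print(crosses_threshold(genetic_load=0.8, stress=0.1))  # False: high risk, low stress
print(crosses_threshold(genetic_load=0.8, stress=0.4))  # True: same risk plus a stressor
print(crosses_threshold(genetic_load=0.2, stress=0.6))  # False: low risk withstands stress
```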
Anorexia Nervosa

Patients with AN exhibit extreme executive control over their diets and physical activity, and are able to override their body's physiological response to hunger signals and starvation. Several factors contribute to the onset of AN; a combination of genetics, culture, and environment results in a dysregulation of neuronal biochemistry and circuit function (see Figure 2). The following sections discuss each of these factors.

1.1 Genetics and Environment

Clinicians and researchers are becoming increasingly aware of the biological factors leading to the development of AN. Family and twin studies have shown that people with first-degree relatives with AN are ten times more likely to develop AN themselves. While there is a lack of consistency due to small sample sizes, it is believed that genetic factors might account for as much as 80% of AN onset (Holland et al., 1988). Genome-wide association studies have identified eight chromosomal regions that segregate with the disease across chromosomes 1, 2, 3, 10, and 11. Genes on these chromosomes may contribute to AN-related dysregulation of metabolism, hunger-regulatory systems, and reward pathways (Watson et al., 2019; Rask-Andersen et al., 2010). Additionally, multiple regions on chromosomes 2 and 13 have been linked to obsessive tendencies and drive for thinness (Pinheiro et al., 2009). In total, 128 polymorphisms (changes in DNA bases) in 43 genes have been related to the onset of AN; however, some of these regions may simply segregate with the disease without increasing vulnerability (Rask-Andersen et al., 2010).
Associations between serotonin-related genes and AN have been replicated in many studies. Serotonin (also known by its chemical name 5-hydroxytryptamine, or 5-HT) is a monoamine neurotransmitter derived from the amino acid tryptophan, which is consumed in a regular diet. Serotonin is thought to act as a mood stabilizer and is generally associated with good mood; however, abnormally high serotonin levels, as well as serotonin hypersensitivity at normal levels, have been linked to dysregulation of mood and appetite (Duvvuri et al., 2010). Patients with AN have been shown to have higher amounts of 5-HT2A receptor mRNA, resulting in increased serotonin binding and signal transduction upon food consumption (Klump et al., 2001). The serotonin receptor HTR1D has also been heavily implicated in AN's pathophysiology (Pinheiro et al., 2009).

Figure 2: AN is the product of several factors, including both psychiatric risk factors and metabolic components, some of which are shown above. Source: Wikimedia Commons
Dopamine-related genes also play a key role in the onset of AN (see Figure 3). Dopamine (DA) is involved in behavioral reinforcement and is thought to underlie the negative affect experienced immediately after eating in patients with AN (Barry & Klawans, 1976). Dopamine receptors are overproduced in patients with AN, leading to anxiety and harm avoidance (Bailer et al., 2012), as well as an ability to abstain from normally pleasurable things such as food (Kontis & Theochari, 2012). Certain genetic alterations in DRD4 (a D2-type receptor) and DAT1 (a DA transporter) increase dopamine sensitivity and synaptic lifetime, respectively. The resulting overstimulation of dopamine receptors is thought to cause dysphoria following eating in patients with AN. Two specific polymorphisms, DRD4 C616G and DAT1 VNTR, have also been correlated with AN's clinical symptoms; patients with these mutations scored higher than those without them on all but one section of the Eating Disorder Inventory Test-2, a test that evaluates several factors related to AN, including drive for thinness and impulse control. Interestingly, different variants of the DRD4 gene have been associated with increased BMI and may represent a possible therapeutic target (Gervasini et al., 2013).

Generally, AN onset has been shown to cause stark epigenetic changes, including methylation at 58 different sites on the genome. The genes affected include those involved in lipid and glucose metabolism, serotonin receptor activity, and immune system function (Steiger et al., 2019). The impact on the immune system may be related to changes in CPA3 and GATA2 expression, as both are positively correlated with changes in leptin. CPA3 is a carboxypeptidase expressed in mast cells, which contribute to inflammatory diseases, while GATA2 has been associated with deficiencies in certain immune cells (Baker et al., 2019).

Environmental and cultural factors also contribute to the onset of AN. As a whole, females are more likely than males to develop the disorder. Puberty has been implicated as a critical risk factor for AN onset in girls; this is at least partially due to estrogen's impacts on brain development (Klump, 2014). Sociocultural pressures may also contribute to this disparity; women are more likely than men to express dissatisfaction with weight and to use dieting measures (Striegel-Moore et al., 2009). Cultural transitions and uncertain circumstances are another major risk factor, and these issues will likely be augmented by industrial development, urbanization, and globalization (Zipfel et al., 2015). In particular, female Asian immigrants to Western nations are more likely than native citizens to develop AN; in turn, less acculturated immigrants have
been found to be more likely than their more assimilated peers to develop AN (Jennings et al., 2005), and these findings may generalize to immigrants to any new nation. Finally, family has a significant impact on the probability of developing AN (especially considering that most individuals in a household share similar genetic and environmental circumstances). Children with AN often come from households that promoted normalcy, discipline, and image-adherence; in addition, familial histories of eating disorders, depression, and alcoholism are considered strong risk factors (Wozniak et al., 2012).

1.2 Biochemical and Cellular Changes

Several neurochemical processes are implicated in the development of AN. Critical to AN's pathophysiology is the hypothalamus, the brain's control center for feeding behavior. In patients with AN, communication between the gut and the brain is severely disrupted. Researchers have noted that upon eating, hypothalamic levels of glutamate and glutamine, two of the body's most abundant amino acids, decrease. This alteration is important because glutamate is the major excitatory neurotransmitter in the central nervous system (CNS), so the decrease in glutamate after eating is associated with a general inhibition of the hypothalamus. Specifically, decreasing glutamate concentrations dysregulate connectivity to the arcuate nucleus and lateral hypothalamus, both of which have critical roles in satiety and energy balance. This dysregulation disrupts satiety cues and may contribute to AN onset (Florent et al., 2019).
In turn, AN causes many bodily changes that negatively impact physical health. Patients with AN often have constipation, low BMI, low blood pressure, and amenorrhea (a pause of menstruation) (Sidiropoulos, 2007). Perhaps the most profound changes occur in the body's hormonal pathways. Levels of the gastrointestinal hormones involved in the regulation of eating, including ghrelin, GLP-1, and PYY, are often significantly higher than baseline values. There are conflicting results regarding the impact of AN on concentrations of CCK (another gastrointestinal hormone involved in the control of satiety), but some patients have been shown to have increased levels. Adrenal hormones like cortisol also increase, and the interactions between ghrelin and cortisol may facilitate eating dysfunction following periods of extreme stress (Culbert et al., 2016). At the same time, other hormones are lowered. Leptin is a hormone released from adipose (fat) tissue that regulates food intake and energy expenditure. Patients with AN have low blood concentrations of leptin, directly correlating with the reduced body fat of these individuals (Hebebrand et al., 2007). Disturbances to leptin and the other aforementioned hormones are a probable contributor to core symptoms of AN, such as dietary restriction.

Sex hormones are also disrupted: similar to leptin, the body's circulating levels of estrogen are significantly below baseline, causing symptoms like headaches and mood swings; these low levels can also be a risk factor for depression (Warren, 2011). Disruption of ovarian hormones can in turn decrease serotonin and dopamine production, leading to dysregulated CNS circuitry (Culbert, Racine, & Klump, 2016).

Figure 3: This diagram shows the myriad structures impacted by DA. Dopaminergic neurons, the main site of DA synthesis, are located within midbrain nuclei, including the ventral tegmental area (VTA) and the substantia nigra (SNc). Once DA is produced in the midbrain, it is released into its target synapses through three different pathways (nigrostriatal, mesolimbic, and mesocortical) and can affect regions of the brain involved in executive and cognitive function. Source: Wikimedia Commons

1.3 Anorexia Neurocircuitry

Figure 4: The anterior cingulate cortex is highlighted in this MRI. Activity in this region is lowered in patients with AN, which contributes to deficits in set-shifting and behavioral flexibility. Source: Wikimedia Commons
Figure 5: Proposed model of anorexia nervosa neurobiology. Source: original figure created using Visme
Clinical neuroimaging studies provide further support that dopamine- and serotonin-related gene variants contribute to the etiology and maintenance of AN. Positron emission tomography (PET) is an imaging technique that uses a radiolabeled tracer to detect biochemical processes in living tissue (Vaquero & Kinahan, 2015). PET studies have shown increased binding of D2 and D3 receptors in the ventral striatum of patients with AN relative to age-matched controls in response to pleasurable tasks, including food ingestion (Kaye et al., 2013; Frank et al., 2005) and amphetamine administration (Avena & Bocarsly, 2012). While controls report these tasks as pleasurable (or even euphoric), patients with AN report discomfort and increased anxiety, suggesting that hypersensitivity of the dopamine reward system facilitates aversion
and inhibition in these individuals rather than its usual functions of reward and motivation (Kaye et al., 2013). Alterations in the serotonin system also contribute to the risk-averse phenotype observed in patients with AN. A consistent finding in the early literature on anorexia is a low level of serotonin metabolites in the cerebrospinal fluid, possibly as a result of starvation (Kaye et al., 1991). Low activity of nuclei expressing serotonin receptors is also thought to facilitate increased executive control over feeding behaviors. SPECT studies, which measure changes in regional cerebral blood flow, have shown that patients with AN
have decreased delivery of blood to the medial prefrontal cortex (mPFC), an area important for reward learning (Kaye et al., 2013). Overexpression of 5-HT1A receptors and decreased expression of 5-HT2A receptors in the mPFC are thought to have a hyperpolarizing effect on these neurons, resulting in an increased ability to delay rewards (Winstanley et al., 2006). The parietal lobe, a brain area where sensory and visual information is consolidated to create an image of the body (Shimada et al., 2005), is another region central to the pathophysiology of AN. Several functional neuroimaging studies have found decreased parietal lobe function in patients with AN, which may contribute to distorted body image and poor eating habits. It had previously been observed that those with lesions in the right parietal lobe were at risk of developing anosognosia, a condition in which one lacks self-awareness of a disease one has (McGlynn & Schacter, 1989). In this way, decreased parietal lobe function could explain why those with AN often seem unaware of their actual body size and of their disordered eating habits. Furthermore, the parietal lobe is a sexually dimorphic area that is smaller in females, which may contribute to the higher prevalence of AN in women (van Kuyck et al., 2009). Other areas implicated in AN are the anterior and subgenual cingulate cortices, regions associated with arousal and emotional processing. The anterior cingulate cortex is involved in body image perception and in processing emotions associated with eating (De Araujo & Rolls, 2004), while the subgenual cingulate cortex is involved in regulating emotions (Rudebeck et al., 2014). In patients with AN, a decrease in cerebral activity in these regions was observed both before and after clinical treatment, suggesting that lower activity of these areas is a stable trait in anorexia (van Kuyck et al., 2009). The temporal lobe, which contains the amygdala and the insula, is another region involved in abnormal sensory processing and body image perception in AN. The amygdala is responsible for fear conditioning and anxiety (Davis & Whalen, 2001), while the insula is involved in the perception and integration of bodily states (Nunn et al., 2008). SPECT studies have found that individuals with AN have increased perfusion of blood in the temporal lobe, including the amygdala-hippocampal region and the right insula. Other studies, which
presented anorexic patients with triggering stimuli such as pictures of high-calorie food, found high levels of activation in the insula (van Kuyck et al., 2009). Higher-than-average levels of activity in the temporal lobe, including the amygdala and insula, may explain why patients with AN have an intense fear of gaining weight. Another temporal lobe area implicated in AN is the fusiform face area (FFA), which is responsible for the perception and analysis of facial details. This area is overly active in AN compared to controls, which could contribute to the overly critical manner in which patients with AN perceive their bodies (Li et al., 2015).
Binge Eating Disorder
If AN is at one end of the eating disorder spectrum, BED is at the other. While patients with AN exhibit enough executive control over feeding behaviors to override the physiological hunger signals from gut hormones, patients with BED have reduced executive control, are less risk-averse, and are more sensation-seeking than healthy controls (Giel et al., 2017). Similar to anorexia and other psychiatric disorders, however, BED depends on both genetic predisposition and environmental stressors. Genes that make foods with high fat and sugar content more pleasurable may result in increased reward and positive reinforcement. Individuals with BED often use excessive consumption of food as a coping strategy, and this behavior can significantly impair mental and physical wellbeing, sometimes contributing to the onset of diabetes or obesity (Munsch & Herpertz, 2011). The following sections characterize the manifestation and maintenance of BED from the perspective of positive reinforcement, in which a behavior is strengthened by the positive emotions it elicits.
“If AN is at one end of the eating disorders sepectrum, BED would be on the other...”
2.1 Genetics and Environment

As is the case with nearly all psychiatric disorders, both genetic and environmental factors contribute to BED onset. Invalidating childhood environments (particularly “achievement-oriented” parenting styles), a tendency toward negative self-evaluation, shyness, and extreme compliance have all been associated with an increased risk of BED (Fairburn et al., 1998). Obesity and BED are commonly comorbid, and many of the risk factors for obesity are also implicated in BED: low self-esteem, sexual or physical trauma, and a history of emotional abuse or neglect are all examples (Hilbert et al., 2014). Many of these environmental risk factors have been identified through family and twin studies, and such studies have also helped identify genetic risk factors.
Figure 6: This striatal medium spiny neuron is filled with eGFP. MSNs play a role in the positive reinforcement patterns of BED that mimic drug addiction. Source: Wikimedia Commons

BED aggregates strongly in families, independent of obesity, and is significantly more prevalent in first-degree relatives than in the general population (Fowler & Bulik, 1997; Hudson et al., 2006; Lilenfeld et al., 2008). Twin studies give a heritability estimate of 45% (Trace et al., 2013). Genetic variants in the DA reward system have been thoroughly investigated with regard to BED risk, and many of these mutations have been shown to have the opposite effect of those seen in AN. Davis and colleagues genotyped individuals with BED, normal-weight individuals, and individuals with obesity for several single-nucleotide polymorphisms of the dopamine receptor DRD2 (including Taq1A, −141 Ins/Del, and C957T). They found that individuals with BED are significantly more likely to be homozygous for the Taq1A polymorphism and the C957T marker, both of which reflect enhanced DA signal transduction (Davis et al., 2012). These variants make rewarding activities (i.e., binge eating) more pleasurable, and the increased concentrations of synaptic dopamine that follow binges cause these receptors to be pulled out of the synapse. The result is a DA hyposensitivity that requires greater input to generate the same level of reward during the next episode of binge eating; that is to say, patients with BED develop a 'food tolerance' (Volkow et al., 2017). Some evidence suggests that 5-HT gene variants may also contribute to genetic susceptibility to BED (Monteleone et al., 2006); however, most evidence implicates the DA reward system.
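For readers unfamiliar with how twin studies yield figures like the 45% estimate above: the classic shortcut is Falconer's formula, which doubles the gap between monozygotic (MZ) and dizygotic (DZ) twin correlations. The correlations below are invented to reproduce a 45% figure; published estimates such as Trace et al. (2013) come from full model fitting, not this exact arithmetic.

```python
# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ).
# MZ twins share ~100% of segregating genes and DZ twins ~50%, so doubling
# the difference in their phenotypic correlations estimates heritability.

def falconer_heritability(r_mz, r_dz):
    return 2 * (r_mz - r_dz)

r_mz = 0.50   # hypothetical MZ twin correlation for binge eating
r_dz = 0.275  # hypothetical DZ twin correlation

print(f"h^2 = {falconer_heritability(r_mz, r_dz):.2f}")  # 0.45
```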
2.2 Biochemical and Cellular Changes

The dopamine reward system has evolved to reinforce behaviors suited for survival. Eating, drinking, social activity, exercise, and sex are all natural rewards that signal DA release in the ventral striatum, a key node in the reward pathway (Rada et al., 2005). Modern diets contain calorically dense foods with high sugar and fat content (e.g., cookies, cakes, ice cream), which can produce ventral striatal DA concentrations up to two times higher than those associated with unprocessed foods like fruit and meat. While the high DA concentrations tied to many processed foods are not as high as those associated with drugs of abuse, BED does bear pathological similarities to drug addiction (Volkow et al., 2017).

The positive reinforcement patterns characteristic of BED (and drug addiction) are dictated by dynamic control of synaptic weighting in the striatum by two neurotransmitter systems: dopamine and glutamate. Medium spiny neurons (MSNs) in the striatum have 'spiny' dendritic morphology and express gamma-aminobutyric acid (GABA), an inhibitory neurotransmitter that 'turns off' the synapses in which it is released. Two types of DA receptors are expressed in these MSNs, and each receptor defines a different pathway through the striatum. D1-type receptors characterize the direct pathway, while D2-type receptors characterize the indirect pathway. The direct pathway is an 'accelerator': it excites the basal ganglia output nuclei and intensifies behavior. The indirect pathway inhibits these basal ganglia nuclei and can be thought of as a 'brake' (Yager et al., 2015).

Dopamine receptors are G-protein coupled receptors (GPCRs), which release protein complexes intracellularly when bound by an agonist in the synaptic cleft. The G-proteins associated with D1-type receptors are excitatory: they increase cAMP production and turn on CREB, a transcription factor required for strengthening synapses. D2-type receptors are bound to inhibitory G-proteins, which turn off CREB and weaken synapses. The binary effects of D1- and D2-type receptors result in strengthening of the direct pathway and weakening of the indirect pathway when dopamine is released in the striatum (an opposite adaptation is seen in response to an aversive stimulus). The result is an increased reward associated with the stimulus and an increased probability of behavioral initiation on future encounters (Nestler, 2011).
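As a rough illustration of the weighting logic just described (a caricature, not a biophysical model), the sketch below treats the direct and indirect pathways as two scalar weights that each dopamine surge pushes in opposite directions. The learning rate, initial weights, and the twofold dopamine figure are simplifications of the text above.

```python
# Toy model of striatal synaptic weighting under repeated binge episodes.

LEARNING_RATE = 0.1

def update_pathways(w_direct, w_indirect, dopamine):
    """One rewarding event: DA potentiates direct, depresses indirect."""
    w_direct += LEARNING_RATE * dopamine    # D1: excitatory G-protein, CREB on
    w_indirect -= LEARNING_RATE * dopamine  # D2: inhibitory G-protein, CREB off
    return w_direct, max(w_indirect, 0.0)   # a weight cannot go below zero

w_go, w_stop = 1.0, 1.0
for _ in range(5):  # five calorically dense binges, ~2x the DA of plain food
    w_go, w_stop = update_pathways(w_go, w_stop, dopamine=2.0)

print(f"direct ('go') = {w_go:.1f}, indirect ('brake') = {w_stop:.1f}")
# After repeated binges the 'go' signal dwarfs the weakened 'brake',
# mirroring the reinforcement loop the text proposes for BED.
```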
Figure 7: Proposed model of binge eating disorder neurobiology. Source: original figure created using Visme
Since patients with BED have decreased concentrations of D2-type receptors in the ventral striatum, the indirect pathway is less active in these individuals. This is a possible mechanism behind poor regulatory control over feeding behaviors. However, it is important to recognize that behavioral initiation in humans is ultimately under cognitive control (i.e., dependent on regulation from the frontal cortex). In BED and addiction, it is thought that low activity of the frontal cortex is insufficient to overcome the signals for behavioral initiation sent by the striatal direct pathway (Volkow et al., 2017). The following section discusses some of the neuroimaging studies that highlight impulsivity in BED.

2.3 Binge Eating Neurocircuitry

Patients with BED are thought to have reduced inhibitory control over their behaviors, as
previous studies have shown that BED is associated with increased impulsivity, elevated compulsivity, and altered sensitivity to reward and punishment (Kessler et al., 2016). Additionally, patients may experience impairments in decision-making. A study testing a group of women with BED in a game of dice found that they made riskier decisions than overweight women who were otherwise healthy (Svaldi et al., 2009). Given the suspected role of the striatum and insular cortex in mediating these cognitive functions, various studies have sought to determine whether the function of these brain regions is altered. In a study where participants were shown images of high-calorie food, patients with BED exhibited increased left and right ventral striatal activation
compared to controls, demonstrating that the predicted reward in response to food cues is greater in patients with BED (Weygandt et al., 2012). Moreover, Balodis and colleagues determined that obese individuals with BED show decreased ventral striatal and insular cortex activation upon notification of a monetary reward, suggesting an effect on reward evaluation (Balodis et al., 2013). The prefrontal cortex is another area of the brain known to mediate impulse-control networks. In various task performances, obese individuals with BED show decreased activity in the ventromedial prefrontal cortex, inferior frontal cortex, and insular cortex, all areas of the brain that control impulses (Balodis et al., 2013). In isolation, these regions do not facilitate the onset of binge eating behavior, but paired with the increased reward response to calorically dense foods facilitated by the dopamine reward system, frontal cortex inhibition is insufficient to overcome the drive to binge eat.

Figure 8: Bulimia nervosa is accompanied by many physical health complications. Source: Wikimedia Commons
Bulimia Nervosa
One theory surrounding the binge-purge cycle of BN is that patients with BN have a genetic predisposition to both AN and BED in addition to an environmental pressure to be thin (Nettersheim et al., 2018). In fact, a recent study suggests that BN likely shares much of its genetic and environmental etiology with AN, since both disorders have been linked to a shared set of chromosomes (Yao et al., 2019). Additionally, pathological concern with weight and shape is a relevant risk factor for both AN and BN (Berrettini, 2004), and serotonergic modulation of mood seems to underlie appetite dysregulation and obsessive behaviors in both disorders. Although the elevated attention paid to body shape and the obsessive behavioral patterns associated with BN are also found in AN, the mesolimbic dopamine pathway in patients with BN shares a striking similarity to the reward pathway in BED. The neurobiology of BN may therefore be viewed as a kind of intersection between AN and BED (Kaye, 2008). Much of the research addressing the etiology of BN also pertains to AN, likely due to the following similarities: extreme behavioral shifts associated with weight loss, distorted body image, withdrawal from social engagements, and elevated focus on food (Eddy et al., 2008). Despite these similarities, BN is characterized by a unique set of binge-purge behaviors, and these behaviors are used to identify patients with BN for participation in the genetic, environmental,
and biomolecular studies.

3.1 Genetics and Environment

Twin and familial studies reveal that relatives of individuals with BN are more likely to develop the disorder than relatives of unaffected controls, suggesting a genetic component to BN etiology (Bulik, 2000). Due to small sample sizes, heritability estimates for BN vary widely, with some studies attributing as little as 28% of variation to genetics and others as much as 83% (Bulik et al., 2000; Strober et al., 2000; Lilenfeld et al., 1998). One team of researchers conducted a genome-wide analysis of chromosomal linkage among BN-prevalent families and found significant linkages on chromosomes 10 and 14 (Bulik et al., 2003a). These risk factors have been investigated most thoroughly in candidate-gene association studies of receptors implicated in the serotonergic, dopaminergic, and appetite-regulating neural systems. Some researchers have shifted their focus from 5-HT receptors to the dopamine system, scanning for potential similarities between dopamine function in BN, BED, and addiction. Researchers have found reduced DA release in the dorsal striatum (caudate and putamen), a region implicated in addictive behaviors (Broft et al., 2012). This addiction-mirroring DA alteration, however, does not account for the entirety of BN pathology. Though a small number of studies have investigated potential associations between BN and a DRD2 polymorphism (Trace et al., 2013), the weight of evidence suggests that the binge-purge cycle of BN might be more closely tied to ghrelin-related genes and the appetite regulation system. Miyasaka and colleagues found a significant association between BN and a polymorphism of the GHSR ghrelin-receptor gene (Miyasaka et al., 2006). Though these results are promising, they
will require replication before the ghrelin-BN link can be stated with certainty. BN genetic research has also investigated the endocannabinoid and estrogen systems, and scientists have found associations between BN and the estrogen receptor ERβ, as well as the cannabinoid receptor CB1R (Nilsson et al., 2004; Gerard et al., 2011). Though preliminary, these results suggest that the estrogen and endocannabinoid systems could be impacted in BN, likely acting in tandem with appetite-regulating ghrelin genes (Hildebrandt et al., 2010). Although twin studies consistently suggest that BN is influenced by genetic factors, molecular genetic studies have not yet been adequate in scope or design to identify susceptibility loci. A cohesive genome-wide association study of BN is a reasonable next step that could guide future genetic explorations.

3.2 Biochemical and Cellular Changes

Patients with BN have dopaminergic deficits similar to those seen in BED and drug addiction. PET studies have shown that both ill and recovered patients have reduced binding of striatal D2-type dopamine receptors relative to controls (Broft et al., 2012). Reduced D2-receptor binding is a stable neuroanatomical change in individuals with poor regulatory control over consumptive behaviors (Yager et al., 2015). As discussed, dynamic control of synaptic weighting in striatal medium spiny neurons is an essential component of reward learning. Increased D1-type receptor binding and decreased D2-type receptor binding facilitate recurrent binge eating episodes in both BED and BN. While the dopamine system in patients with BN is similar to that in BED and drug addiction, the serotonin system is more similar to that in AN and is thought to facilitate purging behaviors. Patients with BN have higher CNS concentrations of the 5-HT metabolite 5-hydroxyindoleacetic acid (Steiger et al., 2001), and PET studies have found higher levels of 5-HT1A receptor binding in ill and recovered patients, while 5-HT transporter binding did not differ between patients and controls (Bailer & Kaye, 2010). Taken together, these results suggest that patients with BN overproduce serotonin, which likely causes dysphoria after eating. Purging behaviors may serve as a compensatory mechanism to blunt the serotonin increase and dysphoria after a binge eating episode. Additionally, patients with BN see a larger-than-normal drop in serotonin levels after a period of fasting, which may facilitate
binge-purge cycles in these individuals (Steiger et al., 2001). Alterations of the serotonin system may also impact impulsive behaviors. Bailer and colleagues reported that 5-HT2A binding was lower in patients with BN relative to controls (especially in the left subgenual cingulate), and that reduced 5-HT binding potential was positively related to harm avoidance and negatively related to novelty-seeking behavior (Bailer et al., 2004). However, Wu and colleagues found that bulimic-type eating disorders showed impairments in inhibitory control (heightened impulsivity) in response to both general and disease-salient stimuli (Wu et al., 2013). Whether BN etiology is tied more closely to impulsivity or to harm avoidance may depend on which behaviors are examined, and a multi-factorial model may be most appropriate.

3.3 Bulimia Neurocircuitry

Two main circuits have been proposed to facilitate binge-purge cycling in BN: the dorsal frontostriatal circuit, involved in self-regulatory capacities and habit learning, and the ventral circuit, involved in reward processing and reward-based learning (Berner & Marsh, 2014). Those with BN seem to have abnormally matured frontostriatal and mesolimbic (reward) circuits. It is thought that dorsal striatal activity is too low to inhibit a binge eating episode, and that these individuals are more susceptible to the rewarding effects of calorically dense food due to decreased D2-type receptor binding in the striatum. Impaired inhibition of binging and purging by the dorsal frontostriatal circuit, paired with the high susceptibility to rewarding substances generated by the ventral frontostriatal circuit, is thought to incentivize binge-purge behaviors in vulnerable individuals.
Bulimia nervosa comes with a host of anatomical irregularities, many of which involve morphometric changes of the cerebral cortex. Some researchers suggest that these irregularities, such as deficits in cortical volume, might relate to impaired functioning of the neural circuits localized in those areas of the brain. In one such investigation, Marsh and colleagues detected significant reductions of local volumes on the brain surface in frontal and temporoparietal areas in participants with BN compared with control participants, suggesting that local volumes of inferior frontal regions are smaller in individuals with BN than in healthy individuals. These reductions along
Figure 9: Proposed model of bulimia nervosa neurobiology. Original figure created using Visme; see Berner & Marsh (2014) for the frontostriatal circuitry on which it is based.
the cerebral surface are thought to contribute to functional deficits in self-regulation (Marsh et al., 2015). Similar neurostructural irregularities have also been reported using MRI; a study by Cyr and colleagues found that reduced cortical thickness in inferior frontal regions of the brain coincided with the emergence of BN and its persistence into adulthood (Cyr et al., 2017). Reductions in cortical thickness may one day serve as a clinical biomarker for BN diagnosis.
Eating Disorder Treatments

Due to the unique genetic and environmental factors leading to the development of each type of eating disorder, it is difficult to isolate
one “catch-all” treatment plan. Many patients with eating disorders are resistant to being helped, possibly due to poor motivation or a positive perception of their eating behaviors, and many experience relapse even years after recovery (Abbate-Daga et al., 2013). Nonetheless, there has been rapid growth in treatment centers in recent years, providing specialty care to patients with EDs through both hospital-based and residential treatment programs (Guarda et al., 2018). Pharmacology, psychotherapy, neuroendocrine therapy, and neuromodulation are several treatment options, and patients are often recommended a combination of treatments due to psychiatric comorbidities.

4.1 Pharmacology
Patients with eating disorders often have other psychopathologies, and pharmacotherapy has seen higher efficacy in patients with mood disorders, anxiety, insomnia, and obsessive-compulsive disorder. Antidepressants, such as the selective serotonin reuptake inhibitor (SSRI) fluoxetine, have shown some success in the treatment of BN by reducing the number of binging episodes in patients, possibly through a downregulation of synaptic serotonin receptors (Milano, 2013). Notably, the bipolar treatment lithium has been associated with a significant decrease in the number of BN binge-purge episodes (McElroy et al., 2006). Carbamazepine is also effective for BN co-occurring with bipolar disorder (McElroy et al., 2012), and other agents such as olanzapine and risperidone have been found to be effective in patients with AN. While some pharmacologic treatments can address comorbidities, drugs do not always have the intended effect. For example, valproate has been shown to improve affective and bulimic symptoms in patients with bipolar disorder and comorbid BN, but, as with some atypical antipsychotics, it can exacerbate binge eating in patients with BED and comorbid bipolar or psychotic disorders (McElroy et al., 2009). Antiepileptic drugs (AEDs) are also being considered in the management of some eating disorders. Topiramate is an AED with anti-binge eating, anti-purging, and weight loss properties (McElroy et al., 2012). With a median dosage of 100 mg per day, topiramate reduces the frequency of binge eating episodes and is associated with much higher remission rates for patients with BED. For patients with BN, topiramate also led to significant decreases in binging and purging. Zonisamide, an AED used for epilepsy treatment in adults, can be used to treat AN and weight loss, and it also has shown therapeutic effects for patients with obesity (McElroy et al., 2012).
Pharmacological treatment options for EDs can be difficult to develop due to the complicated biological, psychological, and social influences on ED pathology (Thompson, 2004). Researchers have looked toward opioid antagonists with little success. Samidorphan, also known as ALKS-33, is an opioid modulator used for alcohol dependence and was initially thought to be a useful treatment for BED (McElroy et al., 2013). However, it was determined that samidorphan did not significantly reduce binging frequency. Similarly, bupropion was used to treat overweight women with binge-eating disorder but ultimately appeared useful only for enhancing the outcomes of other treatments that had already demonstrated efficacy for BED but failed to produce weight loss (White and Grilo, 2013). Despite these complications, case studies have pointed to the possibility of using naltrexone as a monotherapy for BED with concurrent alcoholism (Leroy et al., 2017). Many studies of pharmacological treatments have shown little efficacy, and many of these drugs also cause undesirable side effects. For these reasons, non-drug therapies have gained increasing attention from researchers and clinicians.

4.2 Psychotherapies

Psychotherapies are among the most successful treatments for AN, BN, and BED. From acceptance and commitment therapy (ACT) and cognitive behavioral therapy (CBT) to family-based treatment and cognitive remediation therapy (CRT), individuals with eating disorders have shown positive responses and improvements when engaging in psychotherapy. Acceptance and commitment therapy explicitly targets critical aspects of eating disorders such as high experiential avoidance, poor experiential awareness, and lack of motivation (Juarascio et al., 2013). Studies have shown that ACT is a viable treatment option for AN and BN groups, as ACT patients saw larger improvements in eating behaviors and lower rates of rehospitalization during the six months following discharge. Fundamentally, ACT works by teaching patients to distance themselves from distressing internal experiences and to create positive goals that can help them live a more fulfilling life. It increases psychological flexibility through six core processes: defusion, acceptance, contact with the present moment,
Figure 10: Fluoxetine, also known as Prozac, is an SSRI that has been successful in treating symptoms of BN, specifically by reducing binge prevalence in patients. Source: Wikimedia Commons; Creator: Tom Varco
Figure 11: Cognitive behavioral therapy focuses on the three tenets of behavior, thoughts, and feelings. Among treatments for eating disorders, CBT is one of many psychotherapies and is particularly useful for reducing bulimic behaviors. Source: Wikimedia Commons
values, committed action, and self-as-context (Manlick et al., 2013). The goal is for the individual to understand that an over-reliance on control is the problem, not the solution, so that the relationship between aversive internal thoughts or emotions and maladaptive behaviors can be recognized and changed. Cognitive behavioral therapy has been successful in greatly reducing binge eating, purging, and other compensatory behaviors among patients with BN (Juarascio et al., 2013). CBT is an important psychotherapy because bulimic patients undergoing this treatment show rapid changes in their eating patterns, which tend to be well maintained over time. However, it is important to note that the effects of CBT are seen mostly in binge eating and purging behavior rather than in cognitive symptoms (Linardon et al., 2017). Beyond comparisons with other treatment options, CBT has also been studied against variants of itself in order to learn how the treatment can be strengthened, either by identifying more precisely for whom the treatment works best or by enhancing the mechanisms hypothesized to underlie its effects (Agras et al., 2017). CBT has also been adjusted by adding new modules to the treatment or by removing ineffective ones, and the original treatment may be modified to provide easier access for a greater number of affected individuals. Given the integration of technology into daily life, CBT for eating disorders has seen further advances in patients’ access to e-mental health services (Agras et al., 2016). For patients with BED, there is strong empirical support for therapist-led CBT in both short-term and long-term outcomes, but self-help CBT is not recommended for patients with low self-esteem or high weight concerns. Furthermore, studies have shown that CBT is not very effective for patients with AN, as these patients often value their food-restrictive behaviors. Family-based treatment (FBT) and cognitive remediation therapy have been studied for their efficacy in AN patients. FBT is a well-supported intervention that mobilizes parents to help restore the patient’s weight (Le Grange et al., 2016). By engaging families to support patients with EDs, adolescents in particular benefit greatly. Phase 1 of FBT focuses on the rapid restoration of the patient’s physical health; for young patients, parents are given the responsibility of deciding when and how their child eats. Phase 2 gives this responsibility back to the adolescent, and during Phase 3, the therapist checks in with the family
(Rienecke, 2017). Beyond FBT, AN patients also benefit from cognitive remediation therapy, as it can lead to an increase in treatment motivation (Roberts, 2018).

4.3 Neuroendocrine Therapy

EDs are reward-dependent syndromes implicating alterations in brain reward circuitry for affected individuals. Brain imaging data has been instrumental in characterizing the reward circuit dysfunctions that occur in EDs (Monteleone et al., 2018). These findings point to a potential area of treatment that investigates the modification of specific hormone levels to decrease symptoms. Neuroendocrine models of treatment are still underdeveloped due to the lack of mechanistic understanding of how hormones and neurotransmitters cause and/or exacerbate EDs and their symptoms. Despite the holes in ED neuroendocrine research, several hormones have shown promise in correcting disordered eating. Patients with AN have low body fat content; consequently, circulating leptin concentrations are significantly below average. The presence of leptin receptors on dopaminergic neurons of the ventral tegmental area (VTA) may provide a possible therapeutic target for correcting reward-based deficits in AN. Activation of VTA leptin receptors releases DA into the ventral striatum, the brain’s reward center. Animal studies have shown that caloric restriction decreases leptin levels yet increases reward-related behaviors, while restoration of body weight increases leptin levels and decreases reward-related behaviors (Monteleone et al., 2018). In normal circumstances, low levels of leptin induce starvation-related food seeking by increasing dopaminergic tone in brain reward circuits. Chronically reduced leptin levels in patients with AN may impair
Figure 12: Leptin and ghrelin are hormones that modulate hunger levels. Ghrelin is released from an empty gut and is responsible for the feeling of hunger. Conversely, leptin is released from growing fat tissue and is responsible for the feeling of fullness. These two hormones, alongside others, help the body achieve energy homeostasis. Source: Wikimedia Commons
these food-seeking behaviors (Hebebrand et al., 2007). Additionally, low leptin levels have been associated with greater physical exercise (Hopkins et al., 2014). The hypothesis surrounding leptin supplementation therapy is that normal oscillations in leptin levels may restore food-related rewards in patients with AN (Giel et al., 2013). Ghrelin has been identified as an important hunger-modulating hormone. In general, ghrelin levels are elevated in patients with AN. Holsen and colleagues found that ghrelin activates reward-related brain areas in response to high-calorie foods, but that this relationship is hindered in AN patients (Holsen et al., 2012). Moreover, AN patients showed decreased ghrelin levels after exposure to palatable food, whereas healthy individuals showed an increase in ghrelin levels. Interestingly, a pilot study of female patients with AN showed increased caloric intake in patients administered intravenous ghrelin, as well as improved body weight and hypoglycemia in patients administered an intranasal ghrelin receptor agonist before meals (Lutter, 2017).
There are also several satiety hormones that are dysregulated in binge eating and bulimia. CCK and GLP-1 levels are blunted postprandially, while other studies have shown increased postprandial PYY levels in patients with BN (Lutter, 2017). These hormones may play a role in the development of binge eating, but the exact mechanism is unknown. Moreover, it is still unclear whether there are any applicable, viable neuroendocrine treatment options.

4.4 Neuromodulation

Neuromodulation refers to a collection of techniques that use physical stimuli (electrical currents and magnetic fields) to change the activity of neuronal populations. Neuromodulation is an important area of treatment for any mental illness and provides an alternative to drug-based therapy. fMRI studies have shown that abnormal development across the cerebral cortex (frontal, parietal, temporal, and occipital lobes) is coincident with the onset of AN (Fuglset et al., 2016). Other studies have conducted trials and identified the left dorsolateral prefrontal cortex (PFC) as a potential site of neuromodulation to curb the severity of ED symptoms.

Repetitive transcranial magnetic stimulation (rTMS) is the leading noninvasive approach, allowing for the stimulation or depression of specific brain areas using magnetic waves (Lutter, 2017). A double-blind study by Van Den Eynde and colleagues showed that rTMS decreased self-reported urges to eat and led to fewer binge eating episodes over a 24-hour period compared with sham-stimulation controls (Van Den Eynde et al., 2010). rTMS is currently used as a treatment for depression, and given the significant comorbidity of EDs and depression, many researchers believe that rTMS can be an important tool to treat these comorbid disorders and improve overall quality of life. rTMS has been shown to reduce feelings of fullness and anxiety (Lutter, 2017) and to improve attitudes towards body shape, weight, and food (Rachid, 2018). Many studies have shown positive short-term benefits, but the long-term efficacy has yet to be thoroughly investigated.
Figure 13: Deep brain stimulation is an invasive neurosurgical technique that inserts neurostimulators (electrodes and probes) into the brain. These neurostimulators send electrical impulses that can upregulate or downregulate activity in specific parts of the brain. Source: Wikimedia Commons
The other leading neuromodulation technique is an invasive procedure known as deep brain stimulation (DBS), in which an electrode is surgically inserted into the brain to modulate the neuronal activity of deep brain structures (Lutter, 2017). In a study by Lipsman and colleagues, 16 patients with chronic AN were tracked over a 12-month period following electrode insertion. The patients showed significant improvements in affective symptoms, BMI, and neural circuitry (Lipsman et al., 2017). Specifically, stimulation of the ventral striatum in patients with severe AN yielded remarkable improvements in body weight restoration (Lutter, 2017). DBS can also be used to reduce binge eating episodes for patients with moderate BED, but only minor improvements are seen in cases of severe binge-eating disorder. Overall, less research has been conducted on the impact of DBS on BED and BN (Prinz and Stengel, 2018). DBS has the potential to play a pivotal role in managing patients with EDs, as it has shown success in animal trials and small human trials. Moreover, DBS is easily reversible and relatively safe, with a low complication rate (Gorgulho et al., 2014). DBS may become a more popular treatment option if physicians can develop a safe, ethical framework that minimizes surgical complications and locates optimal neural targets (Park et al., 2017). Ultimately, neuromodulation is an increasingly promising treatment option as studies corroborate positive results with both rTMS and DBS in patients suffering from EDs. Nevertheless, especially in the case of BED, larger clinical trials must be carried out before more concrete claims can be made.
Conclusions
It may be informative to place the EDs discussed in this paper on a scale of executive control. On one end is AN, which represents almost complete inhibition of feeding behaviors; on the other is BED, which represents a significant loss of control. Patients with BN are also highly impulsive and are unable to control their binge eating and compensatory purging behaviors. Despite these differences, there are commonalities among the three. Many hormones are dysregulated, and many of these hormones—like leptin and ghrelin—help control hunger and satiety. Genetics, especially genes modulating serotonin and dopamine, have been implicated in the onset of each of these EDs. It is important to note that an individual will typically develop disordered eating only if a genetic predisposition is paired with the appropriate environmental triggers. There are a variety of options for treating
EDs, and each option described has shown some clinical success. However, despite the similarities between the three disorders discussed, there is no single treatment that will work for every individual suffering from an ED. The individuality of EDs requires a combinatorial approach to treatment that combines nutritional rehabilitation, clinical intervention, and psychotherapy (Gorgulho et al., 2014). As mentioned previously, there are significant biological, psychological, and social components to EDs that complicate the path to an ideal treatment plan. Moreover, stigmatization of EDs in public discourse needs to be reworked so that suffering individuals can find the treatment they need and fewer individuals develop an ED in the first place. For many people dealing with EDs, their biological, psychological, and sociocultural circumstances make healthy eating habits difficult to adopt. Given the significant relationship between these pressures and the development of disordered eating, there remain many promising directions for developing more successful treatment plans.

References

Abbate-Daga, G., Amianto, F., Delsedime, N., De-Bacco, C., & Fassino, S. (2013). Resistance to treatment in eating disorders: A critical challenge. BMC Psychiatry, 13, 294. https://doi.org/10.1186/1471-244X-13-294 Agras, W. S., Fitzsimmons-Craft, E. E., & Wilfley, D. E. (2017). Evolution of cognitive-behavioral therapy for eating disorders. Behaviour Research and Therapy, 88, 26–36. https://doi.org/10.1016/j.brat.2016.09.004 American Psychiatric Association. (2013). Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition).
American Psychiatric Association. https://doi.org/10.1176/ appi.books.9780890425596 Avena, N. M., & Bocarsly, M. E. (2012). Dysregulation of brain reward systems in eating disorders: Neurochemical information from animal models of binge eating, bulimia nervosa, and anorexia nervosa. Neuropharmacology, 63(1), 87–96. https://doi.org/10.1016/j.neuropharm.2011.11.010 Bailer, U. F., & Kaye, W. H. (2010). Serotonin: Imaging Findings in Eating Disorders. In R. A. H. Adan & W. H. Kaye (Eds.), Behavioral Neurobiology of Eating Disorders (Vol. 6, pp. 59–79). Springer Berlin Heidelberg. https://doi. org/10.1007/7854_2010_78 Bailer, U. F., Price, J. C., Meltzer, C. C., Mathis, C. A., Frank, G. K., Weissfeld, L., McConaha, C. W., Henry, S. E., Brooks-Achenbach, S., Barbarich, N. C., & Kaye, W. H. (2004). Altered 5-HT2A Receptor Binding after Recovery from Bulimia-Type Anorexia Nervosa: Relationships to Harm Avoidance and Drive for Thinness. Neuropsychopharmacology : Official Publication of the American College of Neuropsychopharmacology, 29(6), 1143–1155. https://doi.org/10.1038/sj.npp.1300430 Basu, D., & Chakraborty, K. (2010). Management of anorexia and bulimia nervosa: An evidence-based review. Indian Journal of Psychiatry, 52(2), 174. https://doi. org/10.4103/0019-5545.64596 Berner, L. A., & Marsh, R. (2014). Frontostriatal Circuits and the Development of Bulimia Nervosa. Frontiers in Behavioral Neuroscience, 8. https://doi.org/10.3389/fnbeh.2014.00395 Birketvedt, G. S., Drivenes, E., Agledahl, I., Sundsfjord, J., Olstad, R., & Florholmen, J. R. (2006). Bulimia nervosa—A primary defect in the hypothalamic-pituitary-adrenal axis? Appetite, 46(2), 164–167. https://doi.org/10.1016/j.appet.2005.11.007 Bland, R. D., Clarke, T. L., & Harden, L. B. (1976). Rapid infusion of sodium bicarbonate and albumin into high-risk premature infants soon after birth: A controlled, prospective trial. American Journal of Obstetrics and Gynecology, 124(3), 263–267. https://doi.org/10.1016/0002-9378(76)90154-x Broft, A., Shingleton, R., Kaufman, J., Liu, F., Kumar, D., Slifstein, M., Abi-Dargham, A., Schebendach, J., Van Heertum, R., Attia, E., Martinez, D., & Walsh, B. T. (2012). Striatal dopamine in bulimia nervosa: A pet imaging study. International Journal of Eating Disorders, 45(5), 648–656. https://doi.org/10.1002/ eat.20984 Bruce, K. R., Steiger, H., Joober, R., Kin, N. M. K. N. Y., Israel, M., & Young, S. N. (2005). Association of the promoter polymorphism −1438G/A of the 5-HT2A receptor gene with behavioral impulsiveness and serotonin function in women with bulimia nervosa. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics, 137B(1), 40–44. https://doi. org/10.1002/ajmg.b.30205 Bulik, C. M., Sullivan, P. F., Wade, T. D., & Kendler, K. S. (2000). Twin studies of eating disorders: A review. International Journal of Eating Disorders, 27(1), 1–20. https://doi. org/10.1002/(SICI)1098-108X(200001)27:1<1::AIDEAT1>3.0.CO;2-Q Davis, C., Levitan, R. D., Kaplan, A. S., Carter, J., Reid, C., Curtis, C., Patte, K., Hwang, R., & Kennedy, J. L. (2008). Reward sensitivity and the D2 dopamine receptor gene: A casecontrol study of binge eating disorder. Progress in NeuroPsychopharmacology and Biological Psychiatry, 32(3), 620–628. https://doi.org/10.1016/j.pnpbp.2007.09.024
Davis, C., Levitan, R. D., Yilmaz, Z., Kaplan, A. S., Carter, J. C., & Kennedy, J. L. (2012). Binge eating disorder and the dopamine D2 receptor: Genotypes and sub-phenotypes. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 38(2), 328–335. https://doi.org/10.1016/j. pnpbp.2012.05.002 Davis, M., & Whalen, P. J. (2001). The amygdala: Vigilance and emotion. Molecular Psychiatry, 6(1), 13–34. https://doi. org/10.1038/sj.mp.4000812 de Araujo, I. E. (2004). Representation in the Human Brain of Food Texture and Oral Fat. Journal of Neuroscience, 24(12), 3086–3093. https://doi.org/10.1523/ JNEUROSCI.0130-04.2004 Ea, S., & Mm, H. (2003). Neuroimaging in eating disorders. Nutritional Neuroscience, 6(6), 325–334. https://doi.org/10.10 80/10284150310001640338 (EC Framework V ‘Factors in Healthy Eating’ consortium), Gorwood, P., Adès, J., Bellodi, L., Cellini, E., Collier, D. A., Di Bella, D., Di Bernardo, M., Estivill, X., Fernandez-Aranda, F., Gratacos, M., Hebebrand, J., Hinney, A., Hu, X., Karwautz, A., Kipman, A., Mouren-Siméoni, M.-C., Nacmias, B., Ribasés, M., … Treasure, J. (2002). The 5-HT2A −1438G/A polymorphism in anorexia nervosa: A combined analysis of 316 trios from six European centres. Molecular Psychiatry, 7(1), 90–94. https:// doi.org/10.1038/sj.mp.4000938 Fetissov, S. O., Hallman, J., Oreland, L., af Klinteberg, B., Grenbäck, E., Hulting, A.-L., & Hökfelt, T. (2002). Autoantibodies against α-MSH, ACTH, and LHRH in anorexia and bulimia nervosa patients. Proceedings of the National Academy of Sciences of the United States of America, 99(26), 17155–17160. https://doi.org/10.1073/pnas.222658699 Fowler, S. J., & Bulik, C. M. (1997). Family Environment and Psychiatric History in Women with Binge-eating Disorder and Obese Controls. Behaviour Change, 14(2), 106–112. https:// doi.org/10.1017/S0813483900003569 Frank, G. K., Bailer, U. F., Henry, S. E., Drevets, W., Meltzer, C. C., Price, J. C., Mathis, C. A., Wagner, A., Hoge, J., Ziolko, S., Barbarich-Marsteller, N., Weissfeld, L., & Kaye, W. H. (2005). Increased Dopamine D2/D3 Receptor Binding After Recovery from Anorexia Nervosa Measured by Positron Emission Tomography and [11C]Raclopride. Biological Psychiatry, 58(11), 908–912. https://doi.org/10.1016/j. biopsych.2005.05.003 Fuglset, T. S., Landrø, N. I., Reas, D. L., & Rø, Ø. (2016). Functional brain alterations in anorexia nervosa: A scoping review. Journal of Eating Disorders, 4. https://doi.org/10.1186/ s40337-016-0118-y Giel, K. E., Kullmann, S., Preißl, H., Bischoff, S. C., Thiel, A., Schmidt, U., Zipfel, S., & Teufel, M. (2013). Understanding the reward system functioning in anorexia nervosa: Crucial role of physical activity. Biological Psychology, 94(3), 575–581. https://doi.org/10.1016/j.biopsycho.2013.10.004 Giel, K., Teufel, M., Junne, F., Zipfel, S., & Schag, K. (2017). FoodRelated Impulsivity in Obesity and Binge Eating Disorder—A Systematic Update of the Evidence. Nutrients, 9(11), 1170. https://doi.org/10.3390/nu9111170 Gorgulho, A. A., Pereira, J. L. B., Krahl, S., Lemaire, J.-J., & De Salles, A. (2014a). Neuromodulation for Eating Disorders. Neurosurgery Clinics of North America, 25(1), 147–157.
https://doi.org/10.1016/j.nec.2013.08.005 Gorgulho, A. A., Pereira, J. L. B., Krahl, S., Lemaire, J.-J., & De Salles, A. (2014b). Neuromodulation for Eating Disorders. Neurosurgery Clinics of North America, 25(1), 147–157. https:// doi.org/10.1016/j.nec.2013.08.005 Guarda, A. S., Wonderlich, S., Kaye, W., & Attia, E. (2018). A path to defining excellence in intensive treatment for eating disorders. International Journal of Eating Disorders, 51(9), 1051–1055. https://doi.org/10.1002/eat.22899 Hebebrand, J., Muller, T. D., Holtkamp, K., & Herpertz-Dahlmann, B. (2007). The role of leptin in anorexia nervosa: Clinical implications. Molecular Psychiatry, 12(1), 23–35. https://doi. org/10.1038/sj.mp.4001909 Holsen, L. M., Lawson, E. A., Blum, J., Ko, E., Makris, N., Fazeli, P. K., Klibanski, A., & Goldstein, J. M. (2012). Food motivation circuitry hypoactivation related to hedonic and nonhedonic aspects of hunger and satiety in women with active anorexia nervosa and weight-restored women with anorexia nervosa. Journal of Psychiatry & Neuroscience : JPN, 37(5), 322–332. https://doi. org/10.1503/jpn.110156
Le Grange, D., Hughes, E. K., Court, A., Yeo, M., Crosby, R. D., & Sawyer, S. M. (2016). Randomized Clinical Trial of Parent-Focused Treatment and Family-Based Treatment for Adolescent Anorexia Nervosa. Journal of the American Academy of Child & Adolescent Psychiatry, 55(8), 683–692. https://doi.org/10.1016/j.jaac.2016.05.007 Lee, Y. H., Abbott, D. W., Seim, H., Crosby, R. D., Monson, N., Burgard, M., & Mitchell, J. E. (1999). Eating disorders and psychiatric disorders in the first-degree relatives of obese probands with binge eating disorder and obese non-binge eating disorder controls. International Journal of Eating Disorders, 26(3), 322–332. https://doi.org/10.1002/(SICI)1098108X(199911)26:3<322::AID-EAT10>3.0.CO;2-K Leroy, A., Carton, L., Gomajee, H., Bordet, R., & Cottencin, O. (2017). Naltrexone in the treatment of binge eating disorder in a patient with severe alcohol use disorder: A case report. The American Journal of Drug and Alcohol Abuse, 43(5), 618–620. https://doi.org/10.1080/00952990.2017.1298117
Hopkins, M., Gibbons, C., Caudwell, P., Webb, D.-L., Hellström, P. M., Näslund, E., Blundell, J. E., & Finlayson, G. (2014). Fasting Leptin Is a Metabolic Determinant of Food Reward in Overweight and Obese Individuals during Chronic Aerobic Exercise Training. International Journal of Endocrinology, 2014, 1–8. https://doi.org/10.1155/2014/323728
Li, W., Lai, T. M., Bohon, C., Loo, S. K., McCurdy, D., Strober, M., Bookheimer, S., & Feusner, J. (2015). Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information. Psychological Medicine, 45(10), 2111–2122. https://doi.org/10.1017/ S0033291715000045
Hudson, J. I., Lalonde, J. K., Berry, J. M., Pindyck, L. J., Bulik, C. M., Crow, S. J., McElroy, S. L., Laird, N. M., Tsuang, M. T., Walsh, B. T., Rosenthal, N. R., & Pope, H. G. (2006). Binge-Eating Disorder as a Distinct Familial Phenotype in Obese Individuals. Archives of General Psychiatry, 63(3), 313–319. https://doi.org/10.1001/ archpsyc.63.3.313
Lilenfeld, L. R. R., Ringham, R., Kalarchian, M. A., & Marcus, M. D. (2008). A family history study of binge-eating disorder. Comprehensive Psychiatry, 49(3), 247–254. https://doi. org/10.1016/j.comppsych.2007.10.001
Joiner, T. E., Heatherton, T. F., Rudd, M. D., & Schmidt, N. B. (1997). Perfectionism, perceived weight status, and bulimic symptoms: Two studies testing a diathesis-stress model. Journal of Abnormal Psychology, 106(1), 145–153. https://doi. org/10.1037/0021-843X.106.1.145 Juarascio, A., Kerrigan, S., Goldstein, S. P., Shaw, J., Forman, E. M., Butryn, M., & Herbert, J. D. (2013). Baseline eating disorder severity predicts response to an acceptance and commitment therapy-based group treatment. Journal of Contextual Behavioral Science, 2(3–4), 74–78. https://doi.org/10.1016/j. jcbs.2013.09.001 Kaye, W. (2008). Neurobiology of anorexia and bulimia nervosa. Physiology & Behavior, 94(1), 121–135. https://doi. org/10.1016/j.physbeh.2007.11.037 Kaye, W. H., Wierenga, C. E., Bailer, U. F., Simmons, A. N., & Bischoff-Grethe, A. (2013). Nothing tastes as good as skinny feels: The neurobiology of anorexia nervosa. Trends in Neurosciences, 36(2), 110–120. https://doi.org/10.1016/j. tins.2013.01.003 Kirkpatrick, S. L., Goldberg, L. R., Yazdani, N., Babbs, R. K., Wu, J., Reed, E. R., Jenkins, D. F., Bolgioni, A. F., Landaverde, K. I., Luttik, K. P., Mitchell, K. S., Kumar, V., Johnson, W. E., Mulligan, M. K., Cottone, P., & Bryant, C. D. (2017). Cytoplasmic FMR1Interacting Protein 2 Is a Major Genetic Factor Underlying Binge Eating. Biological Psychiatry, 81(9), 757–769. https://doi. org/10.1016/j.biopsych.2016.10.021 Kontis, D., & Theochari, E. (2012). Dopamine in anorexia
nervosa: A systematic review. Behavioural Pharmacology, 23(5 and 6), 496–515. https://doi.org/10.1097/ FBP.0b013e328357e115
Linardon, J., Fairburn, C. G., Fitzsimmons-Craft, E. E., Wilfley, D. E., & Brennan, L. (2017). The empirical status of the third-wave behaviour therapies for the treatment of eating disorders: A systematic review. Clinical Psychology Review, 58, 125–140. https://doi.org/10.1016/j.cpr.2017.10.005 Lipsman, N., Lam, E., Volpini, M., Sutandar, K., Twose, R., Giacobbe, P., Sodums, D. J., Smith, G. S., Woodside, D. B., & Lozano, A. M. (2017). Deep brain stimulation of the subcallosal cingulate for treatment-refractory anorexia nervosa: 1 year follow-up of an open-label trial. The Lancet Psychiatry, 4(4), 285–294. https://doi.org/10.1016/S2215-0366(17)30076-7 Lutter, M. (2017). Emerging Treatments in Eating Disorders. Neurotherapeutics, 14(3), 614–622. https://doi.org/10.1007/ s13311-017-0535-x Manlick, C. F., Cochran, S. V., & Koon, J. (2013). Acceptance and Commitment Therapy for Eating Disorders: Rationale and Literature Review. Journal of Contemporary Psychotherapy, 43(2), 115–122. https://doi.org/10.1007/s10879-012-9223-7 McElroy, S. (2013). Bipolar disorder and obesity: The role of eating disorders. BIPOLAR DISORDERS, 15(1, SI), 11. McElroy, S. L., Guerdjikova, A. I., Blom, T. J., Crow, S. J., Memisoglu, A., Silverman, B. L., & Ehrich, E. W. (2013). A placebo-controlled pilot study of the novel opioid receptor antagonist ALKS-33 in binge eating disorder: ALKS-33 in Binge Eating Disorder. International Journal of Eating Disorders, 46(3), 239–245. https://doi.org/10.1002/eat.22114 McElroy, S. L., Guerdjikova, A. I., Martens, B., Keck, P. E., Pope, H. G., & Hudson, J. I. (2009). Role of Antiepileptic Drugs in the Management of Eating Disorders: CNS Drugs, 23(2), 139–156. https://doi.org/10.2165/00023210-200923020-00004
McElroy, S. L., Guerdjikova, A. I., Mori, N., & O’Melia, A. M. (2012). Current pharmacotherapy options for bulimia nervosa and binge eating disorder. Expert Opinion on Pharmacotherapy, 13(14), 2015–2026. https://doi.org/10.1517 /14656566.2012.721781 McElroy, S. L., Kotwal, R., & Keck, P. E., Jr. (2006). Comorbidity of eating disorders with bipolar disorder and treatment implications. BIPOLAR DISORDERS, 8(6), 686–695. https://doi. org/10.1111/j.1399-5618.2006.00401.x
Nilsson, M., Naessén, S., Dahlman, I., Lindén Hirschberg, A., Gustafsson, J.-Å., & Dahlman-Wright, K. (2004). Association of estrogen receptor β gene polymorphisms with bulimic disease in women. Molecular Psychiatry, 9(1), 28–34. https:// doi.org/10.1038/sj.mp.4001402
McGlynn, S. M., & Schacter, D. L. (1989). Unawareness of deficits in neuropsychological syndromes. Journal of Clinical and Experimental Neuropsychology, 11(2), 143–205. https:// doi.org/10.1080/01688638908400882
Nisoli, E., Brunani, A., Borgomainerio, E., Tonello, C., Dioni, L., Briscini, L., Redaelli, G., Molinari, E., Cavagnini, F., & Carruba, M. O. (2007). D2 dopamine receptor (DRD2) gene Taq1A polymorphism and the eatingrelated psychological traits in eating disorders (anorexia nervosa and bulimia) and obesity. Eating and Weight Disorders - Studies on Anorexia, Bulimia and Obesity, 12(2), 91–96. https://doi.org/10.1007/ BF03327583
Milano, W., De Rosa, M., Milano, L., Riccio, A., Sanseverino, B., & Capasso, A. (2013). The Pharmacological Options in the Treatment of Eating Disorders. ISRN Pharmacology, 2013, 1–5. https://doi.org/10.1155/2013/352865
Nunn, K., Frampton, I., Gordon, I., & Lask, B. (2008). The fault is not in her parents but in her insula-A neurobiological hypothesis of anorexia nervosa. European Eating Disorders Review, 16(5), 355–360. https://doi.org/10.1002/erv.890
Mitchell, K. S., Neale, M. C., Bulik, C. M., Aggen, S. H., Kendler, K. S., & Mazzeo, S. E. (2010). Binge eating disorder: A symptomlevel investigation of genetic and environmental influences on liability. Psychological Medicine, 40(11), 1899–1906. https://doi.org/10.1017/S0033291710000139
Park, R. J., Singh, I., Pike, A. C., & Tan, J. O. A. (2017). Deep Brain Stimulation in Anorexia Nervosa: Hope for the Hopeless or Exploitation of the Vulnerable? The Oxford Neuroethics Gold Standard Framework. Frontiers in Psychiatry, 8. https://doi. org/10.3389/fpsyt.2017.00044
Miyasaka, K., Hosoya, H., Sekime, A., Ohta, M., Amono, H., Matsushita, S., Suzuki, K., Higuchi, S., & Funakoshi, A. (2006). Association of ghrelin receptor gene polymorphism with bulimia nervosa in a Japanese population. Journal of Neural Transmission, 113(9), 1279–1285. https://doi.org/10.1007/ s00702-005-0393-2
Prinz, P., & Stengel, A. (2018). Deep Brain Stimulation— Possible Treatment Strategy for Pathologically Altered Body Weight? Brain Sciences, 8(1). https://doi.org/10.3390/ brainsci8010019
Monteleone, A. M., Castellini, G., Volpe, U., Ricca, V., Lelli, L., Monteleone, P., & Maj, M. (2018). Neuroendocrinology and brain imaging of reward in eating disorders: A possible key to the treatment of anorexia nervosa and bulimia nervosa. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 80, 132–142. https://doi.org/10.1016/j. pnpbp.2017.02.020 Monteleone, P., Bifulco, M., Filippo, C. D., Gazzerro, P., Canestrelli, B., Monteleone, F., Proto, M. C., Genio, M. D., Grimaldi, C., & Maj, M. (2009). Association of CNR1 and FAAH endocannabinoid gene polymorphisms with anorexia nervosa and bulimia nervosa: Evidence for synergistic effects. Genes, Brain and Behavior, 8(7), 728–732. https://doi. org/10.1111/j.1601-183X.2009.00518.x Monteleone, Palmiero, Tortorella, A., Castaldo, E., Di Filippo, C., & Maj, M. (2006). No association of the Arg51Gln and Leu72Met polymorphisms of the ghrelin gene with anorexia nervosa or bulimia nervosa. Neuroscience Letters, 398(3), 325–327. https://doi.org/10.1016/j.neulet.2006.01.023 Monteleone, Palmiero, Tortorella, A., Castaldo, E., Di Filippo, C., & Maj, M. (2007). The Leu72Met polymorphism of the ghrelin gene is significantly associated with binge eating disorder. Psychiatric Genetics, 17(1), 13–16. https://doi.org/10.1097/ YPG.0b013e328010e2c3 Nestler, E. J. (2012). Transcriptional Mechanisms of Drug Addiction. Clinical Psychopharmacology and Neuroscience, 10(3), 136–143. https://doi.org/10.9758/cpn.2012.10.3.136 Nettersheim, J., Gerlach, G., Herpertz, S., Abed, R., Figueredo, A. J., & Brüne, M. (2018). Evolutionary Psychology of Eating Disorders: An Explorative Study in Patients With Anorexia Nervosa and Bulimia Nervosa. Frontiers in Psychology, 9, 2122. https://doi.org/10.3389/fpsyg.2018.02122
Rada, P., Avena, N. M., & Hoebel, B. G. (2005). Daily bingeing on sugar repeatedly releases dopamine in the accumbens shell. Neuroscience, 134(3), 737–744. https://doi.org/10.1016/j. neuroscience.2005.04.043 Repetitive transcranial magnetic stimulation in the treatment of eating disorders: A review of safety and efficacy— ScienceDirect. (n.d.). Retrieved May 16, 2020, from https:// www-sciencedirect-com.dartmouth.idm.oclc.org/science/ article/pii/S0165178117319960?via%3Dihub Repetitive Transcranial Magnetic Stimulation Reduces CueInduced Food Craving in Bulimic Disorders—ScienceDirect. (n.d.). Retrieved May 16, 2020, from https://wwwsciencedirect-com.dartmouth.idm.oclc.org/science/article/ pii/S0006322309014164?via%3Dihub Rienecke, R. D. (2017). Family-based treatment of eating disorders in adolescents: Current insights. Adolescent Health, Medicine and Therapeutics, 8, 69–79. https://doi.org/10.2147/ AHMT.S115775 Roberts, M. E. (2018). Feasibility of group Cognitive Remediation Therapy in an adult eating disorder day program in New Zealand. Eating Behaviors, 30, 1–4. https:// doi.org/10.1016/j.eatbeh.2018.04.004 Rodgers, R. F., Paxton, S. J., & McLean, S. A. (2014). A Biopsychosocial Model of Body Image Concerns and Disordered Eating in Early Adolescent Girls. Journal of Youth and Adolescence, 43(5), 814–823. https://doi.org/10.1007/ s10964-013-0013-7 Rosenkranz, K., Hinney, A., Ziegler, A., Hermann, H., Fichter, M., Mayer, H., Siegfried, W., Young, J. K., Remschmidt, H., & Hebebrand, J. (1998). Systematic Mutation Screening of the Estrogen Receptor Beta Gene in Probands of Different Weight Extremes: Identification of Several Genetic Variants. The Journal of Clinical Endocrinology & Metabolism, 83(12), 4524–4524. https://doi.org/10.1210/jcem.83.12.5471
Rudebeck, P. H., Putnam, P. T., Daniels, T. E., Yang, T., Mitz, A. R., Rhodes, S. E. V., & Murray, E. A. (2014). A role for primate subgenual cingulate cortex in sustaining autonomic arousal. Proceedings of the National Academy of Sciences, 111(14), 5391–5396. https://doi.org/10.1073/pnas.1317695111
Yager, L. M., Garcia, A. F., Wunsch, A. M., & Ferguson, S. M. (2015). The ins and outs of the striatum: Role in drug addiction. Neuroscience, 301, 529–541. https://doi. org/10.1016/j.neuroscience.2015.06.033
Shimada, S., Hiraki, K., & Oda, I. (2005). The parietal role in the sense of self-ownership with temporal discrepancy between visual and proprioceptive feedbacks. NeuroImage, 24(4), 1225–1232. https://doi.org/10.1016/j.neuroimage.2004.10.039 Smink, F. R. E., van Hoeken, D., & Hoek, H. W. (2012). Epidemiology of Eating Disorders: Incidence, Prevalence and Mortality Rates. Current Psychiatry Reports, 14(4), 406–414. https://doi.org/10.1007/s11920-012-0282-y Steiger, H., Bruce, K. R., & Groleau, P. (2011). Neural Circuits, Neurotransmitters, and Behavior. In R. A. H. Adan & W. H. Kaye (Eds.), Behavioral Neurobiology of Eating Disorders (pp. 125–138). Springer. https://doi.org/10.1007/7854_2010_88 Steiger, H., Israël, M., Gauvin, L., Ng Ying Kin, N. M. K., & Young, S. N. (2003). Implications of compulsive and impulsive traits for serotonin status in women with bulimia nervosa. Psychiatry Research, 120(3), 219–229. https://doi.org/10.1016/S01651781(03)00195-1 Strober, M., Freeman, R., Lampert, C., Diamond, J., & Kaye, W. (2000). Controlled Family Study of Anorexia Nervosa and Bulimia Nervosa: Evidence of Shared Liability and Transmission of Partial Syndromes. American Journal of Psychiatry, 157(3), 393–401. https://doi.org/10.1176/appi.ajp.157.3.393 Thompson, J. K. (Ed.). (2004). Handbook of eating disorders and obesity. John Wiley & Sons. van Kuyck, K., Gérard, N., Laere, K. V., Casteels, C., Pieters, G., Gabriëls, L., & Nuttin, B. (2009). Towards a neurocircuitry in anorexia nervosa: Evidence from functional neuroimaging studies. Journal of Psychiatric Research, 43(14), 1133–1145. https://doi.org/10.1016/j.jpsychires.2009.04.005 Vaquero, J. J., & Kinahan, P. (2015). Positron Emission Tomography: Current Challenges and Opportunities for Technological Advances in Clinical and Preclinical Imaging Systems. Annual Review of Biomedical Engineering, 17(1), 385– 414. https://doi.org/10.1146/annurev-bioeng-071114-040723 Volkow, N. D., Wise, R. A., & Baler, R. (2017). The dopamine motive system: Implications for drug and food addiction. Nature Reviews Neuroscience, 18(12), 741–752. https://doi. org/10.1038/nrn.2017.130 White, M. A., & Grilo, C. M. (2013). Bupropion for Overweight Women With Binge-Eating Disorder: A Randomized, Double-Blind, Placebo-Controlled Trial. The Journal of Clinical Psychiatry, 74(04), 400–406. https://doi.org/10.4088/ JCP.12m08071 Winstanley, C. A., Theobald, D. E. H., Dalley, J. W., Cardinal, R. N., & Robbins, T. W. (2006). Double Dissociation between Serotonergic and Dopaminergic Modulation of Medial Prefrontal and Orbitofrontal Cortex during a Test of Impulsive Choice. Cerebral Cortex, 16(1), 106–114. https://doi. org/10.1093/cercor/bhi088 Wu, M., Hartmann, M., Skunde, M., Herzog, W., & Friederich, H.C. (2013). Inhibitory control in bulimic-type eating disorders: A systematic review and meta-analysis. PloS One, 8(12), e83412. https://doi.org/10.1371/journal.pone.0083412
From Billiard Balls to Global Catastrophe: How Plastics Have Changed the World

STAFF WRITERS: JESS CHEN, BRITTANY CLEARY, TEDDY PRESS, ALLEN RUBIO
BOARD WRITERS: ANNA BRINKS, LIAM LOCKE

Cover Image: Assorted plastic bottles. Source: Wallpaper Flare
Introduction

The invention of plastic sprung from, at least in part, the plight of elephants and the game of billiards. From the 1600s to the early twentieth century, ivory was the preferred material for high-end billiard balls (Shamos, 2003). The game was popular amongst the European and American upper class, and nearly every estate, mansion, or high-end club boasted a table. However, the increasing demand placed additional strain on the already over-hunted population of elephants, and thousands were being slaughtered yearly to meet the demands of the ivory trade. In response, in 1863 a New York billiards supplier ran a newspaper ad offering a fortune of ten thousand dollars in gold to anyone who could invent a suitable alternative to the ivory balls. John Wesley Hyatt, a young inventor in upstate New York, decided to take on this challenge (Freinkel, 2011). Building on previous successes in materials innovation, such as the vulcanization (hardening) of
rubber achieved independently by Thomas Hancock and Charles Goodyear in 1844 and the invention of “parkesine” (the first member of the celluloid class of materials) patented by Alexander Parkes in 1856, Hyatt began experimenting with different combinations of solvents, nitric acid, and cotton in a shack behind his home (Mann, 2012). In 1869, he had a breakthrough: a moldable, durable material derived from a natural polymer (the cellulose in the cotton) but with a versatility that far surpassed that of natural plastics such as ivory. Dubbed celluloid, the material proved to be a wonderful substitute for ivory in many products (Freinkel, 2011). Unfortunately, Hyatt would never collect his ten-thousand-dollar prize: celluloid was also highly volatile, and two balls colliding could create a sound like a shotgun blast. However, the material made a large impact on other ivory-based products such as combs and sparked continued interest in the search for new and better plastic materials.
The term “plastic” comes from the Greek word “plastikos,” which means moldable and malleable; in its noun form, it is defined as “any of numerous organic synthetic or processed materials that are mostly thermoplastic or thermosetting polymers of high molecular weight and that can be made into objects, films, or filaments” (Merriam-Webster). Modern-day plastics fit their etymology nicely: they are strong, durable, versatile, and inexpensive (Andrady & Neal, 2009). Plastics are commonly used for food packaging to increase the longevity of food; automotive components to fashion life-saving seatbelts and lightweight engine parts; construction insulation and sealants to increase the energy efficiency of homes; sports safety equipment to decrease an athlete’s risk of injury and death; disposable and sterile medical tools and devices to decrease infection transmission; and electronics to improve everyday quality of life (Connecticut Plastics, 2014). In sum, plastic is vitally important to the global economy and everyday life as it functions today.
While the advent of plastic has greatly benefited humankind, the environment was not always considered during the growth of this industry. As factories produced more plastic, global solid waste generation rose significantly over the past five decades, increasing the need for landfills and other means of disposal. Additionally, the most commonly used plastics are notorious for being non-biodegradable, resulting in heaps of trash accumulating in landfills and the environment as opposed to biodegrading back into nature. To make matters worse, plastics have invaded almost every ecosystem. They have been found in all major ocean basins, on the tops of untouched mountains, and in almost every other habitat on Earth (Geyer et al., 2017; Allen et al., 2019). Human activity is now the driving force behind large-scale changes to the environment, and as a result climate experts claim that we have entered a new human-dominated geological era called the “Anthropocene” (Lewis and Maslin, 2015). Plastic is so common within our environment that researchers have suggested the material be labeled the “geological indicator of the proposed Anthropocene era” (Geyer et al., 2017). The sustainability effort is made even more complicated by the wide diversity of plastics in existence.
Types of Plastics and Polymer Chemistry

Plastics are organic polymers made by linking monomeric units with covalent bonds. While biopolymers (plastics derived from biological materials) are increasing in popularity because of sustainability concerns, the majority of plastics are still derived from petroleum and are
Figure 1: The patent for injection molding of celluloid billiard balls by John Wesley Hyatt. In the image, fig. 1 shows a billiard ball made from celluloid, fig. 2 shows a cross section of the ball, fig. 3 shows the injection molding apparatus, and fig. 4 shows a front view of the apparatus. Source: Wikimedia Commons
Figure 2: Free radical polymerization of vinyl chloride to polyvinyl chloride (PVC) using a catalytic N,N-dialkyl hydroxylamine radical (·ONR1R2). Reaction termination by the catalyst is in equilibrium, so the reaction will continue to proceed until the majority of the vinyl chloride has reacted. Source: Wikimedia Commons
often long-lived in the environment. Petroleum-based plastics can be divided into two main categories: thermoplastics and thermosetting resins. Most materials commonly considered plastics are thermoplastic polymers, which can be heated, reshaped, and recycled. Thermosetting polymers harden irreversibly and are more difficult to recycle; polyurethane (PU) is an example of a commonly used thermosetting polymer (Chamas et al., 2020). Many petroleum-based plastics are long-chain hydrocarbons and are therefore relatively unreactive compounds due to a lack of polarizable bonds. Plastic polymers with heteroatoms (oxygen, nitrogen, C=O) in their backbone are more efficiently biodegraded because these heteroatoms provide chemically reactive sites for enzymatic oxidation by aerobic bacteria (Gewert et al., 2015). The following sections outline the most commonly used petroleum-based plastic polymers, including their monomeric structures, polymerization reactions, common uses, environmental half-lives, and recycling numbers. Recycling numbers identify the type of plastic polymer, which is necessary for sorting recycled products. The half-lives reported were proposed by Chamas and colleagues (2020) based on average specific surface degradation rates (SSDR).

2.1 Polyethylene (HDPE/LDPE)

Polyethylene is the most common plastic produced today, making up 28% of total petroleum-based plastic production by weight (Yeo et al., 2018). There are two categories of polyethylene thermoplastics: high density (13% of the 28%) and low density (15% of the 28%). High density polyethylene (HDPE) is used in the production of erosion-resistant plastic piping and hard plastic containers. Low density polyethylene (LDPE) is the main component of plastic bags and is commonly found in consumer packaging. The branched structure of LDPE makes this polymer significantly more reactive than HDPE, and this increased reactivity is largely attributable to a decrease in van der Waals forces in LDPE; while the chemical half-life (the time taken for 50% of the plastic to be degraded) is about 3.4 years for LDPE in marine environments, the half-life of HDPE is nearly 1,500 times as long at 1,200 years (Chamas et al., 2020).

2.2 Polypropylene (PP)

Polypropylene is an incredibly versatile thermoplastic polymer that comprises 17% of manufactured plastics. Polypropylene is stronger than polyethylene and is used in many
products that involve heating or weathering, including hydraulic heating systems, plastic molding, hinges, and laboratory equipment that needs to be sterilized in an autoclave. Although PP is highly heat resistant, the plastic can become brittle at temperatures below 0°C (Shafigullin et al., 2018). Polypropylene is made entirely of carbon and hydrogen and requires radical generation for polymerization. Despite its high thermal resistivity, PP is less dense and has a higher SSDR than HDPE; the environmental half-life of PP in marine environments is 53 years (Chamas et al., 2020).

2.3 Polyphthalamide (PPA)

The structure of polyphthalamide differs significantly from HDPE, LDPE, or PP, as it contains heteroatoms (nitrogen and C=O) in the polymer backbone. As a result, PPA is biodegradable and has a half-life of 0.19 years on land and 2.5 years in marine environments (Chamas et al., 2020). PPA is related to nylon, a class of polyamide plastics, and comprises 14% of manufactured plastics. The melting point of PPA is about 370°C, which makes it a suitable plastic for automotive parts, electronics, and medical appliances (e.g., catheters). It is also used in toothbrush and hairbrush bristles (Kohan, 1995).

2.4 Polyvinyl Chloride (PVC)

Polyvinyl chloride is a thermoplastic polymer that comprises 9% of manufactured plastics. It comes in two forms: rigid and flexible. Rigid PVC is used in traditional PVC pipes and is easily bound by glues and resins. Flexible PVC is made through the addition of plasticizers (additives that make plastics more malleable) and is used in wire coverings, flooring, automobile parts, and surgical equipment, among other uses (Yeo et al., 2018). There has not yet been enough research on PVC degradation in the environment to calculate a half-life (Chamas et al., 2020).

2.5 Polyethylene Terephthalate (PET)

Polyethylene terephthalate is a semi-aromatic thermoplastic that comprises 8% of manufactured plastics. PET is the most common thermoplastic used in clothing (the polyester family), and it is found in plastic containers and photographic film. It is also the primary plastic used in beverage containers (Gironi & Piemonte, 2010). The aromatic rings in PET stack to form a semi-crystalline lattice. PET begins to soften around 70°C, making it an unsuitable plastic for applications that require heating. Data on the half-life of PET is not available, but the structure
contains heteroatoms in the polymer backbone, giving it some additional chemical reactivity and biodegradability.

2.6 Polyurethane (PU)

Polyurethane is an unusual plastic in this list because it can be both a thermosetting polymer and a thermoplastic polymer. Thermosetting polyurethanes are generally adhesives, plastic coatings, elastomers, or sealants. One of the most common forms of thermoplastic PU is a heat-resistant foam used in car seats, building insulation, and furniture. PU comprises 7% of manufactured plastics, and while mechanisms for PU recycling exist, they are largely underutilized by the general public (Zia et al., 2007). Polyurethane foam has an environmental half-life of approximately 0.1 years, which is much shorter than other plastics due to its high-energy carbamate bonds (Wu et al., 2001).

2.7 Polystyrene (PS)

Polystyrene is an inert, hydrophobic thermoplastic polymer that comprises 6% of manufactured plastics. Much of the PS produced is extruded polystyrene (XPS), a foam form of the thermoplastic that is composed of about 95% air (Ling-ling & Jin-hua, 2014). XPS is most commonly known for its use in Styrofoam®, but it is also used in appliances, automobiles, packaging, and as a thermal insulator. PS is long-lived in the environment, and although no conclusive studies have been performed to calculate its half-life, it is estimated to be over 2,500 years (Chamas et al., 2020).
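To make these figures concrete, the short Python sketch below (not part of the original article) converts the half-lives quoted in this section into the fraction of plastic mass remaining after a given time. It assumes simple exponential decay, which is a simplification: the SSDR-based estimates of Chamas and colleagues depend on exposed surface area, so real degradation curves will deviate from this idealized form.

# A minimal sketch, assuming first-order (exponential) decay:
#     m(t) / m0 = 0.5 ** (t / t_half)
# Half-life values are the approximate marine figures quoted in this section.

MARINE_HALF_LIFE_YEARS = {
    "LDPE": 3.4,
    "HDPE": 1200,
    "PP": 53,
    "PPA": 2.5,
    "PU foam": 0.1,
    "PS": 2500,  # lower-bound estimate
}

def fraction_remaining(t_years: float, t_half: float) -> float:
    """Fraction of the original plastic mass left after t_years."""
    return 0.5 ** (t_years / t_half)

if __name__ == "__main__":
    horizon = 100  # years
    for polymer, t_half in MARINE_HALF_LIFE_YEARS.items():
        pct = 100 * fraction_remaining(horizon, t_half)
        print(f"{polymer:8s}: {pct:6.2f}% remaining after {horizon} years")

Under these assumptions, essentially no LDPE survives a century in seawater, while roughly 94% of the original HDPE mass remains, illustrating how sharply persistence differs across polymers with similar chemistry.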
Sources and Transfer of Plastics to the Environment
After manufacturing and use, plastic products face several different fates: they may be recycled, destroyed thermally, or discarded. Reprocessing or recycling is a popular option, but while this extends the plastic’s lifetime, it ultimately only delays the plastic’s final disposal. Additionally, the recycling process often involves contamination and mixing of different polymer types, reducing the plastic’s quality and economic value. Alternatively, plastic can be broken down using thermal incineration. The efficiency and safety of this process depend upon incinerator design and operation as well as emission control technology. Depending on these factors, incineration may result in loss of energy, toxic byproducts, or emissions (Geyer et al., 2017). Pyrolysis is an alternative form of thermal destruction that has received recent
interest due to its ability to convert plastic waste into energy. It involves degrading long-chain polymer molecules into smaller, less complex molecules in the absence of oxygen through pressure and intense heat. The three major products are oil, gas, and char, which can then be used in furnaces, boilers, or other appliances. However, further research is still necessary to optimize parameters for different plastics and make pyrolysis more efficient (Anuar Sharuddin et al., 2016). Finally, discarded plastic may find its way to either a managed containment system such as a landfill, an uncontained open dump, or the natural environment (Geyer et al., 2017). This last option poses the greatest risk of direct, harmful interactions with wildlife and of facilitating the transfer of plastics from terrestrial to marine environments. Plastic litter is now ubiquitous, especially in marine ecosystems around the world. Despite the vastness of the ocean, which covers around 70% of the earth and has an average depth of thousands of feet, ocean currents, winds, river outflow, and drift can allow plastic to travel large distances to otherwise remote and pristine locations. Plastic has been found on mid-ocean islands, at the poles, and at considerable ocean depths (Cole et al., 2011). There are several avenues and factors which can introduce plastic into the ocean. Prior to widespread bans preventing at-sea vessels from dumping waste overboard, a large amount of plastic was directly introduced into the ocean from fishing and other marine operations. In 1975 alone, the estimated annual flux of litter of all plastic materials to the ocean was 6.4 million metric tons (MT) based only on discharges from ocean vessels, military operations, and ship casualties (Jambeck et al., 2015). Although this particular pathway has been at least partially curtailed, a significant amount of plastic still enters marine environments from nearby coastal cities. In fact, plastic litter with a terrestrial source contributes approximately 80% of plastic found in marine litter. Half the world’s population lives within 50 miles of the coast, and plastic can travel from coastal cities to the ocean through wastewater systems, rivers, or by being blown offshore (Cole et al., 2011). The unidirectional flow of freshwater systems drives the movement of plastic towards the sea, and this flow can be exacerbated by extreme weather such as flash flooding or hurricanes. Additionally, coastal tourism and recreational fishing are important contributors to ocean plastic. Discarded or lost fishing gear such as monofilament line and
High Density Polyethylene (HDPE) Figure 3: HDPE is used in plastic pipes and bottles. HDPE is synthesized using a Ziegler-Natta catalyst, which suppresses branching and yields a linear, high-density polymer. Estimated half-life is 5,000 years on land and 1,200 years in marine environments (all chemical structures in figures 3-10 are original figures generated in ChemDraw). RECYCLING NUMBER: 2
Low Density Polyethylene (LDPE) Figure 4: LDPE is used in plastic bags, containers, and commercial packaging. It is synthesized by high-pressure, free-radical polymerization in the absence of a Ziegler-Natta catalyst, which results in a branched structure with lower density. Estimated half-life is 4.6 years on land and 3.4 years in marine environments. RECYCLING NUMBER: 4
Polypropylene (PP) Figure 5: PP is a durable yet flexible thermoplastic polymer used in products that must withstand mechanical force or frequent heating. It is synthesized by chain-growth polymerization of propylene monomers, whose double bonds open during the reaction to extend the polymer chain. Estimated half-life is 780 years on land and 53 years in marine environments. RECYCLING NUMBER: 5
Polyphthalamide (PPA) Figure 6: PPA is a semi-aromatic polyamide thermoplastic polymer with useful chemical properties. The polymer is synthesized via a polycondensation reaction (the amide nitrogen attacks the carbonyl adjacent to the aromatic ring). Oxidative cleavage of the carbon backbone by aerobic bacteria is possible due to the presence of polarizable bonds. Estimated half-life is 0.19 years on land and 2.5 years in marine environments. RECYCLING NUMBER: 7 (OTHER)
Polyvinyl Chloride (PVC) Figure 7: PVC is synthesized through suspension polymerization, a process in which monomer droplets suspended in water polymerize into polymer particles. The chloride functionality increases intermolecular forces relative to polyethylene and gives PVC products additional stability. PVC degradation has not been thoroughly studied, but the half-life is estimated to be at least 2,500 years. RECYCLING NUMBER: 3
Polyethylene terephthalate (PET) Figure 8: The monomeric unit of PET is derived from the esterification of ethylene glycol and terephthalic acid. These PET units are polymerized via a polycondensation reaction (nucleophilic attack by the hydroxyl at the carbonyl carbon). Newly synthesized PET is a viscous material that can easily be spun into fibers or used to make plastic bottles. Estimated half-life is 53 years in marine environments. RECYCLING NUMBER: 1
Polyurethane (PU) Figure 9: Thermosetting PU hardens irreversibly and is commonly used in adhesives, sealants, and elastomers. As a thermoplastic, PU is used primarily as foam cushioning in various products. The polymer is synthesized by reacting isocyanates with polyols, forming carbamate (urethane) linkages. PU has an environmental half-life of about 0.1 years (roughly 36 days). RECYCLING NUMBER: 7 (OTHER)
Polystyrene (PS) Figure 10: PS is an aromatic thermoplastic polymer created by free-radical polymerization of styrene. It is highly unreactive and long-lived, making it a useful plastic in appliances and insulation. However, single-use PS products are a serious environmental concern, since PS is not easily broken down in the environment. Extruded PS (XPS), the main component of Styrofoam®, is particularly harmful due to its minimal biodegradability. RECYCLING NUMBER: 6
Figure 11: The pathway by which plastic enters the world’s oceans. This image displays estimates of global plastics entering the oceans from land-based sources. Source: Wikimedia Commons, Author: Our World in Data
Discarded or lost fishing gear such as monofilament line and nylon netting can be particularly detrimental due to the strings’ neutral buoyancy, which increases their capacity to entangle marine biota in a phenomenon known as “ghost fishing” (Cole et al., 2011). To calculate the amount of mismanaged plastic waste that poses a risk of entering the ocean, Jambeck et al. (2015) investigated 192 coastal countries with at least 100 permanent residents that border the Atlantic, Pacific, and Indian Oceans as well as the Mediterranean and Black Seas. Using a framework that multiplies the total mass of waste generated per capita annually by the percentage of waste that is plastic and by the percentage of plastic waste that is mismanaged, the researchers found that 99.5 million MT of plastic waste were generated in coastal regions in 2010. Of this, 31.9 million MT were classified as mismanaged, and an estimated 4.8 to 12.7 million MT entered the ocean. The two major factors influencing waste streams in the different regions were population size and the quality of waste management systems. The researchers estimated that without improvement, the amount of plastic entering the ocean will increase by an order of magnitude by 2025 (Jambeck et al., 2015). Figure 11 summarizes the pathway that can lead to plastic accumulation in the ocean.
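The arithmetic of this framework is simple enough to sketch in Python. The per-country inputs below are hypothetical placeholders rather than values from the study; the 15-40% conversion range applied at the end, however, is consistent with the paper’s own aggregate figures, since 31.9 million MT of mismanaged waste maps onto the quoted 4.8 to 12.7 million MT of ocean input.

    def mismanaged_plastic_mt(coastal_population, waste_kg_per_person_day,
                              plastic_fraction, mismanaged_fraction):
        # Annual mismanaged plastic waste, in metric tons (MT)
        annual_waste_kg = coastal_population * waste_kg_per_person_day * 365
        return annual_waste_kg * plastic_fraction * mismanaged_fraction / 1000

    # Hypothetical coastal country: 50 million coastal residents generating
    # 1.2 kg of waste per person per day, of which 11% is plastic and 60%
    # of that plastic is mismanaged.
    mismanaged = mismanaged_plastic_mt(50e6, 1.2, 0.11, 0.60)
    ocean_low, ocean_high = 0.15 * mismanaged, 0.40 * mismanaged
    print(f"Mismanaged: {mismanaged:,.0f} MT/yr; "
          f"ocean input: {ocean_low:,.0f}-{ocean_high:,.0f} MT/yr")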
Another emerging concern is the introduction of microplastics into marine environments. Microplastic is usually defined as plastic debris less than 5 mm in size, and it comes in two types: primary and secondary. Primary microplastics are manufactured at a microscopic size and include materials used in facial cleansers, cosmetics, air-blasting media, and medicine (Cole et al., 2011). Cosmetic products have increasingly replaced natural exfoliating ingredients such as oatmeal and pumice with microplastics, often marketed as “microbeads” or “microexfoliates.” Fortunately, the Microbead-Free Waters Act of 2015 prohibited the manufacturing, packaging, and distribution of rinse-off cosmetics containing plastic microbeads, significantly decreasing the contribution of cosmetic products to microplastic in the water supply. Conversely, secondary microplastics are derived from the breakdown of larger plastic debris; this fragmentation can be promoted by abrasion in the natural environment as well as by sunlight (Cole et al., 2011). Exploring the impact of sewage on microplastic accumulation on shorelines, Browne et al. collected sediment from 18 shorelines worldwide, representing six continents from the poles to the equator. They extracted microplastic from the samples, analyzing its identity as well as the sediment particle size, and compared these results to analyses of local sewage effluent and washing machine effluent. The researchers found higher levels of microplastic in more densely populated areas, and that even at locations where sewage had not been dumped for more than a decade,
disposal sites still contained over 250% more microplastic than reference sites. Additionally, they found that microplastic fibers were mainly derived from sewage via washing clothes (rather than fragmentation or other cleaning products). Upon further investigation, Browne et al. showed that a single garment can produce over 1,900 fibers per wash, and that most sewage treatment plants are not equipped with filters that are specifically designed to retain microplastics (Browne et al., 2011).
4. The Effect of Plastic Exposure on Organisms

Plastic that is dumped or inadvertently littered into the ocean can affect the environment at a startling scale. Since around 60% of produced plastic is buoyant in seawater, large amounts of floating plastic waste can persist for long periods on the ocean surface. Currents guide these pollutants to central locations where they become trapped and form huge floating bodies of trash. In the North Pacific Subtropical Gyre, a disruptive collection of garbage from many countries around the world has clustered to form the infamous Great Pacific Garbage Patch (GPGP). This massive plastic accumulation zone has amassed at least 79 thousand tons of plastic across 1.6 million km² and is causing drastic effects on the environment and marine life in its vicinity (Lebreton et al., 2018). Perhaps the most direct impact that plastic waste has on marine ecosystems is the contamination of food chains.
Figure 12: Plastic Bag Jellyfish. An exhibit at the Mote Marine Aquarium, FL, depicting the similarity between floating plastic bags and jellyfish, a common prey for marine organisms. Source: Wikimedia Commons, Author: U+1F360
As shown in Figure 12, it is easy for species to mistake floating plastic for food; upon ingestion, plastic can cause gastrointestinal blockages, ulceration, and even death (Teuten et al., 2009). Endangered and keystone species vulnerable to the sudden rise in oceanic plastic pollution are especially affected as plastic becomes another stressor for survival. In a study off the North Atlantic subtropical gyre, researchers recorded that 83% of endangered loggerhead turtles had ingested plastic items, an incredibly high percentage that suggests detrimental health effects for these protected reptiles (Pham et al., 2017). Another study estimates that around 90% of seabirds have plastic present in their gastrointestinal tract, as demonstrated by the dead albatross chick shown in Figure 13 (Wilcox, Sebille, & Hardesty, 2015). Considering the wide diversity of organisms supported by oceans, it is staggering how common plastic is in so many animals’ diets. The problem shifts as the plastic decreases in size to the micro scale. In their study of the GPGP, Lebreton et al. (2018) also revealed that around 8% of the total plastic mass was comprised of microplastics. While macroplastics cause great damage to the bodies of larger organisms, microplastics can affect populations of invertebrates and microorganisms in aquatic and marine environments. At the micro scale, plastic can cause a variety of repercussions that might not be as obvious as the impacts of larger plastics. Exposure to microplastics, particularly in smaller organisms, can lead to ingestion and likely exposure to toxic additives (phthalates, BPA, pesticides, etc.) that induce inflammation, detrimentally increased immune activity, and mortality (Carbery et al., 2018). Harmful organic compounds and metals from these particles can also greatly inhibit growth and alter the way microorganisms behave (Tetu, Sarker, & Moore, 2020).
Figure 13: Dead Albatross Chick. A Laysan albatross chick lies dead, its belly fatally filled with macroplastics. Source: Wikimedia Commons, Author: Claire Fackler
For example, nitrogen-cycling microbes perform at varying degrees of efficiency (both increased and decreased) when affected by microplastics, signifying that chemical cycles may be another victim of pollution (Seeley et al., 2020). For microorganisms and other invertebrates that rely on nitrogen compounds for nutrition and development, this poses a huge issue, as microplastic contamination spans massive areas across the ocean.
Microplastic uptake unfortunately occurs in a host of organisms, including zooplankton, mollusks, crustaceans, and other invertebrates (Carbery et al., 2018). Because these organisms are common prey for larger marine species, the trophic transfer of microplastics and their associated toxic chemicals poses an additional problem for the ecosystem. Although multiple studies have shown that trophic transfer does occur, organisms at higher trophic levels show negligible levels of contaminants (Teuten et al., 2009). This does not mean, however, that the issue is harmless or irrelevant for human consumption, since research into microplastic contamination of the seafood in our diets has been lacking (Carbery et al., 2018). For smaller organisms that fulfill key roles in the environment, the massive amount of microplastics in the ocean continues to affect them directly, and as levels increase, trophic transfer may become more significant.
5. Recycling

5.1 Overview of recycling system mechanics

As the use of plastic has increased substantially, so has the generation of plastic waste. The question remains: how is society to manage all of this waste? Over the past few decades, solid waste management agencies have increasingly focused on recycling systems as a way to deal with plastic waste. However, only a small percentage of plastic is actually recycled: in 2016, just 9% of plastic waste generated in the U.S. was recycled, though this was still a jump from the disappointing 2% recycled in 1990 (US EPA, 2017). Plastic waste that is recycled typically comes from two sources: pre-consumer and post-consumer plastics. Pre-consumer plastics are scraps from industrial manufacturing processes that can be collected and recycled. Post-consumer plastics are plastic products that have been used and disposed of by consumers. Recycling collection systems are now commonplace throughout much of the U.S., from residential curbside pick-up to recycling bins in offices and public spaces. Usually, different types of plastic are collected alongside other recyclables (e.g., paper,
aluminum, glass) and brought to a recycling facility for sorting. There are two main types of recycling processes: mechanical and tertiary. Mechanical recycling involves reprocessing plastics into products similar to their original use. Tertiary processing involves chemically breaking down plastics into their basic components of oils and gases to be reprocessed as feedstock in the manufacturing of new plastics (Al-Salem et al., 2009). Because of its high operating costs, tertiary recycling is not very common. For mechanical recycling, sorting recyclable plastics by polymer type quickly and accurately is the biggest challenge. Pre-consumer plastic supplies tend to be easier to sort, but post-consumer plastic supplies face many sources of contamination. In this case, “contamination” refers to any material that is not recyclable but improperly ends up in recycling bins. Plastics need to be separated by polymer type, and they need to be pure. At present, flexible plastic such as plastic bags actually counts as contamination, because there is a lack of machinery to separate it from other plastics (Hopewell et al., 2009). Furthermore, because of the difficulty of separating pure plastics from other materials, mechanical recycling has focused on easily identifiable products such as PET soft drink bottles and HDPE milk bottles, as opposed to multicomponent items such as paper plates with a plastic lining. Thus, mechanical recycling mostly processes rigid plastics that are easily identifiable. Commonly, facilities employ manual labor to initially sort plastic from paper, metals, glass, and non-recyclable materials. Next, the plastic must be further sorted by polymer type. This is immensely important because different types of plastic are incompatible at the molecular level; for a recycled material to be most useful, each stream must be composed of only one type of plastic. For example, PET molecules that end up in PVC streams will form lumps of crystalline PET, making the recycled material less valuable (Hopewell et al., 2009). Recycling facilities vary in the technology and methods used to sort and clean plastics, but the general process is as follows: plastic is first cut, shredded, or ground into flakes (Al-Salem et al., 2009).
Figure 14: Recycling plastic waste in Hoi An, Vietnam. Source: Flickr, Author: Global Environmental Facility
Then, these plastic flakes can be further separated by polymer type using sink/float separation. Water, used as a medium, can separate the polyolefins (PP, HDPE, L/LLDPE), which float, from PVC, PET, and PS, which sink. Denser liquid media can further isolate PS, but PVC and PET have similar densities, so separating the two requires other methods. One popular method uses thermal kilns: PVC turns black upon heating and can then be sorted from PET by color (Hopewell et al., 2009). Another practical sorting method is triboelectric separation, which involves rubbing different plastics together; one type becomes positively charged while the other becomes negatively charged or remains neutral, allowing the plastics to be identified by charge. After sorting, the plastic flakes must also be cleaned, typically with water or with “dry cleaning” methods that use friction; chemical cleaning, for example, is used to remove glue residue (Al-Salem et al., 2009). Finally, once the plastics are cleaned and separated by polymer and color, they can be extruded into pellets and sold to be made into new products.
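Density-based sorting lends itself to a compact illustration. The Python sketch below uses typical handbook densities (approximate assumed values, in g/cm³, which vary in practice with additives, fillers, and entrained air) to show why water cleanly splits off the polyolefins, why a denser medium can then isolate PS, and why PVC and PET resist this approach.

    # Approximate handbook densities in g/cm^3 (assumed, for illustration)
    POLYMER_DENSITY = {
        "PP": 0.905, "HDPE": 0.95, "LDPE": 0.92,  # polyolefins: float in water
        "PS": 1.05, "PVC": 1.40, "PET": 1.38,     # denser than water: sink
    }

    def sink_float(flakes, medium_density):
        # Split a stream of shredded flakes against a liquid medium
        floats = [p for p in flakes if POLYMER_DENSITY[p] < medium_density]
        sinks = [p for p in flakes if POLYMER_DENSITY[p] >= medium_density]
        return floats, sinks

    stream = ["PP", "PET", "HDPE", "PVC", "PS", "LDPE"]
    floats, sinks = sink_float(stream, 1.00)   # water: PP, HDPE, LDPE float
    light, heavy = sink_float(sinks, 1.10)     # denser medium: PS floats off
    # 'heavy' still holds PVC (1.40) and PET (1.38): their densities are too
    # close for sink/float, hence the thermal and triboelectric methods above.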
5.2 The current global recycling crisis

Like much of the manufacturing industry, the recycling trade has largely globalized due to cheaper labor and lax environmental regulation overseas. At its peak, over half of the world’s plastic waste intended for recycling was exported; of those exports, China and Hong Kong imported 72.4% for processing (Brooks et al., 2018). The U.S. is the largest plastic waste exporter in the world (Wang et al., 2020), and before 2017, China bought about half of these exports, recycling approximately 1.6 million tons of the U.S.’s plastic waste per year (McCormick et al., 2019). China has a large capacity for recycling, and for the U.S., shipping plastic waste to China was cheap: shipping companies offered low rates because, once the waste was unloaded, the empty containers could be filled with manufactured goods bound for the U.S.
However, in 2017, China announced that it would permanently ban imports of recyclables in any shipment with more than 0.5% contamination (Brooks et al., 2018). This was largely because high contamination rates in plastic streams made sorting expensive and the recycling process less profitable. The Chinese government is also trying to reduce the mismanagement of plastic waste rampant in many Chinese recycling operations, where non-recyclable contaminants and plastics are often burned or dumped into the environment, leading to environmental and health problems (Joyce, 2019). As a result of the ban, countries such as the U.S. have had to redirect their recyclable waste. It is estimated that 111 million metric tons (MMT) of plastic waste will be globally displaced over ten years due to the new ban (Brooks et al., 2018). Much of the waste has been redirected to Southeast Asian countries: in the year following the China ban, exports from the U.S. increased almost 7,000 percent to Thailand and several hundred percent to Malaysia (Joyce, 2019). Figure 14 shows a recycling worker in Vietnam sorting through plastic bottles. By 2018, these Southeast Asian countries had become overwhelmed with the amount of waste imports. An inundation of plastic waste and the mismanagement of
recycling operations have led to widespread plastic pollution and open burning, exposing many recycling workers to toxic fumes. This has led Southeast Asian countries to begin restricting plastic waste imports as well. For example, Thailand plans to phase out all plastic waste imports over the course of two years, and Malaysia and Vietnam have stopped issuing, or have revoked, permits and licenses for plastic waste imports (Wang et al., 2020). Overall, after China’s ban, only about 56% of the plastic waste that the U.S. used to export is still being accepted by foreign markets (McCormick et al., 2019).
Because of plastic recycling’s global dependency, the U.S. has built little infrastructure for recycling its own waste. Now, with global restrictions on recycling imports, there is no longer a ready market for plastic recycling. The cost of recycling has gone up, and local municipalities throughout the U.S. are left with two options: pay much more to recycle plastic waste, or throw it away (Semuels, 2019). For example, the city manager of Franklin, New Hampshire used to be able to recycle residential waste by selling it for $6 a ton; now, the town’s waste management contractor charges $125 a ton to recycle, or $68 a ton to incinerate (Semuels, 2019). This is part of a national trend, as many towns can no longer afford recycling and are left to either incinerate plastic or send it to landfills. The global recycling system simply does not have the capacity to manage all of the plastic waste being produced. Moving forward, the U.S. has several options for improving its recycling system. The nation may consider expanding its recycling infrastructure by building more processing facilities; improving sorting technology might make processing plastic recycling more cost-effective, thereby incentivizing recycling. There is also increasing focus on reducing plastic use at the source, such as by designing packaging that uses less plastic.
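To see why municipalities are abandoning recycling, it helps to annualize the Franklin, New Hampshire figures quoted above. In the Python sketch below, the per-ton prices come from the text (Semuels, 2019); the 1,000-ton yearly volume is a hypothetical placeholder.

    tons_per_year = 1_000                    # hypothetical annual volume
    revenue_before = 6 * tons_per_year       # town once EARNED $6 per ton
    cost_recycle = 125 * tons_per_year       # now PAYS $125 per ton to recycle
    cost_incinerate = 68 * tons_per_year     # or $68 per ton to incinerate
    swing = revenue_before + cost_recycle    # total budget reversal
    print(f"Recycling: ${cost_recycle:,}/yr vs incinerating: ${cost_incinerate:,}/yr")
    print(f"Swing from the old $6/ton market: ${swing:,}/yr")

At this volume a town goes from earning $6,000 a year to paying $125,000, a $131,000 reversal, which makes the $68-per-ton incinerator the economically rational, if environmentally worse, choice.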
6. Clean-Up and Biological Degradation

Because of plastic’s resistance to degradation and its negative environmental consequences, one of the greatest challenges besides reducing plastic use is keeping plastic away from potential wildlife victims. Many solutions have been proposed for dealing with plastic that ends up in waterways and, ultimately, the ocean. In recent years, one project that has garnered attention is The Ocean Cleanup, spearheaded by Dutch entrepreneur
Boyan Slat. The project aims to use the ocean’s own currents to collect surface-level macroplastic waste, with an intense focus on breaking up the Great Pacific Garbage Patch. Named by Time Magazine as one of the best inventions of 2015, The Ocean Cleanup is a non-profit environmental organization whose main focus is developing a screen net that uses natural wind and water currents to collect plastics. Current models and designs employ a 600-meter-long U-shaped net that floats at surface level, with a 3-meter skirt attached below the main floater (“Final Design”, 2018). The company claims that the skirt creates a downward draft that allows marine life to pass safely below the net. The skirt is meant to trap smaller plastic, while the surface-level floater prevents larger plastic from passing over it. Although the plastic is subject to movement by ocean currents, the floater moves with wind currents, while the plastic sits just below the water surface and is less affected (“Final Design”, 2018). Once the plastic is collected, the company plans to sort and recycle the materials. The Ocean Cleanup has prototyped its design and is in the early stages of development, with a full rollout scheduled for 2021 (“Back in the Patch”, 2019). However, its early models have faced difficulties with components snapping off and losing structural integrity under the constant movement of the ocean. There have also been issues keeping plastic in the screen net once it has been caught: since it takes time for boats to retrieve the built-up plastic, much of it is lost back to the ocean in the meantime (Holst, 2019). Critics also argue that the project is not a worthwhile use of resources, because the current model only catches larger pieces of plastic, which do not pose the greatest immediate risk to sea life. Slat counters that since microplastics are created by the breakdown of larger pieces, the effort is not futile. In addition to its large-scale ocean screens, The Ocean Cleanup is developing a product called the Interceptor that collects plastics at river outflows (“Rivers”, 2020). The company claims that 1,000 rivers in the world are responsible for 80% of ocean plastic waste. An Interceptor prototype, powered by solar energy, is currently operational in the Klang River in Malaysia. With a dual-embankment system, it uses the river’s natural flow to push plastic toward a receptacle, with solar-powered conveyor belts and shuttles to store the plastic until it is collected
Figure 15: Lignin is a structurally complex aromatic biopolymer that can be modified to form a wide range of polymers, including plastic-like materials. Source: Wikimedia Commons, Author: Smokefoot
(“Rivers”, 2020). Further data collection will provide information on how effective the Interceptor is at removing plastic. An entrepreneurial venture called the Seabin Project has similar goals for capturing plastic. According to the organization, a seabin is a “‘trash skimmer’ designed for harbors, marinas, or any water body with a calm environment and suitable services available” (“Trademark: Seabin Project”, 2018). By pumping plastic-filled water into catch bags, the device can capture both macro- and microplastics. In addition, newer versions of the seabin come equipped with oil pads that can clean up toxic water contaminants as an added benefit. The current version of the seabin is capable of removing about 4 kg of debris per day, or roughly 1.4 tons per year, depending on the density of trash in the area where it is deployed. 4ocean is a for-profit company that claims to “hire boat captains and other local workers to clean the ocean and coastlines full time” (“4ocean has removed”, 2020). The company converts the collected plastic into bracelets, which it markets and sells to the public. Although it is a for-profit company, 4ocean claims to have collected over seven million pounds of trash since its founding and pledges to remove at least one pound of trash per bracelet sold. Entrepreneurial efforts such as these, in addition to small-scale community efforts, begin to tackle the massive problem of ocean plastic pollution. But innovation in oceanic plastic reduction has extended to the lab as well. Scientists have proposed other methods for
plastic management, such as utilizing bacteria to catalyze plastic degradation. Depending on the type of plastic involved, degradation can take hundreds of years, and because plastic is largely resistant to degradation, catalysts are required to speed its breakdown. Regarding the feasibility of degradation by selected microbes, “humidity, temperature, pH, salinity, the presence or absence of oxygen, sunlight, water, stress and culture conditions not only affect the polymer degradation, but also have a crucial influence on the microbial population and enzyme activity” that would work in conjunction to degrade such plastics (Kale et al., 2015). The idea of using microbes to degrade plastics arose in the early 1960s, when a scientist reported that “several microorganisms can consume paraffin as a carbon source.” Numerous studies have since been conducted on various types of plastics, including PVC films, PHA and PHB, polyester-polyurethane, polyethylene bags, and low-density polyethylene film. With varying degrees of success, these studies have found significant increases in the rate of degradation due to added microbes such as marine fungi, naturally harvested lichen, and naturally occurring soil bacteria. Biodegradation works to break down the plastics, but there are still concerns regarding the toxicity of the degraded materials, since many plastics contain harmful additives meant to increase strength and durability. For example, adipates and phthalates leaching out of polystyrene food containers can disrupt human hormones and have been shown to have carcinogenic effects. Degraded plastics in other studies have been shown to stunt plant growth in
affected soils as well as diminish the efficiency of nutrient uptake by plant roots (Kale et al., 2015). In addition, CO₂ emissions from such degradation are another concern, which at larger scales may contribute to rising CO₂ levels and climate change. Though many bacteria have shown potential to speed up degradation, large-scale implementation of these conditions for plastic treatment requires further screening for efficient organisms and advances toward decreased environmental impact (Kale et al., 2015). More experiments must also be conducted to ensure that the bacteria do not adversely affect other organisms in the environment.
7. Plastic Alternatives
The degree to which humans rely on plastics, along with their potential harm to the environment, overabundance, and mismanagement, necessitates the engineering of new, sustainable materials that mimic the properties of conventional plastics. A truly “sustainable” plastic must fulfill two key criteria: it must be derived from renewable precursors, and it must be readily biodegradable (Zhang et al., 2018). Researchers around the world have made great strides but still have many challenges to overcome before sustainable plastics become the norm. Recent advances in polymer science and catalysis have allowed plastics to be made from renewable materials like biomass rather than from fossil resources; building blocks such as ethylene, propylene, and glycerol can now be sourced renewably. Additionally, bio-based polymers can be exploited to form new materials with uses similar to those of conventional plastics. Chitin, cellulose, and starch are all available in biomass and can be modified to fulfill key plastic functionalities (Gandini & Lacerda, 2015). One particularly notable material in this category is
lignin, a major component of lignocellulosic biomass. Though the structure of lignin is complex and variable, it always contains hydroxyl groups that can be exploited for modification. Esterification (forming R–COO–R′ linkages) or etherification (forming R–O–R′ linkages) produces epoxy resins; similarly, esterification, phenolation, or urethanization leads to the synthesis of polyurethanes. These hydroxyl groups can also serve as anchors for lignin graft copolymers, molecules with a lignin core and branching polymeric side chains (Figueiredo et al., 2018). In sum, lignin is a widely available polymer and offers promise for the development of plastics from sustainable precursors.
Further, promising monomers that cannot be found in fossil resources, including terpenes, terpenoids, rosin-derived structures, sugar-derived structures, citric acid, and tartaric acid, have also been studied as potential building blocks for conventional plastics. Synthesizing conventional plastics from these classes of monomers would produce materials with the same biodegradability characteristics as petroleum-based products but would place less strain on limited global fossil-fuel reserves (Gandini & Lacerda, 2015). Unfortunately, the current plastics economy is well established, and our socioeconomic system is not equipped to incentivize sustainable alternatives. Biorefinery infrastructure is expensive, which hinders the commercialization of renewable plastics, and current methods to refine biomass, a necessary step prior to its conversion into new materials, are energy intensive. Because we lack a universal, reliable quantitative tool to model environmental sustainability, it is difficult to evaluate whether such initial energy costs will dwarf the environmental benefits of renewable plastics (Zhang et al., 2018). The second criterion for an environmentally sustainable plastic is biodegradability. Biodegradation of plastics occurs in four steps. First, microorganisms grow around a material and carry out biodeterioration, the superficial breakdown of the plastic into smaller pieces. Second, extracellular enzymes secreted by these microorganisms depolymerize the fragmented material into monomers and oligomers. Third, microbes take up these small molecules through bioassimilation. In the fourth and final step, mineralization, the microbes metabolize the small molecules into CO₂, CH₄, H₂O, and N₂ (Haider et al., 2019).
Figure 16: While many materials are labeled as biodegradable, few labels specify the conditions under which a material will biodegrade. Adding this classification is important, as the material’s environment greatly impacts the efficacy of biodegradation. Source: Flickr, Author: Doug Beckers
The success of biodegradation depends on many environmental factors which are difficult to control in the real world, including exposure to UV light, temperature, pH, and presence of microorganisms. Polymers that are considered biodegradable in one setting are unlikely to biodegrade in all settings. PLA-based bioplastics, for example, have been shown to biodegrade in soil and compost, but the various studies which demonstrate these results were carried out under specific temperature and humidity conditions. Further, the efficiency of biodegradation varied across the studies (Emadian et al., 2017). This demonstrates that “biodegradability” cannot be used as a blanket term; it must be qualified with the specific environmental conditions under which a material degrades.
8. Conclusion

It is undeniable that plastic has spurred human innovation and our advancement as a species. However, the massive presence of plastic has taken a toll on the natural environment. Recycling has helped delay the final disposal of plastics, but the current recycling system is simply incapable of processing the sheer amount of plastic being produced. To address this problem, significant effort has gone into crafting policies that reduce plastic usage and into creating new materials to replace plastic. Government regulation has been an important step in reducing plastic waste. Canada recently announced a ban on single-use plastics that will begin in 2021, and eight U.S. states have already prohibited the use of plastic bags. Reusable grocery bags and water bottle refilling stations are becoming commonplace as citizens begin to embrace environmentalism. However, individual plastic use reflects the abundance of plastic materials available for purchase. Efforts to limit industrial plastic production are currently underway: the Break Free From Plastic Pollution Act is a bill that would make U.S. plastic manufacturing companies accountable for their production of plastic waste, including fees on single-use plastics and a requirement to fund recycling programs (Lowenthal, 2020). The business community, particularly science-based start-ups, has also made strides to combat plastic waste. Notpla, a London-based startup, has developed a completely biodegradable and edible packaging solution for liquids. Made from seaweed, these capsules
can store condiments, water, and other beverages. Notably, the company produced 42,000 units for the London Marathon, distributed to runners at hydration stations (Rushe, 2020). EggPlant, an Italian startup, uses wastewater as a raw material to synthesize high-performance, fully biodegradable bioplastics (StartUs Insights, 2019). These companies demonstrate that innovators outside the traditional academic realm also recognize the need for sustainable plastic alternatives and are devoting their efforts to the cause. Ultimately, the world has become inundated with plastics. While this has provided great benefits for human industry and innovation, it has also had widespread negative impacts on environmental and human health. Still, the use of plastic increases every year: if current trends continue, humans are projected to produce 26 billion metric tons of plastic waste by 2050 (Geyer et al., 2017). Solutions are in development, along with efforts to reduce our reliance on plastic, but more drastic changes must happen before the damage of plastic waste becomes irreversible. Just as it took years of research and intense scaling to create the plastic economy the world engages in today, it will take the same, if not greater, collective effort to restore the environment and reverse the plastic contamination caused within the past century.
References

4ocean has removed more than 7 million pounds of trash, expands bracelet-funded cleanups to Central America. (2020, January 29). Forbes. Retrieved May 15, 2020, from https://www.forbes.com/sites/jeffkart/2020/01/29/4ocean-has-removed-more-than-7-million-pounds-of-trash-expands-bracelet-funded-cleanups-to-central-america/

5 top biodegradable material startups out of 700 in packaging. (2019, April 5). StartUs Insights. https://www.startus-insights.com/innovators-guide/5-top-biodegradable-material-startups-out-of-700-in-packaging/

Allen, S., Allen, D., Phoenix, V. R., Le Roux, G., Durántez Jiménez, P., Simonneau, A., Binet, S., & Galop, D. (2019). Atmospheric transport and deposition of microplastics in a remote mountain catchment. Nature Geoscience, 12(5), 339–344. https://doi.org/10.1038/s41561-019-0335-5

Al-Salem, S. M., Lettieri, P., & Baeyens, J. (2009). Recycling and recovery routes of plastic solid waste (PSW): A review. Waste Management, 29(10), 2625–2643. https://doi.org/10.1016/j.wasman.2009.06.004

Andrady, A. L., & Neal, M. A. (2009). Applications and societal benefits of plastics. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1526), 1977–1984. https://doi.org/10.1098/rstb.2008.0304
Anuar Sharuddin, S. D., Abnisa, F., Wan Daud, W. M. A., & Aroua, M. K. (2016). A review on pyrolysis of plastic wastes. Energy Conversion and Management, 115, 308–326. https://doi.org/10.1016/j.enconman.2016.02.037

Back in the patch: The Ocean Cleanup deploys again. (2019). Retrieved May 15, 2020, from ProQuest. https://search-proquest-com.dartmouth.idm.oclc.org/docview/2283116636?accountid=10422&rfr_id=info%3Axri%2Fsid%3Aprimo

Break Free From Plastic Pollution Act, H.R. 5845, 116th Cong. (2020). https://www.congress.gov/bill/116th-congress/house-bill/5845

Brooks, A. L., Wang, S., & Jambeck, J. R. (2018). The Chinese import ban and its impact on global plastic waste trade. Science Advances, 4(6), eaat0131. https://doi.org/10.1126/sciadv.aat0131

Browne, M. A., Crump, P., Niven, S. J., Teuten, E., Tonkin, A., Galloway, T., & Thompson, R. (2011). Accumulation of microplastic on shorelines woldwide: Sources and sinks. Environmental Science & Technology, 45(21), 9175–9179. https://doi.org/10.1021/es201811s

Carbery, M., O’Connor, W., & Palanisami, T. (2018). Trophic transfer of microplastics and mixed contaminants in the marine food web and implications for human health. Environment International, 115, 400–409. https://doi.org/10.1016/j.envint.2018.03.007

Chamas, A., Moon, H., Zheng, J., Qiu, Y., Tabassum, T., Jang, J. H., Abu-Omar, M., Scott, S. L., & Suh, S. (2020). Degradation rates of plastics in the environment. ACS Sustainable Chemistry & Engineering, 8(9), 3494–3511. https://doi.org/10.1021/acssuschemeng.9b06635

Cole, M., Lindeque, P., Halsband, C., & Galloway, T. S. (2011). Microplastics as contaminants in the marine environment: A review. Marine Pollution Bulletin, 62(12), 2588–2597. https://doi.org/10.1016/j.marpolbul.2011.09.025

Emadian, S. M., Onay, T. T., & Demirel, B. (2017). Biodegradation of bioplastics in natural environments. Waste Management, 59, 526–536. https://doi.org/10.1016/j.wasman.2016.10.006

FDA. (2017, November 3). The Microbead-Free Waters Act: FAQs. U.S. Food & Drug Administration. https://www.fda.gov/cosmetics/cosmetics-laws-regulations/microbead-free-waters-act-faqs

Figueiredo, P., Lintinen, K., Hirvonen, J. T., Kostiainen, M. A., & Santos, H. A. (2018). Properties and chemical modifications of lignin: Towards lignin-based nanomaterials for biomedical applications. Progress in Materials Science, 93, 233–269. https://doi.org/10.1016/j.pmatsci.2017.12.001

Freinkel, S. (2011). Plastic: A toxic love story. Houghton Mifflin Harcourt.

Gandini, A., & Lacerda, T. M. (2015). From monomers to polymers from renewable resources: Recent advances. Progress in Polymer Science, 48, 1–39. https://doi.org/10.1016/j.progpolymsci.2014.11.002

Geyer, R., Jambeck, J. R., & Law, K. L. (2017). Production, use, and fate of all plastics ever made. Science Advances, 3(7), e1700782. https://doi.org/10.1126/sciadv.1700782

Gironi, F., & Piemonte, V. (2011). Life cycle assessment of polylactic acid and polyethylene terephthalate bottles for drinking water. Environmental Progress & Sustainable Energy, 30(3), 459–468. https://doi.org/10.1002/ep.10490
Haider, T. P., Völker, C., Kramm, J., Landfester, K., & Wurm, F. R. (2019). Plastics of the future? The impact of biodegradable polymers on the environment and on society. Angewandte Chemie International Edition, 58(1), 50–62. https://doi.org/10.1002/anie.201805766

Holst, R. R. (2019). The Netherlands: The 2018 agreement between The Ocean Cleanup and the Netherlands. The International Journal of Marine and Coastal Law, 34(2), 351–371. https://doi.org/10.1163/15718085-13421090

Hopewell, J., Dvorak, R., & Kosior, E. (2009). Plastics recycling: Challenges and opportunities. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1526), 2115–2126. https://doi.org/10.1098/rstb.2008.0311

Jambeck, J. R., Geyer, R., Wilcox, C., Siegler, T. R., Perryman, M., Andrady, A., Narayan, R., & Law, K. L. (2015). Plastic waste inputs from land into the ocean. Science, 347(6223), 768–771. https://doi.org/10.1126/science.1260352

Jiao, L., & Sun, J. (2014). A thermal degradation study of insulation materials extruded polystyrene. Procedia Engineering, 71, 622–628. https://doi.org/10.1016/j.proeng.2014.04.089

Joyce, C. (2019, March 13). Where will your plastic trash go now that China doesn’t want it? NPR.org. https://www.npr.org/sections/goatsandsoda/2019/03/13/702501726/where-will-your-plastic-trash-go-now-that-china-doesnt-want-it

Kale, S. K., Deshmukh, A. G., Dudhare, M. S., & Patil, V. B. (2015). Microbial degradation of plastic: A review.

Kohan, M. I. (Ed.). (1995). Nylon plastics handbook. Hanser Publishers.

Laurichesse, S., & Avérous, L. (2014). Chemical modification of lignins: Towards biobased polymers. Progress in Polymer Science, 39(7), 1266–1290. https://doi.org/10.1016/j.progpolymsci.2013.11.004

Lebreton, L., Slat, B., Ferrari, F., Sainte-Rose, B., Aitken, J., Marthouse, R., Hajbane, S., Cunsolo, S., Schwarz, A., Levivier, A., Noble, K., Debeljak, P., Maral, H., Schoeneich-Argent, R., Brambini, R., & Reisser, J. (2018). Evidence that the Great Pacific Garbage Patch is rapidly accumulating plastic. Scientific Reports, 8(1), 1–15. https://doi.org/10.1038/s41598-018-22939-w

Lewis, S. L., & Maslin, M. A. (2015). Defining the Anthropocene. Nature, 519(7542), 171–180. https://doi.org/10.1038/nature14258

Mann, C. C. (2012). 1493: Uncovering the new world Columbus created (1st Vintage Books ed.). Vintage Books.

McCormick, E. (2019). Americans’ plastic recycling is dumped in landfills, investigation shows.

McCormick, E., Fullerton, J., Gee, A., Simmonds, C., Murray, B., Fonbuena, C., Kijewski, L., & Saraçoğlu, G. (2019, June 17). Where does your plastic go? Global investigation reveals America’s dirty secret. The Guardian. https://www.theguardian.com/us-news/2019/jun/17/recycled-plastic-america-global-crisis
Merriam-Webster. (n.d.). Plastic. In Merriam-Webster.com dictionary. Retrieved May 15, 2020, from https://www.merriam-webster.com/dictionary/plastic

Perfect plastic: How plastic improves our lives. (2014, July 1). Connecticut Plastics. http://www.pepctplastics.com/resources/connecticut-plastics-learning-center/perfect-plastic-how-plastic-improves-our-lives/

Pham, C. K., Rodríguez, Y., Dauphin, A., Carriço, R., Frias, J. P. G. L., Vandeperre, F., Otero, V., Santos, M. R., Martins, H. R., Bolten, A. B., & Bjorndal, K. A. (2017). Plastic ingestion in oceanic-stage loggerhead sea turtles (Caretta caretta) off the North Atlantic subtropical gyre. Marine Pollution Bulletin, 121(1), 222–229. https://doi.org/10.1016/j.marpolbul.2017.06.008

Plastic’s carbon footprint: Researchers conduct first global assessment of the life cycle greenhouse gas emissions from plastics. (n.d.). ScienceDaily. Retrieved May 12, 2020, from https://www.sciencedaily.com/releases/2019/04/190415144004.htm

Rivers. (n.d.). The Ocean Cleanup. Retrieved May 15, 2020, from https://theoceancleanup.com/rivers/

Rushe, E. (n.d.). The startup behind the viral whisky capsules wants to make plastic packaging disappear. Forbes. Retrieved May 5, 2020, from https://www.forbes.com/sites/elizabethrushe/2019/10/07/the-startup-behind-the-viral-whisky-capsules-wants-to-make-plastic-packaging-disappear/

Seeley, M. E., Song, B., Passie, R., & Hale, R. C. (2020). Microplastics affect sedimentary microbial communities and nitrogen cycling. Nature Communications, 11(1), 2372. https://doi.org/10.1038/s41467-020-16235-3

Semuels, A. (2019, March 5). Is this the end of recycling? The Atlantic. https://www.theatlantic.com/technology/archive/2019/03/china-has-stopped-accepting-our-trash/584131/

Shafigullin, L. N., Romanova, N. V., Gumerov, I. F., Gabrakhmanov, A. T., & Sarimov, D. R. (2018). Thermal properties of polypropylene and polyethylene blends (PP/LDPE). IOP Conference Series: Materials Science and Engineering, 412, 012070. https://doi.org/10.1088/1757-899X/412/1/012070
Shamos, M. I. (2003). The new illustrated encyclopedia of billiards. Lyons Press.

Tetu, S. G., Sarker, I., & Moore, L. R. (2020). How will marine plastic pollution affect bacterial primary producers? Communications Biology, 3(1), 1–4. https://doi.org/10.1038/s42003-020-0789-4

Teuten, E. L., Saquing, J. M., Knappe, D. R. U., Barlaz, M. A., Jonsson, S., Bjorn, A., Rowland, S. J., Thompson, R. C., Galloway, T. S., Yamashita, R., Ochi, D., Watanuki, Y., Moore, C., Viet, P. H., Tana, T. S., Prudente, M., Boonyatumanond, R., Zakaria, M. P., Akkhavong, K., … Takada, H. (2009). Transport and release of chemicals from plastics to the environment and to wildlife. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1526), 2027–2045. https://doi.org/10.1098/rstb.2008.0284

The final design of the world’s first cleanup system | Updates. (2018, July 21). The Ocean Cleanup. https://theoceancleanup.com/updates/the-final-design-of-the-worlds-first-cleanup-system/

US EPA. (2017, October 2). National overview: Facts and figures on materials, wastes and recycling [Overviews and factsheets]. US EPA. https://www.epa.gov/facts-and-figures-about-materials-waste-and-recycling/national-overview-facts-and-figures-materials

USPTO issues trademark: Seabin Project for cleaner oceans. (2018). Retrieved May 15, 2020, from ProQuest. https://search-proquest-com.dartmouth.idm.oclc.org/docview/2132155033?accountid=10422&rfr_id=info%3Axri%2Fsid%3Aprimo

Wang, C., Zhao, L., Lim, M. K., Chen, W.-Q., & Sutherland, J. W. (2020). Structure of the global plastic waste trade network and the impact of China’s import ban. Resources, Conservation and Recycling, 153, 104591. https://doi.org/10.1016/j.resconrec.2019.104591

Wu, Y. C., Huang, C.-M., Li, Y., Zhang, R., Chen, H., Mallon, P. E., Zhang, J., Sandreczki, T. C., Zhu, D.-M., Jean, Y. C., Suzuki, R., & Ohdaira, T. (2001). Deterioration of a polyurethane coating studied by positron annihilation spectroscopy: Correlation with surface properties. Journal of Polymer Science Part B: Polymer Physics, 39(19), 2290–2301. https://doi.org/10.1002/polb.1202

Yeo, J. C. C., Muiruri, J. K., Thitsartarn, W., Li, Z., & He, C. (2018). Recent advances in the development of biodegradable PHB-based toughening materials: Approaches, advantages and applications. Materials Science and Engineering: C, 92, 1092–1116. https://doi.org/10.1016/j.msec.2017.11.006

Zhang, X., Fevre, M., Jones, G. O., & Waymouth, R. M. (2018). Catalysis as an enabling science for sustainable polymers. Chemical Reviews, 118(2), 839–885. https://doi.org/10.1021/acs.chemrev.7b00329

Zia, K. M., Bhatti, H. N., & Ahmad Bhatti, I. (2007). Methods for polyurethane and polyurethane composites, recycling and recovery: A review. Reactive and Functional Polymers, 67(8), 675–692. https://doi.org/10.1016/j.reactfunctpolym.2007.05.004
DARTMOUTH UNDERGRADUATE JOURNAL OF SCIENCE
Hinman Box 6225, Dartmouth College, Hanover, NH 03755 USA
http://dujs.dartmouth.edu
dujs@dartmouth.edu