DUJS
Dartmouth Undergraduate Journal of Science
FALL 2019 | VOL. XXI | NO. 2
INTERSECTION
THE CROSSROADS OF SCIENCE
The Impacts of Permafrost on Terrestrial Chemistry (p. 9)
Tobacco Use in Schizophrenia: Self-Medication or Greater Addiction Vulnerability? (p. 53)
The Evolution of Cystic Fibrosis Therapy - A Triumph of Modern Medicine (p. 82)
A Letter from the Editors

Dear Readers,

Writing about science is a valiant undertaking, one that is both difficult and rewarding. It requires the communication of complex scientific concepts to an audience that is largely uninformed. This seems rather a thankless endeavor. Why should a scientist stop to explain their work to those who are not familiar with (and often, not appreciative of) their daily research efforts?

There are many answers to this question. First, the practical one: if scientific results are not communicated to the broader public – more specifically to policymakers, business executives, lab directors, and other researchers equipped to act on them – then they are not relevant. Science may inform our current lifestyle and enhance our standard of living, but only if it is shared.

Beyond its practical importance, another objective of scientific writing is to offer inspiration. The past development of powerful technologies and life-saving medications has driven generations of scientists to don their lab coats and tackle new scientific problems. Complex scientific problems such as these are insurmountable without adequate communication that stimulates intellectual conversation on an international scale. And while this is a lofty task, those who write for DUJS do their part and contribute to the effort. We assume the task of bringing science to the broader community – sharing the findings of researchers from numerous academic institutions (Dartmouth chiefly among them) with the rest of the world.

There were 33 writers who completed print articles for this volume of the Dartmouth Undergraduate Journal of Science. This is, by far, a record for the Journal – a body of writers so large that two issues were required to showcase their work. The theme of this term's journal was "Intersection," and it is evident in its articles. Our writers chose to explore the connections between biology and government, chemistry and the environment, physics and astronomy.

The topics these writers addressed are as diverse as they are important. Several writers explored the allergic response, shedding light on the immense complexity of the human immune system. Another writer examined desalination – the process of removing the salt from saltwater and producing potable water for the developing world. Others looked at dieting trends (specifically intermittent fasting) – a subject of great interest to personal well-being and to public health. One writer even published her own original research, hoping that the work she has done will inspire other scientists to investigate the topics they find compelling.

We encourage you to read this journal and hope that its articles inspire you to read and write about science, or even to conduct research of your own. We thank you for supporting our organization and for honoring our commitment to science. And we hope that you enjoy reading these articles as much as we have enjoyed editing them.

Warmest Regards,
Sam Neff '21
Nishi Jain '21

DUJS
Hinman Box 6225
Dartmouth College
Hanover, NH 03755
(603) 646-8714
http://dujs.dartmouth.edu
dujs@dartmouth.edu

Copyright © 2017 The Trustees of Dartmouth College
The Dartmouth Undergraduate Journal of Science aims to increase scientific awareness within the Dartmouth community by providing an interdisciplinary forum for sharing undergraduate research and enriching scientific knowledge.

EDITORIAL BOARD
President: Sam Neff
Editors-in-Chief: Anders Limstrom, Nishi Jain
Chief Copy Editors: Anna Brinks, Liam Locke

STAFF WRITERS
Aditi Gupta, Alex Gavitt, Alexandra Limb, Allan Rubio, Amber Bhutta, Anahita Kodali, Aniketh Yalamanchili, Anna Brinks, Annika Morgan, Arjun Miklos, Brandon Feng, Catherine Zhao, Chengzi Guo, Daniel Cho, Dev Kapadia, Dina Rabadi, Eric Youth, Georgia Dawahare, Hubert Galan, Jennifer Chen, Jessica Campanile, John Ejiogu, Julia Robitaille, Klara Barbarossa, Kristal Wong, Liam Locke, Love Tsai, Maanasi Shyno, Madeleine Brown, Nishi Jain, Sam Neff, Zhanel Nugumanova
ADVISORY BOARD
Alex Barnett – Mathematics; Marcelo Gleiser – Physics/Astronomy; David Glueck – Chemistry; Carey Heckman – Philosophy; David Kotz – Computer Science; Richard Kremer – History; William Lotko – Engineering; Jane Quigley – Kresge Physical Sciences Library; Roger Sloboda – Biological Sciences; Leslie Sonder – Earth Sciences
SPECIAL THANKS
Dean of Faculty; Associate Dean of Sciences; Thayer School of Engineering; Office of the Provost; Office of the President; Undergraduate Admissions; R.C. Brayshaw & Company
A Conversation with Professor Kevin Peterson The Dartmouth Undergraduate Journal of Science recently sat down with Dr. Kevin Peterson, a professor of biological sciences at Dartmouth College. He opened up about his decision to pursue his particular career path, his draw toward science, and advice for future students in the field of STEM at large.
What was your path to your current position? I was born in Butte, Montana in 1966 and moved to Helena, MT, where I attended the public schools. I then attended Carroll College (in Helena, MT), earning a B.A. in Biology in 1989 and graduating maxima cum laude. After taking two years off and working for the Montana Department of Highways, I attended UCLA, earning a Ph.D. in Geology in 1996, and then did my postdoctoral studies at the California Institute of Technology (Pasadena, CA). I came to Dartmouth as an Assistant Professor in 2000 and have been here ever since, becoming an Associate Professor in 2006 and a Full Professor in 2012.
Why did you choose to go into biology? I'm not sure I really even had a choice in this matter. Ever since I was four years old – probably after finding my very first fossil (which I still have! See the photo below) – I wanted to be a paleontologist. There was a significant detour to pre-medicine for a few years, but I eventually found my way back to my first love: paleontology.
How is biology changing, and how will it change in the future? That's the beautiful and inspiring thing about biology – it's always changing! Everything that I work on today didn't exist in 2000, and there was no way I could have ever imagined working on the problems that engage me today. Indeed, who would have ever thought that we could actually sequence a Neandertal genome and then discover that all Eurasian peoples carry a bit of Neandertal DNA in their genomes? Or CRISPR-Cas technology, which is revolutionizing the way we approach the possibility of DNA editing, especially to potentially treat disease. So, the short answer to a deep question is that there is literally no way to predict where the field might be in 10 or 20 years. But it will be something wonderfully engaging!
What do you see as the most significant areas of research in modern biology? Whatever fascinates the individual scientist is significant, as there is no way to predict where a simple discovery made by an individual doing what they love might lead. Just let us follow our noses – and our hearts – and humanity will reap the rewards.
What is important about the intersection between biology and the humanities? Science is an art form predicated on simple rules of honesty, repeatability, and testability, but is ultimately done by humans with all their strengths and foibles, their passions and prejudices, their failures as well as their joys of discovery. There is no science without humans, and hence there can be no science without the humanities.
What do you like to do when you're not doing biology? I love to garden and hike in the summer, spend time with my family and our dogs, and listen to and play music (I'm a fairly untalented bass player, but I do love it!).
What advice do you have for budding scientists?
(And when I found this as a four-year-old, the fossil filled my entire hand!!!)
Science is a wonderful and rewarding way to spend a life. Enjoy! But remember, many a time you will be wrong-headed, misguided, misdirected, and sometimes just plain deceived. But, as Fox Mulder always said, “The truth is out there.” And I like to think that through dogged perseverance coupled with copious amounts of curiosity and a passion for a problem that just won't sate, that journey to truth – whatever or wherever it might be – is well worth it. Don't do it for the prizes, the awards, the accolades. Do it because there is nothing else you'd rather do, or in my case, can do.
Table of Contents

Targeting MDSCs: The Magic Bullet for Tumor Immunotherapy, Dina Rabadi, p. 4
The Impacts of Permafrost on Terrestrial Chemistry, Eric Youth, p. 9
The Intersection of Eating and Memory Consolidation, Georgia Dawahare, p. 14
The Plastisphere: An Emerging Human Derived Ecosystem, Hubert Galan, p. 22
The Evolution of Desalination Use in Face of Water Scarcity, Jennifer Chen, p. 27
Healthcare and Disability, Jessica Campanile, p. 32
Dealing with Ischemic Heart Disease and Stroke, John Ejiogu, p. 37
The Intersection Between Brain, Pain, & Body, Julia Robitaille, p. 40
Understanding What Others Think of Us, Klara Barbarossa, p. 43
Lyme Disease Through a Geographical Lens, Kristal Wong, p. 48
Tobacco Use in Schizophrenia: Self-Medication or Greater Addiction Vulnerability?, Liam Locke, p. 53
Intermittent Fasting: Understanding the Mechanics Underlying Human Nutrition and Dieting, Love Tsai, p. 64
Reevaluating the Sex Binary's Role in Medicine, Maanasi Shyno, p. 71
Hesitant Hope for Food Allergies, Maddie Brown, p. 77
Breast Cancer and Its Developing Therapies, Nishi Jain, p. 82
The Evolution of Cystic Fibrosis Therapy - A Triumph of Modern Medicine, Sam Neff, p. 90
Human Immortality: When, How, Why or Why Not, Zhanel Nugumanova, p. 100
Targeting MDSCs: The Magic Bullet for Tumor Immunotherapy
BY DINA RABADI
INTRODUCTION
There are two main classes of immunity. Innate immunity is the frontline of the immune system and involves non-specific clearance of most foreign bodies, such as microbes or pollen. A key part of the innate immune system is myeloid cells, which include both granulocytes and monocytes.1,2 Adaptive immunity is the slower yet highly specific immune response, and unlike innate immunity, leads to the generation of ‘memory.’ This allows the body to recognize previous pathogens through the production of memory B and T-cells, which will act more rapidly to ward off infection. Adaptive immunity can prevent re-infection in humans. Examples of adaptive immunity include T-cells that kill tumor cells and B-cells that make antibodies. Before attacking infected or mutated cells, or reacting to pathogens, T-cells need to be ‘educated’ about the presence of foreign invaders or tumor cells by the innate immune system. This is accomplished via ‘antigen presentation’ where peptides from foreign or mutated proteins are presented on major histocompatibility complex (MHC) molecules at the surface of the myeloid cell. Furthermore, myeloid cells can influence how other immune cells behave by producing cytokines. These are peptide signaling molecules that bind to surface receptors on
immune cells and modulate immune functions. Myeloid-derived suppressor cells (MDSCs) are produced in response to inflammation and are often found in the tumor microenvironment (TME). MDSCs are only generated under special conditions such as cancer and chronic infection.2,3 MDSCs act as a defense mechanism for a tumor in numerous ways, but one of their most drastic effects is the suppression of T cells. This occurs in three ways: depleting amino acids that T cells need in order to proliferate, releasing oxidizing molecules that destroy T cell receptors, and inducing the development of Treg. In turn, Treg act as tumor ‘protector’ cells, impairing cytotoxic T cell migration and function.5
COVER: This is an image of a drug resistant mesothelioma sphere. Understanding MDSCs and how they work may help overcome drug resistance in tumors such as this one. Source: NCI Center for Cancer Research
“MDSCs create barriers that prevent effector cells, such as natural-killer (NK) cells and cytotoxic T cells, from infiltrating the tumor to kill it."
MDSCs create this microenvironment by several strategies. As the body attempts to fight off cancer, MDSCs produce chemokines to recruit other suppressive cells to the tumor.4 Consequently, MDSCs create barriers that prevent effector cells, such as natural-killer (NK) cells and cytotoxic T cells, from infiltrating the tumor to kill it. Further, the MDSCs in the tumor convert effector cells into ‘protector’ cells, essentially causing effector cells to protect the tumor as if it were normal tissue rather than the body [Figure 1].2
“First, [MDSCs] were initially thought to simply be myeloid cells with immunoregulatory activity, but with more research, it became clear that MDSCs are more important players in immune function than previously assumed.”
Figure 1: This displays the orientation and structure of the TME, including both protector and effector cell groups.
Examples of protector cells include Treg, tumor-associated macrophages (TAM), cancer-associated fibroblasts (CAF), and MDSCs. Figure 2 shows how cancer cells are oriented in relation to macrophages and is useful for visualizing the tumor in a live model. However, among all these different cell types in the TME, MDSCs are considered the “queen bee” of the TME because they promote proliferation and differentiation of many of the other protector cells.2 MDSCs accumulate in response to two types of signals: the first involves tumor-derived signals released during chronic infection and inflammation, and the second is controlled by inflammatory cytokines and other damage-associated molecular patterns.6 There are many controversies and questions that have arisen with the identification of MDSCs in recent years. First, they were initially thought to simply be myeloid cells with immunoregulatory activity, but with more research, it became clear that MDSCs are more important players in immune function than previously assumed.4 Second, although these cells were first identified in the context of cancer, they are present in numerous conditions, including infectious diseases, autoimmune disorders, obesity, and pregnancy.6 For example, it has been found that MDSC populations expand when the body is exposed to certain infections like tuberculosis or conditions such as sepsis.6 Many of the experiments done on conditions other than cancer, such as pregnancy or autoimmune disorders, have resulted in contradictory results due to the complex nature of MDSCs, including the origin of these cells and the features that distinguish them from other types of immune cells.
MDSC TYPES
Currently, there are two well-studied types of MDSCs, the first of which is monocytic MDSCs (M-MDSCs). These cells are phenotypically and morphologically similar to monocytes, which are phagocytic white blood cells that function as a part of the innate immune system. M-MDSCs are the suppressive counterparts to monocytes. One method of distinguishing M-MDSCs from monocytes is by the expression of MHC II. Since MHC II is either not expressed on M-MDSCs or expressed very little, they are not antigen-presenting and consequently do not elicit T-cell activity. This difference in MHC II protein expression allows M-MDSCs and monocytes to be distinguished from one another.4 The second group is granulocytic or polymorphonuclear MDSCs (PMN-MDSCs), which are similar phenotypically and morphologically to neutrophils, a type of white blood cell and phagocyte that falls under the classification of myeloid cells. Due to the similarities between PMN-MDSCs and neutrophils, it is challenging to distinguish them phenotypically; however, Lectin-type oxidized LDL receptor 1 (LOX-1) has been shown to be an effective distinguishing marker between these two cell groups.7 This receptor’s expression correlates with carcinogenesis, the formation of cancer. LOX-1 is strongly associated with PMN-MDSC
Source: See R.J. Tesi (2019) for original figure.
populations in cancer patients, as LOX-1 is not well-expressed in healthy individuals.7,8 This receptor can serve as a prognostic marker, as well as a potential therapeutic target, because its silencing results in inhibition of both tumor progression and metastasis.9 There are numerous differences between PMN-MDSCs and M-MDSCs. One difference is whether cell-to-cell contact is required: while PMN-MDSCs must be in close enough proximity to T-cells to establish cell-to-cell contact, M-MDSCs do not require cell-to-cell contact.10 Another distinction between the two groups of MDSCs is their potency, as M-MDSCs appear to be more potent than PMN-MDSCs, meaning that M-MDSCs have greater immunosuppressive effects than PMN-MDSCs and can result in more severe disease prognosis in cases studied thus far.3,11 This indicates that M-MDSCs appear to be a convenient biomarker to assess disease severity and progression. In fact, the presence of larger populations of both M-MDSCs and PMN-MDSCs results in poorer outcomes among patients with breast cancer.11 This shows that though potency is a factor in determining prognosis, increased numbers of either type of MDSC are associated with more negative patient outcomes. Though it appears that there are two distinct types of MDSCs, as shown in Figure 3, the process of cell differentiation is a spectrum, resulting in more diversity and heterogeneity than a simple bifurcation into two groups.12 An additional distinction among different types of MDSCs is determined by the location of the MDSC in the body. MDSCs are often isolated from blood, since isolating MDSCs from tumors presents challenges such as low cell yields and impaired function.3 One example of a location-based difference among MDSCs is the more potent immunosuppressive activity found in MDSCs gathered from the tumor rather than the spleen.3 Additionally, splenic and tumor-infiltrating MDSCs have different mechanisms of immunosuppression as well as rates of differentiation into other tumor-associated immune cells, such as TAM.3 Though there are numerous ways by which MDSCs can be distinguished, it is unknown whether a systematic decrease in MDSCs is effective in eliminating tumors, or whether eliminating tumor MDSCs is a more effective therapy.
IMMUNE CHECKPOINTS AND IMMUNOTHERAPY
There are numerous types of MDSC treatments, including corticosteroids, surface markers, enzymes, and non-steroidal anti-inflammatory drugs (NSAIDs), that work in
Figure 2: Breast cancer TME from a live mouse model: Cyan: tumor cells. Red: macrophages. Green: collagen fibers. Source: Flickr
different ways.2 Targeting surface markers is one of the most promising methods of reducing MDSCs, though it is challenging to identify surface receptors of MDSCs. One class of surface receptors is immune checkpoints, which provide ‘brakes’ to T-cell activation. Examples of immune checkpoints expressed on MDSCs include programmed death ligand 1 (PD-L1) and V-domain Ig suppressor of T-cell activation (VISTA), both of which can be targeted therapeutically. PD-L1 is a ligand of programmed cell death protein 1 (PD-1), a receptor on T-cells that, when activated, initiates programmed cell death in antigen-specific T-cells of lymph nodes.13 Increased PD-L1 expression is associated with poor patient outcomes, but blocking PD-L1 allows T-cells to infiltrate and kill the tumor.13 T-cells in the tumor are often in a dysfunctional state called exhaustion because they experience chronic antigen stimulation in the absence of activating signals. Blocking PD-1 and PD-L1 can relieve cells from exhaustion and restore anti-tumor activity.
“Though there are numerous ways by which MDSCs can be distinguished, it is unknown whether a systematic decrease in MDSCs is effective in eliminating tumors, or whether eliminating tumor MDSCs is a more effective therapy.”
Another example of an immune checkpoint is VISTA, which is a negative regulator of immunity.14 The function of VISTA in tumor immunity has been shown in acute myeloid leukemia (AML) experiments in which VISTA is deleted by knocking out the Vsir gene.5 Knocking out VISTA reduces MDSC-mediated inhibition of T-cells, indicating VISTA’s immunosuppressive effects.14 Upregulation of VISTA in MDSCs may be another mechanism of MDSC immunosuppressive activity, so blocking and downregulating VISTA could result in decreased MDSC levels and reduced immunosuppressive activity of the tumor.5 Blockade of VISTA was shown to improve both T-cell and myeloid cell function, due to its expression on both cell types.15 VISTA is also expressed on both PMN-MDSCs and TAMs, as shown in Figure 4. Anti-VISTA’s effects on multiple types of cancer and autoimmune diseases show the potentially wide applications
Figure 3: Like all immune cells, MDSCs arise from hematopoietic stem cells (HSC) in the bone marrow. HSC differentiate into immature myeloid cells in the presence of inflammation.
CTLA-4 is another immune checkpoint molecule that acts as a ‘brake’ on T-cell activation. Blocking this ‘brake’ with anti-CTLA-4 may be a promising way to increase T-cell activation, since long-term blockade therapy of CTLA-4 in melanoma has been shown to improve patient outcome. Anti-CTLA-4 is often used with anti-PD-1 because both therapies target T-cell checkpoints; however, these blockade therapies can fail as a result of MDSC or TAM activity, due to these cells’ abilities to defend the tumor against the body’s immune system.
Source: See R.J. Tesi (2019) for original figure.
“Targeting MDSCs using blockade therapy and combination therapy provides a promising future in the world of cancer treatment and immunotherapy.”
Figure 4: VISTA Expression in the CT26 Tumor Microenvironment. DAPI is a fluorescent blue DNA stain, and VISTA expression is shown in red. Ly6G in A is a marker for PMN-MDSCs, and CD11b in B is a marker for TAMs. The orange that appears in both images results from overlap of the red and green, indicating an overlap in expression. A shows the overlap in expression between VISTA and MDSCs, whereas the orange in B displays the overlap between VISTA and TAMs.
of the treatment.5,14 VISTA is an exciting potential target for therapeutics due to its ability to target both T-cells and myeloid cells, whereas most current immunotherapies target one or the other.
IMMUNOTHERAPY
Currently, immunotherapy often uses effector cell treatment rather than ‘protector’ cell treatment. Though effector cell treatments are crucial for immunotherapy, they may not be enough as standalone treatments because of the effectiveness of ‘protector’ cells.2 Effector cell treatments do not target the TME, or the ‘protector’ cells.2 Future treatments will rely on a “one-two immunotherapeutic punch”: first, the patient’s immune system gains the ability to fully attack the tumor through improved effector cells; second, protector cell function is targeted in order to destroy the tumor’s ability to protect itself from the immune system.2 A way in which the “one-two immunotherapeutic punch” may be utilized is the pairing of anti-VISTA therapeutics with other immunotherapy treatments, such as anti-PD-1 and anti-CTLA-4. Targeting the PD-1/PD-L1 pathway, along with blocking VISTA and cytotoxic T-lymphocyte-associated protein 4 (CTLA-4), is an extremely promising method of combining effector and protector cell treatments.
It is also important to acknowledge that there is a strong positive association between MDSC expression of VISTA and T-cell expression of PD-1, displaying the potential for combination treatments targeting the VISTA and PD-1 pathways in order to control cancer.5 Therefore, the addition of anti-VISTA will minimize the likelihood of failure as a result of MDSC or TAM activity because VISTA blockade addresses myeloid cell responses as well as T-cell responses. This shows how effector cell treatments, such as anti-PD-1 and anti-CTLA-4, and protector cell treatments, such as anti-PD-L1 and anti-VISTA, complement each other.
CONCLUSION
Targeting MDSCs using blockade therapy and combination therapy provides a promising future in the world of cancer treatment and immunotherapy. Therapies that diminish the presence of MDSCs are critical because of the central role these cells play in the immunosuppressive nature of the TME. Significant future research is required in order to more effectively identify human, rather than mouse, targets, as well as to understand the mechanisms by which MDSCs form and affect the TME. Despite the many questions regarding MDSCs that remain, treatments against these cells have the potential to be revolutionary.
Source: Noelle lab at Dartmouth-Hitchcock Medical Center.
References

(1) Kawamoto, H.; Minato, N. Myeloid Cells. Int. J. Biochem. Cell Biol. 2004, 36 (8), 1374–1379. https://doi.org/10.1016/j.biocel.2004.01.020.
(2) Tesi, R. J. MDSC; the Most Important Cell You Have Never Heard Of. Trends Pharmacol. Sci. 2019, 40 (1), 4–7. https://doi.org/10.1016/j.tips.2018.10.008.
(3) Kumar, V.; Patel, S.; Tcyganov, E.; Gabrilovich, D. I. The Nature of Myeloid-Derived Suppressor Cells in the Tumor Microenvironment. Trends Immunol. 2016, 37 (3), 208–220. https://doi.org/10.1016/j.it.2016.01.004.
(4) Talmadge, J. E.; Gabrilovich, D. I. History of Myeloid-Derived Suppressor Cells (MDSCs) in the Macro- and Micro-Environment of Tumour-Bearing Hosts. Nat. Rev. Cancer 2013, 13 (10), 739–752. https://doi.org/10.1038/nrc3581.
(5) Wang, L.; Jia, B.; Claxton, D. F.; Ehmann, W. C.; Rybka, W. B.; Mineishi, S.; Naik, S.; Khawaja, M. R.; Sivik, J.; Han, J.; et al. VISTA Is Highly Expressed on MDSCs and Mediates an Inhibition of T Cell Response in Patients with AML. Oncoimmunology 2018, 7 (9). https://doi.org/10.1080/2162402X.2018.1469594.
(6) Veglia, F.; Perego, M.; Gabrilovich, D. Myeloid-Derived Suppressor Cells Coming of Age. Nat. Immunol. 2018, 19 (2), 108–119. https://doi.org/10.1038/s41590-017-0022-x.
(7) Condamine, T.; Dominguez, G. A.; Youn, J.-I.; Kossenkov, A. V.; Mony, S.; Alicea-Torres, K.; Tcyganov, E.; Hashimoto, A.; Nefedova, Y.; Lin, C.; et al. Lectin-Type Oxidized LDL Receptor-1 Distinguishes Population of Human Polymorphonuclear Myeloid-Derived Suppressor Cells in Cancer Patients. Sci. Immunol. 2016, 1 (2). https://doi.org/10.1126/sciimmunol.aaf8943.
(8) Aarts, C. E. M.; Kuijpers, T. W. Neutrophils as Myeloid-Derived Suppressor Cells. Eur. J. Clin. Invest. 2018, 48 Suppl 2, e12989. https://doi.org/10.1111/eci.12989.
(9) Zhou, J.; Nefedova, Y.; Lei, A.; Gabrilovich, D. Neutrophils and PMN-MDSC: Their Biological Role and Interaction with Stromal Cells. Semin. Immunol. 2018, 35, 19–28. https://doi.org/10.1016/j.smim.2017.12.004.
(10) Gabrilovich, D. I. Myeloid-Derived Suppressor Cells. Cancer Immunol. Res. 2017, 5 (1), 3–8. https://doi.org/10.1158/2326-6066.CIR-16-0297.
(11) Bergenfelz, C.; Larsson, A.-M.; von Stedingk, K.; Gruvberger-Saal, S.; Aaltonen, K.; Jansson, S.; Jernström, H.; Janols, H.; Wullt, M.; Bredberg, A.; et al. Systemic Monocytic-MDSCs Are Generated from Monocytes and Correlate with Disease Progression in Breast Cancer Patients. PLoS One 2015, 10 (5), e0127028. https://doi.org/10.1371/journal.pone.0127028.
(12) Canè, S.; Ugel, S.; Trovato, R.; Marigo, I.; De Sanctis, F.; Sartoris, S.; Bronte, V. The Endless Saga of Monocyte Diversity. Front. Immunol. 2019, 10. https://doi.org/10.3389/fimmu.2019.01786.
(13) Salmaninejad, A.; Valilou, S. F.; Shabgah, A. G.; Aslani, S.; Alimardani, M.; Pasdar, A.; Sahebkar, A. PD-1/PD-L1 Pathway: Basic Biology and Role in Cancer Immunotherapy. J. Cell. Physiol. 2019, 234 (10), 16824–16837. https://doi.org/10.1002/jcp.28358.
(14) ElTanbouly, M. A.; Croteau, W.; Noelle, R. J.; Lines, J. L. VISTA: A Novel Immunotherapy Target for Normalizing Innate and Adaptive Immunity. Semin. Immunol. 2019, 42, 101308. https://doi.org/10.1016/j.smim.2019.101308.
(15) LeMercier, I.; Chen, W.; Lines, J. L.; Day, M.; Li, J.; Sergent, P.; Noelle, R. J.; Wang, L. VISTA Regulates the Development of Protective Anti-Tumor Immunity. Cancer Res. 2014, 74 (7), 1933–1944. https://doi.org/10.1158/0008-5472.CAN-13-1506.
The Impacts of Permafrost on Terrestrial Chemistry

BY ERIC YOUTH

COVER: Permafrost on Herschel Island in Canada. Source: Wikimedia Commons
“In particular, changes in the rates of organic matter accumulation can impact the rate at which greenhouse gases enter the atmosphere, and increases in ionic compounds can also influence the chemical composition of the environment”
OVERVIEW
With climate change becoming an increasingly critical issue around the world, it is important to consider its effects on permafrost in places such as Northern Canada, Alaska, and Russia. Permafrost locks up nutrients and chemicals for long periods of time, so thawing permafrost impacts local chemistry. In particular, changes in the rates of organic matter accumulation can impact the rate at which greenhouse gases enter the atmosphere, and increases in ionic compounds can also influence the chemical composition of the environment. Peatlands are bryophyte-dominated wetlands where organic matter production outpaces decomposition, leading to an accumulation of organic matter over many years2, 10. They contain almost 30% of the organic carbon in the Earth’s northern permafrost region, which corresponds to 277–302 Pg of organic carbon, a quantity that is equivalent to over one-third of the carbon currently in the atmosphere5. Historically, the existence of permafrost and waterlogged soils in northern peatlands has facilitated the accumulation of carbon by reducing rates of microbial decomposition, but recent warming has accelerated permafrost thaw in these regions5. Such warming can
promote decomposition and release carbon from formerly frozen peat deposits into the atmosphere, as well as increase CH4 released as the peat becomes inundated5. Unfortunately, it is estimated that 40% to 90% of all of Earth’s permafrost areas may thaw by the year 21005. Clearly, action must be taken to combat climate change before potentially irreversible changes to currently frozen land take place.
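To make the scale of these figures concrete, the short Python sketch below compares the reported 277–302 Pg peatland carbon stock to the atmospheric carbon pool. The ~850 Pg C atmospheric figure is an assumption of this sketch (a commonly cited order of magnitude), not a number taken from the article.

```python
# Back-of-the-envelope check of the peatland carbon comparison quoted above.
# ASSUMPTION: an atmospheric carbon pool of ~850 Pg C; the article itself
# does not state this value.
peat_carbon_pg = (277, 302)  # organic carbon in northern peatlands, Pg C
atmosphere_pg = 850.0        # assumed atmospheric carbon pool, Pg C

for stock in peat_carbon_pg:
    print(f"{stock} Pg C is {stock / atmosphere_pg:.0%} of the assumed atmospheric pool")
# Both endpoints land near one-third, consistent with the comparison in the text.
```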
AN INTRODUCTION TO PERMAFROST
While the first references to permanently frozen ground date back to Russia’s Medieval period, the earliest written occurrence of “permafrost” can be traced to a 1598 account of frozen ground in Novaya Zemlya, a Russian archipelago in the Arctic Ocean11. There were further references to permafrost in the early 1700s due to increased trade and settlement in the Siberian region of Yakutia11. The International Permafrost Association (IPA) defines permafrost as ground (soil or rock and included ice and organic material) that remains at or below 0°C for at least two consecutive years11. Because permafrost is defined on the basis of temperature, land does not need to be frozen to be considered permafrost, and moisture in the solid form may or may not be present3. Based on the IPA’s definition,
permafrost is not a material phenomenon but rather a physical state of the lithosphere11. An “active layer” consists of ground or rock that lies physically above the permafrost table and experiences seasonal freezing in winter (and thus is not considered permafrost)11. Because permafrost’s existence is dependent on subzero temperatures, it is most commonly found at extreme latitudes, particularly in the northern hemisphere, which has more land than the southern hemisphere. Although the impacts of permafrost thaw on terrestrial chemistry are felt more acutely in such areas, the shallowest active layers above permafrost are actually found in equatorial regions at the summits of high mountains in South America (Andes Mountains), Africa (Eastern Rift Mountains), and New Guinea (Bismarck Range)11. Thus, permafrost thaw is not limited to polar regions, as seen in Figure 1. Permafrost forms whenever the extent of winter freezing of ground exceeds the extent of summer thawing. If cooling in a particular region regularly surpasses heating, the thickness of the permafrost increases from year to year11. Sub-zero ground and rock in these areas is known as permafrost of a periglacial environment, which can undergo either aggradation (growth) or degradation (decay)11. These processes lead to major changes in the region’s hydrology, insolation, and topography, which in turn affect biological and chemical processes in the area’s peatlands. A peatland’s hydrology determines its classification. For example, a peatland whose water has come at least partially into contact with mineral soil (via ground or surface flow water) is a fen. Alternatively, if its waters are derived solely from precipitation, the peatland is known as a bog2. Canada’s “Discontinuous Permafrost Zone” is a region of northern Canada where local factors such as shade (determined
by forest cover) and ground surface albedo (determined by lichen cover) influence the existence of perennially frozen ground that wanes as one travels south. Within this region, permafrost is most common in peatlands2. While permafrost is dependent on subzero temperatures, microorganisms can still survive if the surrounding environment is not frozen. Microorganisms have been found in permafrost at temperatures as low as -10°C given the presence of glycerol, a viscous liquid that increases cellular solute concentrations and serves as a cryoprotectant3. Experiments have also shown that halophilic (salt-loving) bacteria remain viable at -80°C in the presence of 25% NaCl3. Thus, although permafrost may be thought of as a lifeless phenomenon, there are exceptions to this assumption.
UNDERSTANDING THE CONTRIBUTIONS TO LOCAL ENVIRONMENTS: FLUCTUATING RATIOS OF ORGANIC COMPOUNDS IN DISTURBED AREAS
Given the current rate of climate change, thawing permafrost has a detrimental impact on local environments. Peatlands are a net sink for atmospheric CO2 and contain roughly one-third of the world’s soil carbon stores, sequestering approximately 76 Tg of carbon per year from the atmosphere10. Peatlands therefore play a vital role in reducing the amount of greenhouse gases, particularly CO2 and CH4, in the atmosphere. Permafrost thaw directly influences the chemical composition of affected areas, as greenhouse gases that are locked up in frozen ground may eventually enter the atmosphere after thawing; this exacerbates climate change. Suzanne B. Hodgkins et al. analyzed the impact of permafrost thaw on potential CO2 and CH4 production in a Florida State University 2014 study. The team studied nine peatland environments in northern Sweden, documenting a direct correlation between pH and thaw stage, and determining that the peatlands that have experienced more thawing have higher pH values4. The sites, classified as collapsed palsa, bog, or fen, were organized by increasing active layer depth, indicating a progression from the least thaw to the most thaw4. The pH for the two collapsed palsa sites was 4.1, while the ranges of values for the three bog sites and the four fen sites were 4.0–4.2 and 4.8–6.0, respectively. They also found that permafrost thaw increases potential
“Although the impacts of permafrost thaw on terrestrial chemistry are felt more acutely in such areas, the shallowest active layers above permafrost are actually found in equatorial regions at the summits of high mountains in South America (Andes Mountains), Africa (Eastern Rift Mountains), and New Guinea (Bismarck Range).”
Figure 1: The extent of permafrost in the Northern Hemisphere. Red and pink areas indicate regions where 90-100% of the land is considered permafrost. Source: Wikimedia Commons.
Figure 2: A carboxylate anion is the conjugate base of a carboxylic acid and has the formula RCO2-. As the image shows, its electron cloud is delocalized along the O-C-O structure. Source: Wikimedia Commons
CH4 production, potential CO2 production4, and the CH4:CO2 production ratio (p < 0.0001 for each trend) at each site4. It is thought that this increase in CH4 and CO2 production potentials, which are influenced by the local plant community, indicates a heightened organic matter lability caused by thawing permafrost, where lability refers to the ease with which the matter can be decomposed by soil organisms. Specifically, an increase in pH results in a loss of organic acids such as sphagnum acid and other Sphagnum-derived phenolics that normally mitigate microbial decomposition4.
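The pH trend the study reports can be pictured with a quick rank-correlation sketch. The per-site pH values below are illustrative points chosen inside the ranges quoted above (palsa 4.1; bogs 4.0–4.2; fens 4.8–6.0), not the study's actual measurements.

```python
# A minimal sketch of the thaw stage vs. pH relationship described above,
# using a Spearman rank correlation. ASSUMPTION: the individual site pH
# values are illustrative, drawn from the ranges reported in the text.
from scipy.stats import spearmanr

thaw_stage = [1, 1, 2, 2, 2, 3, 3, 3, 3]  # 1 = collapsed palsa, 2 = bog, 3 = fen
site_ph = [4.1, 4.1, 4.0, 4.1, 4.2, 4.8, 5.2, 5.6, 6.0]

rho, p = spearmanr(thaw_stage, site_ph)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # strongly positive: pH rises with thaw
```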
"As average global temperatures continue to rise, fens and bogs will tend to have lower C/N ratios, more polysaccharides, fewer lignins, aromatics, and aliphatics, and less hydrogenotropic methanogenesis and more acetoclastic methanogenesis."
Hodgkins et al. concluded that peat becomes more labile across the thaw progression because they found that carboxylic acid bands weaken along the thaw progression due to the fens’ higher pH, which maintains organic acids as nonvisible carboxylate anions4 [Figure 2]. Surface fen peat (more thawed) also had more ubiquitous polysaccharides and less abundant lignins, aromatics, and aliphatics than deep fen peat (less thawed)4, indicating more cellulose plant material in the surface fens and more decomposed, humified material in deep fens4. The team also found that thawing shifts the process of methane generation from hydrogenotrophic methanogenesis, which yields methane through the reduction of CO2 with H2, to more acetoclastic methanogenesis, which yields methane through the cleavage of acetate, CH3COO-, into CH4 and CO2. This shift increases both organic matter lability and pH4. Additionally, thaw-associated changes in the plant communities (e.g., from E. vaginatum and Sphagnum spp. to E. angustifolium and Carex rostrata) result in decreased carbon/nitrogen (C/N) ratios, which in turn increases organic matter lability by liberating nitrogen for decomposers4. A separate study conducted in 2000 by Turetsky et al. with the University of Alberta found that Bleak Lake Fen, a peatland complex in central Alberta lacking permafrost, had lower C/N ratios compared to a permafrost bog in northern Alberta, corroborating the idea that permafrost thaw is associated with greater populations of plants with greater nitrogen compositions10. Bleak Lake Fen’s C/N ratio was 52.2 ± 3.8, while the permafrost bog’s C/N ratio was 57.9 ± 2.4.10 Furthermore, of the five sites analyzed by Turetsky et al., the northern
permafrost bog exhibited the highest density, the highest concentrations of ash, sulfur, and lignin, and the lowest concentrations of hot-water-soluble carbohydrates, α-cellulose, and hemicellulose10. As average global temperatures continue to rise, fens and bogs will tend to have lower C/N ratios, more polysaccharides, fewer lignins, aromatics, and aliphatics, and less hydrogenotrophic methanogenesis and more acetoclastic methanogenesis1. Such conditions may be amplified and more widespread in the future as permafrost continues to thaw.
CHANGES IN RATES OF ORGANIC MATTER ACCUMULATION IN DISTURBED AREAS
Turetsky et al. also used lead-210-dated chronologies to estimate net vertical peat growth or net organic matter accumulation over the past 100–200 years. They found that three non-permafrost sites have accumulated approximately 26 cm of peat in the past century, while the permafrost bog has only accumulated 12–13 cm of peat in the same time period10. Additionally, net organic matter accumulation at the internal lawn studied was approximately 1.6 times the amount recorded at the permafrost bog.10 Permafrost bogs can be transformed into internal lawns via climate change or fire, suggesting that permafrost thaw is associated with higher rates of organic matter accumulation. Another study conducted in 2010 by Mesquita et al. with the University of Victoria focused on the lake-rich Mackenzie River Delta of Canada’s Northwest Territories, and analyzed other relevant impacts of permafrost thaw on sediment chemistry and submerged macrophytes in arctic lakes [Figure 3]. This study found that pH and specific conductivity were significantly higher in disturbed lakes (pH 7.6 and 8.2, and conductivity 128.6 and 516.7 μS/cm in undisturbed and disturbed lakes, respectively), and underwater light attenuation was higher in undisturbed lakes (1.40 in undisturbed and 1.02 in disturbed)7. In the sediment of the disturbed lakes, the mean concentrations of calcium and magnesium were also significantly higher. In the undisturbed lakes, the Ca and Mg concentrations were 4.85
and 5.74 g/kg, while in the disturbed lakes, the Ca and Mg concentrations were 9.44 and 7.35 g/kg. This demonstrates that lakes affected by permafrost thaw contain calcium and magnesium that had previously been locked up in frozen land7. Other notable differences between the undisturbed and the disturbed lakes studied involve organic carbon, organic nitrogen, arsenic, nickel, and zinc concentrations, all of which were significantly higher in undisturbed lakes than disturbed lakes. Undisturbed and disturbed lakes had mean values of 7.29% and 4.90% organic carbon, 0.61% and 0.34% organic nitrogen, 0.02 and 0.015 g/kg of As, 0.052 and 0.041 g/kg of Ni, and 0.137 and 0.106 g/kg of Zn, respectively7. Additionally, macrophytes were present more frequently in disturbed lakes (44%) compared to undisturbed lakes (11%), and there was a maximum biomass of 705.5 g/m2 in disturbed lakes, compared to a maximum biomass of 24.27 g/m2 in undisturbed lakes7. This higher biomass of macrophytes in disturbed lakes is related to higher water transparency (lower light attenuation) and higher nutrient concentrations associated with permafrost thaw. Enriched runoff from catchment areas has altered nutrient concentrations in the sediment, making it more conducive to the growth of macrophytes while producing a structurally more complex habitat at the bottom of the lakes7. However, this study did not find macrophytes dominant in lakes with higher nitrogen (undisturbed lakes). It is thought that the greater organic content of the sediment of undisturbed lakes could interfere with the uptake of nutrients or that organic nitrogen is not transformed to inorganic forms to a great enough degree to render it sufficiently available to macrophytes7.
VARIED CONCENTRATIONS OF IONIC COMPOUNDS IN DISTURBED AREAS
Thawing permafrost, often as a result of a warming climate or fire, leads to “retrogressive
thaw slumps” that are commonly seen in arctic lakes; the term refers to a destabilization of the surrounding soil structure that leads to subsidence6. In addition to changing the rates of organic matter accumulation of peatlands, permafrost thaw alters the concentrations of ionic compounds in disturbed areas. Typically, arctic lakes have relatively low ionic concentrations because runoff from their drainage basins generally travels through a nutrient-poor active layer and the contribution of deeper groundwater is negligible6. However, there is a significant geochemical contrast between the ion-rich permafrost and the nearby active layer, which suggests that thawing permafrost can alter the chemical composition of affected soils and surface waters. It is also likely that the magnitude of any impacts would depend on how intense the degradation is, i.e., how extensively thawing occurs6. A 2009 study conducted by Kokelj et al. looked at 73 lakes in the forest-tundra transition region east of Canada’s Mackenzie Delta; 34 were affected by thaw slumping6 and the remaining 39 lakes were unaffected. The study used several statistical analyses on water chemistry data to determine sources of variability between undisturbed lakes and all lakes studied6. From a data set that measured the lakes’ conductivity, alkalinity, pH, total organic carbon (TOC), and concentrations of calcium, magnesium, sulfate, and chloride, a principal component analysis showed that nearly 60% of the total variability was described by the first principal component, which differentiates the water chemistry of lakes affected by retrogressive thaw slumping from those which are undisturbed6. More specifically, because calcium, hardness, magnesium, alkalinity, and conductivity are highly correlated6, this first principal component score is essentially a composite measure of ionic strength6. This indicates that permafrost thaw leads to increased ionic concentrations and alkalinity in lakes that undergo retrogressive thaw slumping.
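The analysis described above can be reproduced in miniature as follows. The four "lakes" below are invented rows for illustration only; only the variable list follows the study (conductivity, alkalinity, pH, TOC, Ca, Mg, sulfate, chloride).

```python
# A minimal sketch of a principal component analysis on standardized lake
# water-chemistry data, in the spirit of Kokelj et al. (2009).
# ASSUMPTION: the data matrix is invented for illustration; it is not the
# study's data set of 73 lakes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# columns: conductivity, alkalinity, pH, TOC, Ca, Mg, SO4, Cl
X = np.array([
    [120.0, 30.0, 6.8, 9.0, 5.0, 3.0, 2.0, 1.0],     # undisturbed-like lake
    [140.0, 35.0, 7.0, 8.5, 6.0, 3.5, 2.5, 1.2],     # undisturbed-like lake
    [480.0, 110.0, 7.9, 5.0, 18.0, 9.0, 8.0, 2.5],   # slump-affected-like lake
    [520.0, 125.0, 8.1, 4.5, 20.0, 10.0, 9.0, 3.0],  # slump-affected-like lake
])

pca = PCA()
pca.fit(StandardScaler().fit_transform(X))
print("variance explained per component:", np.round(pca.explained_variance_ratio_, 2))
# When the ion-related variables are highly correlated, the first component
# behaves as a composite measure of ionic strength, as the study reports.
```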
Figure 3: Canada’s Mackenzie River Delta Region. Source: NASA.
"However, there is a significant geochemical contrast between the ion-rich permafrost and the nearby active layer, which suggests that thawing permafrost can alter the chemical composition of affected soils and surface waters."
A similar 2017 study conducted by Roberts et al., which focused on two Canadian High Arctic lakes (West Lake and East Lake), found that permafrost degradation has the capacity to deliver solutes through subsurface flow and release sulfur in amounts that are high enough to rapidly alter the composition of the lakes9. This thawing is hypothesized to have been the primary cause of a substantial increase in sulfate concentrations in these two lakes; SO42- concentrations increased by 1.4 and 2.1 mg/L each year from 2006 to 2016, which
"As the threat of severe, existential heatlh risks from climate change become increasingly dire, it is important to consider how the effects of permafrost thaw on peatlands' chemistry tie into the broader system of climate change."
corresponds to the addition of approximately 30 and 43 Mg of SO42- each year to West and East Lake, respectively9. Additionally, SO42- concentrations increased from approximately 3 to 15 mg/L (+500%) in West Lake from 2006 to 2016 and from 5 to 17 mg/L (+340%) in East Lake from 2008 to 2016.9 This increase in solutes, including those besides SO42-, has impacted the lakes’ wildlife. For example, there have been documented decreases in Ba2+ concentrations and increases in Mg2+ concentrations in the outer 100–200 µm of the ear bones of Arctic char that are associated with the greater solute concentrations9.
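A small arithmetic check clarifies the percentage convention in those numbers: the "+500%" and "+340%" figures correspond to the ratio of final to initial concentration (times 100), using the endpoint values quoted above.

```python
# Reconciling the sulfate figures quoted above (values from the text).
lakes = {
    "West Lake": (3.0, 15.0),  # SO4 in mg/L, 2006 and 2016
    "East Lake": (5.0, 17.0),  # SO4 in mg/L, 2008 and 2016
}
for name, (start, end) in lakes.items():
    print(f"{name}: final concentration is {end / start:.0%} of the initial value")
# West Lake: 500%, East Lake: 340% -- matching the "+500%" and "+340%" in the text.
```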
This observed thermally-driven thawing is expected to be widespread across the region and has the potential to deliver large amounts of solutes to not just the two lakes studied, but to the many thousands of lakes in the Mackenzie River Delta area and other arctic regions. For reference, the aforementioned 2009 study by Kokelj et al. focused on the 3740 km2 region of the Mackenzie River Delta, which has more than 4850 lakes and ponds identifiable on 1:50,000 scale digital maps6. Permafrost thaw has the potential to drastically alter solutes in thousands of arctic lakes.
FUTURE IMPLICATIONS
As the threat of severe, existential health risks from climate change becomes increasingly dire1, it is important to consider how the effects of permafrost thaw on peatlands’ chemistry tie into the broader system of climate change. Notably, carbon release as a result of permafrost thaw represents a major positive climate change feedback loop4. As the Arctic warms, the amounts of CH4 and CO2 released by peat are expected to increase as more peat becomes exposed and decomposition rates increase due to increasing temperatures4. The progressively methanogenic conditions brought about by heightened organic matter lability will further increase the proportion of carbon released as CH4, which has 21 times the atmospheric warming potential of CO28. A model developed in 2016 indicates that permafrost thaw causes peatlands to exude carbon for roughly a decade, after which post-thaw bog peat accumulation turns the sites back into net carbon sinks5. However, because it can take several centuries to millennia for a site to build up its pre-thaw carbon stocks,5 the fact that thawed peatlands eventually turn back into carbon sinks is not a solution to the problem of permafrost thaw. Rather, the alarming problem of permafrost thaw is that it is a rapidly accelerating process that will cause irreversible damage to Earth’s ecosystems if not prevented. Successful solutions can only be implemented once the gravity of climate change is completely acknowledged.
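The warming-potential factor cited above translates directly into a CO2-equivalent bookkeeping rule, sketched below. The factor of 21 comes from the article's reference 8; the example mass is hypothetical.

```python
# CO2-equivalent conversion using the article's warming potential for CH4.
GWP_CH4 = 21  # CH4 traps ~21 times more heat than the same mass of CO2 (ref. 8)

def co2_equivalent(ch4_mass: float) -> float:
    """Return the CO2 mass (same units) with warming impact equal to the given CH4 mass."""
    return ch4_mass * GWP_CH4

print(co2_equivalent(1.0), "Mg of CO2-equivalent per 1 Mg of CH4 released")  # hypothetical 1 Mg
```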
References

[1] Butler, C. D. (2018). Climate Change, Health and Existential Risks to Civilization: A Comprehensive Review (1989–2013). International Journal of Environmental Research and Public Health, 15(10), 2266.

[2] Beilman, D. W., Vitt, D. H., & Halsey, L. A. (2001). Localized Permafrost Peatlands in Western Canada: Definition, Distributions, and Degradation. Arctic, Antarctic, and Alpine Research, 33(1), 70–77.

[3] Gilichinsky, D. A., & Rivkina, E. M. (2011). Permafrost Microbiology. In: Reitner, J., & Thiel, V. (eds.), Encyclopedia of Geobiology. Encyclopedia of Earth Sciences Series. Springer, Dordrecht.
[4] Hodgkins, S. B., Tfaily, M. M., McCalley, C. K., Logan, T. A., Crill, P. M., Saleska, S. R., … Chanton, J. P. (2014). Changes in peat chemistry associated with permafrost thaw increase greenhouse gas production. Proceedings of the National Academy of Sciences of the United States of America, 111(16), 5819–5824. doi:10.1073/pnas.1314641111

[5] Jones, M. C., Harden, J., O'Donnell, J., Manies, K., Jorgenson, T., Treat, C., & Ewing, S. (2017). Rapid carbon loss and slow recovery following permafrost thaw in boreal peatlands. Global Change Biology, 23, 1109–1127.

[6] Kokelj, S. V., Zajdlik, B., & Thompson, M. S. (2009). The impacts of thawing permafrost on the chemistry of lakes across the subarctic boreal-tundra transition, Mackenzie Delta region, Canada. Permafrost and Periglacial Processes, 20, 185–199. doi:10.1002/ppp.641

[7] Mesquita, P. S., Wrona, F. J., & Prowse, T. D. (2010). Effects of retrogressive permafrost thaw slumping on sediment chemistry and submerged macrophytes in Arctic tundra lakes. Freshwater Biology, 55, 2347–2358.

[8] Mohajan, H. K. (2012). Dangerous Effects of Methane Gas in Atmosphere. International Journal of Economic and Political Integration, 2(1), 3–10.

[9] Roberts, K. E., Lamoureux, S. F., Kyser, T. K., et al. (2017). Climate and permafrost effects on the chemistry and ecosystems of High Arctic Lakes. Scientific Reports, 7, 13292.

[10] Turetsky, M., Wieder, R., Williams, C., & Vitt, D. (2000). Organic matter accumulation, peat chemistry, and permafrost melting in peatlands of boreal Alberta. Écoscience, 7(3), 379–392.

[11] Dobinski, W. (2011). Permafrost. Earth-Science Reviews, 108(3–4), 158–169. https://doi.org/10.1016/j.earscirev.2011.06.007
An Intersection of Eating and Memory Consolidation

BY GEORGIA DAWAHARE
INTRODUCTION
The eating culture of today’s world is vastly different than that of our ancestors. What once was necessary purely for survival purposes has now become a leisure activity. People participate in eating contests, watch cooking competitions, and stream popular mukbang videos in which a host eats food while interacting with their audience. In essence, eating has transitioned into a more social aspect of our culture rather than being a purely biological function. Nevertheless, the biological functions of eating remain key to survival: We eat in order to keep our bodies functioning. We need energy to enable growth and repair tissues, to maintain body temperature, and to fuel physical activity60. Additionally, gut hormones and neurotransmitters tell the brain whether the body is hungry or satiated and when to stop or start eating. Particularly interesting is the lesser-known intersection of eating and memory consolidation. In studying Aplysia, a marine snail best known for its contribution to our understanding of the cellular and molecular basis of memory1, Nikolay Kukushkin and Sidney Williams discovered a single insulin-like molecule which strengthens connections between neurons, a mechanism thought to underlie long-term memory2, 54. Thomas Carew, first author of the Aplysia paper, notes
that these results “will help us understand the mechanisms by which insulin and similar molecules elicit both their diet-related and memory-enhancing properties in humans and other animals.”
THE EVOLUTION OF EATING
Eating is a vital part of survival for all organisms, and the neurological systems involved in feeding behavior have evolved over time. It was advantageous for early humans and lower-order mammals to remember the location of a food source and to then efficiently navigate back to a shelter. Thus, it stands to reason that biological systems that regulate feeding behavior may share a common origin with those involved with learning about external environments and spatial navigation3. In addition to the external spatial and contextual cues, there are other aspects of feeding behavior that provide survival advantages. Social cues, for example, influence food choice and quantity consumed partially by indicating whether specific foods are safe and/or nutritive3-6; temporal factors such as seasonal changes and diurnal fluctuations (changes that occur during the same day) also modulate memory of the physical location of food. Remembering these and other
COVER: FMRI scan during working memory tasks. Working memory tasks typically show activation in the bilateral and superior frontal cortex as well as in parts of the superior bilateral parietal cortex. Source: Wikimedia Commons
“[It] was advantageous for early humans and lower-order mammals to remember the location of a food source and to then efficiently navigate back to a shelter.”
Figure A: This graph illustrates the changes in glucose and insulin blood levels throughout the preprandial, prandial, and postprandial stages of feeding behavior. Source: Wikimedia Commons.
“Consequently, biological systems that fluctuate and influence feeding behavior may also promote a hippocampal-dependent process known as declarative memory, which is the flexible memory for facts and episodic events.”
Figure B: This flow chart displays the relationship between ghrelin and leptin. Because ghrelin induces hunger and leptin reduces appetite, the increase of ghrelin is accompanied by the decrease of leptin. This is considered an anabolic process since complex molecules are constructed from smaller units. Conversely, high levels of leptin and low levels of ghrelin have the opposite effect, and this process is considered catabolic since complex molecules are being broken down into simpler molecules.
components of feeding are important guides for future foraging and feeding behaviors. Consequently, biological systems that fluctuate and influence feeding behavior may also promote a hippocampal-dependent process known as declarative memory, which is the flexible memory for facts and episodic events3,7.
EATING
During the pre-prandial (before a meal), prandial, and postprandial stages of feeding behavior, various endocrine, neuropeptidergic, and neural signals are released to regulate appetite, meal size, and the inter-meal interval3 [Figure A]. These biological signals converge with external sensory-related cues in the hippocampus (HPC), a brain region that is most famously associated with memory and, more recently, higher-order controls of feeding behavior3, 11-13. Interestingly, in addition to regulating food intake, each of these hormonal signals up-regulates dynamic structural changes in hippocampal neurons that are purported to contribute to the formation and maintenance of new memories, including synaptic plasticity (molecular changes that modulate the strength of connections between neurons) and neurogenesis (the formation of new neurons from neural stem cells)8. Ghrelin, also known as the “hunger hormone”, is a peptide hormone secreted from the stomach that communicates with the
central nervous system (CNS) to increase food intake and food-motivated behavior and is the only circulating hormone with orexigenic (appetite-stimulating) properties8 [Figure B]. Consistent with ghrelin’s orexigenic effects, ghrelin levels are elevated during energy restriction, peak pre-prandially14-17, and then rapidly decrease in response to eating3,18-20. Type 1a growth hormone secretagogue receptors (GHSR-1a, aka “ghrelin receptors”)21-22 are expressed in “higher order” (limbic, cortical) brain regions involved in memory and cognition, including the HPC3,23-24. Ghrelin may facilitate food seeking by enhancing HPC-dependent spatial and contextual memory to remember the physical location of food sources, as well as contributing to other factors (e.g., social factors) that comprise episodic memories (memories of autobiographical events) surrounding appetitive and consummatory behaviors3. Research has demonstrated that wild-type mice treated with a GHSR antagonist and GHSR-null mice fail to show conditioned place preference (CPP) to a high fat diet (HFD)23-24. The CPP paradigm is a standard preclinical behavioral model used to study the rewarding and aversive effects of drugs. The basic characteristics of this task involve the association of a particular environment with drug treatment, followed by the association of a different environment with the absence of the drug61. The fact that the mice in the study fail to show CPP demonstrates that ghrelin plays a role in enhancing memory for the location of reward-based food intake3. Ghrelin enhances memory function, in part, by promoting adult hippocampal neurogenesis – the process by which new neurons are formed in the brain9 – and synaptic plasticity – the process that controls synaptic strength10. Insulin is a peptide hormone produced by pancreatic β cells in the islets of Langerhans that plays an important role in nutrient metabolism and energy homeostasis. Literature strongly
Source: Wikimedia Commons.
Figure C: A representation of the Standard Model of Systems Consolidation (SMSC). In this theory, the memory trace (features of the experience represented by red circles) is initially weak in the neocortex and is reliant on its connections to the medial temporal hippocampal system (MTH) for retrieval. Over time, an intrinsic process results in the strengthening of the connections between memory trace representations in the neocortex. Once the connections are consolidated, the memory can be retrieved without the hippocampus. Source: Wikimedia Commons
supports a role for insulin receptor (IR) signaling in HPC-dependent learning and memory function. Research also suggests that memory function can be enhanced with exogenous insulin treatment under both pathological and healthy conditions3. Early evidence for insulin’s involvement in memory was reported by Strong et al. (1990) who discovered that peripheral injections of insulin completely reversed deficits in working memory attributable to ischemic stroke3,27. It was later found that in healthy human subjects, 8 weeks of intranasal insulin treatment (4×/day) significantly improves hippocampal-dependent declarative memory. The researchers in this experiment investigated the effects of intranasal insulin treatments on declarative memory by testing the subjects’ immediate and delayed recall of word lists. After 8 weeks of intranasal insulin administration, delayed recall of words significantly improved (words recalled, Placebo 2.92 ± 1.00, Insulin 6.20 ± 1.03, p < 0.05)3,28. The mechanisms through which central IR signaling may enhance HPC functioning include improved glucose utilization and enhanced neuronal plasticity via neurotrophic pathways.3 Leptin, the major anorexic signaling molecule in the body, is a hormone synthesized and released into the blood primarily by white adipose tissue29. Leptin acts as a satiety signal to reduce food intake and body weight during times of nutritional abundance, and also promotes memory function, particularly through its action in hippocampal neurons3. For example, intravenous and peripheral administration of leptin enhances performance in HPC-dependent spatial and contextual memory tasks in rodents30. Within the brain, direct administration of leptin to the dorsal hippocampus (dHPC)
improves HPC-dependent memory retention in mice in memory assessment tasks32. In humans, leptin deficiency is associated with impaired verbal memory function, which recovers following leptin administration31. Collectively, these behavioral studies indicate that leptin signaling improves HPC-dependent memory function in both passive reinforcement tasks and memory tasks based on escape/aversive reinforcement3. At the neuronal level, leptin acts on HPC neurons to promote synaptic function, in part by protecting HPC neurons from apoptosis, or cell death33. Findings suggest that leptin may enhance synaptic plasticity at the specific concentrations reached when optimal energy reserves are met, with opposite effects observed in the presence of either insufficient or excess energy reserves3.
“Memory consolidation refers to the process by which a temporary, labile memory trace is transformed into a more stable, long lasting form.”
THEORIES OF MEMORY RETRIEVAL
Memory consolidation refers to the process by which a temporary, labile memory trace is transformed into a more stable, long-lasting form. The Standard Model of Systems Consolidation (SMSC) describes this as the process by which memories, initially dependent on the hippocampus, are reorganized as time passes [Figure C]. By this process, the hippocampus gradually becomes less important for storage and retrieval, and a more permanent memory develops in distributed regions of the neocortex. The idea is that gradual changes in the neocortex, beginning at the time of learning, establish stable long-term memory by increasing the complexity, distribution, and connectivity among multiple cortical regions39. Recent findings have enriched this perspective by emphasizing the dynamic nature of long-term memory39-40. Memory is reconstructive and vulnerable to error, as in false remembering39,41.
“The implication is that information initially requires the integrity of medial temporal lobe structures but is reorganized as time passes to depend much less (or not at all) on these same structures.”
Another theory regarding memory retrieval is the Multiple Trace Theory (MTT), in which memories are always dependent on the hippocampus regardless of age. Proposed as an alternative to the standard model, MTT holds that the hippocampus has an important role in the retrieval of all episodic memories, including remote ones. Like the SMSC, MTT proposes that memories are encoded in hippocampal-neocortical networks, but it adds that each reactivation results in a different trace in the hippocampus. Hippocampal-bound traces are presumed to be contextual and rich in spatial and temporal details, while cortical-bound traces are presumed to be semantic (memories of facts and general knowledge) and largely context-free. Thus, retrieval of remote semantic memories does not require the hippocampus; retrieval of remote episodic memories, however, always does, irrespective of the age of the memory64. More recently proposed is the competitive trace theory (CTT), an integrated theory that attempts to explain phenomenological distinctions such as episodic vs. semantic memory using neurocomputational proposals based on interference and associations. CTT can be viewed as a harmonization of SMSC and MTT in which consolidation and hippocampal independence occur for the semantic components of experiences via a multiple trace mechanism. Every time a memory is reactivated, the hippocampus encodes a partially overlapping trace that competes with similar traces from other reactivations in the neocortex. This model proposes the existence of a continuum and hypothesizes that the role of the hippocampus during retrieval is the recontextualization of memories along this continuum64. Memory consolidation was first proposed in 190039,42-43 to account for the phenomenon of retroactive interference, the disruption of old memories by newer memories for a period of time after learning. The key observation was that recent memories are more vulnerable to injury or disease than remote memories39. Studies of retrograde amnesia, a type of memory loss in which information acquired before the incident of brain damage is lost, have provided useful information about memory consolidation. Six memory-impaired patients with bilateral damage limited to the hippocampus were given a test of 250 news events covering 50 years39,44. The patients showed a similar graded memory loss extending just a few years into the premorbid period. These results suggest a consolidation process whereby the human hippocampus may be needed to support
memory for factual information (semantic memory) for as long as a few years after learning but is not needed after that time. Patients with medial temporal lobe lesions can have considerable sparing of premorbid memory (e.g., recognition of famous faces: patient H.M.45; spatial knowledge of the childhood environment: patient E.P.)39,46. These findings show that the brain regions damaged in such patients, although important for new learning, are not similarly important for recollecting the past. The implication is that information initially requires the integrity of medial temporal lobe structures but is reorganized as time passes to depend much less (or not at all) on these same structures. MTT suggests, on the other hand, that the reason older memories are more resilient to hippocampal damage is that they have been retrieved and reconsolidated more frequently than newer memories, and therefore have a larger number of neurons incorporated into the memory trace. There is some evidence that complete excision of the hippocampus, rather than partial damage, affects old and new memories equally.
LONG-TERM POTENTIATION
In the early 1970s, it was shown that repetitive activation of excitatory synapses in the hippocampus caused an increase in synaptic strength that could last for hours or even days47-48. This long-lasting synaptic enhancement is called long-term potentiation (LTP), and it is believed to be important for our understanding of the cellular and molecular mechanisms by which memories are formed and stored47,49. LTP at CA1 (Cornu Ammon, a Latin name for Ammon's horn – the ram's horn that resembles the shape of the hippocampus)50 synapses, which release the neurotransmitter glutamate, appears to be identical to the LTP observed at glutamatergic excitatory synapses throughout the mammalian brain, including the cerebral cortex51. The fact that LTP can be most reliably generated in brain regions involved in learning and memory is often used as evidence for its functional relevance to the creation of new memories47. LTP of the Schaffer collateral – axons of cells in the CA3 – synapse exhibits several qualities that suggest that it is a neural mechanism for information storage [Figure D]. LTP is state-dependent, which means that the degree of depolarization of the postsynaptic cell determines whether or not LTP occurs. In order to induce strong depolarization of the postsynapse, sufficient sodium (Na+) must enter the cell through AMPARs, an ionotropic
glutamate receptor, to remove the magnesium ion (Mg2+) plug from NMDARs, another type of ionotropic glutamate receptor. Calcium (Ca2+) conductance through NMDARs activates kinases that traffic new AMPARs into the postsynaptic density (PSD) and strengthen the synapse. This is similar to the Hebbian theory (an early attempt to establish a theoretical framework for the synaptic changes underlying learning and memory), which requires coincident activation of presynaptic and postsynaptic elements. LTP also exhibits the property of input specificity in that it is restricted to activated synapses. If the activation of one set of synapses led to all other synapses being potentiated, it would be difficult to enhance particular sets of inputs, as is presumably required for learning and memory. Another important property of LTP is associativity. If one pathway is weakly activated at the same time that a neighboring pathway onto the same cell is strongly activated, both synaptic pathways undergo LTP. This selective enhancement of conjointly activated sets of synaptic inputs is often considered a cellular analog of associative or classical conditioning. More generally, associativity is expected in any network of neurons that links one set of information with another47. Since LTP seems to be associated with memory formation, it makes sense that it interacts with the biological signals that facilitate feeding behavior. Studies demonstrate that in vitro ghrelin administration enhances LTP in HPC slices3,52. Inversely, rodents deficient in the leptin receptor long isoform (LepRb) demonstrate impaired LTP at HPC CA1 synapses, as well as impaired spatial memory performance3,53.
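The three properties just described – state dependence, input specificity, and associativity – can be illustrated with a deliberately simplified toy model. The sketch below is not a biophysical simulation; the threshold and learning rate are invented purely for illustration:

# Toy model of LTP induction (illustrative only; constants are invented).
# A synapse is strengthened only when presynaptic activity coincides with
# strong postsynaptic depolarization -- a stand-in for relief of the
# Mg2+ block on NMDARs.

DEPOLARIZATION_THRESHOLD = 0.5  # stand-in for the Mg2+ unblock level
LEARNING_RATE = 0.1

def update_weights(weights, presyn_active, depolarization):
    """State dependence: no LTP unless depolarization crosses threshold.
    Input specificity: only active synapses are potentiated."""
    if depolarization < DEPOLARIZATION_THRESHOLD:
        return weights  # Mg2+ plug still in place: no change anywhere
    return [w + LEARNING_RATE if active else w
            for w, active in zip(weights, presyn_active)]

# A weak pathway (index 0) alone cannot depolarize the cell enough:
weights = update_weights([0.2, 1.0], [True, False], depolarization=0.3)
# Paired with a strong pathway (index 1), depolarization is shared, and
# BOTH active pathways are potentiated (associativity):
weights = update_weights(weights, [True, True], depolarization=0.9)
print(weights)  # approximately [0.3, 1.1]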
POSTPRANDIAL MEMORY CONSOLIDATION
“Most animals tend to slow down and rest after a large intake of calories, suggesting that there is a biological function to this reaction,” says Thomas Carew, a professor in New York University’s Center for Neural Science2. Carew was the senior author of a paper that focused on the neurotropic and modulatory effects of
Figure E: Aplysia californica, also called the California sea hare, is a species of sea slug that inhabits coastal regions thick with vegetation. This particular species is located in the Pacific Ocean off the coast of California. They can usually be found crawling among the seaweed that serves as their source of food. Source: Wikimedia Commons.
insulin-like growth factor II (IGF2) in Aplysia. The expression of IGF2 is mainly regulated by growth hormone and nutrition63. Carew and his fellow scientists, Nikolay Kukushkin and Sidney Williams, found that human IGF2 produces an enhancement of both synaptic transmission, a mechanism thought to underlie long-term memory, and neurite outgrowth in the marine mollusk Aplysia californica54 [Figure E]. This is unusual because insulin-like molecules in humans are segregated into at least two distinct functional modules: a metabolic module, represented by insulin, controls feeding and energy balance, while a neurotropic module, like IGF2, controls memory formation2. However, in Aplysia, the distinct modules are unified into a single system. Kukushkin, Williams, and Carew speculate that this combination of neurotropic and metabolic effects of the Aplysia insulin-like system represents a coordinated response to eating.
"It has been proposed in a range of systems that long-term memory and active active behaviors are energetically costly and require tradeoffs."
It has been proposed in a range of systems that long-term memory and active behaviors are energetically costly and require trade-offs2,54-58. Carew and his team speculate that it is possible that in Aplysia, memory consolidation is induced as part of a general reallocation of resources in response to feeding. Consistent with this hypothesis, the level of active behavior is significantly attenuated by an injection of GSK1838705A, an inhibitor of the insulin receptor (InsR) and IGF1 receptor (IGF1R) in mammals, suggesting that feeding broadly controls Aplysia behavior via insulin-like receptors54.
CONCLUSION
Scientific discoveries relating feeding behavior and memory have created a vast web of connections, from biological orexigenic and satiety signals to the complexity of long-term potentiation and the ‘food coma’ experienced after a particularly large meal. The systems that control eating and memory intertwine, revealing the intricate processes that so delicately maintain homeostasis within our body and brain.
Figure D: Within the hippocampus is a one-way loop through which information enters. The loop first connects at the dentate gyrus; its second connection occurs in area CA3, and its final connection is made in area CA1. Source: Wikimedia Commons.
In the coming years, more information correlating memory and eating has the potential to alter the way people think about food. It is even conceivable that scientists will discover how to manipulate and take advantage of the processes related to feeding behavior that facilitate memory formation and consolidation. Progress has already been made in this direction: a recent study demonstrated that a diet high in soya significantly improved short-term memory, long-term memory, and mental flexibility59. This is essential to our understanding of food because it shows that significant cognitive improvements can arise from a relatively brief dietary intervention. This finding and others like it are the types of discoveries that will facilitate further advancement of human capability.
References
[1] Aggio, J. F., & Derby, C. D. (2010). Encyclopedia of Animal Behavior.
[2] New York University. (2019, October 10). Food comas and long-term memories: New research points to an appetizing connection. ScienceDaily. Retrieved December 22, 2019 from www.sciencedaily.com/releases/2019/10/191010113154.htm
[3] Suarez, A. N., Noble, E. E., & Kanoski, S. E. (2019, April 3). Regulation of memory function by feeding-relevant biological systems: Following the breadcrumbs to the hippocampus. Retrieved from https://www.frontiersin.org/articles/10.3389/fnmol.2019.00101/full
[4] de Castro, J. M., Brewer, E. M., Elmore, D. K., and Orozco, S. (1990). Social facilitation of the spontaneous meal size of humans occurs regardless of time, place, alcohol or snacks. Appetite 15, 89–101. doi: 10.1016/0195-6663(90)90042-7
[5] Levitsky, D. A. (2005). The non-regulation of food intake in humans: hope for reversing the epidemic of obesity. Physiol. Behav. 86, 623–632. doi: 10.1016/j.physbeh.2005.08.053
[6] Herman, C. P., and Higgs, S. (2015). Social influences on eating. An introduction to the special issue. Appetite 86, 1–2. doi: 10.1016/j.appet.2014.10.027
[7] Eichenbaum, H., and Cohen, N. J. (2014). Can we reconcile the declarative memory and spatial navigation views on hippocampal function? Neuron 83, 764–770. doi: 10.1016/j.neuron.2014.07.032
[8] Kanoski, S. E., & Grill, H. J. (2017, May 1). Hippocampus contributions to food intake control: Mnemonic, neuroanatomical, and endocrine mechanisms. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4809793/
[9] What is neurogenesis? (2017, May 18). Retrieved from
https://qbi.uq.edu.au/brain-basics/brain-physiology/what-neurogenesis
[10] What is synaptic plasticity? (2018, April 17). Retrieved from https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/what-synaptic-plasticity
[11] Davidson, T. L., Kanoski, S. E., Schier, L. A., Clegg, D. J., and Benoit, S. C. (2007). A potential role for the hippocampus in energy intake and body weight regulation. Curr. Opin. Pharmacol. 7, 613–616. doi: 10.1016/j.coph.2007.10.008
[12] Parent, M. B., Darling, J. N., and Henderson, Y. O. (2014). Remembering to eat: hippocampal regulation of meal onset. Am. J. Physiol. Regul. Integr. Comp. Physiol. 306, R701–R713. doi: 10.1152/ajpregu.00496.2013
[13] Kanoski, S. E., and Grill, H. J. (2017). Hippocampus contributions to food intake control: mnemonic, neuroanatomical, and endocrine mechanisms. Biol. Psychiatry 81, 748–756. doi: 10.1016/j.biopsych.2015.09.011
[14] Wren, A. M., Seal, L. J., Cohen, M. A., Brynes, A. E., Frost, G. S., Murphy, K. G., et al. (2001a). Ghrelin enhances appetite and increases food intake in humans. J. Clin. Endocrinol. Metab. 86:5992. doi: 10.1210/jc.86.12.5992
[15] Drazen, D. L., Vahl, T. P., D’Alessio, D. A., Seeley, R. J., and Woods, S. C. (2006). Effects of a fixed meal pattern on ghrelin secretion: evidence for a learned response independent of nutrient status. Endocrinology 147, 23–30. doi: 10.1210/en.2005-0973
[16] Blum, I. D., Patterson, Z., Khazall, R., Lamont, E. W., Sleeman, M. W., Horvath, T. L., et al. (2009). Reduced anticipatory locomotor responses to scheduled meals in ghrelin receptor deficient mice. Neuroscience 164, 351–359. doi: 10.1016/j.neuroscience.2009.08.009
[17] Davis, J. F., Choi, D. L., Clegg, D. J., and Benoit, S. C. (2011). Signaling through the ghrelin receptor modulates hippocampal function and meal anticipation in mice. Physiol. Behav. 103, 39–43. doi: 10.1016/j.physbeh.2010.10.017
[18] Ariyasu, H., Takaya, K., Tagami, T., Ogawa, Y., Hosoda, K., Akamizu, T., et al. (2001). Stomach is a major source of circulating ghrelin and feeding state determines plasma ghrelin-like immunoreactivity levels in humans. J. Clin. Endocrinol. Metab. 86, 4753–4758. doi: 10.1210/jc.86.10.4753
[19] Cummings, D. E., Purnell, J. Q., Frayo, R. S., Schmidova, K., Wisse, B. E., and Weigle, D. S. (2001). A preprandial rise in plasma ghrelin levels suggests a role in meal initiation in humans. Diabetes 50, 1714–1719. doi: 10.2337/diabetes.50.8.1714
[20] Nass, R., Farhy, L. S., Liu, J., Prudom, C. E., Johnson, M. L., Veldhuis, P., et al. (2008). Evidence for acyl-ghrelin
modulation of growth hormone release in the fed state. J. Clin. Endocrinol. Metab. 93, 1988–1994. doi: 10.1210/jc.2007-2234
[21] Howard, A. D., Feighner, S. D., Cully, D. F., Arena, J. P., Liberator, P. A., Rosenblum, C. I., et al. (1996). A receptor in pituitary and hypothalamus that functions in growth hormone release. Science 273, 974–977. doi: 10.1126/science.273.5277.974
[22] Sun, Y., Wang, P., Zheng, H., and Smith, R. G. (2004). Ghrelin stimulation of growth hormone release and appetite is mediated through the growth hormone secretagogue receptor. Proc. Natl. Acad. Sci. U S A 101, 4679–4684. doi: 10.1073/pnas.0305930101
[23] Guan, X. M., Yu, H., Palyha, O. C., McKee, K. K., Feighner, S. D., Sirinathsinghji, D. J., et al. (1997). Distribution of mRNA encoding the growth hormone secretagogue receptor in brain and peripheral tissues. Mol. Brain Res. 48, 23–29. doi: 10.1016/s0169-328x(97)00071-5
[24] Zigman, J. M., Jones, J. E., Lee, C. E., Saper, C. B., and Elmquist, J. K. (2006). Expression of ghrelin receptor mRNA in the rat and the mouse brain. J. Comp. Neurol. 494, 528–548. doi: 10.1002/cne.21171
[25] Wren, A. M., Small, C. J., Abbott, C. R., Dhillo, W. S., Seal, L. J., Cohen, M. A., et al. (2001b). Ghrelin causes hyperphagia and obesity in rats. Diabetes 50, 2540–2547. doi: 10.2337/diabetes.50.11.2540
[26] Faulconbridge, L. F., Cummings, D. E., Kaplan, J. M., and Grill, H. J. (2003). Hyperphagic effects of brainstem ghrelin administration. Diabetes 52, 2260–2265. doi: 10.2337/diabetes.52.9.2260
[27] Strong, A. J., Fairfield, J. E., Monteiro, E., Kirby, M., Hogg, A. R., Snape, M., et al. (1990). Insulin protects cognitive function in experimental stroke. J. Neurol. Neurosurg. Psychiatry 53, 847–853. doi: 10.1136/jnnp.53.10.847
[28] Benedict, C., Hallschmid, M., Hatke, A., Schultes, B., Fehm, H. L., Born, J., et al. (2004). Intranasal insulin improves memory in humans. Psychoneuroendocrinology 29, 1326–1334. doi: 10.1016/j.psyneuen.2004.04.003
[29] Zhang, Y., Proenca, R., Maffei, M., Barone, M., Leopold, L., and Friedman, J. M. (1994). Positional cloning of the mouse obese gene and its human homologue. Nature 372, 425–432. doi: 10.1038/372425a0
[30] Oomura, Y., Hori, N., Shiraishi, T., Fukunaga, K., Takeda, H., Tsuji, M., et al. (2006). Leptin facilitates learning and memory performance and enhances hippocampal CA1 long-term potentiation and CaMK II phosphorylation in rats. Peptides 27, 2738–2749. doi: 10.1016/j.peptides.2006.07.001
[31] Paz-Filho, G. J., Babikian, T., Asarnow, R., Delibasi, T., Esposito, K., Erol, H. K., et al. (2008). Leptin replacement
improves cognitive development. PLoS One 3:e3098. doi: 10.1371/journal.pone.0003098
[32] Farr, S. A., Banks, W. A., and Morley, J. E. (2006). Effects of leptin on memory processing. Peptides 27, 1420–1425. doi: 10.1016/j.peptides.2005.10.006
[33] Guo, Z., Jiang, H., Xu, X., Duan, W., and Mattson, M. P. (2008). Leptin-mediated cell survival signaling in hippocampal neurons mediated by JAK STAT3 and mitochondrial stabilization. J. Biol. Chem. 283, 1754–1763. doi: 10.1074/jbc.m703753200
[34] Garza, J. C., Guo, M., Zhang, W., and Lu, X. Y. (2008). Leptin increases adult hippocampal neurogenesis in vivo and in vitro. J. Biol. Chem. 283, 18238–18247. doi: 10.1074/jbc.M800053200
[35] O’Malley, D., MacDonald, N., Mizielinska, S., Connolly, C. N., Irving, A. J., and Harvey, J. (2007). Leptin promotes rapid dynamic changes in hippocampal dendritic morphology. Mol. Cell. Neurosci. 35, 559–572. doi: 10.1016/j.mcn.2007.05.001
[36] Stranahan, A. M., Lee, K., Martin, B., Maudsley, S., Golden, E., Cutler, R. G., et al. (2009). Voluntary exercise and caloric restriction enhance hippocampal dendritic spine density and BDNF levels in diabetic mice. Hippocampus 19, 951–961. doi: 10.1002/hipo.20577
[37] Dhar, M., Wayman, G. A., Zhu, M., Lambert, T. J., Davare, M. A., and Appleyard, S. M. (2014a). Leptin-induced spine formation requires TrpC channels and the CaM kinase cascade in the hippocampus. J. Neurosci. 34, 10022–10033. doi: 10.1523/JNEUROSCI.2868-13.2014
[38] Dhar, M., Zhu, M., Impey, S., Lambert, T. J., Bland, T., Karatsoreos, I. N., et al. (2014b). Leptin induces hippocampal synaptogenesis via CREB-regulated microRNA-132 suppression of p250GAP. Mol. Endocrinol. 28, 1073–1087. doi: 10.1210/me.2013-1332
[39] Squire, L. R., Genzel, L., Wixted, J. T., & Morris, R. G. (2015, August 3). Memory consolidation. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4526749/
[40] Dudai Y, Morris RGM. (2000). To consolidate or not to consolidate: What are the questions? In Brain, perception, memory: advances in cognitive sciences (ed. Bulhuis JJ), pp. 149–162. Oxford University Press, Oxford.
[41] Schacter DL, Dodson CS. (2001). Misattribution, false recognition and the sins of memory. Philos Trans R Soc London B Biol Sci 356: 1385–1393.
[42] Müller GE, Pilzecker A. (1900). Experimentelle Beiträge zur Lehre vom Gedächtnis. [Experimental contributions to the science of memory]. Z Psychol Ergänzungsband 1: 1–300.
[43] Lechner HA, Squire LR, Byrne JH. (1999). 100 years of consolidation – remembering Müller and Pilzecker. Learn. Mem. 6, 77–87.
[46] Teng E, Squire LR. (1999). Memory for places learned long ago is intact after hippocampal damage. Nature 400: 675–677.
[47] Malenka RC, Nicoll RA. (1999). Long-term potentiation – a decade of progress? Science 285: 1870–1874. doi: 10.1126/science.285.5435.1870
[48] T. Lomo, Acta Physiol. Scand. 68 (suppl. 277), 128 (1966); T. V. P. Bliss, T. Lomo, J. Physiol. 232, 331 (1973); T. V. P. Bliss and A. R. Gardner-Medwin, ibid., p. 357.
[49] T. J. Teyler, P. DiScenna, Annu. Rev. Neurosci. 10, 131 (1987); B. Gustafsson, H. Wigstrom, Trends Neurosci. 11, 156 (1988); R. A. Nicoll, J. A. Kauer, R. C. Malenka, Neuron 1, 97 (1988); D. V. Madison, R. C. Malenka, R. A. Nicoll, Annu. Rev. Neurosci. 14, 379 (1991); T. V. P. Bliss, G. L. Collingridge, Nature 361, 31 (1993); A. U. Larkman, J. J. B. Jack, Curr. Opin. Neurobiol. 5, 324 (1995); R. A. Nicoll, R. C. Malenka, Nature 377, 115 (1995).
[50] Purves, D. (n.d.). Long-Term Synaptic Potentiation. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK10878/
[51] A. Kirkwood et al., Science 260, 1518 (1993); M. F. Bear, A. Kirkwood, Curr. Opin. Neurobiol. 3, 197 (1993).
[52] Diano, S., Farr, S. A., Benoit, S. C., McNay, E. C., da Silva, I., Horvath, B., et al. (2006). Ghrelin controls hippocampal spine synapse density and memory performance. Nat. Neurosci. 9, 381–388. doi: 10.1038/nn1656
[53] Li, X. L., Aou, S., Oomura, Y., Hori, N., Fukunaga, K., and Hori, T. (2002). Impairment of long-term potentiation and spatial memory in leptin receptor-deficient rodents. Neuroscience 113, 607–615. doi: 10.1016/s0306-4522(02)00162-8
[54] Kukushkin, N. V., Williams, S. P., & Carew, T. J. (2019). Neurotropic and modulatory effects of insulin-like growth factor II in Aplysia. Scientific Reports 9(1). doi: 10.1038/s41598-019-50923-5
[55] Dukas, R. (1999). Costs of memory: ideas and predictions. J Theor Biol 197, 41–50. doi: 10.1006/jtbi.1998.0856
[56] Mery, F. & Kawecki, T. J. (2005). A cost of long-term memory in Drosophila. Science 308, 1148. doi: 10.1126/science.1111331
[57] Niven, J. E. & Laughlin, S. B. (2008). Energy limitation as a selective pressure on the evolution of sensory systems. J Exp Biol 211, 1792–1804. doi: 10.1242/jeb.017574
[58] Horne, J. (2009). REM sleep, energy balance and ‘optimal foraging’. Neurosci Biobehav Rev 33, 466–474. doi: 10.1016/j.neubiorev.2008.12.002
[59] File, S. E., Jarrett, N., Fluck, E., et al. (2001). Eating soya improves human memory. Psychopharmacology 157, 430–436. doi: 10.1007/s002130100845
[60] Food function and structure – introduction. (n.d.). Retrieved from https://www.sciencelearn.org.nz/resources/528-food-function-and-structure-introduction
[61] Prus, A. J. (n.d.). Conditioned Place Preference. Retrieved from https://www.ncbi.nlm.nih.gov/books/NBK5229/
[62] Plasticity in Neural Networks. (n.d.). Retrieved from https://thebrain.mcgill.ca/flash/a/a_07/a_07_cl/a_07_cl_tra/a_07_cl_tra.html
[63] IGF-2 (Insulin-Like Growth Factor 2). (n.d.). Retrieved from https://www.biovendor.com/igf-2
[64] Yassa MA, Reagh ZM. (2013). Competitive trace theory: a role for the hippocampus in contextual interference during retrieval. Front. Behav. Neurosci. 7:107. doi: 10.3389/fnbeh.2013.00107
[65] California Sea Hare (California Sea Slugs – Nudibranchs (and other marine Heterobranchia) of California) · iNaturalist. (n.d.). Retrieved from https://www.inaturalist.org/guide_taxa/2869
The Plastisphere: An Emerging Human Derived Ecosystem BY HUBERT GALAN
INTRODUCTION
Social media, a pervasive and nearly indispensable component of daily life, has significantly influenced communication and behavior in modern society. With just the tap of a finger, an individual is able to establish and maintain relationships with people on the opposite side of the world. The ways in which people communicate using social media were once limited to simple text but, through technological advances, now include new media formats such as music videos and studio art. A key aspect of social media is its ability to generate awareness of social, political, and environmental issues; one such issue (captured by the trending social media hashtag #savetheturtles) is the growing accumulation of plastic in the oceans. Videos of turtles ingesting plastic or being strangled by it reveal the horrific reality of pollution. As a result, young people have turned away from plastic straws and equipped themselves with the new and trendy reusable straw. Because of social media, awareness spread to a point where real concern prompted action among young people, all for the sake of saving the turtles. The ocean has fallen victim to copious
pollution caused by the extensive use of non-biodegradable materials. Plastic, however, has ignited a unique concern among scientists. The same properties that make the material so desirable and convenient are the reasons its presence in the ocean is so concerning. The durability of the material gives it a lifespan of roughly 450 years in the ocean, and this accumulation of plastics challenges the wellbeing of marine ecosystems1. Sea creatures are among those affected, often with horrendous consequences (strangulation, choking, and digestive tract mutilation), but the weathering and degradation of plastics also poses a significant threat to the foundation of marine ecosystems: the microbial community3. The term "plastisphere" refers to newly established miniature ecosystems residing on aggregated plastics floating in the ocean2. This article will explain the scientific phenomenon of the plastisphere as it pertains to marine ecosystems as a whole and outline different components of its creation, including the history of plastic, its chemical viability, and why life is attracted to it.
COVER: Plastic has been at the forefront of environmental concerns for decades. Source: Wikimedia Commons
"A key aspect of social media is it's ability to generate awareness of social, political, and environmental issues; one such issue (categorized by the trending social media hashtag savetheturtles) is the growing accumulation of plastic in oceans."
In 1907, the Belgian chemist Leo Baekeland, in his search for an inexpensive electrical insulator, inadvertently created a material on which humans would grow severely
“Plastic is extremely versatile, and this versatility has created a significant dependency.”
dependent decades later: plastic. Baekeland’s invention was not only durable, but also a good insulator, heat resistant, and extensively versatile. The malleability of the material made it possible to mold and shape it into many forms. The material was the first to be fully synthetic, meaning its components were artificially made molecules, with none coming from nature4. This fact distinguished plastic from its predecessor celluloid, a similar material that still contained remnants of natural molecules. The artificial nature of plastic made the item better suited for mass production4.
Chemically, polypropylene is constructed from the monomer propylene (C3H6), a colorless and highly flammable gas produced through the refining of gasoline6. The ability to create this versatile and revolutionary material stems from a process known as Ziegler-Natta polymerization, a chemical procedure that links simpler monomers to form complex polymers7. Research has identified many derivatives of polypropylene that are also used, in some cases alongside the original polypropylene, to create new plastics with distinct properties.
World War II stimulated the rapid expansion of the plastic industry in the U.S., in part because of the urgency the government felt to preserve natural resources, but also because of the need to maintain military might. The investment in research into new polymers and plastics led to the creation of synthetic silk and Plexiglas, materials that constituted an abundance of wartime equipment5. The production of plastic continued to increase well after the war as a result of the economic prosperity in the U.S. The malleability and versatility of plastic made it suitable for an endless array of products. Consumerism, the foundation of a thriving capitalist economy, experienced formative development after World War II. In this type of economy, little to no attention was paid to the well-being of the environment. Rather, the most affordable path was the path taken. Plastic was woven into everything. It served as a wholesale replacement for the expensive, but biodegradable, natural resources that had previously been used. Plastics were introduced into various industries, challenging traditional materials and triumphing continuously5. Moreover, the simple chemical structure of the polymer made it easy to manufacture at large scale5. Plastic quickly became a staple of American consumerism and continues to be widely used today. The normalization of plastic remains a major impediment to restoring the ocean’s health.
Plastic is extremely versatile, and this versatility has created a significant dependency. Plastics are wedged into almost every product we use and are involved in almost every aspect of our lives. However, it hasn’t always been this way. In the 1960s, plastic waste from humans amounted to just 1% of all waste, but from 1960 to the present, polypropylene plastic production increased from 0.5 to 260 million tonnes10. A major portion of the plastic being produced is designated specifically for products and items that are meant to be short-lived. As a result, much plastic sees general use for less than a year before being disposed of, with about 0.3% of that plastic ending up in the ocean10. But how exactly does it get there? Littering is one of the largest culprits and occurs on two scales: the individual person who litters as a mundane routine and large-scale industrial pollution. Typically, pieces of plastic dropped on the streets are washed into rivers and streams by rainwater and eventually carried all the way to the ocean. Moreover, some products, such as sanitary products, are flushed down the drain. Washing machines also discharge microfibers that aggregate in open water10.
THE MOLECULE OF LIFE
Figure 1: The image above depicts a 3D model of a polypropylene molecule, the main substance used to create the plastic that is commercially sold. The hydrocarbon polymer is a consistently repeated sequence of the propylene (C3H6) monomer. Its simple structure allows it to be industrially produced with ease.
Polypropylene is a durable and malleable polymer that acts as the main material for plastics in general commercial use, spanning industries such as household appliances, textiles, and technology.
TO THE TRASHCAN AND BEYOND
CHARACTERIZING THE PLASTISPHERE
The abundance of plastic debris within marine environments has only grown, a direct consequence of the rising global production and use of plastic. One of the most significant consequences of plastic in our oceans is that it allows for the proliferation of microbial communities, typically containing organisms that would not otherwise be present. The Josephine Bay Paul Center for Comparative
Source: Wikimedia Commons
Figure 2: The image above depicts an aggregation of plastics, debris and other matter that washed ashore in San Francisco after a storm. The pieces of plastic in the image above obviously have vastly different origins, as emphasized by the differences in structure, color and size. Plastic pollution has only become more prevalent with time and it poses a problem to marine ecosystems. Source: Flickr
Molecular Biology conducted research attempting to characterize this unique ecosystem. Experimenters used scanning electron microscopy (SEM) paired with DNA sequencing to analyze the microbial communities and found that marine plastic debris hosts an intensely diverse ecosystem, including symbionts, heterotrophs, autotrophs, and predators8. Not only were these communities richly endowed with bacterial microbiota, but several eukaryotic organisms were also recognized. The DNA analysis showed that the communities inhabiting different plastic fragments were vastly distinct8. The few commonalities between samples occurred because of certain species that are known to create biofilms in aquatic environments, such as species in the genera Navicula, Sellaphora, and Nitzschia. The researchers continued their study by analyzing the diversity patterns between the plastisphere and surrounding seawaters. They found that plastic samples possessed less diversity and more ‘evenness,’ meaning that the species that did inhabit the plastics were represented in roughly equal proportions across the plastisphere population. That is to say, there was high diversity within these plastic samples, but compared to other rich oceanic ecosystems, the samples appear rather simple. Conversely, rare species were abundant in the seawater samples, which the researchers attribute to less pressure from other species8. On the surface of plastic, the population is more metabolically active and competitive, a condition that is detrimental to smaller, less adaptive microbial organisms. DNA and rRNA sequence analyses showed that the composition of the plastisphere was consistently different from that of the seawater samples where the plastic was
found. For instance, the researchers found one plastisphere had an abundance of two species of photosynthetic filamentous cyanobacteria that were completely absent in seawater8. In a sample labeled ‘C230_01 polypropylene,’ researchers found that 24% of the sample population was from the genus Vibrio, compared to only 1% in seawater populations8. The unique environment of microbial organisms that thrive on plastic facilitates the proliferation of organisms otherwise unseen in the ocean. As a result, a major concern is the potential introduction of invasive species to these environments. The effects of invasive species can be dire, as they are among the leading causes of animal extinctions9.
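The ‘diversity’ and ‘evenness’ comparisons described above correspond to standard ecological indices (Shannon diversity and Pielou’s evenness). The following is a minimal sketch of how such indices are computed; the species counts here are invented for illustration, not data from the study:

import math

def shannon_diversity(counts):
    """Shannon index H' = -sum(p * ln p) over the species proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def pielou_evenness(counts):
    """Evenness J' = H' / ln(S); J' approaches 1 when all species
    present are equally abundant."""
    richness = sum(1 for c in counts if c > 0)
    return shannon_diversity(counts) / math.log(richness)

plastic = [30, 25, 20, 25]                   # fewer species, similar counts
seawater = [60, 10, 5, 5, 5, 5, 4, 3, 2, 1]  # more species, many of them rare

print(round(pielou_evenness(plastic), 2))    # ~0.99: high evenness
print(round(pielou_evenness(seawater), 2))   # ~0.65: dominant plus rare taxa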
“Extensive plastic ingestion has accumulated to a point where almost every marine species in the ocean has ingested a piece of plastic at least once.”
WILDLIFE AND THE PLASTISPHERE
Extensive plastic ingestion has accumulated to a point where almost every marine species in the ocean has ingested a piece of plastic at least once8. Once again, the culprit of these tragedies is a suspect receiving almost no attention: microscopic plastic debris. These nanoscopic pieces of plastic, after being mistaken for food, are consumed by primary consumers such as plankton. In a study conducted in the western North Atlantic Ocean and Caribbean Sea, researchers found that 60% of the surface plankton they sampled (n = 6,136) contained pieces of plastic within their digestive systems8. Plankton are normally eaten by larger predators, so bioaccumulation of polypropylene plastics poses a serious health risk for organisms further up the food chain. In a study conducted by the American Chemical Society, researchers identified the presence of a dangerous dinoflagellate species within a piece of plastic derived from the Mediterranean Sea. Several harmful
Figure 3: Image of plastic marine debris (PMD) taken using a scanning electron microscopy (SEM) technique. The pores depicted above indicate the deterioration of the plastic by a bacterial species. The diversity seen in phenotypic morphotypes indicates the presence of a diverse microbial community known as the plastisphere. Source: Wikimedia Commons
"The plastic is a human derived phenomena threatening the underpinnings of marine ecology."
Figure 4: A deceased fish aimlessly floating in the ocean within a plastic glove. The death of the organism occurred as a direct result of this type of plastic pollution. Because of the sheer durability of plastics constructed using polypropylene, marine life is often affected in some shape or form. This is just one example of how plastics directly threaten the lives and wellbeing of marine organisms.
dinoflagellate species were also located in a plastisphere taken from the Atlantic Ocean1. Dinoflagellates are unicellular organisms (protists) that produce distinct toxins that directly target the neurological systems of mammals11. Moreover, dinoflagellates adhere easily to tough surfaces, which plastics readily provide1. Given the chance, these organisms, and others like them, could proliferate to a point that directly threatens animal life within natural marine ecosystems. Because these species are new to the environment, they face no direct competitors or predators, allowing them to grow, reproduce rapidly, and excrete toxic chemicals.
CONCLUSION
The plastisphere is a human-derived phenomenon threatening the underpinnings of marine ecology. Recently, the plastic epidemic has been depicted and understood in a noncomplex fashion. Images of turtles and seals entangled by plastic that was carelessly discarded after a single use have caught the attention of the masses; however, microscopic plastic is the root of most concerns coming from the scientific community. Aggregation of these types of plastics creates an environment suitable for the proliferation of microbial communities, an ecosystem known as the plastisphere. These communities typically constitute a combination of unique organisms rarely seen in the natural world. Plastic’s durability creates a situation that directly threatens the ecological balance of marine ecosystems through the transport of invasive species. As humans, it is imperative that we look at our lifestyle and make changes to ensure that we are minimizing our effects on the environment: moving away from plastics and back towards natural, biodegradable alternatives would be a momentous step towards the goal of preserving our environment.
References
[1] Eriksen, M., Lebreton, L. C. M., Carson, H. S., Thiel, M., Moore, C. J., Borerro, J. C., … Reisser, J. (2014). Plastic Pollution in the World’s Oceans: More than 5 Trillion Plastic Pieces Weighing over 250,000 Tons Afloat at Sea. PLoS ONE, 9(12). doi: 10.1371/journal.pone.0111913
[2] Carson, H. S., Nerheim, M. S., Carroll, K. A., & Eriksen, M. (2013). The plastic-associated microorganisms of the North Pacific Gyre. Marine Pollution Bulletin, 75(1-2), 126–132. doi: 10.1016/j.marpolbul.2013.07.054
Source: Wikimedia Commons
[3] Haward, M. (2018). Plastic pollution of the world’s seas and oceans as a contemporary challenge in ocean governance. Nature Communications, 9(1). doi: 10.1038/s41467-018-03104-3
[4] History and Future of Plastics. (2019, November 20). Retrieved from https://www.sciencehistory.org/the-history-and-future-of-plastics.
[5] Freinkel, S. (2011, May 29). A Brief History of Plastic's Conquest of the World. Retrieved from https://www.scientificamerican.com/article/a-brief-history-of-plastic-world-conquest/.
[6] The Definitive Guide to Polypropylene (PP). Retrieved from https://omnexus.specialchem.com/selection-guide/polypropylene-pp-plastic.
[7] Chikkali, S. H. (2017). Ziegler–Natta polymerization and the remaining challenges. Resonance, 22(11), 1039–1060. doi: 10.1007/s12045-017-0570-2
[8] Zettler, E. R., Mincer, T. J., & Amaral-Zettler, L. A. (2013). Life in the “Plastisphere”: Microbial Communities on Plastic Marine Debris. Environmental Science & Technology, 47(13), 7137–7146. doi: 10.1021/es401288x
[9] Clavero, M., & Garcia-Berthou, E. (2005). Invasive species are a leading cause of animal extinctions. Trends in Ecology & Evolution, 20(3), 110–110. doi: 10.1016/j.tree.2005.01.003
[10] Wabnitz, C. (n.d.). Plastic Pollution: An Ocean Emergency. Fisheries Centre.
[11] Dinoflagellata. (n.d.). Retrieved from https://microbewiki.kenyon.edu/index.php/Dinoflagellata.
[12] Irimia, R., & Gottschling, M. (2016). Taxonomic revision of Rochefortia Sw. (Ehretiaceae, Boraginales). Biodiversity Data Journal 4: e7720. doi: 10.3897/BDJ.4.e7720
The Evolution of Desalination Use in The Face of Water Scarcity BY JENNIFER CHEN COVER: Desalination around the world. Source: Wikimedia Commons
“For instance, the WRI ranks the United States as a country with low-medium water stress, but individual states like Arizona experience high water stress."
INTRODUCTION: FRESHWATER, WATER SCARCITY, AND THE HISTORY OF DESALINATION PLANTS
Water, especially from freshwater sources, is an integral part of maintaining human life. However, freshwater makes up only 3 percent of the world’s water, and about two-thirds of it remains inaccessible in the form of ice1. Water scarcity and water stress – the adverse conditions created by water scarcity – are common issues across the world2. Countries in the Middle East and North Africa experience the highest baseline water stress, which is the ratio of “total annual freshwater withdrawals” to the “expected annual renewable freshwater supply”3,4. In other words, baseline water stress as a metric assesses the demands of a country or community in relation to their supply. Qatar, Israel, Lebanon, Iran, Jordan, Libya, Kuwait, and Saudi Arabia are the highest-ranking countries according to the World Resources Institute (WRI), meaning that those countries withdraw more than 80% of their annual freshwater supply3,4. However, countries ranked with low or medium water stress levels often have pockets of areas that experience extreme water stress. For example, the WRI ranks the United States as a country
with low-medium water stress, but individual states like Arizona experience high water stress3. An area may experience water stress for various reasons. Firstly, climate change can exacerbate conditions5. Increasing temperatures in already hot regions further accelerate the evaporation of water supplies. Climate change will also decrease precipitation, a means of replenishing freshwater sources, in some areas, mostly around the equator where regions are hottest. As temperatures in hot regions around the equator increase further, air circulation systems called Hadley Cells move poleward and slow down6. The Hadley Cell is responsible for circulating warm air (holding water vapor that condenses once it rises) from the equator to the subtropics, where the cool air sinks and the dry air circulates back to the equator to become warm and moist again. Since warmer air has a larger water-holding capacity, the poleward movement of the cell disrupts the distinct conditions created in the subtropics (e.g., Saharan Africa, the Middle East, and Central America) and leads to drier conditions5. According to the World Health Organization (WHO), 785 million people around the globe currently do not have an
improved drinking water source, meaning that there is no infrastructure protecting their freshwater from contaminants like fecal matter (e.g., protected dug wells)7. The WHO also predicts that, by 2025, 50 percent of the world population will live in water-stressed areas8. Due to the demands of an increasing global population with rising incomes and quality of life (which require the manufacture of more water-intensive products), water usage is rising. This also contributes to global water stress8. Furthermore, freshwater is constantly used for farming and industrial purposes. Poor use of water infrastructure, either man-made or natural, can lead to water lost through leaky pipes. Natural infrastructure like marshes or mangrove forests can filter contaminants out of water, but poor management of natural infrastructure, too, can lead to loss of potable water8. Fresh water may also be polluted by human activity. One source of contamination is human and animal waste, which breeds pathogens and causes diseases like cholera, typhoid, dysentery, polio, and diarrhea9. Developed nations, as well as developing countries, struggle with water pollution and waterborne pathogens. Disease treatment, much of which is needed to treat water-borne illnesses, constitutes “more than one third of the income of poor households in sub-Saharan Africa”10. According to the WHO, at least 2 billion people drink water from a source contaminated with feces8. An area may also experience water stress because of a failure to reuse wastewater or, potentially, saltwater9. Some countries have alleviated water stress through efficient water resource management. For example, countries have invested in sturdier, man-made water infrastructure to avoid leakage and loss of freshwater, as well as natural water infrastructure to promote systems that naturally filter freshwater. Another common way that countries try to manage their water
Figure 2: Current climate regions, which can shift as temperatures become more extreme and impact circulation systems like the Hadley Cell.
resources more effectively is by treating and reusing wastewater3. Desalination, however, is a solution that is altogether different. Desalination, or the process of removing salts from salt water to create freshwater, is most often used to obtain water for drinking purposes. To create potable water, almost all salts from saltwater must be removed11. Desalination is used most often in countries with the highest baseline water stress, such as Saudi Arabia, Bahrain, Qatar, Kuwait, the UAE, and Oman3. Countries like the United States have explored desalination, though only in certain states that experience higher baseline water stress, like Texas, Florida, and California3. Desalination practices are most commonly employed in places that struggle with drought and rapid population growth, as well as difficult climate change-induced issues3. Since the 1950s, the United States has funded the research and development of desalination practices, and various state governments have also invested further resources into research12.
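To make the baseline water stress metric defined earlier concrete, here is a minimal sketch with invented numbers; the WRI’s published methodology is more involved than this simple ratio:

def baseline_water_stress(annual_withdrawals, renewable_supply):
    """Ratio of total annual freshwater withdrawals to the expected
    annual renewable freshwater supply (both in the same units)."""
    return annual_withdrawals / renewable_supply

# Hypothetical region withdrawing 45 km^3 of an expected 50 km^3 per year:
stress = baseline_water_stress(45, 50)
print(f"{stress:.0%}")  # 90% -- above the 80% level the WRI ranks as extremely high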
Source: https://www.researchgate.net/figure/The-Earth-can-be-broadly-divided-into-three-major-regions-the-temperate-subtropical-and_fig1_268274685
“Desalination, or the process of removing salts from saltwater to create freshwater, is most often used to obtain water for drinking purposes."
METHODS OF DESALINATION AND CURRENT DESALINATION TECHNOLOGIES
Most popular desalination practices can be categorized as thermal-based processes or membrane processes13. Thermal-based processes heat the feed water (i.e. saltwater, sea water) to separate the salts from the water, while membrane processes separate the salts from the water using a membrane and pressure. The most popular thermal-based process is multi-stage flash distillation. The water undergoes multiple stages of flash evaporation, where it is separated into freshwater product and brine discharge14. Flash evaporation is the partial evaporation of a liquid to a gas due to falling pressure15. The first stage has the highest temperature and highest pressure, and the pressure drops in each stage afterwards.
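The fraction of brine that flashes to vapor in each stage can be estimated with a simple energy balance: the sensible heat released as the brine cools to the lower-pressure stage’s saturation temperature supplies the latent heat of vaporization. A rough sketch with typical textbook property values, assumed here for illustration rather than taken from this article:

# Rough per-stage energy balance for flash evaporation (illustrative values).
cp = 4.0       # specific heat of brine, kJ/(kg*K) (approximate)
h_fg = 2300.0  # latent heat of vaporization of water near 80 C, kJ/kg (approximate)
dT = 10.0      # temperature drop entering the lower-pressure stage, K

vapor_fraction = cp * dT / h_fg  # fraction of brine that flashes to vapor
print(f"{vapor_fraction:.1%}")   # ~1.7% per stage

The small per-stage yield suggested by this estimate is consistent with why multi-stage flash plants chain together many stages.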
Figure 1: The Hadley Cell, located around the equator, circulates air to 30 degree latitude and defines the climate regions of that area. Source: Shutterstock.
Heated saltwater flows through each stage, and as it undergoes flash evaporation, water vapor flows upwards and condenses on pipes. The heated saltwater that didn’t flash evaporate continues to the next stage to be flash evaporated, and so on until freshwater 28
Figure 3: Phase change diagram labeled with a drop in pressure to show how a drop in pressure, even at the same temperature, can lead to a phase change. Source: https://course-notes.org/chemistry/topic_notes/intermolecular_forces/phase_changes_diagrams
Figure 6: Cation exchange membrane and anion exchange membrane showing the movement of ions. This diagram shows the role of anodes and cathodes in attracting ions across these membranes. Source: http://www.astomcorp.jp/en/product/02.html
"In reverse electrodialysis, electricity is applied to an electrode, and between the anode (negativelycharged) and cathode (positivelycharged) are anion and cation exchange membranes that alternate."
Figure 4: Multi-stage flash distillation diagram, including 3 stages. This method uses flash evaporation. Source: http://www.separationprocesses.com/Distillation/Fig078a.htm
product and a concentrated brine discharge are separated14. Another type of thermal-based desalination process is multiple-effect distillation (MED). Multiple-effect distillation also has stages, or “effects,” of decreasing pressure. In this process, seawater is sprayed onto pipes that are heated by the steam flowing through them. Some of the seawater evaporates, and this steam is either fed into the next effect or collected as freshwater. The seawater that did not evaporate falls to the bottom of the container to be sprayed in the next effect16. Standing in contrast to thermal-based processes, electrodialysis reversal and reverse osmosis are the most popular membrane processes. In electrodialysis reversal, electricity is applied to electrodes, and between the anode (positively charged) and cathode (negatively charged) are anion and cation exchange membranes that alternate17. Anion
exchange membranes are made from ionized polymers that are positively charged, such that they attract negatively-charged anions and allow them to pass. On the other hand, cation exchange membranes are charged negatively and allow positive ions to flow through the membrane. These membranes are semipermeable such that ions can flow through, but water molecules cannot. The exchange membranes are arranged in an alternating pattern between the positive electrode (anode) and the negative electrode (cathode). Across these exchange membranes, ions usually dissolved in saltwater, like sodium cations and chloride anions, move through the closest exchange membranes they are attracted to, creating alternating pockets of freshwater product and salt concentrate18. The second type of membrane process is reverse osmosis. A membrane separates two sides – one with low salt concentration and the other with the feedwater (and a higher salt concentration). This membrane is semi-permeable, therefore only allowing water molecules through19. Osmosis involves the movement of water molecules through a semi-permeable membrane from low salt concentration to high salt concentration until the two sides have equilibrated at the same salt concentration. In reverse osmosis, though, an external pressure is applied to the side with the higher salt concentration such that water flows in the opposite direction – from the side with high salt concentration to the side with low salt concentration. In order to prevent water from moving to the higher salt concentration,
Figure 5: Multi-effect distillation diagram with 3 effects. This method also uses flash evaporation. Source: http://www.separationprocesses.com/Distillation/Fig078b.htm
Figure 7: Anion (labeled with plus signs) and cation exchange membranes (labeled with minus signs) in an alternating pattern. Source: http://www.unido.or.jp/en/technology_db/4456/
Figure 8: Electrodialysis diagram with anion and cation exchange membranes, a cathode, and an anode. In electrodialysis, the freshwater product and the concentrate alternate such that freshwater channels lie at the ends of the system. Therefore, there will always be some cations (most likely sodium cations) that remain in the product.
the external pressure must be greater than the osmotic pressure, or the pressure of the water moving across the membrane during regular osmosis19.
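The osmotic pressure that this applied pressure must exceed can be estimated with the van ’t Hoff relation, osmotic pressure = iMRT. The following is a rough, idealized sketch for seawater; the concentration and dissociation factor are approximations for illustration, not values from this article:

# Idealized van 't Hoff estimate of seawater osmotic pressure.
i = 2        # van 't Hoff factor for NaCl (assumes complete dissociation)
M = 0.6      # approximate molar concentration of NaCl in seawater, mol/L
R = 0.08314  # gas constant, L*bar/(mol*K)
T = 298.0    # temperature, K

osmotic_pressure = i * M * R * T  # in bar
print(f"{osmotic_pressure:.0f} bar")  # ~30 bar

Under these assumptions, the external pressure in seawater reverse osmosis must exceed roughly 30 bar, which is consistent with the substantial energy demands discussed below.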
PROBLEMS WITH DESALINATION
Unfortunately, there are a number of problems associated with desalination processes. The first class of concerns involves contact with wildlife and the environment. In terms of waste products, desalination plants usually produce waste like coagulants, bisulfates, and chlorines, which may harm wildlife if released into the environment20. The consequences are not just biological, but economic also – a decrease in fish and marine biodiversity due to chemical exposure will harm the fishermen who depend on fishing for their livelihood. The industrial process of desalination also damages marine ecosystems because the mass intake of saltwater kills microorganisms, the foundation of these ecosystems21. The second area of concern is that water filtered by desalination techniques may not be safe for human consumption. Unlike freshwater, seawater contains boron, which is difficult to remove completely and causes a variety of health problems. Though the technology to reduce boron to safe levels exists, current government regulations do not address issues like boron in desalinated drinking water16.
The costs and energy usage of desalination practices are another issue altogether. Desalination plants require a great deal of energy – “nine times as much energy as surface water treatment and 14 times as much energy as groundwater protection21.” Thus, while desalination plants respond to the effects of climate change, they may contribute to the problem further if the energy they use is not carbon neutral20. Furthermore, power plants need treated freshwater (which does not damage their equipment) to generate electricity, so the large energy demands of desalination indirectly consume potentially drinkable water. The cost of plants is another barrier to global desalination – they are two to four times more expensive than the use of groundwater or surface water20. Only with innovation and technological improvement will the economic costs of building and operating desalination plants decline18.
CONCLUSION
Scientists have also attempted to improve the membranes used in membrane-based desalination processes. Researchers have developed nanoporous carbon membranes, some of which reduce the energy consumption of certain desalination processes by 80% or increase freshwater recovery23. However, research suggests that further reductions in the cost of reverse osmosis desalination are less likely to emerge from improving existing membrane technology, and more likely to come from increasing the technology's energy efficiency, optimizing brine treatment, and integrating cheap renewable energy sources24. Not enough resources are currently invested in researching desalination as a solution to water scarcity.

In conclusion, desalination is currently plausible on a small scale in coastal regions where freshwater is expensive to obtain. On the large scale required by communities around the globe, however, desalination is not yet an option. As climate change worsens current circumstances, increasing the number of areas experiencing drought and elongating drought periods, desalination must be considered and researched more extensively.

Figure 9: Osmosis and reverse osmosis diagram. When external pressure is applied to the side with the higher salt concentration, water from the low-concentration side is prevented from flowing to the other side. Source: http://www.industmarine.com/content/essentials/reverse-osmosis.php

Figure 10: Reverse electrodialysis diagram. In reverse electrodialysis, the product is sandwiched between concentrate, so more ions transfer across the ion exchange membranes on both sides, creating less contaminated freshwater. Source: https://www.slideshare.net/laxer_12/edr-101-introduction-to-electrodialysis-reversal

References
[1] Water Scarcity. (n.d.). Retrieved from https://www.worldwildlife.org/threats/water-scarcity
[2] Sawe, B. E. (2018, June 18). What Is the Difference Between Water Stress and a Water Crisis?
[3] Hofste, R. W., Reig, P., & Schleifer, L. (2019, August 6). 17 Countries, Home to One-Quarter of the World's Population, Face Extremely High Water Stress.
[4] Freshwater Sustainability Analyses: Interpretative Guidelines. (2011) (pp. 4–4). Retrieved from http://www.fao.org/nr/water/aquastat/water_use/Cocacola2011_freshwater_sustainability_analyses.pdf
[5] Schleifer, L. (2018, September 26). 7 Reasons We're Facing a Global Water Crisis. Retrieved from https://www.wri.org/blog/2017/08/7-reasons-were-facing-global-water-crisis
[6] Reichler, T. (2009, June 17). Changes in the Atmospheric Circulation as Indicator of Climate Change.
[7] The Drinking Water Ladder. (n.d.). Drinking Water. Retrieved from https://www.who.int/water_sanitation_health/monitoring/water.pdf
[8] Drinking-water. (n.d.). Retrieved from https://www.who.int/news-room/fact-sheets/detail/drinking-water
[9] Unsafe drinking-water, sanitation and waste management. (2016, August 4). Retrieved from https://www.who.int/sustainable-development/cities/health-risks/water-sanitation/en/
[10] Denchak, M. (2019, November 13). Water Pollution: Everything You Need to Know.
[11] Desalination. (n.d.). Retrieved from https://www.sciencedaily.com/terms/desalination.htm
[12] Hinkebein, T. (2004). Introduction. In Water and Sustainable Development: Opportunities for the Chemical Sciences: A Workshop Report to the Chemical Sciences Roundtable. Washington, DC: National Academies Press (US).
[13] Kress, N. (2019, February 1). Desalination Technologies. Retrieved from https://www.sciencedirect.com/science/article/pii/B9780128119532000025
[14] Multi-Stage Flash Distillation. (n.d.). Retrieved from http://www.separationprocesses.com/Distillation/DT_Chp07a.htm
[15] Flashing Flow. (n.d.). Retrieved from http://www.thermopedia.com/content/768/
[16] Multiple-Effect Distillation. (n.d.). Retrieved from http://www.separationprocesses.com/Distillation/DT_Chp07b.htm
[17] Electrodialysis Reversal Desalination. (n.d.). Retrieved from https://www.amtaorg.com/electrodialysis-reversal-desalination
[18] Reverse ElectroDialysis (RED). (n.d.). Retrieved from https://www.redstack.nl/en/technology/reverse-electrodialysis-red
[19] How Do Reverse Osmosis Systems Work? Water-Right. (2016, July 27). Retrieved from https://www.waterrightgroup.com/blog/how-do-reverse-osmosis-drinking-water-systems-work/
[20] Fried, K. (2009, February 4). Ocean Desalination No Solution to Water Shortages. Retrieved from https://www.foodandwaterwatch.org/news/ocean-desalination-no-solution-water-shortages
[21] The Impacts of Relying on Desalination for Water. (2009, January 20). Retrieved from https://www.scientificamerican.com/article/the-impacts-of-relying-on-desalination/
[22] Ganora, D., Dorati, C., Huld, T. A., Udias, A., & Pistocchi, A. (2019, November 7). An assessment of energy storage options for large-scale PV-RO desalination in the extended Mediterranean region.
[23] Wang, H. (2018, April 10). Low-energy desalination. Retrieved from https://www.nature.com/articles/s41565-018-0118-y
[24] Zhu, A., Rahardianto, A., Christofides, P. D., & Cohen, Y. (2010). Reverse osmosis desalination with high permeability membranes: Cost optimization and research needs. Desalination and Water Treatment, 15(13), 256–266. doi:10.5004/dwt.2010.1763
DISABILITY
Healthcare and Disability BY JESSICA CAMPANILE
INTRODUCTION TO DISABILITY AND HEALTHCARE
People with disabilities, despite making up a significant portion of the United States population, continue to face practical and sociocultural barriers to receiving adequate, accessible, and equitable healthcare. One in four Americans lives with a disability, which can be classified roughly among six categories: mobility, cognition, hearing, vision, independent living, and self-care.4 With age, poverty, marginalized racial and gender status, and even location within the southern part of the United States, the probability that an individual experiences disability increases.4 Many people with disabilities experience secondary health issues arising in conjunction with their disability or chronic illness, which lead them to utilize the healthcare system more than non-disabled individuals. For example, in 2006, when the rate of disability in the United States was 18%, this community was associated with almost 27% of the nation's $2.1 trillion in healthcare expenditures.2 Now that the rate of disability in the United States has increased to 25.7% for non-institutionalized adults,4 and healthcare spending has risen to $3.6 trillion, effectively caring for people with disabilities becomes not only a moral imperative, but a fiscal one as well.
The American healthcare system is currently ill-equipped to treat disability. One reason is that many disability self-advocates subscribe to the social model of disability rather than the medical model. The medical model, which is often perpetuated by legislation and the healthcare system, equates impairment with disability.13 This ideology views disability as “an individual deficit” that is meant to be cured or solved.13 The social model instead separates impairment and disability, asserting that while an individual may have an impairment, it is society and the way our systems and environments are built that enacts disability upon these people.13 This model is commonly utilized by activists advocating for accessibility reform benefiting the disability community. Despite advances in disability advocacy efforts, the medical model and the subsequent overmedicalization of disability persist. There is a certain irony in the entangled histories of medicine and disability. As a result of medical advancements in prenatal care, neonatology, trauma care, and treatment technologies, people with previously fatal injuries and illnesses are now surviving, with disabilities, through infancy, childhood, and into adulthood.20 These individuals, with impairments ranging from lupus to spinal cord injuries to cancer and heart disease, now face barriers accessing care from the very system that extended their lives in the first place.
COVER: Healthcare and disability Source: Wikimedia Commons
"Many people with disabilities experience secondary health issues arising in conjunction with their disability or chronic illness, which lead them to utilize the healthcare system more than non-disabled individuals."
Figure 1: Photo of examination room in a physician’s office. This room includes a scale that patients need to step onto, and an exam table that is not adjustable or accessible to patients who use mobility aids or have physical disabilities. Source: Flickr
"People with disabilities face practical barriers to equitable healthcare, such as lack of accessible transportation to their healthcare appointments and communication barriers with their healthcare providers."
The history of the disability community and the history of the American medical system are deeply intertwined, from the dark history of forced sterilization and institutionalization to the more recent ethical debates regarding CRISPR gene-editing technologies and preimplantation genetic diagnoses of embryos with congenital diseases.3 These new technologies, and the ethical debates surrounding their use to eliminate or engineer disability, may place medical professionals and the disability community in conflict. Social determinants of health are the conditions into which patients are born and live, including issues of gender and racial equity, socioeconomic status, and social exclusion.18 These conditions are colored by the distribution of power, money, and resources, and understanding them is key to providing equity- and justice-oriented healthcare.18 The tension between the aforementioned competing models of disability creates a fundamental disconnect as people with disabilities seek healthcare – while the medical system likely views a patient's disability as a comorbidity or a primary complaint, many people with disabilities view it as more akin to a social determinant of health.8 Viewing disability as a social determinant of health acknowledges the systemic biases and injustices against this community and can aid physicians in understanding the holistic effect of disability on a patient's life. Cultural disconnect is only one of a variety of barriers – ranging from inaccessible examination equipment to communication barriers – that the disability community faces when interacting with the healthcare system.
PRACTICAL BARRIERS TO CARE
Patients with disabilities face practical barriers to equitable healthcare, such as a lack of accessible transportation to their healthcare appointments and communication barriers with their healthcare providers. Despite the 30-year lifespan of the Americans with Disabilities Act (ADA), patients with disabilities still face disparities in receiving medical care. All hospitals and medical offices, private and public, are required under the ADA to provide individuals with disabilities “full and equal access to their health care services and facilities; and reasonable modifications to policies, practices, and procedures when necessary to make health care services fully available to individuals with disabilities.”17 Many low-income patients struggle to find reliable transportation to their appointments, an issue which proves to be even more severe in rural areas and for marginalized populations.7 In fact, in any given year, 3.6 million patients (with and without disabilities) delay or miss their healthcare appointments due to lack of available and affordable transportation.7 Given the intersectional nature of disability, its prevalence in low-income and marginalized communities, and the unique transportation barriers that may especially face people with mobility or vision disabilities, the disability community is hit hard by the transportation access issue. People with disabilities who require accessible transportation are probably even more likely to miss their appointments because of the lack of affordable and available paratransit options, especially outside of metropolitan areas.7
Figure 2: Patient, family member, and doctor having a conversation about a patient's treatment plan. For patients with sensory or other communication-related disabilities, this can be a frustrating experience. Source: Free Stock Photos
Multiple disabilities and illnesses can cause difficulties with doctor-patient communication, including cerebral palsy, aphasia, stuttering, and hearing disabilities.16 In previous studies of doctor-patient communication within this population, barriers to high-quality care included the patient's difficulty communicating effectively with their healthcare team, the time constraints of physician visits, and inappropriate assumptions about a patient's hearing or cognitive abilities.16 Despite legal mandates to provide auxiliary aids and services to patients with disabilities (including services for those with communication-related disabilities, like interpreters or a text telephone), healthcare providers receive little to no training in how to obtain such services or how to respectfully and appropriately offer them to a patient.16 As a result of these barriers, individuals with communication disabilities have longer clinical encounters, face difficulties scheduling appointments over the phone, and frequently switch healthcare providers due to communication challenges. In fact, these disparities in care extend beyond the practical nature of accessing care, affecting clinical outcomes for patients with disabilities as well – patients with communication disabilities are three times more likely to experience an adverse medical event during a hospital stay than those without this type of disability.16 Inaccessible doctor's office equipment and practices exacerbate disparities in healthcare for individuals with physical disabilities [Figure 1]. Some physicians' office buildings are still inaccessible to individuals with physical disabilities, and such offices may also refuse to see a patient who requires special assistance
or equipment that they cannot provide.5 The quintessential examination table, a staple of the physician visit, requires a patient to climb up onto an elevated surface – an issue for individuals who use mobility aids or have other physical disabilities. Some physicians attempt to circumvent this inaccessibility by asking medical staff to lift the patient from their wheelchair to the examination table, which can be humiliating, demeaning, and above all, dangerous – patients can be dropped, twisted, and seriously hurt when lifted improperly.14 For patients who would rather avoid this embarrassing ordeal, or for physicians worried about legal action from patients who might get hurt, the alternative is an incomplete exam done while the patient is seated. Some evidence has shown that inaccessible equipment may contribute to the lower rates of pap smears (which require the patient to be on the examination table) and mammograms (which traditionally require the patient to stand) among women with physical disabilities.14 Similar problems exist for imaging equipment like X-ray and MRI machines, as well as weight scales.14 Inaccessible medical equipment can contribute to poorer health outcomes, such as the higher incidence of late-stage breast cancer and higher mortality rates for women with physical disabilities.12
Such practical barriers to equitable healthcare showcase the broad range of needs of the disability community, how those needs intersect with those of other marginalized groups of patients, and the need for increased awareness of inclusive and accessible practices.
SOCIOCULTURAL BARRIERS TO CARE
In addition to practical barriers to equitable healthcare, patients with disabilities also face sociocultural barriers, such as the aforementioned lack of knowledge of the social model of disability and a lack of respect in doctor-patient communication.
Figure 3: Photo of a standardized test answer sheet. Many aspiring doctors with disabilities have difficulty obtaining accommodations for standardized tests like the MCAT. Source: Pixabay
"Sociocultural barriers are more difficult to solve than practical barriers; to solve them necessitates the retanining of healthcare professionals and the changing of medical school curricula to better reflect the needs of the patient population."
Physicians who view disability only through a medical lens, without paying mind to the social causes and effects of disability, may miss important aspects of whole-patient care. For example, disability is more prevalent among low-income patients, who may not have the means to afford a prescription or to adhere to a special diet to manage a chronic illness. Physicians can in turn grow frustrated with patients who do not adhere to their treatment plans, but without understanding the financial root cause, it may be difficult for them to help the patient find a solution that will work within their social situation.
People with intellectual disabilities often complain about their interactions with their physicians and report that they feel physicians do not understand them.19 Another significant complaint among people with intellectual disabilities (one also seen with the use of interpreters or translators) is that the physician speaks more to the personal care assistant, support worker, family member, or interpreter – essentially anyone except the patient [Figure 2].19 This issue raises concerns about patient confidentiality, especially if the physician discusses the patient's medical information without consent. Other problems reported in the interactions between doctors and patients with intellectual disabilities include a lack of explanation and demonstration before physical examination, insufficient time for the visit, and the sense that the physician may not “take [the patient] seriously.”19 Sociocultural barriers are more difficult to solve than practical barriers; solving them necessitates the retraining of healthcare professionals and the changing of medical school curricula to better reflect the needs of the patient population.

DOCTORS WITH DISABILITY
In a study of African American male patients, subjects were randomly assigned to black and non-black male physicians. Subjects were more likely to select preventative care measures, particularly invasive preventative measures, after meeting with a “racially concordant” physician.1 The findings from this study suggest that a more diverse physician workforce – in this case, a higher presence of black physicians – could reduce the black-white male cardiovascular mortality gap by 19%.1 Given this research, it is interesting to hypothesize about the effect that an increased presence of people with disabilities in the medical professions might have on patients with disabilities. It seems logical to believe that a patient may feel more respected, listened to, and accommodated by a physician who has personal experience with disability. In addition, physicians who have faced challenges with accessibility themselves might take it upon themselves to ensure that their patients have access to accessible examination equipment, and they may be more mindful of disability as a social identity, not just a medical condition. Despite disability affecting a quarter of individuals in America, this community is severely underrepresented in the medical profession, with only 2.7% of allopathic medical students reporting a disability.11 This disparity in medical school representation is not only a diversity and inclusion issue, but one that might affect patient outcomes. Barriers to the medical profession prove to be, similar to those blocking access to healthcare, both practical and sociocultural. Medical schools set technical standards that their applicants
must meet; these standards vary from school to school and set out requirements like sight, the ability to move quickly and deliver emergency care, palpation, and communication with patients.9 However, these requirements differ greatly by school, with some schools offering affirmations that people with disabilities can indeed become successful medical professionals,15 and others offering little to no guidance at all.10 Other barriers include the accommodations procedures of a student's undergraduate institution, the difficulty of applying for accommodations on the MCAT, and opportunities for implicit bias on the part of admissions committees, given the underrepresented nature of this minority group in medical education.
CONCLUSION
In conclusion, patients with disabilities face a wide variety of barriers in seeking equitable and accessible healthcare. These barriers can be practical, such as a lack of transportation or accessible examination equipment, or sociocultural, such as a lack of respect or of appropriate communicative knowledge. There is evidence that this community could benefit from an increase in the number of physicians with disabilities in the workforce. However, this population faces stringent barriers when attempting to enter graduate medical education. Given these problems, there is a clear need for further research on the experiences and barriers that doctors and patients with disabilities face.
References
[1] Alsan, M., Garrick, O., & Graziani, G. (2019). Does diversity matter for health? Experimental evidence from Oakland. American Economic Review, 109(12), 4071-4111.
[2] Anderson, W. L., Armour, B. S., Finkelstein, E. A., & Wiener, J. M. (2010). Estimates of state-level health-care expenditures associated with disability. Public Health Reports, 125(1), 44-51.
[3] Benston, S. (2016). CRISPR, a crossroads in genetic intervention: Pitting the right to health against the right to disability. Laws, 5(1), 5.
[4] CDC. (2018, August 16). 1 in 4 US Adults Live With a Disability. CDC Newsroom.
[5] Chen, P. W. (2013, May 23). Disability and Discrimination at the Doctor's Office.
[6] Cronk, I. (2015, August 17). When You Don't Have a Ride to the Doctor's Office.
[7] Dize, V. (2017, November 16). Transportation Undergirds Health Care.
[8] Emerson, E., Madden, R., Graham, H., Llewellyn, G., Hatton, C., & Robertson, J. (2011). The health of disabled people and the social determinants of health. Public Health, 3(125), 145-147.
[9] Harvard Medical School. (n.d.). Technical Standards. Admissions Process. https://meded.hms.harvard.edu/admissions-technical-standards
[10] The Johns Hopkins University School of Medicine. (n.d.). Prerequisites, Requirements and Policies. MD Program Application Process.
[11] Meeks, L. M., & Herzer, K. R. (2016). Prevalence of self-disclosed disability among medical students in US allopathic medical schools. JAMA, 316(21), 2271-2272.
[12] Roetzheim, R. G., & Chirikos, T. N. (2002). Breast cancer detection and outcomes in a disability beneficiary population. Journal of Health Care for the Poor and Underserved, 13(4), 461-476.
[13] Shakespeare, T. (2006). The social model of disability. The Disability Studies Reader, 2, 197-204.
[14] Shapiro, J. (2007, September 13). Medical Care Often Inaccessible to Disabled Patients.
[15] Stanford University School of Medicine. (2018). Technical Standards. MD Admissions. https://med.stanford.edu/md-admissions/how-to-apply/technical-standards.html
[16] Stransky, M. L., & Morris, M. A. (2019). Adults with Communication Disabilities Face Health Care Obstacles: Adults with communication disabilities struggle to access quality health care – significantly more than typical peers. How can we get them needed health information and services?
[17] US Department of Health and Human Services. (2012). Americans with Disabilities Act: Access to medical care for individuals with mobility disabilities.
[18] World Health Organization. (2017). About social determinants of health.
[19] Wullink, M., Veldhuijzen, W., van Schrojenstein Lantman-de Valk, H. M., Metsemakers, J. F., & Dinant, G. J. (2009). Doctor-patient communication with people with intellectual disability: a qualitative study. BMC Family Practice, 10(1), 82.
[20] Zola, I. K. (2017). The medicalization of aging and disability. The Elderly: Legal and Ethical Issues in Healthcare Policy, 17.
Dealing with Ischemic Heart Disease and Stroke: Stress, Consequences, and Management BY JOHN EJIOGU '23

Cover: A clot of necrotic tissue as a result of atherosclerosis. Source: Wikimedia Commons
Introduction

One of the many mysteries people struggle to understand is death. To evade our inevitable encounter with this mystery, we try to live longer to make the best of our lives and explore all that the world has to offer. Our exploration entails harnessing our intelligence to improve the world. It also drives us to ask questions about almost every field of life, including this largely dreaded and controversial topic of death: do we really have to die? What are the leading causes of death? And if we know them, how can we curb them to give ourselves the chance to live longer and explore the world more? According to the World Health Organization (2016), the leading causes of death globally in 2016 were ischemic heart disease and stroke, together accounting for approximately 27% of total deaths and consistent with the pattern of
being the leading causes of death over the past decade. Ischemic heart disease, also known as coronary artery disease, is caused by the narrowing of the coronary arteries of the heart, ultimately limiting blood flow to the heart. Stroke results from a compromise of the arteries that supply blood to the brain: either a clot forms in these arteries, leading to an insufficient supply of blood to the brain (ischemic stroke), or a rupture in the arteries causes severe blood loss (hemorrhagic stroke). In the case of ischemic stroke, however, clots usually form in the heart and travel through the circulatory system to the neck or skull. Risk factors for ischemic heart disease and stroke range from genetic predispositions or previous medical conditions to lifestyle. However, at the intersection of genetic predispositions, medical conditions, and lifestyle is a major risk factor for the development of ischemic heart disease and stroke: atherosclerosis.
Atherosclerosis

Atherosclerosis is a disease characterized by the hardening and narrowing of arteries due to the formation of plaques in the inner walls of the arteries. Plaques are composed of fat, cholesterol, calcium, and other substances found in the blood. Atherosclerosis is a disorder with multiple genetic and environmental contributions. One example of a genetic risk factor is familial hypercholesterolemia, a genetic disease that could independently lead to atherosclerosis. However, research shows that for the most part, the genetic risk factors for atherosclerosis do not lead to ischemic heart disease or stroke without environmental contributions. In other words, even if an individual is genetically predisposed to the development of atherosclerosis, he or she would most likely not contract the disease without environmental contributions such as hypertension, cigarette smoking, and diabetes.
Development of Atherosclerosis

The innermost layer of an artery is composed of a single layer of endothelial cells on the luminal surface. The endothelial cells form a non-thrombotic, non-adherent surface, acting as a semipermeable membrane, synthesizing and releasing chemical mediators, maintaining the basement membrane, and modifying lipoproteins as they cross into the artery wall. Chronic high pressure on arteries can lead to injury of the endothelial cells. This injury brings about an inflammatory response within the body. As a result, cells of the immune system (macrophages and lymphocytes) come to the site of the endothelial injury and lodge themselves in the endothelium. The macrophages then take up low-density lipoprotein (LDL), which circulates in the blood carrying cholesterol; this uptake leads to the formation of foam cells, macrophages filled with LDL and lipids. The accumulation of foam cells can then lead to a plaque. When a plaque forms, one of two things can happen: either the plaque grows so big that the arterial lumen is narrowed, leading to insufficient blood flow through the artery to target organs, or the plaque ruptures, in which case the platelets clotting the rupture block the artery and prevent blood flow.
Figure 1: The narrowing of an artery (a form of atherosclerosis) that leads to abnormal blood flow. Source: Wikimedia Commons
Stress

Stress is defined differently across a variety of fields. Under normal circumstances, the human body undergoes stress in response to certain changes in the environment. Stress, thus, can be viewed as the body's defensive mechanism in reaction to any change that requires an adjustment or response. However, there is no standard metric by which stress can be measured for every individual: a stressor that affects one individual may have a completely different influence on another person. Research suggests that multiple factors determine these differences in stress perception. Considering the differences between individuals, stress can be more accurately defined as a relationship between a person and an environment that exceeds his or her coping capabilities and endangers his or her well-being. We often find ourselves in this situation as a result of our insatiability and quest to survive. In the long run, we tend to forget about our health and bury ourselves in our stressors (and pain relievers!). Over time, therefore, we find ourselves at the mercy of chronic stress and all its consequences.
The relationship between chronic stress and atherosclerosis

There are different kinds of stress: oxidative stress, mental stress, and social stress. Oxidative stress is a disturbance in the balance between the production of reactive oxygen species and antioxidant defenses in the body. These reactive oxygen species can be very harmful to the vascular system and can lead to the endothelial injuries that are the foundations of atherosclerosis.
Figure 2: The body’s response to stress from the brain to the blood stream. Source: Wikimedia Commons
Oxidative stress is brought about by the underlying risk factors for atherosclerosis, including diabetes and smoking. Therefore, in order to reduce the risk of any compromise to the vascular system, especially in individuals with these underlying conditions, it is crucial to keep oxidative stress in check. Mental stress refers to the pressure felt by an individual as a result of a present situation that exceeds his or her control. Social stress refers to stress resulting from a lack of social support. When using the word ‘stress,’ it is usually in reference to mental stress. Indeed, chronic mental stress is dangerous to the human body, as it is pivotal to the incidence of atherosclerosis and thus ischemic heart disease and stroke.
When a human being is stressed, the amygdala sends a signal to the hypothalamus, which, through the autonomic nervous system, brings about the release of epinephrine (adrenaline) from the adrenal gland into the bloodstream. Epinephrine brings about a series of changes in the body, including an increase in blood pressure. Chronic stress can therefore lead to chronic high blood pressure, which then leads to endothelial injury and thus atherosclerosis. In addition, research has shown that mental stress fuels the emergence and persistence of oxidative stress, increasing the risk of compromise to the vascular system and atherosclerosis.
Psychological management of stress

Human beings have become used to hearing advice like “try to get enough sleep,” “exercise is good for you,” and “eat good food,” and we will probably continue to hear these techniques for stress management. For the most part, however, people fail to understand their purposes. People will often invent all sorts of recipes for stress management, and while some hold possibilities for success, if poorly executed, they can do more harm than good. On a different note, chronically stressed individuals without plans for stress management may resort to bad habits such as addiction to smoking and drinking, both of which increase the risk for atherosclerosis-based cardiovascular diseases.
In order to effectively manage stress (and thereby help prevent both atherosclerosis and its associated diseases), it is important to acknowledge that medications are not necessarily the best answer to every stressor. Instead, it is crucial to acknowledge stress and take steps to identify one's stressors in order to deal with them. Research has shown that the lifestyle changes that are the bedrock of stress management hinge on psychological adjustments. How so? Consider the example below: an individual comes back from work, exhausted and annoyed, because his boss threatened to fire him if he does not pick up his pace at work. The individual feels that he is working hard enough, so he has never considered that there may be room for improvement. Instead of thinking about the possibility of improving, he is engrossed in the fact that he may lose his job. In such a situation, not only is he stressing himself by worrying, he may be paving the way for more stress, because he may also worry about not having figured out a way to prevent himself from being fired. In other words, he is stressing himself on multiple levels. For such situations, adopting an effective approach to dealing with the stress is crucial to avoid chronic stress. After encountering a stressor, an individual must be able to take appropriate steps to deal with it in a healthy and effective manner. Without these steps, the risk of developing chronic stress can be immense, and the consequences of such stress
can fundamentally alter future lifestyles for the worse. Some of the things that we can do are:
1. Pause: It is important to take a deep breath, stop overthinking, and try to acknowledge the stressor and identify it.
2. Review: It is not uncommon to dwell on the ugly faces of situations so much so that we often overlook the possible positive outcomes. By looking at the multiple faces of a situation and considering the positive possibilities, the negative impact of the stressor can be minimized. Therefore, it is important to try to always maintain a positive outlook.
3. Contentment: Sometimes, we get stressed worrying about material things without which we can still survive comfortably. However, because somebody else has them, we want to acquire them by any means possible. It's important to realize that no one is self-sufficient or has everything they want. So, learning to be content with whatever one has is key to a happy, less stressful life.

4. Hope: Whatever circumstance you find yourself in, it helps to realize that you do not have the worst problem in the world. As long as you are alive, there is hope for a better tomorrow. But to see that tomorrow, you need to be alive, and stressing may prevent you from seeing it. In other words, try to constructively hope and work for a better tomorrow.
5. Ask for help: If you are constantly experiencing stress and you have tried the aforementioned steps, ask for help. Speak to a counselor, a healthcare practitioner, or a clergyman – anyone you feel comfortable around and can confide in. Sometimes we just need to share our burden with others who can either help us or direct us to the best resources to get help.

Now, while stress sometimes seems inevitable, making use of these approaches can reduce the risk of chronic stress and, by extension, ischemic heart disease and stroke. In addition, such psychological adaptations ensure the ability to effectively embrace the lifestyle changes that are encouraged for stress management. And in doing so, the number of mortalities per year can be reduced and healthy lifestyles can be encouraged.

References
1. Global Health Estimates 2016: Deaths by Cause, Age, Sex, by Country and by Region. World Health Organization.
2. Weissberg, P. L. (1999). Atherosclerosis involves more than just lipids: Plaque dynamics. Eur Heart J, 1(suppl T), T13-T18.
3. Ross, R. (1993). The pathogenesis of atherosclerosis: A perspective for the 1990s. Nature, 362, 801-809.
4. Stress. Cleveland Clinic. https://my.clevelandclinic.org/health/articles/11874-stress
5. Lazarus, R. S., & Folkman, S. (1984). Stress, Appraisal and Coping. New York: Springer.
6. Inoue, N. (2014). Stress and Atherosclerotic Cardiovascular Disease. US National Library of Medicine, National Institutes of Health.
7. Understanding the Stress Response. (2018). Harvard Health Publishing.
8. Niemiec, R. M. (2017). 10 New Strategies for Stress Management.
9. Lusis, A. J., et al. (2004). Genetics of Atherosclerosis.
10. Erica H. (2008). Everything You Should Know about Ischemic Stroke. https://www.healthline.com/health/stroke/cerebral-ischemia#recovery
ENDURANCE SPORTS
The Intersection Between Brain, Pain, & Body: How Greater Inhibitory Control Leads to Better Performance in Endurance Sports BY JULIA ROBITAILLE
INTRODUCTION
Endurance athletes often experience the significant effects that mental state can have on performance, especially during long periods of exertion. The intersection between brain and body in endurance is complex but ubiquitous, and the ability to endure a strenuous physical task relies not only on the body but on the brain. It is said that the brain will quit a thousand times before the body will. As it turns out, there is some truth to this saying. The brain can limit the abilities of the body as a form of protection. If the body is being pushed to extremes, the brain will step in and signal the body to stop exerting itself. In endurance sports, this often manifests itself in reducing the body’s physical toil. This can mean slowing, stopping, or lessening one’s effort. While this function of the brain is meant to protect the body, it may result in poorer endurance performance. If it were possible to completely overcome this limiting factor, one might reach true physical potential, although this would likely be destructive to various aspects of the body. It is possible, however, that some exceptional athletes have a greater resistance to this limiting mental factor, which enables them to push themselves further over the brink of exhaustion than ordinary individuals. To override this mental limitation and reach such potential, people exercise
inhibitory control, or the ability to deliberately “inhibit or regulate prepotent attentional or behavioral responses,” as described in the National Survey of Child and Adolescent Well-Being.3 These questions relating inhibitory control and endurance led exercise physiologist Samuele Marcora and his team of researchers to ask whether endurance athletes, who push the boundaries of physical limitations, possess a certain exceptionality in inhibitory control, and if so, whether this skill is innate or developed. Inhibitory control, according to developmental cognitive neuroscientist Adele Diamond, is essential for “overriding habitual responses” and “exerting self-control.”6 Examples of functions dependent on inhibitory control include a child’s ability to remain focused on a teacher’s lesson in a loud and chaotic classroom or the resistance to obey every command in the game of Simon Says.3 Inhibitory control sounds suspiciously similar to Marcora’s definition of endurance, which is “the struggle to continue against a mounting desire to stop.”1 The brain function responsible for resisting impulsive overeating is the same one involved in an endurance athlete’s ability to continue in spite of increasing discomfort.
Cover Image Source: Wikimedia Commons
"The brain can limit the abilities of the body as a form of protection. If the body is being pushed to extremes, the brain will step in and signal the body to stop exerting itself."
Figure 1: This is an example of a Stroop task, an effective test of inhibitory control. A test like this one was administered to a sample of professional and recreational road cyclists. Source: Wikimedia Commons
GREATER INHIBITORY CONTROL IS RELATED TO BETTER ENDURANCE PERFORMANCE
"Inhibitory control and other executive functions can be improved through activities such as aerobics, martial arts, and yoga. Most activities that require discipline and selfcontrol can help develop inhibitory control."
Research suggests that varying levels of inhibitory control do, in fact, have an effect on one’s ability to endure. Researchers in Australia found through mental and physical tests that professional cyclists have superior inhibitory control compared to recreational cyclists. In the tests conducted, professional and recreational road cyclists were administered either a 30-minute Stroop task or an undemanding 10-minute cognitive task. The Stroop task required participants to indicate the color of sequential words written in incongruous ink colors (such as the word “green” written in yellow ink) [Figure 1]. After completing their respective cognitive tasks, all participants took part in a 20-minute time trial on a stationary bicycle. The professional cyclists reported little difference in the perceived exertion of the time trial after either cognitive task, while the recreational cyclists reported higher rates of perceived exertion after the challenging Stroop task than after the easy task. This suggests that the professionals have greater resistance to mental fatigue, perhaps aided by superior inhibitory control. In addition, the professional cyclists scored more correct responses on the Stroop task than the recreational cyclists, further indicating their greater inhibitory control.2 [Figure 2] Another study, conducted in Taiwan, measured the results of elite badminton players and non-athletes on a stop-signal reaction time (SSRT) test. This test measures the “inhibition of a motor response that has already been executed” and, like the Stroop task, requires inhibitory control.7 Participants were instructed to click the slash key when shown a circle and the “Z” key when shown a square. They were also informed to withhold any response if shown a “stop” signal. Upon analysis of average reaction times and accuracies, the researchers found that athletes with more experience in high-level competition had greater chances of inhibiting responses, suggesting superior inhibitory control in elite athletes.5
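For readers curious how an SSRT is actually computed, one widely used estimator is the “integration” method, which treats going and stopping as a race: the finishing time of the stop process is read off the go reaction time (RT) distribution at the quantile equal to the probability of failing to stop. The Python sketch below is illustrative only; it uses hypothetical simulated numbers and assumes the integration method, not the specific analysis pipeline of the badminton study.

import numpy as np

def estimate_ssrt(go_rts, stop_ssds, stop_responded):
    # go_rts: reaction times (ms) on go trials
    # stop_ssds: stop-signal delays (ms) on stop trials
    # stop_responded: True where the participant failed to inhibit
    p_respond = np.mean(stop_responded)        # P(respond | stop signal)
    go_sorted = np.sort(go_rts)
    # The go RT at the p_respond quantile approximates when the stop
    # process finishes, measured from go-stimulus onset ("nth RT").
    n = max(int(np.ceil(p_respond * len(go_sorted))), 1)
    nth_rt = go_sorted[n - 1]
    return nth_rt - np.mean(stop_ssds)         # SSRT = nth RT - mean SSD

# Hypothetical example: go RTs near 480 ms, stopping fails half the time
rng = np.random.default_rng(0)
go_rts = rng.normal(480, 60, size=200)
stop_ssds = np.full(50, 250.0)                 # fixed 250 ms stop-signal delay
stop_responded = rng.random(50) < 0.5
print(f"Estimated SSRT: {estimate_ssrt(go_rts, stop_ssds, stop_responded):.0f} ms")

On this metric, a shorter SSRT indicates faster, more effective inhibition, so athletes with superior inhibitory control would be expected to show shorter estimated SSRTs than non-athletes.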
Such results lead to the question of whether inhibitory control is fixed or developmental. In other words, were the experienced athletes born with this skill, or was it developed over years of training? According to research conducted by the National Survey of Child and Adolescent Well-Being, inhibitory control seems to be an inherent trait. Children with inhibitory control superior to their peers’ at younger ages were also found to have better inhibitory control as adults, with the interesting finding that young girls had better inhibitory control than boys.3 Clearly, certain individuals are predisposed to have inhibitory control superior to others’. Despite this predisposition, inhibitory control is one of three main executive functions, which, according to a 2010 study, can be developed over time.9 Executive functions are cognitive mechanisms involving the control of one’s behavior; the three main functions are inhibitory control, memory, and “cognitive flexibility.”10 Inhibitory control develops between early childhood and adolescence and improves with age, supporting the idea that it is not a fixed trait. Although the professional road cyclists may have been born with levels of inhibitory control superior to others, it is equally possible that over years of training and discipline they cultivated the neurological practice of inhibitory control. Inhibitory control and other executive functions can be improved through activities such as aerobics, martial arts, and yoga. Most activities that require discipline and self-control can help develop inhibitory control.
HOW INHIBITORY CONTROL CAN BE IMPROVED
How is inhibitory control developed? Any activity that involves self-control or resistance to acting on impulses may increase inhibitory control. Aerobic exercise, for example, has recently been postulated to help smokers quit. The idea is that aerobic exercise strengthens the smoker’s inhibitory control, which results in better resistance to cravings.4 Aerobic exercise increases inhibitory control in two ways. First, continuing despite an increasing urge to stop during aerobic exercise uses the inhibitory control function, thus strengthening it; with use, this function can improve and develop over time. Second, aerobic exercise increases blood flow to the prefrontal cortex (PFC), where inhibitory control and other executive functions (such as memory and emotional control) are regulated. With increased blood flow through exercise, the density and size of capillaries may increase, thus improving inhibitory control.5
A study conducted by Lakes and Hoyt (2004) assigned each child in a sample of children aged 5 to 11 to either a traditional tae kwon do program or a standard physical education class for three months. The children were tested for executive functions before and after the trial. The children in the tae kwon do program showed greater improvements in inhibitory control than the students assigned to standard physical education.6,8 This further supports the theory that inhibitory control can be developed over time, and it also raises the possibility that recreational athletes and ordinary people can cultivate inhibitory control to enhance their performance in other aspects of their lives.3
CONCLUSION
Inhibitory control plays a role in the life of every individual, especially athletes, ranging in level from elite to recreational. Although certain individuals are predisposed to differing natural levels of inhibitory control, research suggests that this function can be improved by activities that involve self-control and inhibition, such as martial arts, aerobic exercise, and yoga. Athletes can thus improve inhibitory control, and thereby performance, through extracurricular activities not specific to their sport. Harnessing the power of the mind by increasing inhibitory control can help athletes push their bodies further, which can be a significant advantage in endurance competition, especially when the difference between winning and losing comes down to a matter of seconds.
Figure 2: The number of correct responses of professional and recreational cyclists during a 30-minute Stroop task. Source: Martin et al., PLoS ONE
References
[1] Hutchinson, A. (2019). Endure. HarperCollins UK.
[2] Martin, K., Staiano, W., Menaspà, P., Hennessey, T., Marcora, S., Keegan, R., … Rattray, B. (2016). Superior Inhibitory Control and Resistance to Mental Fatigue in Professional Road Cyclists. PLoS ONE, 11(7), e0159907. doi:10.1371/journal.pone.0159907
[3] National Survey of Child and Adolescent Well-Being, No. 1: Inhibitory Control Abilities Among Young Children in the Child Welfare System. (2009). PsycEXTRA Dataset. doi:10.1037/e565892012-001
[4] Morava, A. (2018). 3MT 2018. Western University.
[5] Thomas, A. G., Dennis, A., Bandettini, P. A., & Johansen-Berg, H. (2012). The effects of aerobic activity on brain structure. Frontiers in Psychology, 3, 86. doi:10.3389/fpsyg.2012.00086
[6] Diamond, A. (2012). Activities and Programs That Improve Children's Executive Functions. Current Directions in Psychological Science, 21(5), 335–341. doi:10.1177/0963721412453722
[7] Logan, G. D. (1994). On the ability to inhibit thought and action: A user's guide to the stop signal paradigm. Retrieved from http://psycnet.apa.org/psycinfo/1994-97487-005
[8] Lakes, K. D., & Hoyt, W. T. (2004). Promoting self-regulation through school-based martial arts training. Applied Developmental Psychology, 25, 283–302.
[9] Best, J. R., & Miller, P. H. (2010). A developmental perspective on executive function. Child Development, 81(6), 1641–1660. doi:10.1111/j.1467-8624.2010.01499.x
[10] Diamond, A. (2013). Executive functions. Annual Review of Psychology, 64, 135–168. doi:10.1146/annurev-psych-113011-143750
Understanding What Others Think of Us BY KLARA BARBAROSSA Cover Image Source: Wikimedia Commons
"In addition to influencing mental health outcomes, self-concept has been shown to modulate psychophysiological responses."
OVERVIEW
Understanding what other people think about us is a crucial part of daily social interactions. On the surface, it might sound narcissistic to be concerned with how we are viewed by others. Nonetheless, it is critical to understand how we are perceived by other people in order to navigate social situations17. Additionally, self-concept distortions have profound mental health implications5,9,21. “Self-discrepancy theory” stipulates that a mismatch between beliefs and realities about the self can result in emotional discomfort; an individual’s level of psychological distress is thought to grow proportionately with increasing discrepancy between beliefs and reality about the self. In a study of undergraduates with mild depression or social anxiety, symptoms of social anxiety increased as the magnitude of their self-discrepancy increased [Figure 1]. Additionally, a follow-up study examining normal undergraduates, clinically diagnosed depressed patients, and clinically diagnosed social phobic patients revealed that the normal undergraduates had the lowest levels of discrepancy5.
In addition to influencing mental health outcomes, self-concept has been shown to modulate psychophysiological responses. For instance, in a study in which undergraduates were provided with math-performance feedback that was either consistent or inconsistent with their self-concept, participants showed varying rates of heart rate recovery to pre-stress levels. When undergraduates were given positive performance feedback, their cardiovascular response returned to baseline rapidly, whereas the response of participants who received negative feedback varied based on their self-concept. Participants who received negative feedback and who already had a negative self-concept (such that the feedback was consistent with the self-concept) returned to baseline cardiovascular response at a rate comparable to that of the participants who had received positive feedback. A key difference emerged for participants who had a positive self-concept and received negative feedback: unlike participants whose negative feedback was consistent with their negative self-concept, these participants showed delayed heart rate recovery. These findings suggest that a self-discrepancy can impair psychophysiological recovery from a stressor9.
Despite the importance of understanding how others perceive us, little is known about how we encode and remember these opinions.
Much of what is currently known about memory and the self pertains to the self-reference effect (SRE), in which information is encoded differently based on whether or not the information is relevant to the self. More specifically, information that is personally relevant has a privileged memory status, and multiple studies have shown that this type of information is more easily recalled10,16,23. For instance, words that we consider descriptive of ourselves are more likely to be remembered than words that are unrelated to the self16. A meta-analysis determined that the SRE is “time sensitive,” as stimuli that were presented for shorter durations tended to increase self-reference compared to other-reference. Such a relationship with time could suggest that relating information to the self is a more automatic process than relating information to others1,16. Functional magnetic resonance imaging (fMRI) studies have tied the SRE to characteristic brain regions and activity patterns. The medial prefrontal cortex (mPFC) plays a key role in the SRE, as participants with damage to this brain area do not display the same pattern of self-reference memory. That is, unlike control participants, who remember more traits about themselves, mPFC-damaged participants remember approximately the same number of traits for themselves and for others10. Other fMRI studies have aimed to uncover how activation of the mPFC differs when humans make judgments about the self, close others (family and friends), or semantic (non-social) information4,16,19,23. The mPFC shows greater activation when participants make judgements about themselves than when they make judgements about close others’ personality traits4. In a study focused on uncovering the neural correlates underlying retrieval of information, researchers found differential levels of activation based on the type of information that was retrieved. Participants
showed greater activation in the ventromedial prefrontal cortex (vmPFC), posterior cingulate cortex (PCC) and bilateral angular gyrus (AG) when they retrieved information about the self than when they retrieved information about other people23 [Figure 2]. Given these fMRI findings related to the SRE, it is possible that the mPFC also plays a role in how we learn what other people think of us.
SCHEMA-CONSISTENCY AND MEMORY
One factor that may influence whether and how we learn what other people think of us is the extent to which information is consistent or inconsistent with what we already know about ourselves. That is, is it easier for us to remember what others think about us when it confirms what we already consider to be true? Animal research has provided evidence that retrieval of information can be facilitated when it aligns with pre-existing memories. For instance, rats can learn to associate flavors with a place in their cage better if the flavors are placed in locations consistent with the spatial representations that they had learned previously18. These spatial layouts correspond to mental representations, or “schemas,” in the brain, which are thought to support rapid and successful recall of information. Early psychology research by Craik and Tulving delves into how the “principle of congruity” influences whether or not a memory is successfully retrieved. As its name would suggest, this principle proposes that new information consistent with prior memories can be readily incorporated into schemas. This integration of the congruent information can then strengthen the “memory trace” and thereby facilitate later retrieval1. Behavioral data in a variety of studies have supported this theory, showing that humans often remember information consistent with pre-existing schemas8,14,19. Recent fMRI findings have shown that the enhanced encoding of schema-congruent information is supported by activation of certain brain regions such as the anterior-ventral left inferior frontal gyrus (avLIFG), the mid-dorsal left inferior frontal gyrus (mdLIFG), and the left inferior temporal gyrus (ITG)15. An fMRI study in which undergraduates encoded new information that was either related or unrelated to their previous coursework found that the successful encoding of course-related information was tied to increased mPFC activity20. Additional research has supported these findings, with evidence that increasing congruency of information is tied to increasing mPFC activity19.
"Much of what is currently known about memory and the self pertains to the self-reference effect (SRE), in which information is encoded differently based on whether it is relevant to the self."
Figure 1: Model showing how different types of self-discrepancy correlate with social anxiety and depression. Source: Wikimedia Commons.
Figure 2: A midsagittal section of the brain displaying cortical midline structures including the mPFC, PCC, and ACC. Source: Wikimedia Commons.
SCHEMA-INCONSISTENCY AND MEMORY
"Social psychology research has found that social information inconsistent with our schemas tends to be better remembered than consistent information."
At the same time, there is a body of research suggesting that the opposite could be true: information that is inconsistent with what we already know might "stick out" and be more likely to be remembered than consistent information. Prediction error refers to a difference between what was expected and what was actually experienced. This "element of surprise" occurs when information is inconsistent with what we already understood to be true12. Social psychology research has found that social information inconsistent with our schemas tends to be better remembered than consistent information. For instance, undergraduates who were presented with information that was incongruent with their pre-existing stereotypes of other social groups exhibited better recall of the information and experienced higher feelings of agitation2. Additionally, people have shown better memory discrimination for information considered "atypical" in relation to schemata of certain personality types. For example, in the "man of the world" schema, being "suave" was considered typical while being "scatterbrained" was considered atypical22. In the same vein, participants exhibited superior recall for behaviors that were incongruent with a particular person's personality traits. These results were explained by a "depth of processing" model. Salient in the context of previously held beliefs, incongruent information was thought to be processed at a deeper level that involved an update of an individual's impressions. This adjusting of expectations based on novel information can presumably facilitate the creation of an accessible "memory trace" that promotes later recall3. At the same time, there has been research suggesting that this "novelty effect" could be due to differing discrimination demands rather than the novelty of the stimulus itself11. Researchers have sought to uncover the brain areas that could underlie this superior recall of incongruent information. For instance, certain amnesic patients with substantial
medial temporal lobe damage failed to show a novelty advantage, suggesting a critical role for the hippocampal system [Figure 3]. Under their SLIMM (schema-linked interactions between medial prefrontal and medial temporal regions) framework, van Kesteren et al. argue that the mPFC acts to detect new information's congruence: there should be greater mPFC activity when information is highly congruent and less activity when information is highly incongruent. On top of its congruency-detection role, the mPFC is thought to inhibit the medial temporal lobe (MTL) through a neurotransmitter mechanism that still requires further investigation. In cases of highly incongruent information, the MTL receives less inhibition from the mPFC. As a result, new neocortical associations between previously unrelated pieces of information can form20.
WHAT'S NEXT?
Given these two possibilities, a question still remains: are we better at remembering consistent or inconsistent feedback about ourselves? And what are the brain mechanisms that underlie this possible difference in memory? The neural correlates of prediction error have been studied before, but mostly for information about others rather than the self. For instance, activity in parts of the anterior cingulate cortex (ACC) played differential roles in a task in which participants formed impressions about others and were subsequently provided with inconsistent feedback about them. While the ventral ACC (vACC) was sensitive to the social feedback itself, the dorsal ACC (dACC) appeared sensitive to expectancy violation. That is, the dACC showed a signal change depending on whether the feedback was incongruent or congruent with the impressions that were initially formed13. This fMRI study was unique because it probed the neural mechanisms involved with social feedback about other people rather than the self. In addition to focusing on feedback about other people, a large part of the fMRI research in humans tends to center on "positive" or "negative" social evaluative feedback about the self. For instance, researchers investigated how people integrate favorable or unfavorable feedback about their own personality traits. People showed a positive bias, being more likely to remember and integrate positive feedback over the negative feedback they received. The rewarding nature of positive social evaluative feedback was thought to be processed by the ventral striatum as well as areas along the border between the ACC and mPFC. On the other hand, the "comparison-related component" – the judgment of difference between ratings made about the self and ratings received as feedback from others – was thought to be tied to activity in the "mentalizing network," a collection of brain areas including the mPFC, right superior temporal sulcus (STS), bilateral inferior frontal gyrus (IFG), right temporoparietal junction (TPJ), and left temporal poles (TP)7.

In a departure from the current social fMRI literature, a study offering feedback that is either consistent or inconsistent with self-concept could shed light on how we come to understand how others evaluate us. Activity in the mPFC and the ACC may work together to support the encoding of inconsistent information about the self. Using a paradigm with relatively ambiguous feedback (rather than clear-cut positive or negative feedback) could act as a realistic model of how people are evaluated in everyday social settings. Examining the underlying mechanisms that allow us to encode and remember other people's perceptions of us would elucidate some of the nuances of human social function. Knowing how consistency between others' perceptions and self-perceptions impacts neural responses to, and our memory of, feedback could inform not only everyday social interactions but also the treatment of mental and physical pathologies. If brain regions associated with encoding and remembering other people's perceptions of the self are identified, scientists could arguably be equipped to update therapies and treatments for a variety of psychopathologies in which self-perception is paramount.

Figure 3: The hippocampus is located in the medial temporal lobe of the brain. Source: Wikimedia Commons.

"Examining the underlying mechanisms that allow us to encode and remember other people's perceptions of us would elucidate some of the nuances of human social function."

References

[1] Craik, F.I.M. and Tulving, E. (1975). "Depth of processing and the retention of words in episodic memory." J Exp Psychol Gen, 104, 268-294.
[2] Forster, J., et al. (2000). "When Stereotype Disconfirmation is a Personal Threat: How Prejudice and Prevention Focus Moderate Incongruency Effects." Social Cognition, 18(2), 178-197.
[3] Hastie, R. & Kumar, P.A. (1979). "Person memory: personality traits as organizing principles in memory for behaviors." Journal of Personality and Social Psychology, 37, 25-38.
[4] Heatherton, T.F., et al. (2006). "Medial prefrontal activity differentiates self from close others." SCAN, 1(1), 18-25.
[5] Higgins, E.T. (1989). "Self-Discrepancy Theory: What Patterns of Self-Beliefs Cause People to Suffer?" Advances in Experimental Social Psychology, 22, 93-136.
[6] Kishiyama, M.M., et al. (2004). "The von Restorff Effect in Amnesia: The Contribution of the Hippocampal System to Novelty-Related Memory Enhancements." Journal of Cognitive Neuroscience, 16(1), 15-23.
[7] Korn, C.W., et al. (2012). "Positively Biased Processing of Self-Relevant Social Feedback." The Journal of Neuroscience, 32(47), 16832-16844.
[8] Packard, P.A., et al. (2017). "Semantic Congruence Accelerates the Onset of the Neural Signals of Successful Memory Encoding." The Journal of Neuroscience, 37(2), 291-301.
[9] Papousek, I., et al. (2011). "Delayed psychophysiological recovery after self-concept-inconsistent negative performance feedback." International Journal of Psychophysiology, 82(3), 275-282.
[10] Philippi, C.L., et al. (2012). "Medial PFC Damage Abolishes the Self-reference Effect." J Cogn Neurosci, 24(2), 475-481.
[11] Poppenk, J., et al. (2011). "Revisiting the novelty effect: when familiarity, not novelty, enhances memory." J Exp Psychol Learn Mem Cogn, 36, 1321-1330.
[12] Sinclair, A.H. and Barense, M.D. (2018). "Surprise and destabilize: prediction error influences episodic memory reconsolidation." Learning and Memory, 25, 369-381.
[13] Somerville, L.H., et al. (2006). "Anterior cingulate cortex responds differentially to expectancy violation and social rejection." Nature Neuroscience, 9(8), 1007-1008.
[14] Stangor, C., et al. (1989). "Strength of Expectancies and Memory for Social Information: What We Remember Depends on How Much We Know." Journal of Experimental Social Psychology, 25, 18-35.
[15] Staresina, B.P., et al. (2009). "Event Congruency Enhances Episodic Memory Encoding through Semantic Elaboration and Relational Binding." Cereb Cortex, 19(5), 1198-1207.
[16] Symons, C.S. and Johnson, B.T. (1997). "The Self-Reference Effect in Memory: A Meta-Analysis." Psychological Bulletin, 121(3), 371-394.
[17] Tamir, D.I. and Thornton, M.A. (2018). "Modeling the predictive social mind." Trends Cogn Sci, 22(3), 201-212.
[18] Tse, D., et al. (2007). "Schemas and Memory Consolidation." Science, 316(5821), 76-82.
[19] van Kesteren, M.T., et al. (2014). "Building on prior knowledge: schema-dependent encoding processes relate to academic performance." J Cogn Neurosci, 26, 2250-2261.
[20] van Kesteren, M.T., et al. (2012). "How schema and novelty augment memory formation." Trends Neurosci, 35, 211-219.
[21] Veale, D., et al. (2003). "Self-discrepancy in body dysmorphic disorder." The British Journal of Clinical Psychology, 42, 157-169.
[22] Woll, S.B. & Graesser, A.C. (1982). "Memory discrimination for information typical or atypical of person schemata." Social Cognition, 1(4), 287-310.
[23] Yaoi, K., et al. (2015). "Neural correlates of the self-reference effect: evidence from evaluation and recognition processes." Front Hum Neurosci, 9(383).
Lyme Disease Through a Geographical Lens

BY KRISTAL WONG
INTRODUCTION
Lyme disease is the most common tick-borne disease in the United States. The disease is caused by the spirochete bacterium Borrelia burgdorferi and is usually transmitted to humans through two tick species, Ixodes scapularis and Ixodes pacificus. Ixodes scapularis, colloquially known as the blacklegged tick, is responsible for most of the transmission in the United States. Over its two-year life cycle, I. scapularis undergoes three stages that depend heavily on the seasonal changes of the eastern and central United States [Figure 1]. On the West Coast, Ixodes pacificus, the western blacklegged tick, is the source of disease transmission. Due to the drier climate and differing seasonal patterns, this species follows a three-year life cycle. Recently, Lyme disease has received increased attention amongst Americans due to its growing prevalence since its discovery in the 1970s. This increased occurrence may be due in part to environmental changes propelled by climate change. Factors such as average temperature, humidity, canopy cover, and host abundance may play key roles in the increased incidence of Lyme disease1. I. scapularis habitat expansion may also have led to this increased
incidence. Geographical Information Systems (GIS) modeling has shown (and continues to predict) the expansion of I. scapularis into previously uninhabited regions of Canada and the central United States2. The expansion of the hospitable environment of the Lyme disease vector is of utmost concern, as an increase in the prevalence of the blacklegged tick increases the risk of ticks infected with B. burgdorferi. We see this effect in a trend of increasing disease prevalence [Figure 2]3. However, tackling Lyme disease goes beyond addressing tick populations. A third party, animal reservoirs of B. burgdorferi, is usually responsible for infecting individual ticks, since the bacterium is not easily transmitted from I. scapularis mother to offspring. Therefore, studying the distribution and habits of Lyme disease hosts such as the white-tailed deer and the white-footed mouse is essential for understanding and combatting Lyme disease in the United States.
Cover Source: Wikimedia Commons
"Factors such as average temperature, humidity, canopy cover, and host abundance play key roles in the increased incidence of Lyme disease."
Additionally, increased media coverage of the disease, spearheaded by personal stories and celebrity figures, has undoubtedly contributed to the public's recent attention to the disease. However, media attention can also come with misinformation. For example, post-treatment Lyme disease syndrome is a prolonged condition of Lyme
Figure 1: Life cycle of Ixodes scapularis, illustrating inactivity during colder months and feeding during warmer months. Source: Flickr
"Thus, the lack of efficient testing options and scientific evidence for chronic Lyme disease coupled with the spread of emotional patient stories cause this divide in scientific literature and patient experience."
disease-associated symptoms that currently lacks a concrete scientific basis; yet its colloquial name, "chronic Lyme disease," draws heightened attention amongst the public. The following article discusses Lyme disease in the United States and the associated difficulties in controlling the disease, focusing on the geographical-ecological aspects of the disease, its hosts, and its reservoirs.
WHAT IS LYME DISEASE?
Lyme disease is the fastest-growing vector-borne disease in the United States. As the risk of the disease continues to increase, it is important to recognize its signs and symptoms, which include fever, lethargy, headache, and erythema migrans, the disease's signature bullseye rash4. Generally, cases of Lyme disease peak from June to September, when people increase their time outdoors to take advantage of the warmer weather. Data comparing age demographics and disease incidence illustrate a higher incidence of the disease in patients aged 5-14 years old and 50-69 years old5. This finding can be attributed to the frequent outdoor playtime of American youth and the increased gardening and outdoor exercise of the elderly population3. Thus, many Americans are exposed to Lyme disease through "manufactured risk," a term coined for the combination of the new ecology of human activities and climate change leading to increased human exposure to Lyme disease. Moving to forested areas and partaking in outdoor activities additionally increase human exposure to tick bites from infected I. scapularis6. Typically, diagnosis of Lyme disease involves a history of residing or spending time in a Lyme disease-endemic area and display of associated symptoms. Laboratory diagnoses of Lyme disease exist (PCR and antibody testing)
but are faulty, since antibodies against B. burgdorferi aren't always produced or prevalent in the bloodstream of an infected individual. To further complicate the diagnostic process, these antibodies sometimes remain present even after the disease resolves3. Without an efficient means of disease identification, physicians and patients do not always have concrete evidence for a proper diagnosis. Usually, a course of antibiotics will treat Lyme disease, but problems of antibiotic resistance cause medical professionals to be cautious in prescribing antibiotic treatments without good reason. However, if symptoms go untreated for long periods of time, there is an increased risk of disease dissemination or post-treatment Lyme disease syndrome (PTLDS)3. Similar to ordinary Lyme disease, PTLDS is characterized by prolonged fatigue, musculoskeletal pain, and headaches. Lyme disease's "invisible" nature (due to the lack of efficient testing and diagnosis) can lead to dissatisfaction in patients with seemingly "chronic" Lyme disease. In some cases, a patient with obscure symptoms does not receive antibiotics. These patients soon spread testimonials emphasizing their pain and the physician's lack of "understanding." In other cases, these "chronic" patients are suffering from PTLDS, which, due to a lack of recognition and standard treatment, is treated based on a patient's symptoms. Thus, the lack of efficient testing options and scientific evidence for chronic Lyme disease, coupled with the spread of emotional patient stories, causes this divide between scientific literature and patient experience.
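The diagnostic difficulty described above follows directly from Bayes' rule: with imperfect sensitivity and specificity, both positive and negative antibody results leave substantial uncertainty. A minimal sketch, with illustrative numbers rather than actual test characteristics:

```python
def posterior(prior, sensitivity, specificity, positive=True):
    """Probability of infection given a test result (Bayes' rule)."""
    if positive:
        true_pos = sensitivity * prior
        false_pos = (1 - specificity) * (1 - prior)
        return true_pos / (true_pos + false_pos)
    false_neg = (1 - sensitivity) * prior
    true_neg = specificity * (1 - prior)
    return false_neg / (false_neg + true_neg)

# Hypothetical values: early-stage antibody tests miss many true cases.
print(posterior(prior=0.10, sensitivity=0.50, specificity=0.95))                  # after a positive test
print(posterior(prior=0.10, sensitivity=0.50, specificity=0.95, positive=False))  # after a negative test
```

With these made-up numbers, a negative result still leaves a meaningful chance of infection and a positive result is far from definitive, which mirrors the diagnostic ambiguity physicians face.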
SPREADING OF THE DISEASE, VECTORS, AND TRANSMISSION
Lyme disease, spread by infected tick vectors, is caused by the bacterium Borrelia burgdorferi, not by the tick itself. Interestingly, recent evidence has also identified a second disease-causing agent, Borrelia mayonii7. While this bacterium has been seen primarily in the upper midwestern United States, it may well expand into previously uninhabited areas in the near future8. To understand disease transmission, it is important to note that B. burgdorferi is rarely transmitted across generations of I. scapularis: the rate of mother-to-offspring transmission amongst ticks is only about 1%9. Ticks become infected with B. burgdorferi by feeding on hosts, primarily rodents and deer. A common blacklegged tick undergoes three stages in its two-year life cycle (larva, nymph, and adult), and at each stage a tick feeds on hosts (mice, humans, etc.) [Figure 1]. These blood meals
Figure 2: Increase in Lyme disease prevalence in the United States. (Left) Less dense pattern of Lyme disease cases, mostly in the northeast and mid-Atlantic coastal region and the states of Wisconsin and Minnesota. (Right) Increase in density and spatial range of Lyme disease cases along the east coast and Midwest region of the United States.
taken from infected hosts are usually when an uninfected tick becomes infected. Due to its close proximity to crawling larvae on the lower levels of forests, the white-footed mouse is the host most often responsible for tick infection with the Lyme spirochete [Figure 1]8. Conversely, human infection with the bacterium usually occurs from a nymphal tick bite. In the nymphal phase, ticks are less mobile than the older adults, which can move on to feed on white-tailed deer, humans, and dogs. Nymphal ticks frequent the lower parts of humans, dogs, rodents, and birds. However, these nymphs are more mobile than they were in the larval stage and less visible than an adult tick to the human eye, and thus have a greater human infection rate.
ECOLOGICAL COMPONENT OF LYME DISEASE
Due to the seasonal lifestyle of ticks, cases of Lyme disease and infected tick bites peak in the summer. The warm weather brings ticks from their quiescent stages (which occur during colder months spanning from late fall to early spring) and causes them to molt into a new stage for feeding. When preparing for a blood meal, ticks partake in an activity called questing, in which they perch on the edge of an elevated surface and wait for a host to pass by. Questing and disease cases are affected by CO2 levels, heat, and surrounding movement1. It is also important to note the ecological role of oak tree masting on B. burgdorferi host populations. In years following oak masting seasons, the plethora of acorns from mast seeding increases the populations of popular hosts for the Lyme spirochete, particularly the white-footed mouse and the white-tailed deer. When host populations increase following mast years, tick populations and density also rise, along with Lyme disease cases and risk. This complex relationship of ecology, host, vector, and human Lyme disease complicates efforts to combat the disease but also provides more pathways for targeting it. Efforts towards establishing relationships
between factors such as humidity, canopy and leaf coverage, and areas of established tick populations aim to identify the factors contributing to Lyme disease prevalence in order to combat the disease1. Researchers at Yale University focused on the abiotic factor of climate in predicting populations of I. scapularis. Dr. John Brownstein previously utilized climate-based models in his work and found that maximum and minimum temperatures provided good predictors of the spatial distribution of I. scapularis and its pattern of spread. The ability of minimum temperature to predict I. scapularis populations can be attributed to temperature's importance in defining habitable environmental conditions for the tick. Previously established data also suggest that maximum temperature is a strong predictor of I. scapularis populations. This strong correlation may be due in part to heat's positive effect on the developmental and hatching rates of young ticks. However, increased temperatures can also hinder tick growth, as excessively high temperatures may reduce survival and decrease the success of eggs hatching into larvae8. Additionally, climatic factors such as relative humidity may hinder disease spread indirectly by decreasing the success of questing behavior in ticks. Previous studies show that ticks adopt a lower questing position, with a decreased chance of host contact, when relative humidity is at less than adequate levels. The closer proximity to the ground allows I. scapularis to better absorb water from the environment and maintain its physical water balance10.
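In spirit, climate-based models like Brownstein's relate temperature predictors to the probability that a grid cell supports an established tick population. The sketch below uses a generic logistic form with made-up coefficients (not the published model) to illustrate the approach:

```python
import numpy as np

def habitat_suitability(t_min, t_max, b0=-4.0, b1=0.35, b2=0.10):
    """Logistic model of I. scapularis presence probability from
    minimum and maximum temperature (coefficients are illustrative)."""
    z = b0 + b1 * t_min + b2 * t_max
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical grid cells: (min winter temp in C, max summer temp in C)
cells = [(-25.0, 24.0), (-10.0, 28.0), (-2.0, 33.0)]
for t_min, t_max in cells:
    print(f"t_min={t_min:+.0f}, t_max={t_max:+.0f} -> "
          f"P(tick present) = {habitat_suitability(t_min, t_max):.2f}")
```

Under this toy parameterization, rising minimum temperatures push cells toward higher predicted suitability, the qualitative pattern the GIS projections describe.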
Source: Centers for Disease Control and Prevention
"Due to the seasonal lifestyle of ticks, cases of Lyme disease and infected tick bites peak in the summer. The warm weather brings ticks from their quiescent stages (which occur during colder months spanning from late fall to early spring) and causes them to molt into a new stage for feeding."
CHANGE OF LYME DISEASE THROUGH TIME
Unfortunately, the incidence of this disease has only grown since its first official report in the late 20th century. In 2017, reported Lyme disease cases increased five-fold relative to the 2012-2016 average [Figure 2]1. Conclusions from studies combining geographical mapping, environmental change patterns, and disease reports only provide
support for this continued increase and spread, not only in incidence numbers but also in the geographic range of the disease, throughout the 21st century. Boundary lines of hardiness zones, a measure of minimum winter temperature used to determine hospitable vegetation, have moved north in the United States in correspondence with increases in average global temperature. Ticks, highly dependent on temperature, will shift in distribution in accordance with these environmental changes.
"Presently, measures of combatting the contraction of the illness are focused on individualbased preventative measures."
Most notably, the biggest area of disease growth is in Canada, where data suggest a 212.9% increase in suitable habitat for I. scapularis by the 2080s2. In the United States, it is estimated that populations of the blacklegged tick will, in time, retract from the southern regions of the United States (e.g., Texas, Florida, Mississippi) and expand into central regions (e.g., the midwestern states of Kansas and Missouri), with an overall expansion due to climate change2. However, studies such as the one headed by Dr. Brownstein point out that accounting for space using spatial autocorrelation only increases the importance of the minimum temperature factor. By omitting the spatial autocorrelation aspect of the model, it is easier to identify the role of each factor in predicting I. scapularis populations8. Beyond climate, disease risk has also been affected by the increase in fragmented forests over the late 20th century. Fragmented forests have increased proximity to farmland and to residential and business areas. The resulting increase in human-forest contact ultimately increases the chance of tick bites and the risk for Lyme disease. In addition, the demographics of B. burgdorferi hosts, namely deer, changed during the 20th century. Earlier on, wild deer were hunted and populations dwindled. With one of the largest sources of tick infection at such a low level, infection of I. scapularis and the presence of Lyme disease were reduced.
WHAT NOW?
A feasible method for assessing Lyme disease risk lies in surveillance and education practices. Data collection on mast seasons, climate change, and host populations may prove key to warning Americans of years with higher risk of contracting the disease. Although spatial modeling of disease incidence may seem like a foolproof plan, there are limits to accurate modeling of the disease. The variability of overdiagnosis, underreporting, human travel, and increased human surveillance/awareness can distort the data. Due to the lack of established
field techniques and uneven sampling, it often becomes difficult to compare models and build on previously established ones8. While less recent publications emphasize the need for spatial-temporal work in databases, more recent work combining GIS and health risk points out that improvements and standardization in modeling and data collection are still necessary. Greater precision and uniformity in the measurement of environmental and human factors and of I. scapularis populations, including the study of previously unconsidered factors such as CO2 and pollution, should continue in the effort to combat Lyme disease. Still, extrapolations and hypotheses from environmental change patterns and incidence data predict a continued increase in Lyme disease in the United States. Continued patterns of average temperature increase correlate with changes in I. scapularis habitat. Model predictions showing the encroachment of blacklegged ticks into previously uninhabited areas of North America reveal increases in Lyme disease risk in areas not previously exposed to the disease2. Additionally, human manufactured risk, consisting of increased contact with forested areas, the creation of fragmented forests, and frequent time in the outdoors, only works to exacerbate individual and national Lyme disease risk. Presently, measures of combatting the contraction of the illness are focused on individual-based preventive measures6. During tick season, outdoor-goers are advised to wear light-colored clothing to easily spot ticks, long pants and long shirts to prevent bites, long socks with closed-toe shoes, and insect repellent. However simple, these measures are not always easily followed. Researchers in science and health have started to explore other options for combatting Lyme disease: a vaccine was created in the late 1990s. However, due to short-term effectiveness and lack of consumer demand, it was withdrawn in 20029. Even if the vaccine had been successful, inoculation plans and rules would still be highly disputed. Who would be required to be vaccinated? What defines someone as "at risk" of a bite from a Lyme disease-infected tick? Some argue for a greater governmental role in public health concerns such as the Lyme disease epidemic2. Calls for pesticide use and the simultaneous eradication of tick populations would harm the ecosystem in uncontrollable ways. Many are instead focusing efforts on stopping I. scapularis, the vector, rather than the bacterium. Others are looking
towards controlling the hosts of the Lyme spirochete. After collecting data on the relationships of various environmental factors and their effects on vector and host populations, controls on rodent populations, regulations on deer populations, and similar efforts can be established.

References

[1] Ginsberg, H. S., Rulison, E. L., Miller, J. L., Pang, G., Arsnoe, I. M., Hickling, G. J., … Tsao, J. I. (2020). Local abundance of Ixodes scapularis in forests: Effects of environmental moisture, vegetation characteristics, and host abundance. Ticks and Tick-Borne Diseases, 11(1), 101271. doi: 10.1016/j.ttbdis.2019.101271
[2] Brownstein, J. S., Holford, T. R., & Fish, D. (2004). Effect of Climate Change on Lyme Disease Risk in North America. Retrieved from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2582486/
[3] Nelder, M., Wijayasri, S., Russell, C., Johnson, K., Marchand-Austin, A., Cronin, K., & Sider, D. (2018). The continued rise of Lyme disease in Ontario, Canada: 2017. Canada Communicable Disease Report, 44(10), 231–236. doi: 10.14745/ccdr.v44i10a01
[4] Lyme Disease. (2019, December 16). Retrieved January 2, 2020, from https://www.cdc.gov/lyme/index.html
[5] Fischhoff, I. R., Keesing, F., & Ostfeld, R. S. (2019). Risk Factors for Bites and Diseases Associated With Black-Legged Ticks: A Meta-Analysis. American Journal of Epidemiology, 188(9), 1742–1750. doi: 10.1093/aje/kwz130
[6] Peretti-Watel, P., Ward, J., Lutaud, R., & Seror, V. (2019). Lyme disease: Insight from social sciences. Médecine et Maladies Infectieuses, 49(2), 133–139. doi: 10.1016/j.medmal.2018.12.005
[7] Tickborne Diseases of the United States. (2019, September 12). Retrieved December 22, 2019, from https://www.cdc.gov/ticks/diseases/index.html
[8] Brownstein, J. S., Holford, T. R., & Fish, D. (2003). A climate-based model predicts the spatial distribution of the Lyme disease vector Ixodes scapularis in the United States. Environmental Health Perspectives, 111(9), 1152–1157. doi: 10.1289/ehp.6052
[9] Ostfeld, R. S. (1997). The Ecology of Lyme Disease Risk. American Scientist, 85, 338–346.
[10] Estrada-Peña, A., Gray, J. S., Kahl, O., Lane, R. S., & Nijhof, A. M. (2013). Research on the ecology of ticks and tick-borne pathogens—methodological principles and caveats. Frontiers in Cellular and Infection Microbiology, 3. doi: 10.3389/fcimb.2013.00029
Tobacco Use in Schizophrenia: Self-Medication or Greater Addiction Vulnerability?
BY LIAM LOCKE

COVER: False-colored image from the National Institute of Mental Health (NIMH) depicting hypofrontality in patients with schizophrenia. Source: Wikimedia Commons
"Notably, smoking rates in patients with schizophrenia range from 60 to 90% compared to less than 20% in the general public."
INTRODUCTION
Schizophrenia is a debilitating psychiatric disease characterized by unusual thought patterns, delusions, hallucinations, social isolation, depression, suicide, and increased rates of substance use disorders involving tobacco, cannabis, alcohol, and cocaine.1 As the general public becomes more educated about the harmful effects of smoking, tobacco use is increasingly concentrated among psychiatric patients, in whom smoking cessation treatments are largely unsuccessful.2 Notably, smoking rates in patients with schizophrenia range from 60 to 90%, compared to less than 20% in the general public.3,4,5 Patients with schizophrenia also have life expectancies about 25 years shorter than average, and this reduced longevity is largely attributable to the increased incidence of smoking-related diseases.6 Understanding the neurobiological basis of tobacco use in schizophrenia will help reduce mortality for these individuals, improve clinical outcomes, and prevent mental health subsidies from turning into "tobacco industry profits."7 Theories regarding increased tobacco use in schizophrenia fall into two major categories: self-medication and addiction vulnerability.1,6,7 The self-medication hypothesis suggests
that cigarette smoking reduces the severity of negative symptoms (cognitive deficits, negative affect, social isolation) and may also counteract the extrapyramidal side effects of antipsychotic medications (tremor, anxiety, slurred speech, dystonia, slowness of thought, etc.).8 Both of these processes suggest that nicotine use in schizophrenia is an example of negative reinforcement, in which a behavior is strengthened because it stops or lessens an aversive stimulus.9 Hypotheses regarding increased addiction vulnerability claim that positive reinforcement, the reward generated by nicotine, is the primary mechanism supporting tobacco use in patients with schizophrenia.1 The purpose of this review is to summarize evidence for the self-medication and addiction vulnerability hypotheses of tobacco use in schizophrenia.
Nicotine Pharmacology
Nicotine binds to nicotinic acetylcholine receptors (nAChRs), which are distributed broadly throughout the central nervous system (CNS).9,17 Each nAChR is composed of five subunits (α2-α7, α9-α10, β2-β4) and can be either a heteropentamer (a combination of different subunits) or a homopentamer (five of the same subunit).19 The five subunits of a nAChR form a central pore that is permeable
to sodium, potassium, and calcium when bound by the endogenous neurotransmitter acetylcholine or an exogenous agonist like nicotine. The cationic selectivity of these receptors causes excitatory post-synaptic potentials, which activate the neurons in which they are expressed.10 The induction of nicotine tolerance seems at first glance oxymoronic, because chronic exposure to nicotine upregulates the concentration of nAChRs in the postsynaptic density.11 This should make an individual more sensitive to nicotine with subsequent exposure and lead to an ultimate decrease in nicotine consumption (which is not observed). However, nicotine also induces post-translational modifications of nAChRs (i.e., phosphorylation) that desensitize the receptor and make subsequent consumption of nicotine less effective.11,14,19 Upregulation of nAChRs and desensitization are thought to be compensatory mechanisms that prevent excitotoxicity at cholinergic synapses (i.e., junctions between neurons that communicate with the neurotransmitter acetylcholine).19 Several nAChRs important for the induction and maintenance of nicotine use in schizophrenia are the α4β2 heteromeric receptor (either 3α:2β or 2α:3β), the α7 homomeric receptor, and α5-containing receptors in the prefrontal cortex.10,14 Nicotine shows selectivity for the α4β2 receptor, and this subtype is necessary and sufficient for generating nicotine dependence, tolerance, and craving; however, the α7 homomeric receptor and α5-containing receptors are implicated in the pathophysiology of schizophrenia.11,12 Nicotine's ability to alleviate the negative symptoms of schizophrenia depends on activation of α4β2 receptors in the prefrontal cortex (PFC).14
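The tolerance paradox, receptor upregulation alongside reduced drug effect, can be illustrated with a toy calculation (a sketch under simplifying assumptions, not established pharmacology): if desensitization outpaces upregulation, the functional response shrinks even as receptor counts grow.

```python
def functional_response(n_receptors, frac_desensitized, p_open=0.5):
    """Toy estimate of the cholinergic response: only sensitized
    receptors contribute, each opening with some probability when
    an agonist binds (all numbers are illustrative)."""
    return n_receptors * (1.0 - frac_desensitized) * p_open

naive   = functional_response(n_receptors=100, frac_desensitized=0.1)
chronic = functional_response(n_receptors=150, frac_desensitized=0.6)  # upregulated but desensitized
print(naive, chronic)  # 45.0 vs 30.0 -> weaker response despite more receptors
```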
Figure 2: Three-dimensional structure of a nAChR. Binding of acetylcholine or nicotine changes the conformation of the receptor, opening the central pore and depolarizing the postsynaptic membrane. Source: RCSB PDB Database
THE SELF-MEDICATION HYPOTHESIS
The self-medication hypothesis states that patients with schizophrenia smoke because it improves their sense of well-being; this is done either by treating the negative symptoms of their disease (mainly referring to nicotine's ability to improve cognitive deficits) or by reducing the blood concentration of antipsychotic medication and lessening its side effects.8 The following sections will outline why negative reinforcement may be the primary mechanism of increased nicotine use in patients with schizophrenia.
Cholinergic Abnormalities in Schizophrenia Cause Hypofrontality
Genome-wide association studies (GWAS) and linkage analyses have revealed many genetic variants shared by patients with schizophrenia,15,16 but a single-nucleotide polymorphism (SNP) in the α5 subunit gene has been identified as a common factor between schizophrenia and tobacco use disorder.14,17,18,19 This α5 SNP causes disinhibition of inhibitory circuits in the PFC and an ultimate downregulation of PFC activity.14,19 This hypofrontality is thought to underlie cognitive and social deficits in schizophrenia.18 The majority of neurons in the cerebral cortex are glutamatergic pyramidal neurons, which project to and excite other brain areas; however, about 20% of cortical neurons are inhibitory interneurons, which suppress the activity of neighboring cells by releasing the neurotransmitter GABA and opening chloride channels.19 Interneurons provide a blanket of inhibition over cortical activity that is critically important for developmental and cognitive processes. The cerebral cortex is divided into six architecturally and functionally distinct layers. The α5 nAChR subunit is expressed almost exclusively in a subset of layer II/III inhibitory interneurons in the PFC known as vasoactive intestinal polypeptide (VIP) interneurons.14 VIP interneurons provide inhibitory drive over two different types of inhibitory interneurons: somatostatin (SOM) and parvalbumin (PV), which both synapse on layer II/III pyramidal neurons.14,18,19 The loss-of-function mutation in the α5 SNP generates a dysregulated circuit: reduced excitation of VIP interneurons by acetylcholine produces increased activity of SOM and PV interneurons, and increased inhibition of layer II/III pyramidal neurons. This dysregulation of cortical circuitry produced by the α5 SNP directly implicates nicotinic signaling in the pathophysiology of schizophrenia.

"The self-medication hypothesis states that patients with schizophrenia smoke because it improves their sense of well-being; this is done either by treating the negative symptoms of their disease (mainly referring to nicotine's ability to improve cognitive deficits) or by reducing the blood concentration of antipsychotic medication and lessening its side effects."

Figure 1: Example subunit compositions of heteromeric and homomeric nicotinic acetylcholine receptors (nAChRs). Acetylcholine binds between subunits as shown. Source: Wikimedia Commons.

Figure 3: 19th century drawings of pyramidal neuron layering in the cerebral cortex. The famous neuroscientist Ramón y Cajal drew these images by examining a Nissl-stained slice of the adult human brain. Source: Wikimedia Commons

Evidence for Cognitive Improvement with Nicotine – Animal Models

A 2017 study in Nature Medicine examined how the human α5 SNP impacts cortical activity and how nicotine might be able to improve cognitive deficits.14 CRISPR-Cas9 gene editing was used to express the human α5 SNP (CHRNA5, D398N) in an α5 knockout mouse. Controls were α5 knockout mice in which a wild-type copy of the α5 gene was re-expressed. After creating these transgenic lines, a range of behavioral and neurological processes known to be altered in schizophrenia were measured, including social isolation, sensory gating (filtering of unnecessary stimuli), and inhibitory interneuron drive over PFC activity.14

"After creating [the mouse] transgenic lines, a range of behavioral and neurological processes known to be altered in schizophrenia were measured, including social isolation, sensory gating (filtering of unnecessary stimuli), and inhibitory interneuron drive over PFC activity."

Figure 4: VIP interneurons are positioned such that their activity is directly correlated with the activity of pyramidal neurons in the PFC. The α5 SNP downregulates VIP activity, releasing SOM and PV cells from inhibition, downregulating PFC pyramidal neuron activity, and producing the hypofrontality phenotype seen in schizophrenia and tobacco use disorder. Nicotine can partially correct hypofrontality by desensitizing α4β2 nAChRs on SOM interneurons, decreasing their activity, and releasing PFC pyramidal neurons from excess inhibition. Source: For original figure, see Liu and Kenny (2017).

1. Social Isolation: when given the choice between a social partner and an inanimate object, wild-type mice show a strong preference for social interaction in a three-chamber social test. α5 SNP mice, however, showed no preference between the inanimate object and a social partner.14

2. Sensory Gating: prepulse inhibition is a behavioral test of sensory gating in which a weak stimulus, such as a soft tone (prepulse), is presented before a strong stimulus, such as a loud tone (pulse). Normal individuals show a reduced startle to the pulse, but this reduction is less pronounced in patients with schizophrenia.8 α5 SNP mice showed decreased prepulse inhibition at a number of different tone levels, suggesting that the α5 SNP is a primary candidate for aberrant sensory gating in schizophrenia. PFC activity and cognitive baseline were also suppressed in these mice.14

3. Inhibitory Interneurons & Hypofrontality: to measure cortical activity in these mice, an adeno-associated virus carrying a GCaMP6f gene (a green fluorescent protein activated by calcium) was injected bilaterally into layers II and III of the PFC. Because the PFC is comprised of nearly 80% pyramidal neurons, calcium spiking was considered to be representative of layer II/III pyramidal neuron activity (a minimal scoring sketch follows this list). α5 knockouts showed the greatest degree of hypofrontality, but PFC activity in α5 SNP mice was still significantly suppressed relative to wild-type controls.14

4. Nicotine Improves Hypofrontality in α5 SNP Mice: the remarkable result from this study is that 1-2 weeks of nicotine exposure, delivered subcutaneously by osmotic mini-pump at concentrations corresponding to those found in human smokers, restored PFC activity in α5 SNP mice to wild-type levels. SOM interneurons express both α4β2 and α7 nAChRs, while PV interneurons express only α7.
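As referenced in item 3, raw GCaMP6f fluorescence is typically converted to a ΔF/F trace before activity is compared across genotypes. A minimal sketch with simulated data (illustrative only, not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated GCaMP6f fluorescence trace: slow baseline drift, sensor
# noise, and one calcium transient (the kind of event counted as
# layer II/III pyramidal-neuron activity).
f = 100.0 + 0.001 * np.arange(3000) + rng.normal(0.0, 0.5, 3000)
f[500:520] += 25.0                      # a calcium spike

f0 = np.percentile(f, 10)               # crude baseline (F0) estimate
dff = (f - f0) / f0                     # delta-F over F

# A simple activity proxy: samples where dF/F exceeds a threshold.
active = int((dff > 0.15).sum())
print(f"F0 = {f0:.1f}; suprathreshold samples = {active}")
```

Group comparisons (e.g., α5 SNP vs. wild-type) would then be run on summary statistics like event rate per minute, computed from traces like this one.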
Therefore, nicotine is able to upregulate PFC activity by desensitizing α4β2 receptors on SOM interneurons. SOM interneurons are overactive in individuals with the α5 SNP due to reduced inhibition from VIP interneurons. Nicotine returns the activity of SOM interneurons to normal levels, which removes excessive inhibition of pyramidal neurons and restores PFC activity.14,18 This study provided strong evidence that nicotine can relieve cognitive deficits in patients with the α5 SNP, and it also ties cholinergic abnormalities to the negative symptoms of schizophrenia. The article suggests that the advent of new nicotinic agonists acting selectively at the α5 subunit could reduce hypofrontality without stimulating dopamine release in the reward pathway.14 Although this study provides fairly robust evidence that patients with schizophrenia smoke to relieve negative symptoms, it is important that studies in animal models be replicated in the clinic. Functional magnetic resonance imaging (fMRI) provides a safe and effective way to examine circuit activity in human subjects.
Evidence for Cognitive Improvement with Nicotine – fMRI
Hypofrontality in schizophrenia is often cited as a neural correlate of negative symptoms including working-memory impairment, disorganized thought, deficits in sensory gating, social isolation, and depressed mood.8,18,19 Resting-state functional connectivity (rsFC) is an fMRI technique used to assess communication between brain areas at rest (as opposed to task-based fMRI). A meta-analysis of 47 rsFC studies demonstrated significantly reduced activity in the PFC of patients with schizophrenia relative to healthy controls, providing substantial evidence for hypofrontality as a measurable biomarker for schizophrenia and related disorders.20 Many cognitive tasks require activation of the medial prefrontal cortex (mPFC), a subregion of the PFC along the brain's midline that facilitates memory retrieval, goal-directed behaviors, and decision making.21
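In practice, rsFC is commonly summarized as the Pearson correlation between the resting BOLD time series of two regions of interest, often Fisher z-transformed before group comparison. A minimal sketch with simulated time series (assumed preprocessing, not a full pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical preprocessed BOLD time series (200 volumes) for two
# regions of interest; real pipelines extract these from fMRI data.
n_vols = 200
shared = rng.standard_normal(n_vols)            # common fluctuation
mpfc = shared + 0.8 * rng.standard_normal(n_vols)
pcc  = shared + 0.8 * rng.standard_normal(n_vols)

r = np.corrcoef(mpfc, pcc)[0, 1]   # connectivity estimate
z = np.arctanh(r)                  # Fisher z for group statistics
print(f"rsFC (Pearson r) = {r:.2f}, Fisher z = {z:.2f}")
```

Hypofrontality findings correspond to systematically lower connectivity and activity estimates of this kind in prefrontal regions of patients relative to controls.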
Figure 6: The mPFC and anterior cingulate cortex (ACC) of patients with schizophrenia are underactive during cognitive tasks relative to age-matched controls. Source: Wikimedia Commons
The Wisconsin Card-Sorting Task (WCST) is a neuropsychological test of cognitive flexibility that requires individuals to sort cards into different categories that change between rounds.20 Task-based fMRI of patients with schizophrenia revealed that the WCST actually increased hypofrontality. This suggests that individuals with schizophrenia are unable to generate sufficient activity in the mPFC during tasks that depend on this brain area.22,25 Other studies have reported increased activity of the mPFC following nicotine consumption in patients with schizophrenia and healthy controls, but reversal of hypofrontality may not be the mechanism behind improved cognitive processing in human subjects.27 Several structures in the medial temporal lobes are downregulated by nicotine consumption, and normalized activity in these areas has been shown to improve cognitive performance on prepulse inhibition and smooth pursuit eye movement tests.26,30
"A meta-analysis of 47 rsFC studies demonstrated significantly reduced activity in the PFC of patients with schizophrenia relative to healthy controls, providing substantial evidence for hypofrontality as a measurable biomarker for schizophrenia and related disorders."
Failure to reduce acoustic startle following a prepulse is a hallmark of schizophrenia pathophysiology.14,30 Patients with schizophrenia and healthy controls were given a subcutaneous injection of either nicotine or saline and tested for tactile movement and skin conductance in response to a soft tone followed by a loud tone. Both patients and controls receiving nicotine had reduced acoustic startle to the loud tone (i.e., improved prepulse inhibition). This effect was dependent on normalization of limbic area activity, including the hippocampus and thalamus, both of which are implicated in abnormal sensory processing in schizophrenia and are thought to produce delusional thoughts.30 The smooth pursuit eye movement test requires subjects to closely follow a moving object with their eyes and involves predictions about the future location of the object. Patients with schizophrenia have marked deficits in this task that are normalized following nicotine administration.26 The primary effect supporting improved performance on the smooth pursuit eye movement test following nicotine consumption is a downregulation of hippocampal activity caused by desensitization of α4β2 receptors in this brain area.31
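Prepulse inhibition itself is usually quantified as the percent reduction in startle magnitude when the prepulse precedes the pulse. A toy calculation with made-up startle values (not data from the cited studies):

```python
def prepulse_inhibition(startle_pulse_alone, startle_with_prepulse):
    """Percent PPI: how much a prepulse reduces the startle response."""
    return 100.0 * (1.0 - startle_with_prepulse / startle_pulse_alone)

# Hypothetical mean startle magnitudes (arbitrary units)
control_ppi = prepulse_inhibition(100.0, 35.0)   # 65% inhibition
patient_ppi = prepulse_inhibition(100.0, 70.0)   # 30% inhibition
print(f"Control PPI: {control_ppi:.0f}%  Patient PPI: {patient_ppi:.0f}%")
```

Lower PPI percentages reflect weaker sensory gating; the nicotine findings above correspond to this number moving toward control levels after drug administration.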
Figure 5: Patient being prepared for an MRI scan. Neurons require a significant amount of energy to operate, and therefore a lot of oxygen is required for aerobic respiration. fMRI can quantify brain activity by measuring the blood oxygen saturation of hemoglobin. Source: Wikipedia
"The self-medication hypothesis has received significant criticism from many clinicians and researchers, some arguing that the social acceptance of nicotine as selfmedication was popularized by the tobacco industry during its first proposal in the 1960s as a ploy to increase cigarette purchases among the psychiatric population."
Figure 7: Chemical structures of haloperidol (Haldol®, bottom) and clozapine (Clozaril®, top), two antipsychotic medications metabolized by CYP1A2. First-generation antipsychotics treat psychosis (delusions, hallucinations) by blocking dopamine D2 receptors, while second-generation antipsychotics like clozapine have supplementary therapeutic properties beyond the treatment of psychosis; in addition to blocking D2 receptors, clozapine is also an antagonist of the serotonin 5HT2A receptor, and it can improve negative cognitive symptoms, anxiety, and depression, and actually reduce substance use (discussed later).1,42 The desired effect of antipsychotics is antagonism of D2 receptors in the mesocorticolimbic pathway (implicated in psychosis), but a byproduct is blockade of D2 receptors in the nigrostriatal pathway, which is affected in Parkinson's disease. For this reason, the side effects of antipsychotics are often called drug-induced parkinsonism.55
These results implicate limbic structures in the beneficial effects of nicotine, which suggests a correction of abnormal dopamine signaling in addition to desensitization of cholinergic synapses. Additionally, the ability of nicotine to temporarily reduce cognitive deficits by downregulating limbic structures may be confounded by the effects of nicotine on antipsychotic metabolism.2,28,29,30,31,32
Managing Antipsychotic Side Effects
A subsection of the self-medication hypothesis proposes that patients with schizophrenia smoke cigarettes to relieve antipsychotic side effects.1,6 Extrapyramidal side effects are caused by excessive dopamine blockade in the basal ganglia, a collection of subcortical nuclei important for reward processing, but also for action selection and movement.34 Antipsychotics can cause repetitive involuntary movements (tardive dyskinesia), Parkinsonian tremors, inability to move (akinesia), and fatigue, and may result in a fatal reaction known as neuroleptic malignant syndrome.34 Drug side effects are a major hurdle for treatment adherence in schizophrenic populations, so understanding tobacco–antipsychotic interactions is important for ensuring proper treatment.33
Many drugs and endogenous signaling molecules are metabolized by the mixed-function oxidase superfamily of enzymes in the liver.2 One major oxidative enzyme implicated in tobacco–antipsychotic interactions is cytochrome P450 1A2 (CYP1A2).29 This enzyme metabolizes many antipsychotic drugs, including first-generation antipsychotics like
haloperidol and thiothixene as well as second-generation antipsychotics like clozapine and olanzapine.2 Notably, concentrations of CYP1A2 are upregulated in smokers, but not in smokeless tobacco users.32 Combustion of tobacco produces aromatic hydrocarbons: flat, carcinogenic compounds that react with DNA and also activate the aryl hydrocarbon receptor (AHR).32 Activation of AHR induces transcription and translation of CYP1A2 and lowers blood concentrations of antipsychotic medications.32 This result suggests that the pharmacology of tobacco smoke, and not just nicotine, is the mechanism of increased cigarette use in schizophrenia.
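The pharmacokinetic consequence can be sketched with the standard steady-state relationship Css = F × (dose rate) / CL: inducing CYP1A2 raises clearance (CL) and therefore lowers the average drug concentration. The numbers below are purely illustrative, not actual clozapine parameters:

```python
def steady_state_conc(dose_mg_per_day, bioavailability, clearance_l_per_day):
    """Average steady-state plasma concentration: Css = F * dose rate / CL."""
    return dose_mg_per_day * bioavailability / clearance_l_per_day

# Hypothetical values for an antipsychotic; smoking induces CYP1A2,
# raising clearance by ~50% in this illustration.
dose, F = 300.0, 0.6                     # mg/day, oral bioavailability
cl_nonsmoker, cl_smoker = 400.0, 600.0   # L/day
print(steady_state_conc(dose, F, cl_nonsmoker))  # ~0.45 mg/L
print(steady_state_conc(dose, F, cl_smoker))     # ~0.30 mg/L
```

This is why, on a fixed dose, a smoker can end up with lower drug exposure, and fewer extrapyramidal side effects, than a nonsmoker.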
ADDICTION VULNERABILITY IN SCHIZOPHRENIA

The Shortcomings of Self-Medication
The self-medication hypothesis has received significant criticism from many clinicians and researchers,1,7,9,35 some arguing that the social acceptance of nicotine as self-medication was popularized by the tobacco industry during its first proposal in the 1960s as a ploy to increase cigarette purchases among the psychiatric population.7,53 Although some of the animal and neuroimaging literature supports cognitive improvement from a neurobiological perspective, many of these studies do not rigorously test actual behavior and rely instead on circuit-level analysis as a proxy for cognitive improvement. A 2018 study demonstrated no change in cognitive ability following 14 days of abstinence and following reinstatement of tobacco use in patients with schizophrenia.35 While the self-medication hypothesis suggests that the neurobiological underpinning of tobacco use in patients is a restoration of glutamatergic activity in the PFC and hippocampus, the addiction vulnerability hypothesis (also called the "primary addiction hypothesis"9 or "reward deficiency syndrome"42) focuses on midbrain dopamine neurotransmission and reward processing as the primary neurological substrates of nicotine use in schizophrenia. The following sections will summarize evidence from neuronal circuitry, animal models, behavior, and clinical neuroimaging that supports positive reinforcement as the primary mechanism of increased smoking in schizophrenia.
The Brain Reward Circuit
Reward learning depends on two neurotransmitter systems: dopamine and glutamate. The dopaminergic
mesolimbocortical pathway, often referred to as the brain's reward pathway, originates in the ventral tegmental area (VTA) of the midbrain.36 VTA cells project to a variety of limbic structures including the striatum and hippocampus. The striatum also receives glutamatergic inputs from the prefrontal cortex, hippocampus, amygdala, and thalamus, and it is this colocalization of dopamine and glutamate that allows the brain to calculate value and form habits.37 The ventral striatum, also known as the nucleus accumbens (NAc), mediates the rewarding properties of addictive substances.37 Pleasurable stimuli including food, sex, and exercise stimulate VTA neurons, which release dopamine in the NAc. All addictive drugs increase dopamine concentrations in the NAc, either by acting directly at dopamine synapses (as is the case with psychostimulants like cocaine and amphetamines) or through indirect mechanisms (like alcohol, opiates, and nicotine).36,37 Nicotine stimulates this pathway by activating α4β2 nAChRs on the dendritic spines of VTA neurons.1,7 With repeated exposure, these receptors become desensitized and more nicotine is required to produce the same pleasurable effect.13,19 About 95% of the neurons in the NAc and dorsal striatum are GABAergic medium spiny neurons (MSNs).37 Dopaminergic projections from the VTA and glutamatergic projections from the cortex, hippocampus, amygdala, and thalamus synapse on the same dendritic branches of these striatal MSNs. The colocalization of dopamine and glutamate is a key feature of the molecular mechanism of reward reinforcement because dopamine receptors change neuronal excitability in response to glutamate. MSNs express one of two classes of G-protein-coupled dopamine receptors: D1-type dopamine receptors (D1, D5) or D2-type dopamine receptors (D2-D4).40 These two classes of neurons respond differently to dopamine. Activation of D1-type receptors increases the cell's sensitivity to glutamate, while activation of D2-type receptors decreases the cell's sensitivity to glutamate via trafficking of AMPA receptors, an excitatory glutamate receptor, into and out of the post-synaptic membrane.39 D1-type and D2-type MSNs also separate two important pathways through the striatum known as the direct and indirect pathways. D1-type MSNs comprise the direct pathway and project directly to the output nuclei of the basal ganglia, while the indirect pathway, made up of D2-type MSNs, projects first to the subthalamic nucleus before being rerouted
Figure 8: The mesolimbic dopamine pathway is the key player in the pathophysiology of substance use disorders. Patients with schizophrenia have abnormal dopaminergic and glutamatergic neurotransmission, leading to hypersensitivity of the nucleus accumbens. This hypersensitivity may make nicotine more rewarding in these individuals. Source: Wikimedia Commons
to the output nuclei.36 The architecture of these pathways results in opposing effects on basal ganglia output and behavior. The direct pathway increases basal ganglia activity through a positive feedback loop and serves as a 'go' signal for a behavior. The indirect pathway decreases basal ganglia activity through negative feedback and is thought to act as a 'stop' signal to inhibit a behavior.37 Due to the pharmacological properties of D1-type and D2-type receptors, when a rewarding event causes dopamine release in the NAc, the direct pathway is strengthened and the indirect pathway is weakened.36 Therefore, when a rewarding event is encountered, the likelihood of behavioral initiation on subsequent exposure to the rewarding stimulus is increased (as would be expected of evolved motivational circuitry).36,37 In the context of tobacco use disorder, individuals with greater dopamine sensitivity or greater dopamine release in response to nicotine will exhibit higher rates of smoking and nicotine addiction.1,6,7,11,12,36,37,38,39,40 In summary, the rewarding effects of a behavior, stimulus, or drug are dependent on the release of dopamine in the nucleus accumbens.
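The 'go'/'stop' logic of the direct and indirect pathways can be captured in a toy reinforcement sketch (a conceptual illustration, not a published model): each dopamine burst strengthens the D1 'go' weight and weakens the D2 'stop' weight for the action that produced it, so a hypersensitive NAc, modeled here as a larger effective burst, would escalate drug-taking faster.

```python
# Toy reinforcement sketch of striatal plasticity: a dopamine burst
# after an action potentiates the direct-pathway ("go") weight and
# depresses the indirect-pathway ("stop") weight for that action.
go_weight, stop_weight = 1.0, 1.0     # arbitrary starting values
learning_rate = 0.2

# A hypersensitive NAc (as proposed for schizophrenia) behaves like a
# larger effective burst; try dopamine_burst = 2.0 for comparison.
dopamine_burst = 1.0
for trial in range(1, 6):
    go_weight += learning_rate * dopamine_burst
    stop_weight -= learning_rate * dopamine_burst
    net_drive = go_weight - stop_weight   # bias toward repeating the action
    print(f"trial {trial}: go={go_weight:.1f}, stop={stop_weight:.1f}, "
          f"net drive={net_drive:+.1f}")
```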
"All addictive drugs increase dopamine concentrations in the NAc, either by acting directly at dopamine synapses (as is the case with psychostimulants like cocaine and amphetamines) or through indirect mechanisms (like alcohol, opiates, and nicotine),"
Figure 9: The G-proteins coupled to D1-type dopamine receptors (D1, D5) and D2-type dopamine receptors (D2, D3, D4) have opposing effects on the enzyme adenylate cyclase (AC), cAMP concentrations, and PKA activity. Activation of PKA leads to transcription of ∆FosB, a transcription factor that increases drug sensitivity and facilitates addiction. (Note: D1-type and D2-type receptors are NOT co-localized at the same synapses as shown in this figure, which is done simply for concision. In fact, they are expressed in entirely different cells; during development, striatal neurons make the binary decision of becoming either a D1-type or a D2-type MSN.) Source: Wikimedia Commons.
Repeated exposure to dopamine in the striatum will eventually increase the activity of the direct pathway (the "go" pathway) and inhibit the activity of the indirect pathway (the "stop" pathway) when the event is encountered again. Nicotine triggers dopamine release in the NAc by stimulating α4β2 nAChRs in the VTA, which makes smoking tobacco a rewarding process. Thus, according to the addiction vulnerability hypothesis, possible mechanisms underlying increased tobacco use in schizophrenia are (1) VTA cells are more sensitive to nicotine, so more dopamine is released into the NAc relative to the general population, or (2) the same amount of dopamine is released, but MSNs in the NAc are more sensitive to it in patients with schizophrenia. Results from animal models and neuroimaging strongly support the second mechanism.1, 7, 9, 45, 46, 47, 50, 53
Animal Models of Schizophrenia and Nicotine Dependence

"Nicotine sensitization, in which repeated injections of nicotine increase the distance traveled by an animal during a given time period, may be augmented in NVHL rats."
Figure 10: Structure of ibotenic acid, the chemical used to generate excitotoxic lesions in the NVHL rat model of schizophrenia. Source: Wikimedia Commons
A common model for studying schizophrenia and comorbid substance use disorders is the neonatal ventral hippocampal lesion (NVHL) rat model of schizophrenia. While the α5 SNP mice discussed earlier are an example of a genetic model of schizophrenia, NVHL rats are a neurodevelopmental model of schizophrenia.44 Patients with schizophrenia have abnormal hippocampal development caused by environmental stress during the third trimester of fetal development and during adolescence.1 The third trimester of human fetal development corresponds to rat pups that are approximately 7 days old, so on post-natal day 7, ibotenic acid, a glutamate derivative that causes excitotoxic cell death, is injected bilaterally into the ventral hippocampal formation.45 Controls are injected bilaterally with artificial cerebrospinal fluid.
NVHL rats exhibit a number of addiction-like behaviors. Nicotine sensitization, in which repeated injections of nicotine increase the distance traveled by an animal during a given time period, may be augmented in NVHL rats.46 Sensitization experiments are generally performed over several weeks followed by a two-week 'wash-out' and a final drug challenge. Sensitization to psychoactive drugs is proportional to the activity of the direct pathway through the basal ganglia, which is predictive of D1-type MSN drive over behavioral selection, so increased sensitization can be used as a proxy for greater reward.47 A group of researchers at the Indiana University School of Medicine observed that NVHL rats have a greater degree and quicker pattern of locomotor sensitization to nicotine relative to controls; however, the amount of dopamine
release did not differ significantly between the two groups.46 This suggests that the NAc of NVHL rats is more sensitive to dopamine release. Moreover, NVHL rats have augmented sensitization profiles in response to alcohol and cocaine, suggesting that hypersensitivity of the NAc may underlie increased addiction vulnerability to all substances in patients with schizophrenia.48, 49 The same group at the Indiana University School of Medicine investigated nicotine self-administration and cognitive performance in NVHL rats.7 Four groups were tested – NVHLs with and without adolescent exposure to nicotine (AN), and controls with and without AN. Adult rats were equipped with jugular venous catheters and allowed to self-administer nicotine daily during a two-hour period. NVHLs with AN showed quicker acquisition of high levels of nicotine self-administration, but this difference reduced to non-significance following task acquisition. During extinction of nicotine self-administration, in which rats are placed back into the self-administration context but without nicotine delivery in response to lever presses, NVHL groups showed slower extinction curves relative to controls. Finally, performance on a radial arm maze, a test of working memory, revealed that NVHLs have cognitive deficits that are not alleviated by nicotine.7 These experiments provide strong evidence for the primary addiction hypothesis, and against the self-medication hypothesis.7, 9
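A rough sense of how such locomotor data are summarized is given below (a hedged sketch: the function name and all session distances are made up for illustration; the actual studies use repeated-measures statistics rather than a simple ratio):

```python
# Hypothetical locomotor sensitization index: distance traveled on the
# final injection day relative to the first. Ratios above 1 indicate
# sensitization; a larger ratio is read as stronger reward-circuit drive.

def sensitization_index(distances_cm: list[float]) -> float:
    """distances_cm: per-session distance traveled, in chronological order."""
    return distances_cm[-1] / distances_cm[0]

control_rat = [410.0, 450.0, 500.0, 530.0, 560.0]  # made-up sessions
nvhl_rat = [420.0, 540.0, 680.0, 790.0, 880.0]     # made-up sessions

print(f"control: {sensitization_index(control_rat):.2f}")  # ~1.37
print(f"NVHL:    {sensitization_index(nvhl_rat):.2f}")     # ~2.10
```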
Reward Circuit Abnormalities in Schizophrenia – Neuroimaging Studies
Single-photon emission computed tomography (SPECT) and positron emission tomography (PET) can characterize molecular events in the brain of an awake, behaving subject. Using a D2-type receptor radiotracer and an amphetamine challenge, researchers quantified dopamine release in the NAc of patients with schizophrenia and/or substance use disorders.50 In this study, dopamine release was increased in schizophrenia without drug
use and decreased in drug users without schizophrenia.51, 52 This provides further evidence that increased dopamine release is not the mechanism behind increased addiction vulnerability in patients with schizophrenia (if this were the case, then these individuals would be more likely to stay sober). Despite lower levels of dopamine release, an amphetamine challenge caused a brief period of drug-induced psychosis in patients with schizophrenia and comorbid tobacco use that was not observed in controls.50 Overall, this study suggests that the NAc is hypersensitive to dopamine release in patients with schizophrenia. Another study, conducted at the Canadian Institute for Mental Health in Montreal, took advantage of resting-state functional connectivity (rsFC) fMRI.53 In addition to reduced hippocampal activity and hypofrontality, these researchers observed decreased functional connectivity between the insula and anterior cingulate cortex (ACC, a subregion of the PFC).53 Similar studies have been conducted with schizophrenia and comorbid cannabis use disorder.42 Interestingly, patients with schizophrenia and comorbid cannabis use had reduced functional connectivity between the PFC and NAc, and this connectivity was restored either by cannabis consumption (smoked joint or THC capsule) or by the second-generation antipsychotic clozapine.42 Clozapine is rarely prescribed due to numerous side effects, but this result suggests that it has a pharmacological profile that can replace the use of addictive substances in schizophrenia. Isolating the receptor interactions that produce this effect is difficult because clozapine is highly unselective, activating many monoamine receptors, but doing so may prove valuable in the creation of new medications to reduce smoking in schizophrenia.
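Quantification in these radiotracer studies typically rests on the change in binding potential: endogenous dopamine released by the amphetamine challenge competes with the tracer for D2-type receptors, so a larger drop in binding implies more dopamine release. A minimal sketch of that arithmetic, with hypothetical binding potentials:

```python
# Percent change in D2 radiotracer binding potential (BP) after an
# amphetamine challenge; a larger drop implies more dopamine release,
# since endogenous dopamine displaces the tracer from D2-type receptors.
# The BP values below are hypothetical.

def percent_bp_change(bp_baseline: float, bp_challenge: float) -> float:
    return 100.0 * (bp_baseline - bp_challenge) / bp_baseline

print(f"{percent_bp_change(2.8, 2.2):.1f}% displacement")  # ~21.4%
```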
CONCLUSION: REACHING A COMPROMISE
It should be noted that these hypotheses are not mutually exclusive. It is quite possible that patients with schizophrenia are more sensitive to the rewarding effects of nicotine but also gain a cognitive benefit from the drug. It may be improper to treat these two theories as dialectical opposites, as is done in the majority of publications as well as in this review. Instead, it may be more informative to analyze the neurobiological underpinnings of self-medication and addiction vulnerability as a kind of Venn diagram, with some processes supporting both hypotheses; for example, it has been proposed that restoration of reward circuit activity (i.e., upregulation of NAc dopaminergic transmission and increased connectivity between the insula and ACC) may be implicated in both reward and cognitive improvement following nicotine consumption.42 A more holistic understanding of these comorbidities may arise from a whole-brain analysis of nicotine pharmacology in schizophrenia, rather than from a focus on isolated circuitry.
Figure 11: D2 receptor binding assessed by PET scan is diminished in addicted individuals without schizophrenia. It is theorized that a low concentration of D2-type receptors represents a genetic predisposition for addiction in the general population; however, drug-naïve patients with schizophrenia actually have higher concentrations of D2-type receptors (which contributes to psychosis). The mechanism of addiction in schizophrenia seems to differ from the general population – instead of a deficiency in the indirect pathway (as observed in individuals with decreased D2 receptor concentrations), patients with schizophrenia have a hypersensitive direct pathway facilitated by D1-type MSNs. The molecular and circuit mechanisms underlying direct pathway hypersensitivity remain poorly understood. Source: Wikimedia Commons
Although there is sufficient evidence in support of the self-medication hypothesis, adopting this theory does not improve clinical outcomes for patients living with schizophrenia and may worsen their overall quality of care. Calling smoking 'self-medication' understates the harmful effects of tobacco smoke on pulmonary and cardiovascular health and fails to address the underlying deficits with which these patients are struggling. Additionally, tobacco use interacts with antipsychotics and increases the probability of a psychotic episode. The advent of new antipsychotics that treat negative symptoms with fewer extrapyramidal side effects, as well as of selective nicotinic acetylcholine receptor agonists and antagonists, might improve smoking cessation treatments for these individuals.
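One well-documented route for this interaction is that tobacco smoke induces the liver enzyme CYP1A2, which metabolizes several antipsychotics (clozapine and olanzapine among them), so the same dose yields lower plasma levels in smokers. A hedged sketch of the steady-state arithmetic (the dose, clearance, and 1.5x induction factor below are hypothetical):

```python
# Steady-state drug level falls as clearance rises: C_ss = dosing rate / CL.
# Smoking induces CYP1A2, raising the clearance of CYP1A2-metabolized
# antipsychotics; all numbers below are hypothetical.

def c_ss(dosing_rate_mg_per_h: float, clearance_l_per_h: float) -> float:
    return dosing_rate_mg_per_h / clearance_l_per_h

baseline = c_ss(12.5, 25.0)        # 0.50 mg/L in a nonsmoker
smoking = c_ss(12.5, 25.0 * 1.5)   # ~0.33 mg/L: same dose, lower level

print(f"{100 * (1 - smoking / baseline):.0f}% lower steady-state level")
```

Lower antipsychotic levels, rather than any direct psychotogenic effect of nicotine, are thought to explain much of this interaction; it also means that abrupt smoking cessation can raise drug levels, which clinicians must anticipate.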
"Although there is sufficient evidence in support of the self-medication hypothesis, adopting this theory does not improve clinical outcomes for patients living with schizophrenia and may worsen their overall quality of care."
Finally, these results should help increase compassion for individuals suffering from schizophrenia and/or tobacco use disorder. These brain diseases are highly stigmatized, compounding social isolation in these groups. Overcoming the judgment that surrounds smoking and mental illness and instead engaging with a better understanding of the biological mechanisms involved in these disorders promotes a more compassionate approach to treatment. A recent study demonstrated that motivational interviewing in patients with schizophrenia and tobacco use disorder increases the percentage who contact a tobacco dependence treatment provider.54 Through simple encouragement and a positive attitude, it is possible to change people's minds
and help them tackle addiction.

References

[1] Khokhar, J. Y., Dwiel, L. L., Henricks, A. M., Doucette, W. T., & Green, A. I. (2018). The link between schizophrenia and substance use disorder: A unifying hypothesis. Schizophrenia Research, 194, 78-85.

[2] Šagud, M., Mihaljević-Peleš, A., Mück-Šeler, D., Pivac, N., Vuksan-Ćusa, B., Brataljenović, T., & Jakovljević, M. (2009). Smoking and schizophrenia. Psychiatria Danubina, 21(3), 371-375.

[3] Ziaaddini, H., Kheradmand, A., & Vahabi, M. (2009). Prevalence of cigarette smoking in schizophrenic patients compared to other hospital admitted psychiatric patients. Addiction & Health, 1(1), 38.

[4] de Leon, J., & Diaz, F. J. (2005). A meta-analysis of worldwide studies demonstrates an association between schizophrenia and tobacco smoking behaviors. Schizophrenia Research, 76(2-3), 135-157.

[5] Grant, B. F., Hasin, D. S., Chou, S. P., Stinson, F. S., & Dawson, D. A. (2004). Nicotine dependence and psychiatric disorders in the United States: Results from the national epidemiologic survey on alcohol and related conditions. Archives of General Psychiatry, 61(11), 1107-1115.

[6] Lucatch, A. M., Lowe, D. J. E., Clark, R., & George, T. P. (2018). Neurobiological determinants of tobacco smoking in schizophrenia. Frontiers in Psychiatry, 9, 672.

[7] Berg, S. A., Sentir, A. M., Cooley, B. S., Engleman, E. A., & Chambers, R. A. (2014). Nicotine is more addictive, not more cognitively therapeutic in a neurodevelopmental model of schizophrenia produced by neonatal ventral hippocampal lesions. Addiction Biology, 19(6), 1020-1031.

[8] Kumari, V., & Postma, P. (2005). Nicotine use in schizophrenia: the self-medication hypotheses. Neuroscience & Biobehavioral Reviews, 29(6), 1021-1034.
[9] Chambers, R. A., Krystal, J. H., & Self, D. W. (2001). A neurobiological basis for substance abuse comorbidity in schizophrenia. Biological Psychiatry, 50(2), 71-83.

[10] d'Incamps, B. L., Zorbaz, T., Dingova, D., Krejci, E., & Ascher, P. (2018). Stoichiometry of the heteromeric nicotinic receptors of the Renshaw cell. Journal of Neuroscience, 38(21), 4943-4956.

[11] Ortells, M. O., & Barrantes, G. E. (2010). Tobacco addiction: a biochemical model of nicotine dependence. Medical Hypotheses, 74(5), 884-894.

[12] Tapper, A. R., McKinney, S. L., Nashmi, R., Schwarz, J., Deshpande, P., Labarca, C., ... & Lester, H. A. (2004). Nicotine activation of α4* receptors: sufficient for reward, tolerance, and sensitization. Science, 306(5698), 1029-1032.

[13] Fenster, C. P., Whitworth, T. L., Sheffield, E. B., Quick, M. W., & Lester, R. A. (1999). Upregulation of surface α4β2 nicotinic receptors is initiated by receptor desensitization after chronic exposure to nicotine. Journal of Neuroscience, 19(12), 4804-4814.

[14] Koukouli, F., Rooy, M., Tziotis, D., Sailor, K. A., O'Neill, H. C., Levenga, J., ... & Stitzel, J. A. (2017). Nicotine reverses hypofrontality in animal models of addiction and schizophrenia. Nature Medicine, 23(3), 347.

[15] Jia, P., Wang, L., Meltzer, H. Y., & Zhao, Z. (2010). Common variants conferring risk of schizophrenia: a pathway analysis of GWAS data. Schizophrenia Research, 122(1-3), 38-42.

[16] Weinberger, D. R., Egan, M. F., Bertolino, A., Callicott, J. H., Mattay, V. S., Lipska, B. K., ... & Goldberg, T. E. (2001). Prefrontal neurons and the genetics of schizophrenia. Biological Psychiatry, 50(11), 825-844.

[17] Hellström-Lindahl, E., Mousavi, M., Zhang, X., Ravid, R., & Nordberg, A. (1999). Regional distribution of nicotinic receptor subunit mRNAs in human brain: comparison between Alzheimer and normal brain. Molecular Brain Research, 66(1-2), 94-103.

[18] Liu, X. A., & Kenny, P. J. (2017). α5 nicotinic receptors link smoking to schizophrenia. Nature Medicine, 23(3), 277.

[19] Howe, W. M., Brooks, J. L., Tierney, P. L., Pang, J., Rossi, A., Young, D., ... & Kozak, R. (2018). α5 nAChR modulation of the prefrontal cortex makes attention resilient. Brain Structure and Function, 223(2), 1035-1047.

[20] Hill, K., Mann, L., Laws, K. R., Stephenson, C. M. E., Nimmo-Smith, I., & McKenna, P. J. (2004). Hypofrontality in schizophrenia: a meta-analysis of functional imaging studies. Acta Psychiatrica Scandinavica, 110(4), 243-256.

[21] Marek, G. J., Behl, B., Bespalov, A. Y., Gross, G., Lee, Y., & Schoemaker, H. (2010). Glutamatergic (N-methyl-D-aspartate receptor) hypofrontality in schizophrenia: too little juice or a miswired brain? Molecular Pharmacology, 77(3), 317-326.

[22] Weinberger, D. R., Berman, K. F., & Zec, R. F. (1986). Physiologic dysfunction of dorsolateral prefrontal cortex in schizophrenia: I. Regional cerebral blood flow evidence. Archives of General Psychiatry, 43(2), 114-124.

[23] Hong, L. E., Yang, X., Wonodi, I., Hodgkinson, C. A., Goldman, D., Stine, O. C., ... & Thaker, G. K. (2011). A CHRNA5 allele related to nicotine addiction and schizophrenia. Genes, Brain and Behavior, 10(5), 530-535.

[24] Bluhm, R. L., Clark, C. R., McFarlane, A. C., Moores, K. A., Shaw, M. E., & Lanius, R. A. (2011). Default network connectivity during a working memory task. Human Brain Mapping, 32(7), 1029-1035.
[25] Harrison, B. J., Yücel, M., Pujol, J., & Pantelis, C. (2007). Task-induced deactivation of midline cortical regions in schizophrenia assessed with fMRI. Schizophrenia Research, 91(1-3), 82-86.

[26] Tregellas, J. R., Tanabe, J. L., Martin, L. F., & Freedman, R. (2005). fMRI of response to nicotine during a smooth pursuit eye movement task in schizophrenia. American Journal of Psychiatry, 162(2), 391-393.

[27] Valentine, G., & Sofuoglu, M. (2018). Cognitive effects of nicotine: recent progress. Current Neuropharmacology, 16(4), 403-414.

[28] Hamdan, H. F., & Zaini, S. (2018). Effects of nicotine on schizophrenia and antipsychotic medications: A systematic review. Malaysian Journal of Psychiatry, 27(1).

[29] Levin, E. D., & Rezvani, A. H. (2007). Nicotinic interactions with antipsychotic drugs, models of schizophrenia and impacts on cognitive function. Biochemical Pharmacology, 74(8), 1182-1191.
[30] Postma, P., Gray, J. A., Sharma, T., Geyer, M., Mehrotra, R., Das, M., ... & Kumari, V. (2006). A behavioural and functional neuroimaging investigation into the effects of nicotine on sensorimotor gating in healthy subjects and persons with schizophrenia. Psychopharmacology, 184(3-4), 589-599.

[31] Tregellas, J. R., Tanabe, J. L., Miller, D. E., Ross, R. G., Olincy, A., & Freedman, R. (2004). Neurobiology of smooth pursuit eye movement deficits in schizophrenia: an fMRI study. American Journal of Psychiatry, 161, 315-321.

[32] Hukkanen, J., Jacob III, P., Peng, M., Dempsey, D., & Benowitz, N. L. (2011). Effect of nicotine on cytochrome P450 1A2 activity. British Journal of Clinical Pharmacology, 72(5), 836-838.

[33] Fleischhacker, W. W., Meise, U., Günther, V., & Kurz, M. (1994). Compliance with antipsychotic drug treatment: influence of side effects. Acta Psychiatrica Scandinavica.
[34] Blair, D. T., & Dauner, A. (1992). Extrapyramidal symptoms are serious side-effects of antipsychotic and other drugs. The Nurse Practitioner, 17(11), 56-62.

[35] Boggs, D. L., Surti, T. S., Esterlis, I., Pittman, B., Cosgrove, K., Sewell, R. A., ... & D'Souza, D. C. (2018). Minimal effects of prolonged smoking abstinence or resumption on cognitive performance challenge the "self-medication" hypothesis in schizophrenia. Schizophrenia Research, 194, 62-69.

[36] Everitt, B. J., & Robbins, T. W. (2013). From the ventral to the dorsal striatum: devolving views of their roles in drug addiction. Neuroscience & Biobehavioral Reviews, 37(9), 1946-1954.

[37] Surmeier, D. J., Ding, J., Day, M., Wang, Z., & Shen, W. (2007). D1 and D2 dopamine-receptor modulation of striatal glutamatergic signaling in striatal medium spiny neurons. Trends in Neurosciences, 30(5), 228-235.

[38] Yin, H. H., & Knowlton, B. J. (2006). The role of the basal ganglia in habit formation. Nature Reviews Neuroscience, 7(6), 464.

[39] Yager, L. M., Garcia, A. F., Wunsch, A. M., & Ferguson, S. M. (2015). The ins and outs of the striatum: role in drug addiction. Neuroscience, 301, 529-541.

[40] Keeler, J. F., Pretsell, D. O., & Robbins, T. W. (2014). Functional implications of dopamine D1 vs. D2 receptors: A 'prepare and select' model of the striatal direct vs. indirect pathways. Neuroscience, 282, 156-175.

[41] Robinson, T. E., & Berridge, K. C. (1993). The neural basis of drug craving: an incentive-sensitization theory of addiction. Brain Research Reviews, 18(3), 247-291.

[42] Green, A. I., Zimmet, S. V., Straus, R. D., & Schildkraut, J. J. (1999). Clozapine for comorbid substance use disorder and schizophrenia: do patients with schizophrenia have a reward-deficiency syndrome that can be ameliorated by clozapine? Harvard Review of Psychiatry, 6(6), 287-296.

[43] Thompson, J. L., Urban, N., Slifstein, M., Xu, X., Kegeles, L. S., Girgis, R. R., ... & Abi-Dargham, A. (2013). Striatal dopamine release in schizophrenia comorbid with substance dependence. Molecular Psychiatry, 18(8), 909.

[44] Tseng, K. Y., Chambers, R. A., & Lipska, B. K. (2009). The neonatal ventral hippocampal lesion as a heuristic neurodevelopmental model of schizophrenia. Behavioural Brain Research, 204(2), 295-305.

[45] Brady, A. M. (2016). The neonatal ventral hippocampal lesion (NVHL) rodent model of schizophrenia. Current Protocols in Neuroscience, 77(1), 9-55.

[46] Berg, S. A., & Chambers, R. A. (2008). Accentuated behavioral sensitization to nicotine in the neonatal ventral hippocampal lesion model of schizophrenia. Neuropharmacology, 54(8), 1201-1207.

[47] Volkow, N. D., & Li, T. K. (2004). Drug addiction: the neurobiology of behaviour gone awry. Nature Reviews Neuroscience, 5(12), 963.

[48] Conroy, S. K., Rodd, Z., & Chambers, R. A. (2007). Ethanol sensitization in a neurodevelopmental lesion model of schizophrenia in rats. Pharmacology Biochemistry and Behavior, 86(2), 386-394.

[49] Chambers, R. A., & Taylor, J. R. (2004). Animal modeling dual diagnosis schizophrenia: sensitization to cocaine in rats with neonatal ventral hippocampal lesions. Biological Psychiatry, 56(5), 308-316.

[50] Laruelle, M., Abi-Dargham, A., van Dyck, C. H., Gil, R., D'Souza, C. D., Erdos, J., et al. (1996). Single photon emission computerized tomography imaging of amphetamine-induced dopamine release in drug-free schizophrenic subjects. Proceedings of the National Academy of Sciences USA, 93, 9235-9240.

[51] Breier, A., Su, T. P., Saunders, R., Carson, R. E., Kolachana, B. S., de Bartolomeis, A., et al. (1997). Schizophrenia is associated with elevated amphetamine-induced synaptic dopamine concentrations: evidence from a novel positron emission tomography method. Proceedings of the National Academy of Sciences USA, 94, 2569-2574.

[52] Potvin, S., Lungu, O., Lipp, O., Lalonde, P., Zaharieva, V., Stip, E., ... & Mendrek, A. (2016). Increased ventro-medial prefrontal activations in schizophrenia smokers during cigarette cravings. Schizophrenia Research, 173(1-2), 30-36.

[53] Prochaska, J. J., Hall, S. M., & Bero, L. A. (2007). Tobacco use among individuals with schizophrenia: what role has the tobacco industry played? Schizophrenia Bulletin, 34(3), 555-567.

[54] Steinberg, M. L., Ziedonis, D. M., Krejci, J. A., & Brandon, T. H. (2004). Motivational interviewing with personalized feedback: a brief intervention for motivating smokers with schizophrenia to seek treatment for tobacco dependence. Journal of Consulting and Clinical Psychology, 72(4), 723.

[55] Shin, H. W., & Chung, S. J. (2012). Drug-induced parkinsonism. Journal of Clinical Neurology, 8(1), 15-21.
Intermittent Fasting: Understanding the Mechanics Underlying Human Nutrition and Dieting BY: LOVE TSAI
INTRODUCTION
In the modern age, people are fascinated by – if not obsessed with – dieting. Economic development beginning in the 18th century brought increased stability and accessibility of food, eliminating the view of food as purely a means of sustenance. Gone was the concern about starvation, and diminishing was the worry about basic nutrition. Now, increased food security has given people in developed countries the ability to manipulate their diet to maximize their personally intended results. Whether the choice is gluten-free, vegan, or even pescatarian, the modern food landscape has given people the luxury to make more choices about how they eat. Not only have questions emerged in the 21st century about how humans should eat, but people have also started asking questions about when they should eat. Conventional eating in the United States has most Americans (around 75% in the 1970s) eating three meals a day: breakfast, lunch, and dinner.24 Such a traditional eating pattern is largely seen around the world and has been in place since the Industrial Revolution.1 However, a growing trend called "intermittent fasting" goes directly against the universal belief in frequent and regular meals. While there are many variations
of this diet, all follow the basic premise of abstaining from food for an extended period of time.2 The increasing popularity of this diet, and the many others that have popped up since the Industrial Revolution, warrants an investigation into possible benefits and detriments. Excitement about dieting also exposes the widespread lack of understanding of the physiological effects that lies behind many choices to alter eating habits. Neglecting to investigate the empirical evidence concerning dieting is common, whether because of previously held beliefs or the idea that the physical processes and biological basis of health aren't relevant to everyday life. This article challenges that phenomenon to promote a more vigilant study of popular diets; focusing on diets that vary the timing of meals, this literature review discusses whether a traditional approach involving three meals a day or intermittent fasting with its skipped meals is more beneficial for the human body.
Cover Source: Wikimedia Commons
"The increasing popularity of this diet and many other that have popped up since the Industrial Revolution, warrants an investigation into possible benefits and detriments."
THE BIOLOGY OF METABOLISM AND REGULATING THE BLOODSTREAM
Figure A: A diagram showcasing metabolic processes within the cell; note especially how catabolism takes in nutrients and breaks them down into smaller biological intermediates. Anabolism uses ATP and these intermediates to create other structures. Source: Linares-Pastén, J. A. (2018): A simplified view of the cellular metabolism. figshare. Figure. Retrieved from https://doi.org/10.6084/m9.figshare.7138037.v1
"The traditional practice of eating three regular meals per day is based on the homeostatic goal of keeping metabolic reates and blood sugar levels stable."
Discourse regarding when people should eat their meals is first and foremost a discussion of the effects of diet on blood sugar and metabolism. Intermittent fasting (IF) is not concerned with regulating the number of calories consumed or the type of food the calories come from; rather, it proposes that restricting eating windows to affect one's metabolism is the action that will bring about the most health benefits.2 What is metabolism, then? Both modern and traditional diets have attempted to regulate it before, evident in the rising popularity of IF coupled with the age-old mantras "don't skip breakfast" and even "eat breakfast like a king, lunch like a prince, and dinner like a pauper," a phrase coined by nutritionist Adelle Davis in the 1960s.3 In living organisms, metabolism involves the chemical reactions used for maintenance and growth of the body.4 There is not simply one metabolic pathway or event, but rather hundreds of enzyme-mediated reactions that fall into two categories: catabolic and anabolic [Figure A]. Catabolism is the type of metabolism relating to nutrition and dieting, as it contains the pathways that break down food and convert it into energy, whereas anabolism involves the construction of macromolecules from smaller units (e.g., the catabolic products of food digestion) and the utilization of energy.25 There are three phases of catabolism: digestion, incomplete oxidation, and complete oxidation via the Krebs cycle.25 The first stage, digestion, breaks down macromolecules (proteins, lipids, carbohydrates) into smaller constituent parts, as a protein would be broken into its 20 amino acid building blocks. The second stage is incomplete oxidation, and the third stage is complete oxidation via the Krebs cycle, proceeding from oxaloacetate and acetyl coenzyme A, which produces the most energy.25 As a subset of metabolism, catabolism is influenced by blood sugar, the amount of glucose produced by digestion and present in the bloodstream at any given time. Glucose is also responsible for providing the energy to drive many physiological processes, such as brain function.4, 5
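For a sense of scale (a standard textbook approximation, not a figure from the sources cited here), the complete oxidation of one glucose molecule can be summarized as

C6H12O6 + 6 O2 → 6 CO2 + 6 H2O

with a net yield of roughly 30-32 molecules of ATP, most of it generated downstream of the Krebs cycle by oxidative phosphorylation.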
The traditional practice of eating three regular meals per day is based on the homeostatic goal of keeping metabolic rates and blood sugar levels stable.6 By feeding the body at regular intervals, metabolism is less likely to stagnate and blood sugar won't shift sporadically between meals. A stagnant metabolism causes lethargy and a lower rate of calorie consumption, while unstable blood sugar levels can cause unfavorable energy highs and crashes, as well as an increased appetite once the insulin spike dips after an initial high.6 Other methods of eating, such as diets advocating six small meals per day, share this underlying principle of helping the body maintain homeostasis. IF counters the idea of strict homeostasis by instead proposing that movement between degrees of satiation and starvation is more beneficial for the body.7 Going into starvation mode drives the body to draw on its fat reserves, using fatty acid-derived ketones (rather than the usual glucose) for energy, mobilizing fat stores and preserving muscle mass. In addition, the small amount of stress from fasting has been shown to be useful, providing the trigger necessary to catalyze many of the body's restorative pathways.26
THE EVIDENCE FOR INTERMITTENT FASTING
A study done by the German Center for Diabetes Research (DZD) put overweight mice genetically predisposed to diabetes into either a control or an experimental group to test the efficacy and side effects of IF. While the experimental group followed an intermittent fasting regime of unlimited food one day and no food the next, the control group had unlimited food every day throughout the course of the experiment. At the end, data showed that the IF mice had "better glucose homeostasis and lower fat accumulation in both the pancreas (-32%) and the liver (-35%)" than the control group.8 Preliminary research showed that fat cells in the pancreas are a factor in the development of diabetes, impairing the release of insulin by pancreatic beta cells in the islets of Langerhans [Figure B]. Researcher Annette Schürmann stated that fat cells initially caused beta cells to secrete more insulin, and this "increased secretion of insulin causes the Langerhans islets of diabetes-prone animals to deplete more quickly and, after some time,
Figure B: (Left) A slide of the human pancreas. The distinct pancreatic islets shown are the islets of Langerhans, which contain endocrine cells and play an important role in the regulation of glucose via insulin and the activation of beta cells. Source: Wikimedia Commons
to cease functioning completely."27 The lower pancreatic fat accumulation and improved glucose regulation support the idea that the diet might help prevent the onset of Type II diabetes. In fact, case studies of three patients with Type II diabetes in Toronto revealed that following various IF regimes allowed all three to stop using insulin within a month of going on the diet.9 Other studies have further demonstrated IF's benefits by showing how intermittent fasting can help bring obese patients back down to a healthy weight. In a study published in Obesity that followed 88 women for ten weeks, those who fasted intermittently lost more weight and had a greater decrease in markers for heart disease and cholesterol than the other groups, which included controls (no alteration in diet) and calorie-restriction groups that cut daily food intake by 30%.10 All food was moderated by researchers, so the type of food was normalized across all groups. These results are especially promising given that obesity plagues 39.8% of the American public and may lead to complications such as cardiovascular disease, diabetes, stroke, cancer, and premature death – finding the most beneficial diet for weight loss and such disorders is of great interest to researchers and health practitioners [Figure C].11 In addition to health concerns, obesity is also a detriment to the United States in other ways, driving up health insurance and taxpayer expenditure on obesity measures, decreasing productivity (resulting in lost revenue), and even weakening national security.28, 29, 30 Intermittent fasting has also shown direct benefit for some diseases, such as heart disease and various types of cancer. A study in the British Journal of Nutrition recently found that fasting-based diets promote fat clearance from the bloodstream quicker than traditional diets; an excess of such fat is a frequent precursor to heart disease. The team also found that fasting lowered systolic blood pressure in
participants.12 Another study, done by the UT Southwestern Medical Center, found that fasting-based diets kill cancer cells and inhibit their growth in the most common form of childhood leukemia.13 Interestingly, intermittent fasting works to combat different diseases in different ways, not just one – for example, IF acts on acute lymphoblastic leukemia via leptin-receptor upregulation, whereas cardiovascular disease is affected by the rate at which triglycerides are cleared from the bloodstream.12, 13 Lastly, while many studies have focused on how intermittent fasting allows people to get to a healthier state, there is also evidence it can result in a higher quality of life for already healthy people. A study conducted at the University of Florida found that intermittent fasting resulted in higher levels of SIRT3, a gene which can extend lifespan when upregulated in mice and is known to "promote longevity and is involved in protective cell responses."14, 31 SIRT3 is a sirtuin located in the mitochondria that regulates various metabolic processes, most notably as a tumor suppressor.31 In this case, the slight oxidative stress of fasting is thought to be beneficial by triggering the production and activity of these genes.14 Oxidative stress occurs when free radicals in the body are inadequately neutralized by antioxidants and accumulate to inflict the physiological damage associated with numerous diseases and premature aging [Figure D]. Stress-response hormesis postulates that small amounts of such stress can help the body repair itself, an idea that is key to understanding arguments for how the stress of moving between satiation and starvation may be beneficial.32 "The hypothesis is that if the body is intermittently exposed to low levels of oxidative stress, it can build a better response to it," said Wegman, a contributing scientist at the University of Florida.14
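For clarity, percentage figures like the -32% and -35% reported in the mouse study above are relative differences from the control group. The arithmetic, as a quick sketch (the group means below are hypothetical, chosen only to reproduce a -32% difference):

```python
# Relative difference between treatment and control group means.
# The means are hypothetical; only the formula reflects how figures
# like "-32% pancreatic fat" are computed.

def relative_difference(treated_mean: float, control_mean: float) -> float:
    return 100.0 * (treated_mean - control_mean) / control_mean

print(f"{relative_difference(3.4, 5.0):.0f}%")  # -32% vs. control
```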
Figure C: (Right) Generated with data from the CDC, this map shows obesity rates across the United States in 2014; in the majority of U.S. states, 20-25% of the population has a BMI over 30. Source: Wikimedia Commons
"Intermittent fasting has also shown direct benefit for some diseases, such as heart disease and various other types of cancer."
THE EVIDENCE FOR TRADITIONAL MEALS
While there is an abundance of evidence in support of intermittent fasting, possible detriments have also been thoroughly investigated and point in favor of a more
Figure E: The dopamine molecule. Source: Wikimedia Commons
"In fact, there is a biological basis for [the traditional diet] - eating interfaces with the brain's reward circuit by providing a release of dopamine, a neurotransmitter involved in pleasure."
Figure D: Free radicals are unstable, highly reactive molecules with an unpaired electron; antioxidants are molecules capable of donating an electron to free radicals, thus stabilizing them. Source: Linares-Pastén, J. A. (2018): A simplified view of the cellular metabolism. figshare. Figure. Retrieved from https://doi.org/10.6084/m9.figshare.7138037.v1
regular eating schedule. Oftentimes the traditional diet is preferred since fasting is difficult to sustain, resulting in a low diet retention rate, and since skipping meals often leads to overeating and food cravings later on. In fact, there is a biological basis for this – eating interfaces with the brain's reward circuit by providing a release of dopamine, a neurotransmitter involved in pleasure.19 A study published in Nutrition investigated this phenomenon by conducting an experiment with 20 women; split into two groups, the control group had no snack between meals whereas the experimental group was fed between meals. Researchers evaluated dopamine levels by measuring homovanillic acid (a dopamine derivative) and found that eating prompts an increase in the levels of this molecule within the group fed between meals [Figures E, F, H].19 The increase of dopamine upon eating is associated with feelings of pleasure; losing this during the day due to fasting can cause overindulgence at the next meal due to the resulting chemical imbalance in the brain. Additionally, larger, more frequent meals per day resulted in a decreased desire to snack and fewer thoughts about food.22 Both findings challenge the efficacy of the IF diet as well as of other popular diets that propose small intermittent meals through the course of the day. Furthermore, a study conducted by the University of Eastern Finland evaluated the effect of the number of meals per day on obesity by following more than 4,000 Finnish teenagers from birth to the age of 16. Researchers found a correlation between eating frequent meals (five a day versus four, with or without breakfast) and a decreased risk of obesity, and even found that skipping breakfast (a large part of the intermittent fasting diet) was correlated with higher BMIs and greater waist circumference [Figure G]. Since the study separated participants by both early- and later-life factors, such as weight status and health behavior, the differences between the two groups were corrected to reflect the idea that it wasn't environmental factors affecting the participants' weight gain but the spacing and
frequency of the meals.18 The study concluded that "among 16-year-olds, the five-meal-a-day pattern was robustly associated with reduced risks of overweight/obesity in both genders and abdominal obesity in boys."19 Other findings support the idea that having multiple meals a day is beneficial in providing nutrition in ways the body can most efficiently accommodate. One longitudinal study, conducted at the Research Institute of the McGill University Health Centre, focused on the physical strength and muscle mass of elderly populations in Canada. It found that eating a balanced meal three times a day, which supplies protein at regular intervals spread out during waking hours, made seniors stronger and less prone to losing their muscle mass.17 Daily protein consumption was held constant between the two groups in this study; the researchers only manipulated how and when the participants ate their share. Subjects were evaluated on handgrip strength, leg strength, body composition, quadriceps strength, and ease of mobility. Though eating protein in all three meals supported seniors' muscle mass and strength, researchers did note that this eating pattern had no effect on mobility. Many experiments evaluate the effects of skipping breakfast, a key element in the practice of IF. Skipping breakfast maintains a longer fasting period because it combines the hours asleep with an extra few hours in the morning before lunch.2 One study following almost 27,000 males for 16 years sent out questionnaires every two years and found that participants who skipped breakfast were 33% more likely to have coronary heart disease and had an overall 27% higher risk of death from a heart attack or coronary heart disease.23 The researchers reported that skipping breakfast corresponded with other risk factors that are components of bad heart health, such as obesity, high blood pressure, high cholesterol, and diabetes. Another study upholds this finding, revealing that participants who skipped breakfast had higher instances of hardened arteries resulting from atherosclerosis (the buildup of fat and cholesterol in the arteries) [Figure I].20
Figure F: Homovanillic acid (HVA), a derivative of dopamine. Source: Wikimedia Commons
Figure H: A diagram showcasing the degradation of dopamine; two pathways (via DOPAC and 3-MT) lead to HVA. Source: Wikimedia Commons
There is also further evidence to suggest that eating breakfast daily during childhood is associated with a higher verbal IQ. Researchers in China, in collaboration with the University of Pennsylvania's nursing school, conducted a case study of 1,656 children in Jintan, China, following them through childhood to evaluate various health and mental characteristics such as IQ, lead exposure, and mineral consumption. The study found that students who didn't eat breakfast regularly had an average total IQ score 4.6 points lower than students who did, after normalizing for socioeconomic background.21 One notable limitation was that the research was purely observational; as a result, researchers were unable to investigate the exact mechanism behind the correlation, such as whether the impact of breakfast was short-term, optimizing performance on exams, or whether it led to palpable long-term brain development.
POTENTIAL CONFOUNDS AND DISADVANTAGES OF NUTRITIONAL STUDIES
Within the field of health and science, studies such as those presented through the course of this article provide just cause for changes to optimize healthcare. However, an important consideration with the use of such studies is the possibility of error, experimental flaws, and unrecognized confounding variables. As with any study, experimental
or survey-based, there can be complications that affect the end results. Especially with survey-based studies – which supply much of the data on dietary changes – there is a crucial distinction between correlation and causation. Simply because one symptom appears immediately after a change does not mean that the change directly caused the symptom. An example of this phenomenon is seen in the study that found skipping breakfast increases rates of coronary heart disease by 33%: it also noted that the men who typically skipped breakfast were, on average, younger, more likely to be smokers, less physically active, unmarried, and drank more alcohol than the men who ate breakfast regularly.23 In addition, all participants were health professionals, and variables such as the lifestyle and stress that accompany the profession mean that the results don't easily transfer to the rest of the population. These lifestyle differences are enormous confounding variables and may well influence the results, since separating the end result of greater coronary artery disease from either diet or a stressful lifestyle is nearly impossible without the creation of additional control groups. Another example is seen in the case study concluding that intermittent fasting can help reverse Type II diabetes. It followed only three subjects – all males.9 The results don't necessarily apply to everyone and require additional evidence before they may conclusively advocate for IF. In addition, most of the nutritional studies evaluated in this
"However, an important consideration with the use of such studies is the possibility of error, experimental flaws, and the recognition of confounding variables."
Figure G: Obesity can be measured with waist circumference, and having more fat around the waist can be more dangerous than accumulating fat in other parts of the body, which is why many studies targeting weight loss use this measurement in their data collection. This figure shows a healthy, overweight, and obese man with varying waist sizes. Source: Wikimedia Commons
Figure I: Atherosclerosis is a condition in which blood flow through the arteries is impeded by plaque buildup. Source: Wikimedia Commons
"People should take stock of what works for them in maintaining a healthy lifestyle, remembering that, perhaps contrary to popular belief, there is no one "right" way to eat and the science behind nutrotion and its connection to biology is still being discovered."
the results don’t easily transfer to the rest of the population. The differences in lifestyle are enormous confounding variables to the results and may well influence results, since separating the end result of greater coronary artery disease from either diet or stressful lifestyle is nearly impossible without the creation of additional control groups.
literature review only looked at the cases in which people could meet the standards that the researchers were looking for. In one case advocating for fasting diets to reduce risk factors for cardiovascular disease, the people who weren't able to meet a 5% weight-loss goal or sustain the diet itself were removed from the study, so only "successful" people were part of the final cohort, with the control group having a success rate of 0.625 and the experimental group a rate of 0.706.12 These types of adjustments and manipulations make it so that the true effectiveness of the diet is unclear.
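To see why such exclusions matter, here is a sketch of the completer-only bookkeeping (the success rates 0.625 and 0.706 come from the text; the enrollment numbers are hypothetical):

```python
# Completer-only analyses drop participants who could not sustain the
# diet before outcomes are compared. Success rates are from the text;
# the enrollment numbers are hypothetical.

def completers(n_enrolled: int, success_rate: float) -> int:
    return round(n_enrolled * success_rate)

n_control, n_fasting = 50, 50
kept_control = completers(n_control, 0.625)  # 31 of 50 analyzed
kept_fasting = completers(n_fasting, 0.706)  # 35 of 50 analyzed

dropped = (n_control - kept_control) + (n_fasting - kept_fasting)
print(f"{dropped} of {n_control + n_fasting} enrollees never reach the final")
print("comparison, so the reported effect describes only 'successful' dieters.")
```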
CONCLUSION
In the end, the unsatisfactory conclusion is that there is simply not enough evidence to completely disregard one method of eating in favor of another. Both diets have biological backing and empirical evidence to suggest numerous benefits – some of which the two diets even share, such as a decreased risk of cardiovascular disease or improved weight loss.7, 10, 12, 19, 20, 23 This inconclusiveness is indicative of how little is known about nutrition – there are more factors to consider than are at first apparent, so blindly accepting or following any method of eating is inefficient and misguided. People should take stock of what works for them in maintaining a healthy lifestyle, remembering that, perhaps contrary to popular belief, there is no one "right" way to eat and the science behind nutrition and its connection to biology is still being discovered. So, the next time a news article claims to have discovered the secret to a slimmer waistline or a revolutionary new diet, take it with a grain of salt.

References

1. Murcott, A. (2013). Models of Food and Eating in the United Kingdom. Gastronomica: The Journal of Food and Culture, 13(3), 32–41. doi: 10.1525/gfc.2013.13.3.32
2. Tinsley, G. M., & Bounty, P. M. L. (2015). Effects of intermittent fasting on body composition and clinical health markers in humans. Nutrition Reviews, 73(10), 661–674. doi: 10.1093/nutrit/nuv041

3. Spence, C. (2017). Breakfast: The most important meal of the day? International Journal of Gastronomy and Food Science, 8, 1–6. doi: 10.1016/j.ijgfs.2017.01.003

4. Kornberg, H. (2019, May 30). Metabolism. Retrieved from https://www.britannica.com/science/metabolism/.

5. Wasserman, D. H. (2009). Four grams of glucose. American Journal of Physiology-Endocrinology and Metabolism, 296(1). doi: 10.1152/ajpendo.90563.2008

6. Sifferlin, A. (2016, July 19). When To Eat Breakfast, Lunch and Dinner. Retrieved from https://time.com/4408772/best-times-breakfast-lunch-dinner/.

7. Anton, S. D., Moehl, K., Donahoo, W. T., Marosi, K., Lee, S. A., Mainous, A. G., … Mattson, M. P. (2017). Flipping the Metabolic Switch: Understanding and Applying the Health Benefits of Fasting. Obesity, 26(2), 254–268. doi: 10.1002/oby.22065

8. Quiclet, C., Dittberner, N., Gässler, A., Stadion, M., Gerst, F., Helms, A., Baumeier, C., Schulz, T. J., & Schürmann, A. (2019). Pancreatic adipocytes mediate hypersecretion of insulin in diabetes-susceptible mice. Metabolism, 97, 9. doi: 10.1016/j.metabol.2019.05.005

9. Furmli, S., Elmasry, R., Ramos, M., & Fung, J. (2018). Therapeutic use of intermittent fasting for people with type 2 diabetes as an alternative to insulin. BMJ Case Reports, bcr-2017-221854. doi: 10.1136/bcr-2017-221854

10. Hutchison, A. T., Liu, B., Wood, R. E., Vincent, A. D., Thompson, C. H., O'Callaghan, N. J., Wittert, G. A., & Heilbronn, L. K. (2019). Effects of Intermittent Versus Continuous Energy Intakes on Insulin Sensitivity and Metabolic Risk in Women with Overweight. Obesity, 27(1), 50. doi: 10.1002/oby.22345

11. Adult Obesity Facts. (2018, August 13). Retrieved from https://www.cdc.gov/obesity/data/adult.html.

12. Antoni, R., Johnston, K., Collins, A., & Robertson, M. (2018). Intermittent v. continuous energy restriction: Differential effects on postprandial glucose and lipid metabolism following matched weight loss in overweight/obese participants. British Journal of Nutrition, 119(5), 507-516. doi: 10.1017/S0007114517003890

13. Lu, Z., et al. (2016). Fasting selectively blocks development of acute lymphoblastic leukemia via leptin-receptor upregulation. Nature Medicine. doi: 10.1038/nm.4252

14. Wegman, M. P., Guo, M., Bennion, D. M., Shankar, M. N., Chrzanowski, S. M., Goldberg, L. A., Xu, J., Williams, T. A., Lu, X., Hsu, S. I., Anton, S. D., Leeuwenburgh, C., & Brantly, M. L. (2014). Practicality of Intermittent Fasting in Humans and its Effect on Oxidative Stress and Genes Related to Aging and Metabolism. Rejuvenation Research. doi: 10.1089/rej.2014.1624

15. Intermittent, low-carbohydrate diets more successful than standard dieting, study finds. (2011, December 8). Retrieved from https://www.sciencedaily.com/releases/2011/12/111208184651.htm.

16. American Association for Cancer Research. (2011, December 8). Intermittent, low-carbohydrate diets more successful than standard dieting, study finds. ScienceDaily. Retrieved from www.sciencedaily.com/releases/2011/12/111208184651.htm.

17. Choquette, S., Bouchard, D. R., Doyon, C. Y., Sénéchal, M., Brochu, M., & Dionne, I. J. (2010). Relative strength as a determinant of mobility in elders 67–84 years of age. A NuAge study: Nutrition as a determinant of successful aging. The Journal of Nutrition, Health & Aging, 14(3), 190–195. doi: 10.1007/s12603-010-0047-4

18. Jääskeläinen, A., Schwab, U., Kolehmainen, M., Pirkola, J., Järvelin, M.-R., & Laitinen, J. (2012). Associations of meal frequency and breakfast with obesity and metabolic syndrome traits in adolescents of Northern Finland Birth Cohort 1986. Nutrition, Metabolism and Cardiovascular Diseases. doi: 10.1016/j.numecd.2012.07.006

19. Leidy, H. J., Tang, M., Armstrong, C. L. H., Martin, C. B., & Campbell, W. W. (2010). The Effects of Consuming Frequent, Higher Protein Meals on Appetite and Satiety During Weight Loss in Overweight/Obese Men. Obesity, 19(4), 818. doi: 10.1038/oby.2010.203

20. Uzhova, I., Fuster, V., Fernández-Ortiz, A., Ordovás, J. M., Sanz, J., Fernández-Friera, L., López-Melgar, B., Mendiguren, J. M., Ibáñez, B., Bueno, H., & Peñalvo, J. L. (2017). The Importance of Breakfast in Atherosclerosis Disease. Journal of the American College of Cardiology, 70(15), 1833. doi: 10.1016/j.jacc.2017.08.027

21. Liu, J., Mccauley, L. A., Zhao, Y., Zhang, H., & Pinto-Martin, J. (2009). Cohort Profile: The China Jintan Child Cohort Study. International Journal of Epidemiology, 39(3), 668–674. doi: 10.1093/ije/dyp205

22. Ortinau, L. C., Hoertel, H. A., Douglas, S. M., & Leidy, H. J. (2014). Effects of high-protein vs. high-fat snacks on appetite control, satiety, and eating initiation in healthy women. Nutrition Journal, 13(1), 97. doi: 10.1186/1475-2891-13-97

23. Cahill, L. E., Chiuve, S. E., Mekary, R. A., Jensen, M. K., Flint, A. J., Hu, F. B., & Rimm, E. B. (2013). Prospective Study of Breakfast Eating and Incident Coronary Heart Disease in a Cohort of Male US Health Professionals. Circulation, 128(4), 337. doi: 10.1161/CIRCULATIONAHA.113.001474

24. Kant, A. K., & Graubard, B. I. (2015). 40-Year Trends in Meal and Snack Eating Behaviors of American Adults. Journal of the Academy of Nutrition and Dietetics, 115(1), 50–63. doi: 10.1016/j.jand.2014.06.354

25. Urry, L. A., Cain, M. L., Wasserman, S. A., Minorsky, P. V., Orr, R. B., & Campbell, N. A. (2011). Campbell Biology. New York, NY: Pearson.

26. Wu, S. (2018, February 6). Fasting triggers stem cell regeneration of damaged, old immune system. Retrieved from https://news.usc.edu/63669/fasting-triggers-stem-cell-regeneration-of-damaged-old-immune-system/.

27. Promising approach: Prevent diabetes with intermittent fasting. (2019, July 2). Retrieved from https://www.sciencedaily.com/releases/2019/07/190702152749.htm.

28. Dall, T. M., Zhang, Y., Chen, Y. J., et al. (2007). Cost associated with being overweight and with obesity, high alcohol consumption, and tobacco use within the military health system's TRICARE prime-enrolled population. American Journal of Health Promotion, 22(2), 120–139.

29. Hammond, R. A., & Levine, R. (2010). The economic impact of obesity in the United States. Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy, 3, 285–295. doi: 10.2147/DMSOTT.S7384

30. Biener, A., Cawley, J., & Meyerhoefer, C. (2017). The High and Rising Costs of Obesity to the US Health Care System. Journal of General Internal Medicine, 32(Suppl 1), 6–8. doi: 10.1007/s11606-016-3968-8

31. Li, S., Banck, M., Mujtaba, S., Zhou, M. M., Sugrue, M. M., & Walsh, M. J. (2010). p53-induced growth arrest is regulated by the mitochondrial SirT3 deacetylase. PLoS ONE, 5(5), e10486. doi: 10.1371/journal.pone.0010486

32. Gems, D., & Partridge, L. (2008). Stress-Response Hormesis and Aging: "That which Does Not Kill Us Makes Us Stronger." Cell Metabolism, 7(3), 200–203. doi: 10.1016/j.cmet.2008.01.001
Reevaluating the Sex Binary’s Role in Medicine BY MAANASI SHYNO Cover Source: Wikimedia Commons
"While these have been strides in the right direction, modern resesarch suggests that sex, like gender, exists on a spectrum rather than on a binary."
INTRODUCTION
The sex binary, the idea that there are only two sexes, has pervaded social life for centuries. Academics in the last two decades have worked tirelessly to clarify the definition of sex as a biological, phenotypical presentation of a body, while describing gender identity as the cultural role an individual identifies with internally and may express externally in a social context. While these have been strides in the right direction, modern research suggests that sex, like gender, exists on a spectrum rather than on a binary. If this research is generally accepted, there would be reverberating implications throughout modern healthcare, which operates on the binary model of sex. Personalized medicine operating on a sex spectrum has the potential to better acknowledge the complex nature of sex. This is especially important for members of the transsexual community, whose experiences do not always align with those of either males or females. Defining sex on a spectrum may also incite change on the primary care level, so that everyday clinical practices would align with the patient's gender identity rather than the patient's biologically assigned sex. This would be to the benefit of trans individuals, who often feel ostracized and unsafe seeking care in the current medical context. In addition
to this, it is possible that cisgender people will benefit from the discontinuance of the sex binary as an imprecise determinant of female and male health.
IS THE SEX BINARY REAL?

Current Justification for the Sex Binary
Despite the potential benefits associated with terminating the sex binary, societal skepticism is strong. Many do not believe that sex can exist on a spectrum, much less are they in favor of terminating the sex binary model in medicine. Proponents of the sex binary, such as MIT professor of philosophy Alex Byrne, argue that biological evidence of primary and secondary sex characteristics proves that evolution created two sexes. While acknowledging that gender exists on a spectrum, Byrne believes that activists incorrectly state that sex exists on a spectrum "to support the idea that being transgender is not a mental health condition, but instead is merely a 'normal biological variation.'"1 He asserts that while there are infants born with disorders of sex development (DSDs) where it is difficult to determine sex, there have been no clearly recorded examples of an infant who could not be placed into the binary.1 Because
so few infants, roughly 0.37%, are born with recognizable DSDs (i.e. atypical genitals), those who support the sex binary contend there is no need to redefine sex to accommodate them.1 Furthermore, they assert that this would be disadvantageous to healthcare professionals, as they are better informed when knowing a patient's biological sex. Under these premises, any person identifying as transgender, nonbinary, genderfluid, or genderqueer can identify with their preferred gender but would still be considered to have a medical identity – the sex they were assigned at birth. These arguments are true to their own logic, but trans activists, researchers, and others opposed to the binary insist that sex is just as constructed as gender. As argued by philosopher and gender theorist Judith Butler, the idea that sex is biological "suggests a radical discontinuity between sexed bodies and culturally constructed genders"2. In other words, a sex binary indicates that physical attributes are sexed, categorized into the binary, while gender may be societally constructed through clothing and demeanor. On this note, it is the physical features, not one's outward appearance, that determine sex. 'Male' and 'female' have been constructed to describe the possession of physical features like vaginas and penises. Butler contends that in doing so we have created a circular definition in which possessing a penis makes one male and being male means possessing a penis. When considering a person with both breasts, determined to be a female attribute, and a penis, the constructed definitions of 'male' and 'female' become blurred. As such, it is evident that sex can be considered just as constructed as gender and is only as "biological" as we have chosen to define it. Thus, if sex need not rely on
physical attributes, sex no longer has a distinct purpose from gender; sex does not depend on possessing certain categorizing attributes but on the person themselves. From a theoretical standpoint, there is no reason the sex binary should exist other than to pay homage to potentially outdated traditions. It is important to question our current definition of sex as binary if we are to make a better effort to include people with DSDs. Although people with DSDs are a small fraction of the population, their needs deserve equal recognition and attention.2
Biology Itself Deconstructs the Sex Binary
To unequivocally refute objections brought on by proponents of the sex binary, it is important to question whether the body shows deeper signs of a sex binary not apparent at first glance. It is, after all, possible that the constructed definition of sex is backed by internal aspects of human biology that go beyond visible sexual characteristics. However, research does not support this idea, showing that biology itself deconstructs the sex binary on multiple levels. Chromosomal makeup is often considered the most fundamental way to prove the sex binary. Although variations in sex chromosomes such as XXY, XYY, and XXX exist, they are far less prevalent than XY and XX, and so are often viewed as exceptions. As such, it seems as though the occurrence of phenotypically female and male features is a result of the presence of the XX and XY chromosomes, respectively. However, the relationship between sex chromosomes and phenotypically binary features is far more complex than has been previously suggested. Recent studies suggest that the presence of XX
"To unequivocally refute objections brought on by proponents of the sex binary, it is important to question whether the body shows deeper signs of a sex binary not apparent at first glance."
Figure 1: DSDs that involve atypical genitals do not necessarily entail ambiguous genitals; for example, atypical genitals may take the form of an elongated clitoris or labia. Source: Wikipedia
Figure 2: Depicted above is an XYY syndrome karyotype (47,XYY). This is a very common DSD in which the individual is intersex, possessing an X and two Y chromosomes. This type of DSD may not be known until much later in life if it is not coupled with atypical gonadal or anatomical development.
Source: Wikipedia
"Hormones are another commonly misunderstood biological factor associated with sex, but there are no "male" and "female" hormones."
chromosomes and female phenotypic features in a given body are simply highly correlated rather than evidence of a direct connection. It has been established that the SRY gene on the Y chromosome is responsible for initiating male sex determination, yet there is no gene on the X chromosome that specifically codes for the same in females, suggesting that sex determination in both males and females is more complicated than possessing certain chromosomes3. Furthermore, it was found that 6.9 percent of prostate-specific genes are X-linked, compared to 2.7 percent of genes expressed in ovary or mammary tissue4. While the difference is not statistically significant, it is important to recognize that most genes responsible for phenotypically female organs are autosomal and that there are more prostate-specific genes on the X chromosome than ovary or mammary tissue-specific genes4. Simply put, opposing networks of gene activity, rather than simply sex-linked genes, influence the formation of phenotypic features, and the connection between the two is not as diametrically binary as originally thought. Hormones are another commonly misunderstood biological factor associated with sex, but there are no 'male' and 'female' hormones. All bodies produce estrogens and androgens, since ovaries, testes, and adrenal glands all secrete these hormones5. In addition, steroid levels for estradiol and progesterone – associated with menopause and ovulation, respectively – are not sexually dimorphic and exist at the same levels in men and women on average5. It is true that the levels of hormones like estradiol and progesterone change significantly during reproductive phases such as pregnancy or ovulation, and some may point to this as evidence of a sex binary since only females experience this (with the important exception of people with DSDs and non-females possessing ovaries)5. On the contrary, estradiol and progesterone levels in pregnant women differ greatly from those of nonpregnant women, whose levels are much closer to those of men.5
Furthermore, hormone levels change widely throughout the human lifespan, such that there is no distinction between prepubescent boys and girls except during one interval in the first year of life5. Even in adolescence, when testosterone levels increase or fluctuate at a higher average rate for boys, the distributions for men and women show much more overlap than generally understood. Research suggests even gonadal hormones do not remain static throughout life and are affected by social modulation5. Progesterone, for instance, is increased by social closeness. Studies suggest that testosterone, generally thought to be a significant differentiator between men and women on a biological level, is influenced considerably by nongenetic factors like social context and is much more dynamic than previously understood. On a hormonal level, sex is much more flexible than it is dimorphic.5 Recent research may suggest that sex is better understood biologically on a spectrum than a binary, but how much of this is relevant to medical practices? Proponents of the sex binary argue that, regardless of evidence against its biological existence, the binary gives health care professionals a means to better understand the human body, analyze symptoms, and predict reactions. While those who prefer the sex spectrum do not always disagree with these ideas, some posit that the sex binary is not only less relevant to medicine than commonly believed, but also a potential handicap.
THE ROLE OF THE SEX BINARY IN MEDICINE
Women have been historically neglected in the realm of research, with many studies failing to include women participants under the claim that women's bodies were either too difficult to study or essentially identical to those of men. This contradictory ideology prevented deep comprehension of women's health until the late 1990s, when the Women's Health Initiative published data revealing that existing medical practices were injurious to women because they failed to take into account areas where men's health differs from women's. For example, it was discovered that women often experienced different symptoms for illnesses such as cardiovascular disease, leading to misdiagnosis. These revelations eventually led to increased gender-specific healthcare, introducing the sex binary into medical
practices.6 The improved clinical care for women and men alike that followed these changes makes clear the importance of a flexible lens of sex and gender to health. Yet, no progress has been made in exploring the possible expansion of this lens beyond the sex binary despite the research of the past decade changing our preconceived notions of sex.
Understanding Conditions and Disease: Chronic Obstructive Pulmonary Disorder
The sex binary is often touted as essential to the medical understanding of how disease manifests in different people. In the case of Chronic Obstructive Pulmonary Disorder (COPD), a respiratory disease that impacts breathing, a binary understanding of sex is considered useful in understanding how the same factor may affect people of different sex to notably different degrees. The primary risk factor associated with COPD is smoking, which occurs at similar rates between men and women. Women, however, tend to have smaller bodies, smaller lungs, and smaller airways, such that smoking causes greater damage to the lungs. Estrogen and progesterone, levels of which are higher in females during certain stages of life, are also hypothesized to aid in the build-up of toxic byproducts of the metabolization of cigarette smoke and to contribute to airway inflammation, respectively. Studies show that CELSR1, a lung development gene located on chromosome 22, is associated with COPD in females7. Thus, it may be said that sex is an important factor in understanding how COPD affects certain people and the risks associated with it.8 Alternatively, body size, hormone levels, and genetics, rather than sex, may be regarded as the driving factors behind the development of COPD. Body size varies throughout the human population, and while there are clear size trends between people assigned male or female at birth, it may be more accurate to create size intervals to fit individuals into rather than using size generalizations derived from the sex binary. As discussed previously, steroid levels are actually quite similar in men and women, and can fluctuate throughout any individual's life. Therefore, it may be more useful to take hormone levels into account separately from the sex binary as well, as they are not exclusive to any sex. As for genes like CELSR1, it is important to take into account whether the gene is sex-specific or not. CELSR1, while associated with COPD in females in a
small cohort, failed to meet the statistical requirements to be considered sex-specific across the human genome, and so sex may not be as relevant to understanding COPD in the genetic dimension either8. Altogether, it may be more accurate and effective to analyze the specifics of an individual that relate to the disease or symptoms being treated than to rely on generalizations that do not apply to everyone.
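To make concrete what "statistical requirements to be considered sex-specific across the human genome" means: because a GWAS tests on the order of a million variants at once, an association must clear a multiple-testing-corrected threshold (conventionally p < 5 x 10^-8) to count as genome-wide significant. A minimal sketch in Python, using hypothetical carrier counts rather than data from the CELSR1 study:

from scipy.stats import chi2_contingency

# A GWAS tests roughly 1,000,000 variants, so a Bonferroni-style
# correction shrinks the per-variant significance threshold.
n_tests = 1_000_000
genome_wide_alpha = 0.05 / n_tests  # 5e-8, the conventional GWAS cutoff

# Hypothetical carrier counts for one variant among women with and
# without COPD (illustrative numbers, not the cited cohort's data).
table = [[120, 880],  # cases:    carriers, non-carriers
         [80, 920]]   # controls: carriers, non-carriers

chi2, p, dof, expected = chi2_contingency(table)
print(f"p = {p:.1e}; genome-wide significant? {p < genome_wide_alpha}")

A variant can look convincing in a single cohort (here p falls well below 0.05) and still sit orders of magnitude short of the genome-wide bar, which is the situation described above for CELSR1.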
Medical Diagnosis
The use of the sex binary in clinical practice can also be detrimental to diagnosis, especially when considering psychological disorders. When viewing sex and gender with distinct dichotomy, gender stereotyping can negatively impact the way physicians analyze symptoms presented by patients. For example, women are more frequently diagnosed with depression than men, perhaps because the condition is often misrepresented as a "female" disease9. The gender essentialist idea that women and men differ on a deep, biological level feeds inaccurate theories of the origins of psychological disorders and leads to mistreatment, which can detrimentally affect any person subjected to these stereotypes. For instance, estrogen has been associated with the development of depression, but scientists often disregard the possibility that estrogen level changes are a result of external factors. This leads to problems in diagnosis, as research shows that men who demonstrate the same symptoms of depression as women are less likely to be properly diagnosed because the disorder is so often associated with women. Considering that these statistics do not even take into account the misdiagnosis of genderqueer patients, these findings represent a systemic, consistent, gross error in the diagnosis of depression. In this case, the sex binary is clearly not useful to diagnosis.5
"When viewing sex and gender with distinct dichotomy, gender stereotyping can negatively impact the way physicians analyze symptoms presented by patients."
This is not to say that the binary always leads to a misdiagnosis: there are instances where the use of the binary can be necessary, if not beneficial. The most popular example of such a condition is myocardial infarction (heart attack). While all people may experience chest pressure or pain as symptoms of a heart attack, women may not experience pressure when having a heart attack. Instead, they may experience subtler symptoms like fatigue, lightheadedness, and upper back pressure, which are also symptoms of milder health issues such as influenza. In these situations, it is important that healthcare professionals are familiar with the different symptoms women may experience in order to correctly diagnose women and others along the gender
spectrum who are not males. In emergency situations, using the sex binary or knowing the sex assigned to a patient at birth may help professionals quickly identify conditions and make the decisions that save lives. That said, such sex-specific symptoms are not especially common.10
Treatment and Medical Research
"Another factor to consider when discussing the end of using the sex binary in medicine, in addition to its biological validity and usefulness in medicine, is its effects on patients, particularly those belonging to the trans community."
Figure 3: Drug-induced long QT syndrome, the elongation of the period between ventricular depolarization and repolarization in the heart, is more common in those classified as women. Source: Wikipedia
The sex binary proves useful in pharmacology when considering the evidence that sex provides a relatively accurate understanding of how bodies react to drugs. This owes to a number of factors: weight (males tend to weigh more than females on average), body fat (males also tend to have a lower percent body fat than females, which causes the bioavailability of a drug to vary between bodies), and renal clearance (females tend to have a lower renal clearance due to a lower rate of glomerular filtration). These differences between male and female bodies, while generalizations, can provide important insight as to how different people will react to certain drugs. Two-thirds of all cases of drug-induced long QT syndrome, a disorder of electrical activity in the heart, occur in women. Additionally, females have been recorded to have a "higher incidence of drug-induced liver toxicity, gastrointestinal adverse events due to NSAIDs, and allergic skin rashes"11. The range and significance of such differences in bodily reactions between men and women suggest that binary sex may be helpful in pharmacology. While eliminating the sex binary may not be entirely plausible or medically beneficial for pharmacology, it is important that weight and age are taken into account independent of sex when creating doses and drug regimens.11 The sex binary has also had negative impacts on drug design and medical research. Although there are multiple variables involved with gender and sex, researchers tend to use sex as a binary variable. Average differences between women and men are also often chalked up to gender or sex when there is no evidence that necessitates such conclusions. In addition, the influence of the sex binary on medical research has inhibited the thorough study of transgender and genderqueer health.
Ignoring the variability within the behaviors of people categorized into the sex binary has led researchers to neglect building an intersectional understanding in treatment and research. As such, understanding the influence of gender identity, sexual orientation, and social pressures on patients is important in progressing medicine.5 Ultimately, it isn't clear whether the sex binary is relevant or helpful to medicine despite the lack of biological evidence for its existence. In terms of understanding disease and conditions, using the sex binary may not be necessary. While the sex binary may negatively impact diagnosis in some cases, it has proven helpful in others. It may hinder us in the development of medication but may be necessary in creating safe drug regimens. What is evident is that the use of the sex binary cannot be completely eliminated, as it provides useful information, but there are also many instances in which it can create adverse effects or hinder medical progress.
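As a concrete illustration of the dosing point above, consider the Cockcroft-Gault estimate of renal clearance, a standard formula built from age, weight, and serum creatinine that then applies a flat 0.85 multiplier for patients recorded as female – a binary stand-in for body-composition differences that could, in principle, be measured directly. A minimal sketch in Python (illustrative values only):

def creatinine_clearance(age_years, weight_kg, serum_creatinine_mg_dl,
                         sex_coefficient=1.0):
    """Cockcroft-Gault estimate of creatinine clearance (mL/min).

    The classic formula applies a 0.85 multiplier for patients
    recorded as female; pass sex_coefficient=0.85 to reproduce it.
    """
    return ((140 - age_years) * weight_kg * sex_coefficient
            / (72 * serum_creatinine_mg_dl))

# Identical age, weight, and creatinine; only the binary coefficient differs.
print(creatinine_clearance(60, 70, 1.0))        # ~77.8 mL/min
print(creatinine_clearance(60, 70, 1.0, 0.85))  # ~66.1 mL/min

A dosing model that measured the relevant quantities directly, rather than routing them through a binary coefficient, could serve patients of any sex or gender equally.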
RELEVANCE OF THIS DEBATE
Another factor to consider when discussing the end of the sex binary in medicine, in addition to its biological validity and usefulness, is its effect on patients, particularly those belonging to the trans community. The use of the sex binary in healthcare is often weaponized against transgender people or inadvertently challenges their identity by forcing a biological identity upon them. This extends far beyond having to check off a box at the doctor's office that they don't feel comfortable with or find inaccurate: many trans people are subject to healthcare discrimination. Healthcare professionals may share confidential gender information with other professionals when unnecessary or require patients to come out to their family before receiving more care. Trans people may even experience hostility from physicians who unnecessarily and incorrectly place them into the binary before providing a diagnosis or medical advice. This has additional effects on the intersex community, which suffers a lack of self-determination and bodily autonomy in the healthcare field.5 Other sex-binary flaws in the healthcare system negatively impacting trans people include picking up prescriptions that don't align with their gender marker and having to uncomfortably out themselves at the pharmacy, or updating General Practitioner hospital records with their proper gender, only to get recommendations for screenings of
anatomy that they do not possess (i.e. cervical screenings for a trans woman without a cervix).12 The manifestation of these experiences has resulted in great discomfort for the trans community and discourages them from getting the healthcare they need. If the sex binary does not have the support of biological evidence and its use in the medical field is not always necessary or beneficial, it is important to pursue different ways to describe sex in order to avoid negative effects for any patient group.
References
[1] Byrne, A. (2018, November 2). Is Sex Binary? Retrieved from https://arcdigital.media/is-sex-binary16bec97d161e.
[2] Butler, J. (2015). Gender trouble: feminism and the subversion of identity. New York: Routledge.
[3] Sun, D. (2019, June 13). Stop Using Phony Science to Justify Transphobia. Retrieved from https://blogs.scientificamerican.com/voices/stop-using-phony-science-to-justify-transphobia/.
[4] Lercher, M. J. (2003). Evidence That the Human X Chromosome Is Enriched for Male-Specific but not Female-Specific Genes. Molecular Biology and Evolution, 20(7), 1113–1116. doi: 10.1093/molbev/msg131
[5] Hyde, J. S., Bigler, R. S., Joel, D., Tate, C. C., & Anders, S. M. V. (2019). The future of sex and gender in psychology: Five challenges to the gender binary. American Psychologist, 74(2), 171–193. doi: 10.1037/amp0000307
[6] Rojek, M. (n.d.). AMWA. Retrieved from https://www.amwa-doc.org/sghc/sex-and-gender-health-historical-perspective/.
[7] Hardin, M., Cho, M. H., Sharma, S., Glass, K., Castaldi, P. J., Mcdonald, M.-L., … Demeo, D. L. (2017). Sex-Based Genetic Association Study Identifies CELSR1 as a Possible Chronic Obstructive Pulmonary Disease Risk Locus among Women. American Journal of Respiratory Cell and Molecular Biology, 56(3), 332–341. doi: 10.1165/rcmb.2016-0172oc
[8] Kirtley, C., & Faith, R. (2018, October 30). Treating Men and Women Differently: Sex differences in the basis of disease. Retrieved from http://sitn.hms.harvard.edu/flash/2018/treating-men-and-women-differently-sex-differences-in-the-basis-of-disease/.
[9] Call, Jarrod B, and Kevin Shafer. "Gendered Manifestations of Depression and Help Seeking Among Men." American Journal of Men's Health, 12(1) (2018): 41-51. doi:10.1177/1557988315623993
[10] Heart Attack Symptoms in Women. (n.d.). Retrieved from https://www.heart.org/en/health-topics/heart-attack/warning-signs-of-a-heart-attack/heart-attack-symptoms-in-women.
[11] Anderson, G. D. (2008). Chapter 1: Gender Differences in Pharmacological Response. International Review of Neurobiology, Epilepsy in Women - The Scientific Basis for Clinical Management, 1–10. doi: 10.1016/s0074-7742(08)00001-9
[12] Davis, N. (2019, February 26). 'Are you a man or a woman?': trans people on GP care. Retrieved from https://www.theguardian.com/society/2019/feb/26/trans-man-woman-gp-care-healthcare.
Hesitant Hope for Food Allergies BY MADDIE BROWN Cover Source: Wikimedia Commons
"However, as understanding of the immune system grows, researchers are discovering new, innovative ways to put a stop to these potentially life threatening conditions." Figure A: A cartoon depiction of the current eight major food allergens: dairy, eggs, shellfish, peanut, treE nuts, wheat, soy, and fish. Source: Flikr
INTRODUCTION
It's a common sight in America: peanut-free tables, gluten-free menus, and bright colorful labels on every dish at the Dartmouth dining halls. Approximately 11% of adults suffer from a food allergy, and the number of allergies is dramatically increasing. The CDC has reported that food allergies have increased by 50% since 1997, and the numbers keep rising.7 Roughly 90% of food allergies are caused by eight major foods: dairy, eggs, fish, peanut, tree nut, wheat, soy, and shellfish. Allergic reactions associated with these foods range from mild itchiness to
full anaphylaxis – a potentially fatal reaction that slows heart rate and restricts airflow. Each year, approximately 200,000 people in America are hospitalized due to food allergies.7 [A] Currently, there is no cure for food allergies and treatment options are nearly nonexistent. While children with dairy and egg allergies usually outgrow them, peanut allergies are usually lifelong and sufferers are prone to anaphylaxis. However, as understanding of the immune system grows, researchers are discovering new, innovative ways to put a stop to these potentially life-threatening conditions.
FOOD ALLERGIES: THE MECHANISM, MYSTERY, AND CURRENT TREATMENT
Food allergies arise when the immune system mistakes food proteins as foreign invaders (a process involving dozens of immune system cells and cytokines). An allergy develops when food allergens come into contact with the lining of the stomach, skin, or respiratory tract. For most food allergies (classified as IgE-mediated), contact with a food allergen results in the secretion of inflammatory cytokines (molecules that serve as messengers between cells) from epithelial cells, which
Figure C: This figure is a simplified explanation of the development of a food allergy. The 'memory' of the adaptive immune system is caused by the creation of IgE molecules, which remain in the body even after the allergen disappears. This allows a large and quick response during the second exposure to the allergen (in this case ragweed).
triggers a cascade of immune cell and cytokine activation. [B] Eventually, this cascade results in the activation of type 2 helper T-cells, which release more cytokines to activate B-cells. Activated B-cells then produce food protein specific IgE antibodies.3 When a person comes into contact with the same allergen again, those same protein-specific IgE antibodies are produced in large numbers (due to the 'memory' of the adaptive immune system), stimulating vasodilation, the release of histamine, and other symptoms associated with allergies. In extreme cases, these symptoms include the potentially fatal anaphylaxis. [C] Despite the dramatic rise in food allergies, the exact cause remains elusive. Food allergies have been referred to as a "disease of civilization" because they only appear in western urbanized countries and are nearly nonexistent in developing countries.13 The prevailing theory explaining this phenomenon is the "hygiene hypothesis," proposed by Dr. Strachan in 1989, which posits that children are more likely to develop food allergies when not exposed to enough microbes, bacteria, and viruses early in life.13 Despite the name, the "hygiene hypothesis" does not imply that better personal hygiene leads to allergies. Rather, it proposes that Western society has altered our microbial environment through changes in diet, early use of antibiotics, and more indoor lifestyles, such that children are no longer exposed to the wide range of bacteria that builds immunity and trains the immune system not to overreact.12 The biological mechanisms behind this theory are complex and not fully understood. Additionally, while there is a large amount of evidence to support the hygiene hypothesis, scientists also believe there is a strong genetic component that determines who develops food allergies. Studies on twins suggest that 80% of the risk for food allergies is heritable, but few details are currently known about what these genes may be.5 The current (and only) treatment for severe
Source: Wikimedia Commons
food allergies is an epinephrine pen, or EpiPen. EpiPens are devices that allow users to inject epinephrine (adrenaline) directly into a muscle. Epinephrine can counteract many symptoms of anaphylactic shock such as vasodilation, bronchoconstriction, and a decrease in heart rate.10 Epinephrine can also suppress the activity of immune system cells to interrupt an allergic reaction.10 [D] While EpiPens can save lives, this treatment is far from ideal. In addition to the recent hikes in EpiPen prices, EpiPens do nothing to prevent anaphylaxis; they are simply a reactive medication that combats symptoms associated with anaphylaxis. Once someone has been injected with an EpiPen, they must be hospitalized to prevent side effects or another allergic reaction. As a result, people with food allergies must avoid all interactions with their allergen – a time-consuming, costly, and stressful process. Not only must people diligently check labels on all of their foods, but there is a constant fear of cross-contamination from equipment that may have processed allergens and still has leftover residue. Companies are not required to tell consumers if their product was made on the same equipment as allergens, and therefore those with especially severe allergic reactions must call and ask to know if their food is safe.
"Food allergies have been referred to as a 'disease of civilization' because they only appear in western urbanized countries and are nearly nonexistent in developing countries."
Figure B: This figure highlights the major cells involved in an allergic reaction. In step A, an allergen comes into contact with a tissue. Then (B) an antigen-presenting cell binds to the allergen and (C) activates the appropriate T-cell within a lymph node. This T-cell differentiates into a Th2 cell. The allergen (D) would also be recognized by the B-cell. Next (E), the Th2 cell activates the B-cell and (F) causes it to differentiate into a plasma cell. The plasma cell is what is responsible for creating food protein specific IgE antibodies. (G) IgE proteins will eventually bind to mast cells, which release a variety of molecules, like histamine, that are responsible for allergy symptoms. Source: Wikimedia Commons
Figure D: An EpiPen autoinjector that is carried by a person suffering from a food allergy. These devices allow users to inject 0.15 mg or 0.3 mg of epinephrine directly into the muscle through a single-use needle. The grey cap is removed, allowing the needle (at the black end) to be inserted into (most preferably) the thigh muscle. Source: Wikimedia Commons
Even sitting at a cafeteria table can cause stress, as someone may have smeared a little peanut butter on the table, possibly triggering a reaction. Additionally, one-third of children with food allergies report being bullied because of their allergy.7
ETOKIMAB
"While it is typically assumed that bacteria are harmful and it's common to take measures to treat them, bacteria play an extremely important role in the body."
Recent studies have shown that people with food allergies have elevated levels of interleukin-33 (IL-33). IL-33 is one of the cytokines released by epithelial cells when in contact with an allergen and is therefore believed to have an important role in triggering an allergic reaction. Scientists also believe that IL-33 may contribute to other diseases such as asthma and arthritis, making this a cytokine of primary interest.6 Researchers have begun developing drugs to inhibit IL-33 activity in hopes of suppressing these diseases. One such drug is Etokimab, which binds to IL-33 and prevents its activity. In November 2019, Stanford University researchers published the results of a study that examined Etokimab's effectiveness in treating severe peanut allergies. The double-blind phase 2a study was conducted by giving a single dose of either Etokimab or a placebo to a group of patients with peanut allergies. Participants were then given increasing amounts of peanut protein and monitored for adverse reactions.6 By day 15 of the experiment, 73% of the treatment group had successfully ingested a cumulative 275mg of peanut protein without an adverse reaction. Conversely, 0% of the placebo group were able to ingest this much peanut protein without an allergic reaction. By day 45, 57% of the treatment group had ingested a total of 375mg of peanut protein without an adverse reaction. Researchers also noted that the experimental group had decreased levels of peanut-specific IgE and other immune system cytokines, suggesting that IL-33 was no longer able to stimulate an immune response.6 While 375mg may seem trivial, people with severe allergies often can only ingest a minute amount of the allergen before triggering anaphylactic shock. This study is also noteworthy given that the treatment worked within a matter of weeks and after only
one injection. However, this study did have limitations. The sample size of 20 was extremely small and risks not being representative of the wider population. Further research would also be needed to test Etokimab’s effect over a longer time period, and to see whether repeat injections would be needed to maintain this level of tolerance.
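To see why a sample of 20 is limiting, one can put a rough confidence interval on the reported response rate. The treatment/placebo split is not stated above, so the arm size in this Python sketch is an assumption:

from statsmodels.stats.proportion import proportion_confint

# Assumed arm size: the 20 participants' treatment/placebo split is
# not reported above, so ~15 on etokimab is a guess for illustration.
responders, n = 11, 15  # roughly the 73% who passed the day-15 challenge

low, high = proportion_confint(responders, n, alpha=0.05, method="wilson")
print(f"95% CI for the response rate: {low:.0%} to {high:.0%}")

The resulting interval spans roughly forty percentage points, which is why larger follow-up trials are needed before firm conclusions can be drawn.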
IT'S ALL IN THE GUT
While it is typically assumed that bacteria are harmful and it's common to take measures to treat them, bacteria play an extremely important role in the body. The esophagus alone is thought to be home to over three hundred species of bacteria.11 The exact role of such bacteria is unclear, but scientists believe that gut bacteria in particular are vital for regulating the immune system. People with food allergies often have a gut dysbiosis – an imbalance or abnormality in the makeup of the gut bacteria.1 In 2019, a study published by Brigham and Women's Hospital found evidence for a new treatment for food allergies. The researchers collected fecal samples from 56 infants with food allergies, compared them to healthy controls, and found that the composition of the microbes in the food-allergy infants differed significantly from that of healthy infants. The researchers transferred the microbiota from the infants into adult germ-free Il4ra mice (GF Il4ra). GF mice are organisms without microbes living in or on them, meaning scientists can study the gut bacteria of the infants in isolation. These mice were "sensitized to egg," meaning that their bodies had begun to mistake egg proteins for a pathogen. When exposed to egg proteins, the mice who received microbiota from food-allergy infants went into anaphylactic shock, while mice that received microbiota from the healthy infants only had a mild reaction.1 More importantly, the researchers narrowed down the species of bacteria protecting the healthy mice. Using computational techniques, the researchers determined that strains of human Clostridiales and Bacteroidetes bacteria were responsible for preventing the immune system from mistaking food as a danger.4 In the human infants, these were both strains of
bacteria that were shown to exist in different abundances in those with food allergies than in healthy infants. To confirm this, researchers gave mice with egg sensitization various types of bacteria via oral administration. The mice were then subjected to increasing amounts of egg protein to see how their bodies would react. The mice that were given five to six strains of Clostridiales did not have any allergic reactions to egg proteins, indicating the bacteria can suppress the immune response.1 However, other types of bacteria did not prevent allergic reactions. The results of this study suggest that altering human gut bacteria may be a promising treatment option for severe allergies.
THE ROAD AHEAD
While these treatments may appear promising, the future is unclear, as scientists have attempted to bring new treatments for food allergies to market before. Children with environmental allergies (with allergens such as pets, dust, and pollen) often undergo desensitization treatments through immunotherapy. During immunotherapy, a person is repetitively exposed to an allergen in order to alter the body's immune response. Over time, the person builds up a tolerance to the allergen and ceases to have a reaction.2 For years, scientists have attempted to extend immunotherapy to food allergies. Individuals with food allergies that posed a significant health risk (with allergies so severe they could not eat most food) often sought out doctors who would administer low doses of peanut protein orally until their body was desensitized. However, the health risks of this treatment are high. Immunotherapy trials for peanut allergies were shut down in the 1990s after a child died from anaphylactic shock as a result of the desensitization dosings.9 However, the idea has persisted. Currently, a pill developed by Aimmune Therapeutics administers increasing doses of peanut protein to children over the course of six months; while the pill was approved by the FDA advisory board in 2019 and is currently waiting for full FDA approval, it may be too risky for many people. Almost every participant in the study suffered from adverse health effects during the treatment, ranging from skin reactions to respiratory problems, and 10% of participants withdrew due to concerns over their health.8 Recently, researchers have also hoped to desensitize peanut allergies through a peanut patch. This skin patch would deliver increasing amounts of peanut protein to desensitize a patient's
immune system and wouldn't require oral ingestion. Since the majority of severe reactions only happen during ingestion, this is a much safer way to desensitize. However, the results of the latest phase 3 trial for the peanut patch were disappointing, with only 35.3% of children with the patch desensitized; consequently, the study did not meet its prespecified confidence-interval criterion for success.8 However, research into gut bacteria and antibodies has opened up new possibilities for treatment beyond immunotherapy. Immunotherapy is a very general treatment that does not target any specific part of the immune system and has had mixed results. While these two treatments are only in their infancy, they represent a large step forward for a disease that had seen little advancement in the last decade. Such targeted therapies may provide more effective and less dangerous treatment. According to the co-senior author of the gut bacteria study, Dr. Lynn Bry, "When you can get down to a mechanistic understanding of what microbes, microbial products, and targets on the patient side are involved, not only are you doing great science, but it also opens up the opportunity for finding a better therapeutic and a better diagnostic approach to disease."4 The more we know, the better treatments we can develop.
"While these treatments may appear promising, the future is unclear, as scientists have attempted to bring new treatment for food allergies to market before."
References
[1] Abdel-Gadir, A., Stephen-Victor, E., Gerber, G. K., Rivas, M. N., Wang, S., Harb, H., ... & Secor, W. (2019). Microbiota therapy acts via a regulatory T cell MyD88/RORγt pathway to suppress food allergy. Nature Medicine, 1.
[2] Allergy shots. (2018, September 13). Retrieved from https://www.mayoclinic.org/tests-procedures/allergy-shots/about/pac-20392876.
[3] Anvari, S., Miller, J., Yeh, C. Y., & Davis, C. M. (2019). IgE-mediated food allergy. Clinical Reviews in Allergy & Immunology, 57(2), 244-260.
[4] Brigham and Women's Hospital. (2019, June 24). New therapy targets gut bacteria to prevent and reverse food allergies. ScienceDaily. Retrieved December 23, 2019 from www.sciencedaily.com/releases/2019/06/190624111545.htm
[5] Carter, Cristina A., and Pamela A. Frischmeyer-Guerrerio. "The Genetics of Food Allergy." Current Allergy and Asthma Reports, 18(1), 2018. doi:10.1007/s11882-018-0756-z
[6] Chinthrajah, S., Cao, S., Liu, C., Lyu, S. C., Sindher, S. B., Long, A., ... & Nadeau, K. C. (2019). Phase 2a randomized, placebo-controlled study of anti–IL-33 in peanut allergy. JCI Insight, 4(22).
[7] Facts and Statistics. (n.d.). Retrieved December 23, 2019, from https://www.foodallergy.org/life-with-food-allergies/food-allergy-101/facts-and-statistics.
[8] Fleischer, D. M., Greenhawt, M., Sussman, G., Bégin, P., Nowak-Wegrzyn, A., Petroni, D., ... & Campbell, D. E. (2019). Effect of epicutaneous immunotherapy vs placebo on reaction to peanut protein ingestion among children with peanut allergy: the PEPITES randomized clinical trial. JAMA, 321(10), 946-955.
[9] Johnson, C. Y., & Washington Post. (2018, November). A Peanut Allergy Treatment Pill Just Got Closer to Reality, But Sadly There's a Major Catch. Retrieved from https://www.sciencealert.com/a-pill-to-treat-peanut-allergies-is-getting-closer-to-reality-but-expect-side-effects.
[10] Kemp, S. F., Lockey, R. F., Simons, F. E., & World Allergy Organization ad hoc Committee on Epinephrine in Anaphylaxis (2008). Epinephrine: the drug of choice for anaphylaxis - a statement of the World Allergy Organization. The World Allergy Organization Journal, 1(7 Suppl), S18–S26. doi:10.1097/WOX.0b013e31817c9338
[11] Pascal, M., Perez-Gordo, M., Caballero, T., et al. (2018). Microbiome and Allergic Diseases. Frontiers in Immunology, 9:1584. doi:10.3389/fimmu.2018.01584
[12] Scudellari, M. (2017). News Feature: Cleaning up the hygiene hypothesis. Proceedings of the National Academy of Sciences, 114(7), 1433-1436.
[13] Stiemsma, L. T., Reynolds, L. A., Turvey, S. E., & Finlay, B. B. (2015). The hygiene hypothesis: current perspectives and future therapies. ImmunoTargets and Therapy, 4, 143–157. doi:10.2147/ITT.S61528
Breast Cancer and its Developing Therapies BY: NISHI JAIN
INTRODUCTION TO CANCER BIOLOGY
“Down to their innate molecular core, cancer cells are hyperactive, survival-endowed, scrappy, fecund, inventive copies of ourselves.”1 Siddhartha Mukherjee, prolific scientist and physician, has dedicated his life to the study of cancer, and his book The Emperor of All Maladies details the vicious and powerful history of a centuries-long battle waged against this disease. Cancer is considered one of humanity's most formidable diseases; we have attempted many cures for many cancers, but have frequently come up short due to the illness' complexity and its ability to adapt to treatment. The essential cellular machinery of cancer is designed to allow the cells to proliferate voraciously and without reservation – this kind of unregulated growth results in either benign or malignant tumors.2 In addition to this simple designation of quickly multiplying cells, there are a few more features of malignant cancer that have been investigated, including the creation of tumor vasculature, as well as the tumor's evasion of apoptosis, or programmed cell death.3 In the beginning, patients feel little to no symptoms, but as the tumors grow, there are localized and universal symptoms that eventually lead to a cancer diagnosis
(if the condition was not previously caught in imaging).4 For instance, as a result of their growing mass, patients with esophageal cancer may begin to have a hard time swallowing food (a localized symptom), but may eventually feel other symptoms such as weakness, weight loss, and an altered mental state (universal symptoms) that result from the body's immune response against the invading tumor.5 Desperation for a cure has warranted significant funding and effort in cancer research and has produced some results. Among all the different kinds of cancers, breast cancer is the condition for which we have come closest to finding a cure.6 Breast cancer is the result of a malignant tumor that originates in the breast tissue and has metastasizing potential – that is, parts of the breast tumor have the possibility to break off and enter the lymphatic system, resulting in metastasized tumors in lymph nodes, the bloodstream, and eventually other parts of the body such as the brain, liver, and lungs.5,6
Cover Source: Wikimedia Commons
"The essential cellular machinery of cancer is designed to allow the cells to proliferate voraciously and without reservation - this kind of unregulated growth results in either benign or malignant tumors."
STAGES OF BREAST CANCER DEVELOPMENT AND INDICATIONS OF THE CONDITION
Most breast cancers originate in the ductal 82
Figure 1: Early detection of breast cancer can happen through yearly mammograms. Source: Military Health Systems
"The main limitation with cancer is that in the battle between the immune system and quickly metastazing tumors, we must attempt to preserve the battle field - the patient's body - as much as possible.
region of the breast, and the remainder usually originate in the lobules. Intracellular machinery mishaps such as genetic mutations and cell cycle interruptions incite massive cell proliferation in the normal duct of the breast, which results in intraductal hyperplasia (the enlargement of the tissue caused by the increased cellular reproduction rate).5,7 After the cells enter the hyperplastic state, more malignant cells convert to the next stage, intraductal hyperplasia with atypia; in this stage, the enlargement of tissue is accompanied by additional growth that indicates that the cells are cancerous.8 Following intraductal hyperplasia with atypia, the mass progresses further into full-blown intraductal carcinoma in situ, in which proliferating cells have filled the duct.7,8 If more time passes before diagnosis following this stage and the mass is not controlled, the patient may enter the final, metastatic stage. The conversion of noncancerous tissue to cancerous tissue is a transition that occurs subcellularly and is one of the most difficult transformations for physicians and scientists to discern. However, genome-wide association studies (GWAS) have identified several genetic variants associated with breast cancer development.8 GWAS techniques essentially sequence the genomes of affected patients and compare them to controls to find genetic differences – perhaps a mutation that is present in all or in the overwhelming majority of the diseased population compared to the general population – that scientists can investigate as a possible cause for a disease. These genetic variants can then be used by clinicians to screen for individuals with increased risk.9 Three types of genes have been identified in breast cancer prognoses – the oncogene,
which leads to malignant growth after being activated by mutations or overexpression; tumor suppressor genes, which lead to malignant growth after losing their function due to mutations or deletions; and modifiers, which lead to malignancies when they are not able to properly do their job in DNA repair.10 Within these three classifications, there are multiple genetic indications that may lead to the development of metastatic breast cancer: amplification of the cell proliferation gene MYC (20-30% of cases), overexpression of the cell cycle control genes BCL1 or PRAD1 (20-30% of cases), overexpression or mutation of human epidermal growth factor receptor 2 (HER2) (30% of cases), overexpression of the apoptosis-preventing protein BCL2, and mutation of the tumor suppressor genes TSG101 (50% of cases), BRCA1 (55-65% of cases), and BRCA2 (45% of cases).11,12,13,14 These genetic indications illustrate the potential genetic inheritability of some breast and ovarian cancers.
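The core case-control comparison behind a GWAS can be sketched in a few lines of Python. The genotype frequencies below are synthetic, invented purely to illustrate the computation rather than drawn from any breast cancer dataset:

import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)

# Synthetic genotypes: True = carries the candidate variant.
# The variant is artificially enriched among cases for illustration.
cases = rng.random(500) < 0.30
controls = rng.random(500) < 0.20

table = [[cases.sum(), (~cases).sum()],
         [controls.sum(), (~controls).sum()]]

odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.1e}")
# A real GWAS repeats a test like this for millions of variants and
# keeps only associations that survive multiple-testing correction.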
CURRENT TREATMENTS
The main limitation with cancer is that in the battle between the immune system and quickly metastasizing tumors, we must attempt to preserve the battlefield – the patient's body – as much as possible. This inhibits the use of especially cytotoxic agents, which are capable of destroying the tumor but result in devastating side effects and a quality of life that disincentivizes patients from continuing treatment.15 As a result, in many of the treatments that are developed, consideration is given not only to the efficacy of the drug, but also to the drug's cytotoxicity and potential damaging effects on the body. Conventional treatments include surgery,
Figure 2: Frequent innovations in the field of cancer therapy are taking place not only in breast cancer, but other kinds of cancers as well. This image depicts a novel treatment named Y90 in which radioactive beads are infused into the tumor to try and shrink its size. This treatment closely resembles breast cancer brachytherapy. Source: Air Force Medicine
chemotherapy, hormonal therapies, biological therapies, and radiation therapies.16 Surgeries, including lumpectomy (removal of the lump only) and mastectomy (removal of the breast as a whole), can only be performed if the danger of surgery is not too great and the patient does not refuse.15,16 A total mastectomy is often the surgical answer to a tumor that has grown too far, but due to the substantial changes in lifestyle following this kind of surgery, it is not uncommon for a patient to refuse and opt for different treatments.15 If surgery is performed to resect most of the tumor, radiation therapies are often used as follow-up, targeting the tumor with high-energy external beams that kill the residual tumor cells.16 If external radiation therapy is not used, a recent development in the field has allowed for the administration of internal radiotherapy, or brachytherapy, in which a stent containing a short-range radioisotope is placed within the affected breast, near the tumor site. The radioisotopes are contained within a wire that protects the rest of the healthy body from the toxicity of the stent.17 If patients opt out of surgery, they are often given medications, one of which is chemotherapy, administered either orally or intravenously. Chemotherapeutic agents shrink tumors by disrupting cellular machinery or inflicting DNA damage within cancerous cells.18 Additionally, hormonal therapies are common for cancers such as breast and ovarian cancer, as these cancers often require estrogen to continue growth – such cancers are identified through tumor biopsies, which can reveal the presence of certain estrogen or progesterone receptors (ER+, PR+, respectively). When an ER+ or PR+ tumor is identified, it is treated with therapies such as tamoxifen or
letrozole, which deprive the tumor of the hormones that fuel its growth: tamoxifen blocks the tumor's hormone receptors, while letrozole suppresses the body's estrogen production.19 These techniques are all proven localized treatments for tumors that were small or operable to begin with, but there are often situations in which conventional therapies do not work. For instance, tumors frequently develop resistance to the medications and begin to mutate, eventually avoiding the need for hormones and thus rendering the drug ineffective.18,19 In the face of such changes, additional therapies must be developed to treat patients who have inoperable and mutated tumors. A class of medications called monoclonal antibodies, or mAbs for short, has recently been developed for breast cancer to target a specific receptor called HER2, which has been shown to grow in abundance on tumor cells.2 Pertuzumab is an example of a mAb and operates in HER2-positive breast cancers to prevent the dimerization of HER2 receptors, which generally allows for the signaling of the mitogen-activated protein kinase (MAPK) and phosphatidylinositol 3-kinase (PI3K) pathways that drive cell proliferation. Pertuzumab is given intravenously and is frequently the first line of treatment.21 MAbs can also be made part of a larger therapeutic agent, an antibody drug conjugate (or ADC), that delivers a cytotoxic payload directly into the body of the tumor, thereby prohibiting additional growth and shrinking the existing mass.22
"A class of medications called monoclonal antibodies, or mAbs for short, have recently been developed for breast cancer to target a specific receptor called HER2, which has been shown to grow in abundance on tumor cells."
ANTIBODY DRUG CONJUGATES
ADCs traditionally combine the binding specificity of mAbs with the tumor-shrinking capabilities of cytotoxic payloads using a linker,
Figure 3: The function of a monoclonal antibody. Source: Wikimedia Commons
"The tumor cell then responds by internalizing the ADC, along with the cytotoxic payload, which is then released by the dissolution of the linker and kills the cancerous cell."
which connects the antibody to the cytotoxic agent.23 ADCs generally operate as a Trojan horse. They use the targeting powers of mAbs to first target a specific antigen on the surface of the tumor cell and attach themselves to it, "tricking" the tumor cell into believing that the conjugate is an element necessary for continued proliferation. The tumor cell then responds by internalizing the ADC, along with the cytotoxic payload, which is then released by the dissolution of the linker and kills the cancerous cell.24 The concept of an ADC was first recognized after the FDA and European Medicines Agency approval of brentuximab vedotin, or Adcetris, an ADC specific to relapsed Hodgkin lymphoma and anaplastic large cell lymphoma that was developed by Seattle Genetics.25 Ado-trastuzumab emtansine, or T-DM1, an ADC that targets HER2-positive metastatic breast cancer and was developed by Genentech and Roche, was approved shortly thereafter and has since been the primary ADC in use, paired with a series of other tyrosine kinase inhibitors, or TKIs.26
Figure 4: An antibody drug conjugate is composed of a monoclonal antibody and cytotoxic drug connected by a linker. Source: Wikimedia Commons
T-DM1 is an ADC that is made from
humanized mAb trastuzumab and the cytotoxic payload DM1 (hence the name T-DM1).27 Trastuzumab in and of itself is capable of stopping the growth of cancerous cells through a mechanism similar to that of pertuzumab, preventing the HER2 homodimerization (HER2 to HER2) and heterodimerization (HER2 to HER3) that cause the activation of the MAPK and PI3K cellular proliferation pathways. However, the addition of DM1 allows for receptor-mediated internalization and eventual catabolic degradation of the conjugate, releasing the cytotoxic payload intracellularly, where it binds to tubulin within the cell, stopping mitosis and instigating cell death. In this case, using an ADC (T-DM1) rather than simply one component of it, the mAb (trastuzumab), allows not only for the blockage of additional unregulated growth (which trastuzumab would have done itself), but also for the death of existing cells (which is facilitated by DM1).29,30,31
TYROSINE KINASE INHIBITORS
Alongside T-DM1, there are a handful of TKIs that have been developed to target the MAPK and PI3K pathways intracellularly.32 Tyrosine kinases are proteins that induce signal transduction pathways by adding phosphate groups (phosphorylation) and are themselves activated through phosphorylation. After a receptor protein is dimerized, a tyrosine residue within each of the receptors trans-phosphorylates the partner, thereby propagating a signal through the plasma membrane and triggering the start of important cellular pathways.33 TKIs serve to inhibit this initial phosphorylation, thereby prohibiting the activation of the entire signal transduction pathway and stopping tumor cell proliferation. TKIs have multiple proposed mechanisms.
One way kinases can receive their initial phosphorylation is through the hydrolysis of adenosine triphosphate (ATP) to adenosine diphosphate (ADP); this reaction is ubiquitous in biology and releases an inorganic phosphate that can react with and activate tyrosine kinases. TKIs may be able to inhibit this activation by blocking the ATP-binding domain of tyrosine kinases, thereby blocking phosphorylation.34 TKIs have also been shown to deprive the cell of CDC37-HSP90, a stabilizing agent for tyrosine kinases, causing the kinases' ubiquitination (the addition of the protein ubiquitin, which marks a protein for degradation).33,34 TKIs are different than ADCs since their main mode of operation is intracellular; they sit on the inside of a cell such that the activation of the signal transduction pathway is inhibited.35,36,37 Neratinib is one such example of a TKI that operates intracellularly as a dual inhibitor of HER2 and epidermal growth factor receptor (EGFR) kinases, covalently binding to a cysteine residue on both of the proteins and inhibiting the continuation of their pathways.38 Although neratinib shares the target HER2 with trastuzumab (within T-DM1), neratinib operates intracellularly, inhibiting the continuation of the MAPK pathway that eventually allows for cell growth.39 It does not operate extracellularly and has nothing to do with the inhibition of the homodimerization or heterodimerization of HER2. Additionally, neratinib is not necessarily completely selective for tumor cells.38,39 It targets both HER2 (which is overexpressed on cancer cells) and EGFR (which is expressed at approximately the same level on healthy cells and cancer cells alike). As a result, neratinib targets both cancerous cells and healthy cells to similar degrees, thereby causing normal-cell casualties and creating multiple side effects that degrade quality of life for the patient.40 Much like neratinib, the TKI lapatinib is also
a dual HER2 and EGFR inhibitor and is used for treatment of metastatic breast cancer, among other solid tumors.41 Lapatinib is orally bioavailable, eliminating the need for a patient to travel to the hospital for its administration. It binds to the ATP-binding pocket of the EGFR and HER2 protein kinase domains, preventing phosphorylation by competing with ATP.42,43 Afatinib operates in a similar way to lapatinib, except that it is used for the treatment of non-small cell lung carcinoma (NSCLC). It is also orally bioavailable and targets HER2 and EGFR kinases.44 It specializes in targeting EGFR mutations that are more resistant to first-generation TKIs like erlotinib and gefitinib, and it is able to treat cases of NSCLC caused by rare mutations. Like neratinib, it binds to cysteine 797 within the EGFR protein via a 1,4 Michael addition, thereby blocking phosphorylation and inhibiting pathway activation.43,44 Due to its success in NSCLC and its targeting potential for HER2, afatinib has begun to be investigated for metastatic breast cancer. Neratinib, lapatinib, and afatinib are all dual HER2 and EGFR inhibitors that often carry higher toxicity for the patient due to the lack of specificity in EGFR inhibition. Although HER2 is overexpressed in tumor cells and its targeting is warranted and leaves normal cells relatively unharmed, the targeting of EGFR has yielded significant side effects. Tucatinib has recently emerged from the same company that initially put forth brentuximab vedotin, Seattle Genetics.45 Tucatinib is revolutionary within the field of metastatic breast cancer in that it is the first drug with HER2 selectivity (it does not bind to EGFR and thereby leaves most normal cells unaffected) and an ability to cross the blood-brain barrier (BBB). The importance of the latter quality cannot be overstated; as breast cancer metastasizes, there is significant potential for it to reach the brain and cause brain tumors.46,47,48 Neratinib, lapatinib, and afatinib are all too large (or insufficiently hydrophobic) to cross the BBB, and thereby they cannot treat tumors in the brain.
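The ATP competition described above has a simple kinetic picture: a competitive inhibitor raises the kinase's apparent Km for ATP without changing Vmax, so the phosphorylation rate drops even when ATP is plentiful. A toy Michaelis-Menten sketch in Python with invented constants (not measured values for any real kinase or drug):

def phosphorylation_rate(atp, inhibitor, vmax=1.0, km=10.0, ki=0.05):
    """Michaelis-Menten rate with a competitive (ATP-site) inhibitor.

    The inhibitor scales the apparent Km by (1 + [I]/Ki); all
    constants here are invented for illustration.
    """
    apparent_km = km * (1 + inhibitor / ki)
    return vmax * atp / (apparent_km + atp)

print(phosphorylation_rate(atp=100.0, inhibitor=0.0))  # ~0.91 of Vmax
print(phosphorylation_rate(atp=100.0, inhibitor=1.0))  # ~0.32, signaling throttled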
"TKIs are different than ADCs since their main mode of operation is intracellular; they sit on the inside of the cell such that the activation of the signal transduction pathway is inhibited."
Figure 5: The function of a tyrosine kinase receptor. Source: Wikimedia Commons
Figure 6: An example of a 1,4 Michael addition, the mechanism by which many covalent tyrosine kinase inhibitors bind their targets. Source: Wikimedia Commons
Figure 7: The structure of tucatinib. Source: Wikimedia Commons
"Extraordinary discoveries have been made, namely in the field of metastatic breast cancer, which foster the hope that remission is possible."
TUCATINIB AS A THERAPEUTIC AGENT
Due to its potency and selectivity, tucatinib is a very exciting breakthrough for the field of metastatic breast cancer treatments. In addition to being independently effective, there have been indications of increased therapeutic efficacy when it is combined with other mAbs and chemotherapeutic agents, as indicated by Seattle Genetics’ recent HER2CLIMB trial, which investigated the best possible combinations. Tucatinib was combined with both trastuzumab (an mAb) and capecitabine (a chemotherapeutic agent) and given to one set of patients who had already been treated with existing medications, including trastuzumab, pertuzumab, and T-DM1. A second set of patients received the same trastuzumab and capecitabine combination but with a placebo in place of tucatinib. These patients had become increasingly resistant to their prior medications and agreed to participate in the clinical trial with the hope that their medication-resistant tumors might be reduced by a tucatinib combination. The results of the study showed an impressive increase in progression-free survival and overall survival for patients treated with tucatinib. Progression-free survival at 1 year with the tucatinib combination was 33%, as compared to the placebo arm’s 12%. Overall survival at 2 years with the tucatinib combination was 45%, as compared to the placebo arm’s 27%. The most stunning discovery was that progression-free survival for patients with brain metastases at 1 year with the tucatinib combination was 25%, as compared to the placebo arm’s 0% – a testament to the power of tucatinib’s BBB-crossing capability.49
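Restating the trial percentages quoted above as absolute differences makes the comparison easier to see; the following is simple arithmetic on the article’s numbers, nothing more.

# Absolute benefit implied by the HER2CLIMB percentages quoted above.
endpoints = {
    "Progression-free survival at 1 year": (0.33, 0.12),
    "Overall survival at 2 years": (0.45, 0.27),
    "PFS at 1 year, brain metastases": (0.25, 0.00),
}
for name, (tucatinib_arm, placebo_arm) in endpoints.items():
    print(f"{name}: +{tucatinib_arm - placebo_arm:.0%} absolute difference")
# -> +21%, +18%, and +25% in favor of the tucatinib combination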
CONCLUSION
Cancer is one of the most devastating
illnesses to befall patients, and in the past, it has frequently been synonymous with a death sentence. However, researchers and scientists the world over have refused to accept this verdict and have accordingly funneled countless hours and dollars into the development of therapeutic agents that serve as defenses for the patient. Extraordinary discoveries have been made, namely in the field of metastatic breast cancer, which foster the hope that remission is possible. Breakthroughs such as T-DM1, lapatinib, and tucatinib have given patients a chance at survival, and continued efforts within this space have given scientists and physicians optimism.

References

[1] Mukherjee, Siddhartha. The Emperor of All Maladies: A Biography of Cancer. Gale, Cengage Learning, 2012.
[2] Cuzick, Jack. “Future Possibilities in the Prevention of Breast Cancer: Breast Cancer Prevention Trials.” Breast Cancer Research, vol. 2, no. 4, 2000, doi:10.1186/bcr66.
[3] Petersen, M., & Fieler, V. (2000). Breast Cancer. The American Journal of Nursing, 100(4), 9-12. doi:10.2307/3521928
[4] Sainsbury, J., Anderson, T., Morgan, D., & Dixon, J. (1994). Breast Cancer. BMJ: British Medical Journal, 309(6962), 1150-1153.
[5] Frank, R., & Parsons, G. (2013). How Cancer Grows: The Basis of Cancer Treatments. In Fighting Cancer with Knowledge and Hope: A Guide for Patients, Families, and Health Care Providers, Second Edition (pp. 125-136). New Haven; London: Yale University Press.
[6] Lakhani, Sr. “Molecular Pathology of Breast Cancer.” Breast Cancer Research, vol. 4, no. S1, 2002, doi:10.1186/bcr464.
[7] Esteller, M. “Epigenetics of Breast Cancer.” Breast Cancer Research, vol. 9, no. S1, 2007, doi:10.1186/bcr1704.
[8] Cuzick, Jack. “Assessing Risk for Breast Cancer.” Breast Cancer Research, vol. 10, no. S4, 2008, doi:10.1186/bcr2173.
[9] Ragaz, Joseph. “Radiation Impact in Breast Cancer.” Breast Cancer Research, vol. 11, no. S3, 2009, doi:10.1186/bcr2433.
[10] O'Shaughnessy, Joyce A. “Lethal Breast Cancer.” Clinical Breast Cancer, vol. 10, 2010, doi:10.3816/cbc.2010.s.001.
[11] Weiss, N., & Tarone, R. (2007). Breast Cancer Trends. Epidemiology, 18(2), 284-285.
[12] Barthelmes, L., Davidson, L., Gaffney, C., & Gateley, C. (2005). Pregnancy And Breast Cancer. BMJ: British Medical Journal, 330(7504), 1375-1378.
[13] Morrow, M., & Gradishar, W. (2002). Recent Developments: Breast Cancer. BMJ: British Medical Journal, 324(7334), 410-414.
[14] Palmieri, C., Fishpool, S., & Dickinson, H. (2000). Breast Cancer Screening. BMJ: British Medical Journal, 321(7260), 567-568.
[15] Phelps, J. (2005). Headliners: Breast Cancer. Environmental Health Perspectives, 113(5), A307-A307.
[16] Anderson, E., Wisdom, C., Wisdom, D., & Swords, E. (2001). Breast Cancer Response. The American Journal of Nursing, 101(8), 13-14.
[17] Breast Cancer Screening Controversy. (2002). Reproductive Health Matters, 10(19), 210-210.
[18] Detecting Breast Cancer Early. (2012). The Science Teacher, 79(2), 23-24.
[19] Johnson, K., Dixon, J., & Steel, C. (2001). Risk Factors For Breast Cancer. BMJ: British Medical Journal, 322(7282), 365-365.
[20] Zipser, Birgit, and Carol Schley. “Generating Monoclonal Antibodies.” Neuronal Factors, 2019, pp. 105–110., doi:10.1201/9780429277634-5.
[21] Leung, Wing-Yin, et al. “Combining Lapatinib and Pertuzumab to Overcome Lapatinib Resistance Due to NRG1-Mediated Signalling in HER2-Amplified Breast Cancer.” Oncotarget, vol. 6, no. 8, 2015, doi:10.18632/oncotarget.3296.
[22] Chalouni, Cécile, and Sophia Doll. “Fate of Antibody-Drug Conjugates in Cancer Cells.” Journal of Experimental & Clinical Cancer Research, vol. 37, no. 1, 2018, doi:10.1186/s13046-017-0667-1.
[23] Rinnerthaler, Gabriel, et al. “HER2 Directed Antibody-Drug-Conjugates beyond T-DM1 in Breast Cancer.” International Journal of Molecular Sciences, vol. 20, no. 5, 2019, p. 1115., doi:10.3390/ijms20051115.
[24] Jones, Susan Dana, and Wayne A Marasco. “Antibodies for Targeted Gene Therapy: Extracellular Gene Targeting and Intracellular Expression.” Advanced Drug Delivery Reviews, vol. 31, no. 1-2, 1998, pp. 153–170., doi:10.1016/s0169-409x(97)00099-9.
[25] Hong, Nanfang, and Jianmin Fang. “Drug-Detached Naked Antibody Impairs ADC Efficacy.” ADC Review / Journal of Antibody-Drug Conjugates, vol. 5, 2017, doi:10.14229/jadc.2016.09.04.001.
[26] Keri, Ruth. “Faculty of 1000 Evaluation for A Collection of Breast Cancer Cell Lines for the Study of Functionally Distinct Cancer Subtypes.” F1000 - Post-Publication Peer Review of the Biomedical Literature, 2006, doi:10.3410/f.1057706.509616.
[27] Nonagase, Yoshikane, et al. “Heregulin-Expressing HER2-Positive Breast and Gastric Cancer Exhibited Heterogeneous Susceptibility to the Anti-HER2 Agents Lapatinib, Trastuzumab and T-DM1.” Oncotarget, vol. 7, no. 51, 2016, doi:10.18632/oncotarget.12743.
[28] Diermeier-Daucher, Simone, et al. “Modular Anti-EGFR and Anti-Her2 Targeting of SK-BR-3 and BT474 Breast Cancer Cell Lines in the Presence of ErbB Receptor-Specific Growth Factors.” Cytometry Part A, 2011, doi:10.1002/cyto.a.21107.
[29] Askoxylakis, Vasileios, et al. “Preclinical Efficacy of Ado-Trastuzumab Emtansine in the Brain Microenvironment.” JNCI: Journal of the National Cancer Institute, vol. 108, no. 2, 2015, doi:10.1093/jnci/djv313.
[30] Erickson, Hans K., et al. “The Effect of Different Linkers on Target Cell Catabolism and Pharmacokinetics/Pharmacodynamics of Trastuzumab Maytansinoid Conjugates.” Molecular Cancer Therapeutics, vol. 11, no. 5, 2012, pp. 1133–1142., doi:10.1158/1535-7163.mct-11-0727.
[31] Baselga, J. “Mechanism of Action of Trastuzumab and Scientific Update.” Seminars in Oncology, vol. 28, 2001, pp. 4–11., doi:10.1016/s0093-7754(01)90276-3.
[32] Bajorath, Jürgen. “Faculty of 1000 Evaluation for The Target Landscape of Clinical Kinase Drugs.” F1000 - Post-Publication Peer Review of the Biomedical Literature, 2017, doi:10.3410/f.732194300.793540505.
[33] Novotny, Chris J, et al. “Overcoming Resistance to HER2 Inhibitors through State-Specific Kinase Binding.” Nature Chemical Biology, vol. 12, no. 11, 2016, pp. 923–930., doi:10.1038/nchembio.2171.
[34] Jeong, Jaekwang, et al. “HER2 Signaling Regulates HER2 Localization and Membrane Retention.” PLoS One, vol. 12, no. 4, 2017, doi:10.1371/journal.pone.0174849.
[35] Austin, Cary D., et al. “Endocytosis and Sorting of ErbB2 and the Site of Action of Cancer Therapeutics Trastuzumab and Geldanamycin.” Molecular Biology of the Cell, vol. 15, no. 12, 2004, pp. 5268–5282., doi:10.1091/mbc.e04-07-0591.
[36] Askoxylakis, Vasileios, et al. “Dual Endothelin Receptor Inhibition Enhances T-DM1 Efficacy in Brain Metastases from HER2-Positive Breast Cancer.” Npj Breast Cancer, vol. 5, no. 1, 2019, doi:10.1038/s41523-018-0100-8.
[37] Austin, Cary D., et al. “Endocytosis and Sorting of ErbB2 and the Site of Action of Cancer Therapeutics Trastuzumab and Geldanamycin.” Molecular Biology of the Cell, vol. 15, no. 12, 2004, pp. 5268–5282., doi:10.1091/mbc.e04-07-0591.
[38] Zhang, Yingqiu, et al. “Neratinib Induces ErbB2 Ubiquitylation and Endocytic Degradation via HSP90 Dissociation in Breast Cancer Cells.” Cancer Letters, vol. 382, no. 2, 2016, pp. 176–185., doi:10.1016/j.canlet.2016.08.026.
[39] Li, John Y., et al. “A Biparatopic HER2-Targeting Antibody-Drug Conjugate Induces Tumor Regression in Primary Models Refractory to or Ineligible for HER2-Targeted Therapy.” Cancer Cell, vol. 29, no. 1, 2016, pp. 117–129., doi:10.1016/j.ccell.2015.12.008.
[40] “Biology of the Neuregulin/ErbB Signaling Network.” The Neuregulin-I/ErbB Signaling System in Development and Disease, Advances in Anatomy Embryology and Cell Biology, pp. 3–46., doi:10.1007/978-3-540-37107-6_2.
[41] Moasser, M M. “The Oncogene HER2: Its Signaling and Transforming Functions and Its Role in Human Cancer Pathogenesis.” Oncogene, vol. 26, no. 45, 2007, pp. 6469–6487., doi:10.1038/sj.onc.1210477.
[42] Munnink, Thijs H. Oude, et al. “Lapatinib and 17-AAG Reduce 89Zr-Trastuzumab-F(ab′)2 Uptake in SKBR3 Tumor Xenografts.” Molecular Pharmaceutics, vol. 9, no. 11, 2012, pp. 2995–3002., doi:10.1021/mp3002182.
[43] Okita, Riki, et al. “Lapatinib Enhances Trastuzumab-Mediated Antibody-Dependent Cellular Cytotoxicity via Upregulation of HER2 in Malignant Mesothelioma Cells.” Oncology Reports, vol. 34, no. 6, 2015, pp. 2864–2870., doi:10.3892/or.2015.4314.
[44] Tebbutt, Niall, et al. “Targeting the ERBB Family in Cancer: Couples Therapy.” Nature Reviews Cancer, vol. 13, no. 9, 2013, pp. 663–673., doi:10.1038/nrc3559.
[45] Murthy, Rashmi, et al. “Tucatinib with Capecitabine and Trastuzumab in Advanced HER2-Positive Metastatic Breast Cancer with and without Brain Metastases: a Non-Randomised, Open-Label, Phase 1b Study.” The Lancet Oncology, vol. 19, no. 7, 2018, pp. 880–888., doi:10.1016/s1470-2045(18)30256-0.
[46] Azim, Hamdy A, and Hatem A Azim Jr. “Systemic Treatment of Brain Metastases in HER2-Positive Breast Cancer: Current Status and Future Directions.” Future Oncology, vol. 8, no. 2, 2012, pp. 135–144., doi:10.2217/fon.11.149.
[47] Do, John, et al. “Ex Vivo Evans Blue Assessment of the Blood Brain Barrier in Three Breast Cancer Brain Metastasis Models.” Breast Cancer Research and Treatment, vol. 144, no. 1, 2014, pp. 93–101., doi:10.1007/s10549-014-2854-5.
[48] Arvanitis, Costas D., et al. “The Blood–Brain Barrier and Blood–Tumour Barrier in Brain Tumours and Metastases.” Nature Reviews Cancer, 2019, doi:10.1038/s41568-019-0205-x.
[49] Murthy, Rashmi, et al. “Tucatinib, Trastuzumab, and Capecitabine for HER2-Positive Metastatic Breast Cancer.” New England Journal of Medicine, 11 Dec. 2019, doi:10.1056/NEJMoa1914609.
CYSTIC FIBROSIS
The Evolution of Cystic Fibrosis Therapy – A Triumph of Modern Medicine
BY SAM NEFF
A MEDICAL MIRACLE
On October 21st, 2019, the FDA approved Vertex Pharmaceuticals’ drug Trikafta®, a landmark medicine for the treatment of cystic fibrosis (CF). The drug – frequently referred to as the ‘triple combo’ – is a cocktail of three small molecules that work in concert to resuscitate the CFTR protein, which is crippled in CF patients. The previous CF drugs Ivacaftor®, Symdeko®, and Orkambi® (also developed by Vertex) were approved for only half the patient population in total, whereas Trikafta® treats 90% of patients. Clinical trials preceding approval of the drug reported exceptional results. Patients treated with Trikafta® exhibited a dramatic increase in lung function compared to patients receiving a placebo (13.8% increase) or even the other CF drug Symdeko® (10% increase). And of greater importance, the increase in lung function remained stable for most patients in the months (and now years) following the onset of treatment.40 To translate, these numbers mean reduced hospitalization, a less precipitous decline in health, and even relief from some of the extra-pulmonary symptoms of CF like stunted weight gain and heightened risk of diabetes.30 In short, the development of Trikafta® is a remarkable story – a new lease on life for patients who until now were resigned to a fate of steadily declining health and an average life expectancy of less than five decades.
THE CAUSE OF CF
Cover Image: Ribbon structure of the protein CFTR. When both alleles of the CFTR gene are mutated to be inactive, the affected individual is afflicted with cystic fibrosis (CF). Source: Wikimedia Commons

Figure 1: (A) Gram-positive and gram-negative bacteria differ in the composition of their outer barriers. Gram-positive bacteria possess a peptidoglycan cell wall layer atop a single plasma membrane. It is the thick peptidoglycan layer that allows gram-positive bacteria to be gram-stained (hence the designation gram-positive), appearing deep violet when colored with the dye crystal violet. Gram-negative bacteria possess an inner membrane and an outer membrane. Sandwiched in between is the periplasmic space. Gram-negative bacteria do not possess a thick peptidoglycan layer, and thus do not attain a violet color when stained. (B) The gram-positive bacterium S. aureus (purple) and the gram-negative bacterium E. coli (pink) in culture. Source: Wikimedia Commons

Let’s take a step back and talk about the science of CF. CF is the consequence of a mutation in the gene that codes for CFTR – the cystic fibrosis transmembrane conductance regulator – named as such upon its discovery in 1989. To acquire the disease, which is autosomal recessive in nature, patients must possess two mutant copies of CFTR. Wild type CFTR functions as an anion channel, shuttling chloride and bicarbonate ions across the cell membrane and depositing them outside the cell. This passage of ions is essential to the integrity of the airway surface liquid (ASL) – a fluid coating the surface of lung epithelial cells. The ASL provides two essential functions: the mucus it contains entraps inhaled particles, but at the same time, is fluid enough that the cilia (finger-like projections on the surface of lung epithelial cells) are able to brush the mucus and trapped particles out of the lung – a process referred to as muco-ciliary clearance.5 In CF patients, the volume of the airway surface liquid is reduced. Typically, excretion of salt from lung epithelial cells drives water from underlying blood vessels across and between
the cells to the airway surface. But in CF patients this is not the case, so the ASL is dehydrated, its normally watery fluid becoming excessively viscous.5 The result is ciliary dysfunction. Imagine running your fingers through a tub of water, then switching to a tub of molasses – the latter task is quite a bit more difficult. The ASL of the CF lung poses essentially the same problem. The cilia can’t brush through the viscous surface fluid as efficiently – causing entrapped foreign particles to linger and mucus to plug up the airways. In addition, the lack of bicarbonate transport acidifies the ASL – think of the anions as a sponge, soaking up protons (H+) at the airway surface. Their absence means that airway acidity is increased.31 Blockage of the airways by mucus plugging and increased airway surface acidity are not the only problems for CF patients. Both effects in aggregate – the high viscosity and acidity of mucus – provide an optimal environment for the growth of pathogenic bacteria. Why is this the case? For one, the impairment of muco-ciliary clearance prevents the removal of foreign substances from the lungs. Also, mucus plugs are particularly fertile ground for bacterial growth, as they trap not only bacteria, but nutrients like sugar, amino acids, and metal ions – all good sources of fuel for bacteria. CF is associated with stark immune dysfunction, both a cause and a consequence of high bacterial load. The acidity of the airway surface impairs the function of certain immune proteins that fight off bacteria.31 And the presence of bacteria
further contributes to immune dysfunction – common CF pathogens like the gram-negative bacterium Pseudomonas aeruginosa and the gram-positive Staphylococcus aureus recruit immune cells (neutrophils) to the lungs, which produce an enzyme called neutrophil elastase [Figure 1]. Its function is to degrade bacterial proteins, but ineffective in this task, it turns on the human body itself – breaking down the collagen/elastin framework of the lungs. Unsurprisingly, this structural breakdown diminishes lung function as well.23
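The “sponge” image above can be made quantitative. For a bicarbonate-buffered fluid, the Henderson-Hasselbalch relation ties pH to the bicarbonate available; the sketch below is a rough illustration (mine, not the article’s), with idealized blood-like numbers standing in for the harder-to-measure ASL.

import math

# pH of a bicarbonate/CO2 buffer: pH = pKa + log10([HCO3-] / (s * pCO2)).
# pKa ~= 6.1 and s ~= 0.03 mM per mmHg are standard physiological constants;
# the concentrations below are hypothetical placeholders.
def buffer_pH(hco3_mM, pco2_mmHg, pKa=6.1, s=0.03):
    return pKa + math.log10(hco3_mM / (s * pco2_mmHg))

print(buffer_pH(24, 40))  # ~7.4 with a normal bicarbonate supply
print(buffer_pH(12, 40))  # ~7.1: halving bicarbonate costs log10(2) ~= 0.3 pH units

However idealized, this captures the direction of the effect: with less bicarbonate secreted into the ASL, there is less base to soak up protons, and the surface fluid acidifies.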
BEYOND THE LUNGS
Recently, the Journal of Cystic Fibrosis devoted an entire issue to the study of CF endocrinology. CF-related diabetes (CFRD) was the topic of several articles. As a form of diabetes, CFRD falls into a class of its own – its sufferers are both insufficient in insulin production due to pancreatic damage (a hallmark of type 1 diabetes) and insulin resistant (a hallmark of type 2 diabetes). Patients who develop the disease consistently exhibit less favorable clinical outcomes – living shorter lives with lower lung function than patients without CFRD. Why certain patients develop it, and how its development may be halted, are unclear (although there is evidence that Trikafta® may reduce the incidence of CFRD).12,20 Clearly, the clinical effects of a defective CFTR protein do not stop at the lungs. CFTR is present in the cells lining the exocrine ducts of the pancreas, from which digestive enzymes are secreted. Not only do many patients, especially older ones, experience CFRD, but the majority of CF patients develop a condition called exocrine pancreatic insufficiency early in life. This means that they need to ingest supplemental digestive enzymes with every meal just to maintain weight. Another typical extra-pulmonary manifestation of CF is elevated sweat chloride. Just as CFTR in bronchial epithelial cells can’t pump chloride ions out, mutations in CFTR lead to an inability to remove chloride from sweat.26 Salty sweat is such a hallmark feature of CF that the ‘sweat test’ has long been the standard for diagnosis. 90% of males with CF are infertile, owing to congenital bilateral absence of the vas deferens (the tube which carries sperm from the testicles to the urethra). Certain studies suggest that some male patients have hypogonadal defects as well – deficient production of sex hormones by the gonads (primary hypogonadism) or deficient signaling to the gonads to produce sex hormones (secondary hypogonadism).39 Female CF patients are known to see a more rapid decline in health and a greater incidence of infection than males after puberty. It is thought that estrogen, which has receptors in the lung, is responsible. Yet the reason for the poorer clinical outcomes is unclear.14 Sex hormone deficiency is tied to another endocrinological defect – one of the many functions of estradiol (an estrogen) is the stimulation of bone growth. Given that estradiol is a derivative of testosterone, some male patients may develop bone disease in part as a consequence of low testosterone levels. But CF bone disease owes to multiple factors – it has long been associated with malabsorption of vitamin D and calcium due to exocrine pancreatic insufficiency, and recent studies point to the heavy use of glucocorticoids to treat lung inflammation as a causative factor.25 These drugs have a high affinity for the cortisol receptor – mimicking the effects of cortisol, the ‘stress hormone’, to induce release of amino acids from bone as well as sugar from liver and muscle tissue into the blood. Not only does this accelerate bone deterioration, but it may also hasten the onset of CFRD.27 One reason for the increased focus on the extra-pulmonary symptoms of CF is the fact that CF lung disease, always the driver of mortality in CF patients, has been softened by Trikafta® and other CF modulators such as Symdeko® and Orkambi®. The clinical paradigm has shifted – just as was the case with the emergence of insulin as a treatment for diabetics, the advent of modulator therapy has solved one grave problem for CF patients and opened the door to numerous others. But this is no occasion for regret – it’s not that these problems didn’t exist before, but rather that most patients didn’t live long enough to experience them.
HISTORY OF CF CLINICAL CARE

From our perch atop this modern summit in CF care, we look back on a long trail of medical advances. How did we get here? The history of cystic fibrosis goes back as far as the Middle Ages, when written manuscripts provide unofficial diagnoses of the disease in infants. Owing to a lack of medical understanding, the disease was often seen as a consequence of witchcraft well into the Modern Era [from a Spanish folktale of the 17th century: “a child that taste[s] salty when kissed will soon die”].26 Some suspect that Frédéric Chopin, the famed Romantic composer, had a mild case of the disease. He frequently suffered respiratory infections, reported bouts of hemoptysis (coughing up blood), and was underweight throughout his life.24 Chopin died at 39 years old – an extraordinarily productive lifetime despite its brevity. The first official reference to the disease as cystic fibrosis did not come until 1938, when the pathologist Dorothy Andersen termed it ‘cystic fibrosis of the pancreas’ based on the autopsy results of malnourished children. Previously, it had been termed ‘mucoviscidosis’ for the thick mucus that obstructs the lungs.6 By 1963, sweat testing had become the gold standard for CF diagnosis. That year, a panel of CF experts published a series of guidelines for the treatment of the disease.8 The rate of diagnosis and the quality of CF care in the United States have progressed significantly since this first set of guidelines. Over the past century, CF patient centers have sprung up around the country. Likewise, centers of research (such as the Geisel School of Medicine at Dartmouth) have devoted serious attention to the disease, and significant efforts have been undertaken to standardize CF care. And over the last decade, the CF community has witnessed the development of highly effective CF modulator therapy – including Trikafta®, a product of Vertex Pharmaceuticals, which has worked closely with the CF Foundation (an organization made up of clinicians, researchers, and passionate family members of CF patients). As a consequence of these developments, the lifespan of CF patients has risen markedly over the past 80 years [Figure 2].

Figure 2: Owing to the various innovations in CF care discussed in the article – antibiotics, chest physiotherapy, and now modulator therapy – the life expectancy of CF patients has risen markedly over the past 80 years. Source: The image was created in Excel and reflects the graph of CF life expectancy from the following source: https://www.erswhitebook.org/chapters/cystic-fibrosis/

One might ask – why has the disease remained in the human population for so long? If it is so deleterious that patients have historically died before reaching reproductive age (and recall that most males with CF are infertile), to what does the mutant CFTR gene owe its longevity, especially in the Caucasian population? CF is, after all, the most common fatal genetic disease among Caucasians in the United States. Some researchers suggest that the persistence of CF owes to the ‘heterozygote advantage’ that it confers. Individuals with one copy of the mutant gene are asymptomatic for CF (although recent studies suggest that they do present certain symptoms, like mild gastrointestinal dysfunction)17, but seem to possess some degree of resistance to cholera.11 The operative pathogen, Vibrio cholerae, acts by stimulating production of intracellular cAMP. The working CFTR protein is normally activated by cAMP, but V. cholerae overstimulates it, causing excessive flow of water out of intestinal epithelial cells (and severe dehydration). Although controversial, the idea is that individuals such as CF carriers who possess lower levels of cell surface CFTR are less susceptible to this dumping of water into the intestinal lumen and experience less severe symptoms – hence the heterozygote advantage. By a different mechanism, researchers have found that CF carriers may also be protected from typhoid fever.29 Given that these two infectious diseases have historically been quite deadly, the heterozygote advantage for CF may be significant.
The first treatment developed for CF patients involved physical techniques for airway clearance, developed in the 1950s. Early physiotherapy involved a hands-on massage of the chest and back in an attempt to clear out mucus – an arduous task for both the patient and their caretaker(s). This form of physiotherapy is now largely obsolete in the United States, replaced by medical devices that perform the same task. The Vest®, for example, is worn as its name suggests – encasing the patient in an inflatable suit that vibrates rapidly to shake up mucus, thereby improving muco-ciliary clearance of bacteria. Another common element of CF care is inhaled hypertonic saline, often administered in concert with chest physical therapy [Figure 3]. The idea behind inhaling salty water is that some of the salt will enter the lungs, increase the osmolarity (salt concentration) of the airway surface liquid, and drive water from underlying blood vessels into the viscous mucus. In the mid-2000s, doctors in Australia reported that CF patients who surfed tended to have healthier lungs, presumably because of their high exposure to saltwater. This finding lent credence to the use of hypertonic saline as a therapy.9 That being said, certain studies suggest that it does little to improve lung function.36 Recently, studies that test the
efficacy of removing tasks like chest PT and inhaled therapies from the daily CF treatment regimen have become fashionable.10 A consequence of the new modulator therapies, these studies may render current therapies like hypertonic saline obsolete. The story of the surfers raises another question about CF care – is exercise itself an effective tool for mitigating the symptoms of CF? Aerobic exercise (like running or cross-country skiing) is known to increase VO2 max – the maximum rate of oxygen use during exercise – and improve lung function. Strength training may also play a role. Beyond its general positive effects on well-being (including physical and psychological changes), it strengthens the muscles in the upper body that assist breathing (muscles of the core, back, and chest) and improves bone strength – both changes that stand to benefit CF patients. Studies of exercise in CF have found mixed results, however, owing in large part to the fact that the number of CF patients willing and able to undergo a consistent, monitored exercise regime is small. For people with low lung function, intense physical exercise can be a challenge. But this is likely to change with the advent of effective modulator therapy, given that many patients are seeing marked improvements in lung health. To prolong the life of CF patients, treatment of lung infection is crucial. Numerous studies have shown a pattern of early colonization by the gram-positive bacterium Staphylococcus aureus alongside a variety of other organisms, including anaerobic bacteria and several fungal species. Later in life, the gram-negative bacterium Pseudomonas aeruginosa tends to become dominant. This trend of declining species diversity is associated with deteriorating lung health.41 Many patients are co-infected with P. aeruginosa and S. aureus: a predicament associated with worsened health compared to patients who culture either species alone. Given their preeminence, microbiologists studying CF have focused their efforts on the development of therapies to treat these two pathogens. But other species, like the vicious gram-negative bacteria of the genus Burkholderia, are also associated with very poor patient outcomes. Infection with Burkholderia typically amounts to a death sentence – an outbreak at Boston Children’s Hospital in the late ’90s resulted in the deaths of over 40 patients.28 Anti-pseudomonal and anti-staphylococcal therapies were first discovered in the 1950s and 60s, and more effective therapies have emerged over time, resulting in a significant boost to patient life expectancy. But an effective treatment for B. cepacia has yet to emerge.
Figure 3: This is a nebulizer device used for hypertonic saline therapy. It can also be used to administer Pulmozyme (see sidebar). Both therapies are often performed concurrently with chest PT. The device works by atomizing saline solution (stored in the bottom of the clear plastic nebulizer) into a saltwater mist of fine droplets. The mist is propelled up through the mouthpiece and inhaled into the lungs. Source: Wikimedia Commons
THE PATH TO HIGHLY EFFECTIVE MODULATOR THERAPY
In 1989, a team of scientists working in the U.S. and Canada, including the American geneticist Francis Collins, the Canadian biochemist Jack Riordan, and the Chinese-Canadian geneticist Lap-Chee Tsui, discovered the gene responsible for cystic fibrosis. In the decade and a half that followed, a race to sequence the entirety of the human genome would unfold. It has often been characterized as a competition between the publicly funded Human Genome Project (led by Collins at the helm of the NIH) and the private Celera Corporation headed by J. Craig Venter. While the NIH broke the genome into many tiny segments and shipped them off to research centers across the globe for sequencing, Venter and Celera aimed to sequence the whole genome on their own. Although the two teams differed in approach, the scientific techniques underlying their efforts were fundamentally similar – break many copies of the genome into tiny overlapping fragments, then sequence each fragment individually and piece together the whole sequence by looking at the regions of overlap. Both teams mapped the human genome independently, and by the conclusion of the Human Genome Project in 2003, the sequence of the human genome had been largely determined [Figure 4].
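The “piece together the overlaps” step described above is, at heart, an assembly algorithm, and a toy version fits in a few lines. The greedy merge below is a deliberately simplified illustration of the idea, not the actual shotgun-assembly software used by either team.

def overlap(a, b, min_len=3):
    # Length of the longest suffix of a that is also a prefix of b.
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def assemble(fragments):
    # Repeatedly merge the pair of fragments with the largest overlap.
    frags = list(fragments)
    while len(frags) > 1:
        k, a, b = max(
            (overlap(x, y), x, y) for x in frags for y in frags if x != y
        )
        if k == 0:
            break  # no remaining overlaps to exploit
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[k:])
    return frags[0]

print(assemble(["GGCTAT", "CTATGCC", "TGCCAA"]))  # -> GGCTATGCCAA

Real assemblers must also cope with sequencing errors, repeated regions, and billions of reads, which is what made the genome race a computational feat as much as a laboratory one.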
This elevated understanding of human genetics meant new opportunities for the treatment of CF. In 2000, the CF Foundation entered into a partnership with the small West Coast biotech company Aurora Biosciences. Aurora had developed a membrane voltage dye assay to detect CFTR activity, and of the many companies that Bob Beall (then president of the CF Foundation) reached out to, they were one of the few enthusiastic about entering the CF space. The problem was that developing a drug, especially one sophisticated enough to treat CF, is an expensive endeavor. Drug development entails screening millions of candidate molecules, assessing their bioavailability and toxicological properties, developing an efficient mechanism for chemical synthesis, and scaling up that chemical synthesis such that the drug can be produced (safely) at industrial scale. This is not to mention all the measures that must be taken to ensure strict compliance with regulatory guidelines – strict gowning procedures in the laboratory and drug production facility, rigorous document retention systems (to ensure compliance with government audits), and strict control of air flow, temperature, and humidity wherever drugs are handled. In short, Aurora needed funds. The CF Foundation, with its team of dedicated advocates (patients, family members, doctors, and scientists), was willing to provide them. For despite significant improvement in the lifespan of CF patients from mid-century on, no treatment yet existed that actually treated CF at its root cause. The scientists at Aurora, despite their ability to detect CFTR activity, knew little about cystic fibrosis as a disease – how CFTR dysfunction contributed to the broader physiological symptoms manifested by CF patients. An advisory committee of researchers – Bruce Stanton (currently head of Geisel’s lung biology
"Drug development entails screening of millions of candidate molecules, assessing their bioavailability and toxicological properties, developing an effecient mechanism for chemical synthesis, and scaling up that chemical synthesis such that the drug can be produced (safely) at industrial scale."
94
Figure 4: Both forms of genome sequencing, that used by Craig Venter and the Celera Corporation, and that utilized by Francis Collins and the NIH, involve breaking a cloned genome into many different overlapping fragments. Once sequenced, these fragments can be checked for overlaps, and these overlaps give the proper order of the DNA sequence (see how the overlapping fragments in the accompanying image reflect the sequence of the initial cloned genomes)
health, but everyone else would be left in the dark, witness to the immense success of a drug that would not be of benefit to them. Beall decided to move forward with the funding of the drug program. For one, its success would act as a proof of principal; that a foundation and a biotech company could collaborate successfully and develop drugs that might dramatically improve the lives of patients. Also, it stood the reason that the development of VX-770 would lay the groundwork for development of drugs to treat patients with ∆F508, and other mutations in CFTR.
Source: Wikimedia Commons
In 2001, Vertex Pharmaceuticals bought Aurora Biosciences. Aurora had made great strides in the development of its CF modulator, but it was still a small biotech company. With forays into the development of immune-suppressants and anti-viral therapies (for AIDS and hepatitis C), Vertex had amassed significant clinical trial experience and established a global network of contract research organizations (CRO’s), contract manufacturing organizations (CMO’s), and vendors (for laboratory and drug production equipment). But even in this new arrangement, the CF foundation remained a loyal companion. Throughout the development of CF modulators by Vertex – starting in 2001 and culminating with the approval of Orkambi® in 2015, Symdeko® in 2018, and Trikafta® in 2019 – the CF Foundation provided significant monetary support. Without this assistance, Vertex would not have sustained its efforts in the CF space.
"For one, [VX-770's] success would act as a proof of principal; that a foundation and a biotech company could collaborate successfully and develop drugs that might dramatically improve the lives of patients."
95
center), Melissa Ashlock, Bill Guggino, Jeff Wine, Kevin Foskett and Ray Frizzell – helped bridge this knowledge gap. At the helm of the CF drug discover team at Aurora, Paul Negelescu took the advice of the board and drove his team to develop more sophisticated assays for CFTR detection. Screening compounds yielded several promising drug candidates (one of which would become VX-770, or Ivacaftor, after Aurora was acquired by Vertex Pharmaceuticals). But upon investigating further, the scientists were met with a disconcerting result. VX-770 worked on fibroblasts expressing ∆F508 mutant CFTR at a temperature of 27°C, but at 37°C (normal body temperature), it was ineffective. What accounted for the drug’s ineffectiveness a t 3 7°C? T he ∆ F508 m utation causes CFTR to accumulate inside of the cell at physiological temperature – the protein never makes it to the cell membrane. However, at 27°C ∆F508 reaches the membrane but it’s ability to secrete chloride ions is defective. Scientists at Aurora learned that VX-770 increases the ability of ∆F508 to secrete chloride if it reaches the plasma membrane, but not if it is trapped inside the cell. An important implication of these results was that VX-770 would stimulate chloride secretion by CFTR mutations like G551D, which is present in the cell membrane, but has a defect in the ability to secrete chloride. These findings presented Bob Beall and the CF foundation with a moral dilemma – Aurora had found a drug candidate that would help CF patients with the G551D mutation, yet these patients comprised only 5% of the CF patient population. If drug development moved further, including very expensive clinical trials, these patients were likely to see significant improvement in
Initially a minor part of the Vertex drug development program, CF modulator therapy rose in prominence during the late 2000s (in part due to successful clinical trial results for ivacaftor, but also due to flagging prospects for the company’s hepatitis C drug Incivek®). The CF modulators fall into two classes: potentiators and correctors. Potentiators work to activate the mutant CFTR protein that makes it to the cell membrane. Correctors help misfolded mutant CFTR reach the membrane in the first place. Scientists at Vertex quickly found that a potentiator alone was not capable of rescuing CFTR function in most patients, given that 90% possess at least one copy of the common ∆F508 mutation, in which the protein does not make it to the cell surface. Nonetheless, the first modulator approved by the FDA was the potentiator VX-770. A trial for the roughly 4% of patients with the G551D gating mutation (in which the protein does make it to the cell surface but does not conduct chloride and bicarbonate well) found a greater than 10%
average improvement in lung function that was accompanied by a far lower frequency of hospitalization.1 Ivacaftor® provided a whole new life for patients who previously had only palliative therapy to treat the symptoms of CF, not a way to halt its relentless progression. But treating only 4% of CF patients was not a suitable stopping point for the company, nor for the broader community of CF researchers, doctors, and patients. In fact, many at Vertex felt more anxious than satisfied upon the approval of Ivacaftor®, given that the tantalizing promise of effective modulator therapy was still out of reach for the vast majority of individuals. Working closely with the CF Foundation (which has funded a significant proportion of Vertex’s research efforts in CF), the company developed several CFTR correctors. The first studies of a potentiator-corrector combination produced promising results for the ∆F508 homozygous CF population. Orkambi® and then Symdeko® improved lung function, though not quite as significantly as Ivacaftor® did for patients with the G551D mutation.32 By 2019, after years of dedicated research, the outlook had improved dramatically. Vertex had successfully built a triple combination therapy for CF – composed of the potentiator VX-770 (ivacaftor) and two correctors, VX-661 (tezacaftor) and VX-809 (lumacaftor). In a trial of ∆F508 homozygotes, the triple combo exhibited the same level of success as Ivacaftor® in the treatment of G551D mutant patients: an average improvement in lung function greater than 10% and a significant reduction in rates of hospitalization.13 Remarkably, when the combo was tested on patients with only one ∆F508 allele (the other being any rarer CF mutation besides G551D), the results were equally positive.19 After several years of clinical trials, and the swapping of VX-809 with VX-445 (elexacaftor) to minimize adverse drug-drug interactions, the triple combo (Trikafta®) was approved on October 21st, 2019 for patients 12 years and older. For the first time, the majority of CF patients (roughly 90%) had a treatment that targeted mutant CFTR directly and slowed the progression of the disease.
LOOKING TO THE FUTURE
The outlook for most CF patients is far brighter today than it was just several years ago. For adult patients whose lung health has yet to decline to the point where lung transplantation is necessary (around 20% lung function), the news of Trikafta’s approval came at just the right time; a solution to stave off the slow destruction of the lungs.
Once Trikafta® is approved for patients under 12 years of age, life expectancy for CF patients has the potential to be extended dramatically. As Carolyn Johnson put it in her recent article for the Washington Post: “Patients who were unsure about whether they should bother attending college because they had always known they would die young are now being told they should think about planning for retirement.”15 Some significant challenges still remain – 10% of patients are still unable to benefit from CF modulator therapy. They possess severe, class I mutations that cause the complete absence of CFTR protein production. In these cases, there is no misshapen protein to correct or potentiate. Several approaches have emerged to tackle this problem – from the stimulation of alternate chloride channels to the use of gene therapy.3 Scientists at Stanford University just published an article reporting that gene editing in airway stem cells corrects the mutant CFTR gene such that wild type CFTR is expressed after the cells differentiate.34 This study suggests that gene therapy for CF may not be far off. So what does it mean to find a ‘cure’ for CF? The modulators, although extremely effective, do not restore CFTR expression to normal levels. In part, this owes to suppression of modulator-rescued CFTR at the cell surface by bacterial virulence factors. A study conducted at Dartmouth’s Geisel School of Medicine showed that outer membrane vesicles (OMVs) from P. aeruginosa interface with lipid rafts on the surface of bronchial epithelial cells. Once docked at the epithelial cell surface, they deposit the virulence factor Cif into the cell interior. Cif reduces expression of CFTR at the cell surface, likely by causing it to be tagged for degradation by the cell’s own machinery.4 This means that even though gene therapy for CF patients may soon be a reality, it will not be the end of the road. It will be essential to address the CF lung microbiome – the network of bacteria, fungi, and viruses that interact in complicated ways with each other and the host. Research at the Geisel School of Medicine at Dartmouth has started to unravel some of the mysteries of CF lung disease. For example, the research team led by Dr. Gerry O’Connor, an investigator in The Dartmouth Institute for Health Care Delivery, added five years of life on average to CF patients across the country. The research by O’Connor, in collaboration with the National Quality Program for CF, encouraged patient centers across the country to publicly report clinical outcomes – eliminating discrepancies in care by allowing each center to make improvements
"By 2019, after years of dedicated research, the outlook has improved dramatically. Vertex had successfully built a triple combination therapy for CF - composed of the potentiator VX-770 (ivacaftor) and two correctors, VX-661 (tezacaftor) and VX-809 (lumacaftor)."
"CF therapy will advance in tandem with the development of gene therapy techniques. Advances in understanding the lung microbiome too will confer broad benefits to CF patients and people with other lung diseases."
in clinical areas where other centers performed better. The current director of the Lung Biology Program at Dartmouth, Dr. Bruce Stanton, leads a team in the investigation of P. aeruginosa lung infections. Stanton frequently travels across the country, meeting with the heads of other CF centers to evaluate their performance and suggest improvements to their standards of care. He sat on the scientific advisory board (SAB) for Vertex Pharmaceuticals when it acquired Aurora Biosciences. Folded into Vertex operations at the San Francisco research site, scientists from Aurora (headed by Dr. Paul Negulescu) were the group at Vertex most responsible for the efforts to develop the CF modulators.
In the lungs, P. aeruginosa forms biofilms – complex structures that increase antibiotic resistance. Biofilms are composed of bacterial cell communities that rest in a matrix of extracellular polymeric substances and fragments of DNA released by dead immune cells. It is well documented that bacterial biofilms resist antibiotics more tenaciously than independent (planktonic) bacterial cells. This is why it is so hard to eradicate P. aeruginosa and S. aureus from the lungs of CF patients.16,21,22 Proteomics and transcriptomics studies (examining what proteins are produced and what genes are expressed) of bacterial biofilms have yielded insight into the molecular mechanisms of biofilm formation and provided potential therapeutic routes.

Thus far, the focus of CF microbiome research has been the lung. Given that respiratory failure is the leading cause of death for CF patients, this emphasis is warranted, since studies have shown that although modulator drugs improve lung function they do not eliminate bacterial lung infections. However, as patients are living longer, the need to address the CF gut microbiome has increased – for gut dysbiosis (a disturbance to the composition of the normal bacterial community) impacts the gastro-intestinal, pancreatic, and liver symptoms of CF. Studies have found that CF liver disease (now the fourth leading cause of death for CF patients) is exacerbated by the molecular products of CF pathogens in the gut.2 And CF patients are at a 5-10x greater risk of developing colorectal cancer than the general population, a trend that is most likely influenced by an abnormal gut microbiome.33

Innovations in CF care intersect with broader developments in the realm of biological research. CF therapy will advance in tandem with the development of gene therapy techniques. Advances in understanding the lung microbiome too will confer broad benefits to CF patients and people with other lung diseases. The future is bright, but there are clearly many challenges remaining in the treatment of CF – helping the remaining non-∆F508 patients, dealing with extra-pulmonary symptoms, and finding a way to better manage CF lung infection. In keeping with the motto of the CF Foundation, the CF community – of researchers, doctors, patients and patient advocates, and the team at Vertex Pharmaceuticals – will keep searching for new treatments for the 10% who do not have effective drugs ‘until it’s done’. CF care has improved tremendously over the past eight decades, and it continues to evolve.

References

1. Accurso, F. J., Rowe, S. M., Clancy, J. P., Boyle, M. P., Dunitz, J. M., Durie, P. R., … Ramsey, B. W. (2010). Effect of VX-770 in Persons with Cystic Fibrosis and the G551D-CFTR Mutation. New England Journal of Medicine, 363(21), 1991–2003. https://doi.org/10.1056/NEJMoa0909825
2. Al Sinani, S., Al-Mulaabed, S., Al Naamani, K., & Sultan, R. (2019). Cystic Fibrosis Liver Disease: Know More. Oman Medical Journal, 34(6), 482–489. https://doi.org/10.5001/omj.2019.90
3. Amaral, M. D., & Beekman, J. M. (2019). Activating alternative chloride channels to treat CF: Friends or Foes? Journal of Cystic Fibrosis. https://doi.org/10.1016/j.jcf.2019.10.005
4. Barnaby, R., Koeppen, K., & Stanton, B. (2018). Cyclodextrins Reduce the Ability of Pseudomonas Outer Membrane Vesicles to Reduce CFTR Cl- Secretion. American Physiological Society.
5. Castellani, S., Di Gioia, S., di Toma, L., & Conese, M. (2018). Human Cellular Models for the Investigation of Lung Inflammation and Mucus Production in Cystic Fibrosis. Analytical Cellular Pathology, 2018, 1–15. https://doi.org/10.1155/2018/3839803
6. Clague, S. (2014). Dorothy Hansine Andersen. The Lancet Respiratory Medicine, 2(3), 184–185. https://doi.org/10.1016/S2213-2600(14)70057-8
7. Collins, F. S. (2019). Realizing the Dream of Molecularly Targeted Therapies for Cystic Fibrosis. New England Journal of Medicine. https://doi.org/10.1056/NEJMe1911602
8. Farrell, P. M., White, T. B., Derichs, N., Castellani, C., & Rosenstein, B. J. (2017). Cystic Fibrosis Diagnostic Challenges over 4 Decades: Historical Perspectives and Lessons Learned. The Journal of Pediatrics, 181, S16–S26. https://doi.org/10.1016/j.jpeds.2016.09.067
9. Fink, L. (2008, July 9). The Ocean Becomes a Pipeline to a Cure for Cystic Fibrosis.
10. Gifford, A. H., Mayer-Hamblett, N., Pearson, K., & Nichols, D. P. (2019). Answering the call to address cystic fibrosis treatment burden in the era of highly effective CFTR modulator therapy. Journal of Cystic Fibrosis. https://doi.org/10.1016/j.jcf.2019.11.007
11. Goodman, B. E., & Percy, W. H. (2005). CFTR in cystic fibrosis and cholera: From membrane transport to clinical practice. Advances in Physiology Education, 29(2), 75–82. https://doi.org/10.1152/advan.00035.2004
12. Granados, A., Chan, C. L., Ode, K. L., Moheet, A., Moran, A., & Holl, R. (2019). Cystic fibrosis related diabetes: Pathophysiology, screening and diagnosis. Journal of Cystic Fibrosis, 18, S3–S9. https://doi.org/10.1016/j.jcf.2019.08.016
13. Heijerman, H. G. M., McKone, E. F., Downey, D. G., Van Braeckel, E., Rowe, S. M., Tullis, E., … Majoor, C. (2019). Efficacy and safety of the elexacaftor plus tezacaftor plus ivacaftor combination regimen in people with cystic fibrosis homozygous for the F508del mutation: A double-blind, randomised, phase 3 trial. The Lancet. https://doi.org/10.1016/S0140-6736(19)32597-8
14. Hughan, K. S., Daley, T., Rayas, M. S., Kelly, A., & Roe, A. (2019). Female reproductive health in cystic fibrosis. Journal of Cystic Fibrosis, 18, S95–S104. https://doi.org/10.1016/j.jcf.2019.08.024
15. Johnson, C. (2019, October 31). Long-awaited cystic fibrosis drug could turn deadly disease into a manageable condition. The Washington Post.
16. Limoli, D. H., Whitfield, G. B., Kitao, T., Ivey, M. L., Davis, M. R., Grahl, N., … Goldberg, J. B. (2017). Pseudomonas aeruginosa Alginate Overproduction Promotes Coexistence with Staphylococcus aureus in a Model of Cystic Fibrosis Respiratory Infection. MBio, 8(2). https://doi.org/10.1128/mBio.00186-17
17. Lin, Y., et al. (2019, October 17). Defining the phenotypic signature of CFTR mutation carriers in the UK Biobank. Presented at the American Society of Human Genetics 2019 Annual Meeting, Houston, TX.
18. Magaret, A. S., Mayer-Hamblett, N., & VanDevanter, D. (2019). Expanding access to CFTR modulators for rare mutations: The utility of n-of-1 trials. Journal of Cystic Fibrosis, S1569199319309798. https://doi.org/10.1016/j.jcf.2019.11.011
19. Middleton, P. G., Mall, M. A., Dřevínek, P., Lands, L. C., McKone, E. F., Polineni, D., … Jain, R. (2019). Elexacaftor–Tezacaftor–Ivacaftor for Cystic Fibrosis with a Single Phe508del Allele. New England Journal of Medicine. https://doi.org/10.1056/NEJMoa1908639
20. Norris, A. W. (2019). Is Cystic Fibrosis–related Diabetes
Reversible? New Data on CFTR Potentiation and Insulin Secretion. American Journal of Respiratory and Critical Care Medicine, 199(3), 261–263. https://doi.org/10.1164/rccm.201808-1501ED
21. Orazi, G., & O'Toole, G. A. (2017). Pseudomonas aeruginosa Alters Staphylococcus aureus Sensitivity to Vancomycin in a Biofilm Model of Cystic Fibrosis Infection. MBio, 8(4). https://doi.org/10.1128/mBio.00873-17
22. Orazi, G., Ruoff, K. L., & O'Toole, G. A. (2019). Pseudomonas aeruginosa Increases the Sensitivity of Biofilm-Grown Staphylococcus aureus to Membrane-Targeting Antiseptics and Antibiotics. MBio, 10(4). https://doi.org/10.1128/mBio.01501-19
23. Oriano, M., Terranova, L., Sotgiu, G., Saderi, L., Bellofiore, A., Retucci, M., … Blasi, F. (2019). Evaluation of active neutrophil elastase in sputum of bronchiectasis and cystic fibrosis patients: A comparison among different techniques. Pulmonary Pharmacology & Therapeutics, 59, 101856. https://doi.org/10.1016/j.pupt.2019.101856
24. O'Shea, J. G. (1987). Was Frédéric Chopin's illness actually cystic fibrosis? The Medical Journal of Australia, 147(11–12), 586–589.
25. Putman, M. S., Anabtawi, A., Le, T., Tangpricha, V., & Sermet-Gaudelus, I. (2019). Cystic fibrosis bone disease treatment: Current knowledge and future directions. Journal of Cystic Fibrosis, 18, S56–S65. https://doi.org/10.1016/j.jcf.2019.08.017
26. Quinton, P. M. (2007). Cystic Fibrosis: Lessons from the Sweat Gland. Physiology, 22(3), 212–225. https://doi.org/10.1152/physiol.00041.2006
27. Rayas, M. S., Kelly, A., Hughan, K. S., Daley, T., & Zangen, D. (2019). Adrenal function in cystic fibrosis. Journal of Cystic Fibrosis, 18, S74–S81. https://doi.org/10.1016/j.jcf.2019.08.023
28. Roux, D., Weatherholt, M., Clark, B., Gadjeva, M., Renaud, D., Scott, D., … Yoder-Himes, D. R. (2017). Immune Recognition of the Epidemic Cystic Fibrosis Pathogen Burkholderia dolosa. Infection and Immunity, 85(6). https://doi.org/10.1128/IAI.00765-16
29. Science News Staff. (1998, May 6). A Silver Lining for Cystic Fibrosis? Science.
30. Sergeev, V., Chou, F. Y., Lam, G. Y., Hamilton, C. M., Wilcox, P. G., & Quon, B. S. (2019). The Extra-pulmonary Effects of CFTR Modulators in Cystic Fibrosis. Annals of the American Thoracic Society. https://doi.org/10.1513/AnnalsATS.201909-671CME
31. Simonin, J., Bille, E., Crambert, G., Noel, S., Dreano, E.,
Edwards, A., … Sermet-Gaudelus, I. (2019). Airway surface liquid acidification initiates host defense abnormalities in Cystic Fibrosis. Scientific Reports, 9(1). https://doi. org/10.1038/s41598-019-42751-4
ACKNOWLEDGMENTS
I'd like to give special thanks to Dr. Bruce Stanton, the head of Geisel's Lung Biology Lab, for helping to edit my paper and for sharing the story of the relationship between Aurora Biosciences, Vertex Pharmaceuticals, and the CF Foundation. Much of the story in the section titled 'The Path to Highly Effective Modulator Therapy', and the discussion of Gerry O'Connor's national standardization efforts, is based on a one-on-one interview with Dr. Stanton.
Human Immortality: When, How, and Why or Why Not
BY ZHANEL NUGMANOVA
INTRODUCTION
Francis Bacon, the brilliant English philosopher and statesman whose scientific method helped launch the Scientific Revolution in the early 17th century, defined true progress as the successful implementation of existing scientific principles to improve people's lives. Starting in the early 1950s, the rapid development of medicine-related technologies and foundational knowledge attracted remarkable public attention and was named the “biological revolution” of the age.1 Following Bacon's notion of progress, people over the last half-century have appreciated the medical industry's continuous advancements, achieved by adapting newly discovered technologies to the urgent health-care needs of society. Some of the recent biomedical breakthroughs that have disrupted previously established medical procedures target the most severe diseases humanity faces today. For instance, the introduction of cancer immunotherapies has saved millions of lives, slowing or reversing the course of the disease for many patients by harnessing the body's own immune system to fight tumors that were previously almost impossible to treat. Currently, two classes of cancer immunotherapeutics have proved beneficial. The first
one activates the natural functions of specific immune cells (for example, genetically engineering T-cells to carry chimeric antigen receptors) so that they identify and eliminate active cancer cells. The other blocks the signals by which cancer cells prevent the immune system from recognizing them.2 Current cancer research is now being driven toward more sophisticated and efficient diagnostics using biomarkers and genomics, which in the near future may allow the detection and analysis of circulating tumor DNA through a simple and affordable blood test.3 But new cancer treatments are not the only biomedical innovations of the last decade. Progress in genome editing using CRISPR-Cas technology has presented a possible solution for many currently incurable diseases. For example, germline editing, the heritable alteration of the human genome, could correct disease-causing mutations (like those underlying cystic fibrosis) so that they are not passed on to the next generation, while somatic engineering could treat severe non-heritable conditions (e.g., hypercholesterolemia).2
Cover Source: Pexels
"Some of the recent biomedical breakthroughs that have disrupted previously established medical procedures target the most severe diseases humanity faces today."
Figure 1: Francis Bacon, the English philosopher and statesman whose scientific method paved the way for countless advancements. Source: Wikimedia Commons
Figure 2: Anti-aging research has begun, but has yet to reach widespread implementation due to the socioeconomic and cultural implications involved. Source: Needpix
However, an abrupt integration of novel biomedical technologies into the accessible clinical sphere raises a myriad of issues in various fields, including medicine, economics, and ethics. One of the most problematic risks that medical innovations pose to our society is their affordability for the general public. Every year, American health care expenses account for approximately three trillion dollars, corresponding to more than 19% of US GDP.4 And even though improved efficiency from newly invented medical technologies might be expected to decrease health-industry costs, financial projections say the opposite: global health spending is projected to grow from $7.83 trillion in 2013 to $18.28 trillion in 2040.5
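To put that projection in perspective, here is a quick back-of-the-envelope check (my own arithmetic, not taken from the cited report) of the growth rate implied by the two figures quoted above.

```python
# Implied compound annual growth rate (CAGR) of global health spending,
# using only the two quoted figures: $7.83T in 2013 -> $18.28T in 2040.
def cagr(start: float, end: float, years: int) -> float:
    """Annual growth rate that turns `start` into `end` over `years` years."""
    return (end / start) ** (1 / years) - 1

rate = cagr(7.83, 18.28, 2040 - 2013)
print(f"implied growth: {rate:.1%} per year")  # roughly 3.2% per year
```

In other words, the projection amounts to health spending compounding at roughly 3% per year for nearly three decades.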
CURRENT TRENDS AND ATTITUDES TOWARD ANTI-AGING TECHNOLOGIES
These innovations are only the beginning of the biological revolution of our age. The most disruptive biomedical innovation of all, with the potential to reshape human activity from the world economy to global social justice, is already on the horizon: reversed aging. Staying youthful and healthy has always been one of the strongest and most elusive desires of humanity, since aging weakens the human body and debilitates its natural processes of defense, recovery, and fertility. Throughout history, people have had only two choices: to age or to die young. This may explain why people call it “inevitable” or even “natural” to lose their normal abilities one by one with time and then spend their last years and money merely postponing a painful death. This fate sounds harsh, and yet, surprisingly, the idea of infinite life in a vigorous and ever-healthy body induces more concerns and mass objections than inevitable senility does. On the one hand, such a response is expected, given potential social side-effects such as overpopulation and the possibility of dramatically widening the gap between rich and poor. On the other hand, anti-aging technologies exhibit endless potential to endow humanity with immortality, which could outweigh the possible drawbacks associated with their seemingly disruptive realization.
WHAT IS AGING? ACCUMULATION OF DAMAGE
One of the main concerns regarding infinite longevity can be linked to the ambiguity that surrounds the concept of aging. Although the word “aging” may sound familiar, when it comes to a precise, single definition of the term, people interpret it differently. Some associate it with the flow of time but hold no particular attitude toward it, while others relate the word to the natural process of becoming old. A statistical perspective defines aging as “the collection of changes that render human beings progressively more likely to die”.6 In contrast, the biological view of senescence says more about the origin of the phenomenon than about its consequences, defining senescence as “a gradual deterioration of physiological function with age, associated with the intrinsic, inevitable, and irreversible age-related process of accruing loss of viability and increase in vulnerability”.7 Therefore, rather than treating the aging process as something natural, it is crucial for opponents of extended human longevity to understand that it is the accumulation of bodily damage, and that such damage could, in principle, be repaired through a targeted scientific approach.
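The statistical definition above, people becoming “progressively more likely to die”, can be made concrete with a toy model. The sketch below uses the classic Gompertz approximation, in which adult mortality rises roughly exponentially with age; the model choice and its parameters are my own illustration, not figures computed by the cited sources.

```python
# Toy illustration of aging as a rising death rate: under the Gompertz
# approximation, adult mortality grows exponentially, doubling every ~8 years.
# The baseline rate `a` and the doubling time are illustrative, not fitted data.
import math

def gompertz_hazard(age: float, a: float = 1e-4, doubling_years: float = 8.0) -> float:
    """Approximate annual probability of death at a given age."""
    b = math.log(2) / doubling_years
    return a * math.exp(b * age)

for age in (30, 50, 70, 90):
    print(f"age {age}: ~{gompertz_hazard(age):.3f} annual risk of death")
```

In this statistical sense, an “anti-aging” intervention is anything that flattens or resets that curve, rather than merely treating its end-stage consequences.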
INTRODUCTION OF ANTI-AGING TECHNOLOGIES, CURRENT RESEARCH AND ADVANCEMENTS
Nowadays, despite the common skepticism around reversed-aging technologies, this futuristic field draws significant attention from many talented biologists all over the world. Anti-aging research currently focuses on nine essential hallmarks of biological aging: genomic instability caused by DNA damage; attrition of the protective telomeres (chromosomal endcaps); alterations to the epigenome that controls which genes are turned on and off; loss of healthy protein maintenance, known as proteostasis; deregulated nutrient sensing caused by metabolic changes; mitochondrial dysfunction; accumulation of senescent, zombie-like cells that inflame healthy cells; exhaustion of stem cells; and altered intercellular communication, including the production of inflammatory molecules.8
However, scientists are not the only ones interested in a “universal cure”; the world's biggest biotech investors are keeping a close eye on the field's rapid development. Bank of America has predicted the anti-aging market will balloon to $610 billion by 2025 from an estimated $110 billion currently.9 Such numbers reflect the growing market demand for anti-aging solutions, which has led to “big deals” between biotech startups and venture capital firms. In mid-August of 2019, the British anti-aging startup Juvenescence completed its $100 million Series B round of financing (a Series B round is the third stage of startup financing and the second stage of venture capital financing).10,11 The company has a fair share of competition, including two Boston-based biotech giants in the anti-aging sphere: Life Biosciences and Rejuvenate Bio. The former was founded in 2016 by the biomedical scientist David Sinclair and Tristan Edwards, raised $50 million in Series B, and is currently valued at approximately $500 million.10 With a portfolio of eight daughter companies, Life Biosciences focuses on various hallmarks of aging, including the most popular branches of age-reversing research today: chromosomal instability, cellular senescence, stem cell exhaustion, epigenetic alterations, and others. According to MarketWatch, investors are also closely watching Rejuvenate Bio, which was founded by Harvard Medical School professor George Church. Having tested more than 60 anti-aging gene therapies, this biotech startup is now reversing aging in beagles by manipulating and modifying their natural DNA instructions. The CEO has announced a series of new trials for 2020 on the Cavalier King Charles Spaniel breed to eliminate mitral valve disease, which is directly linked to aging.12
Another promising line of anti-aging research comes from the Juvenescence-backed company AgeX Therapeutics. Its stem-cell research resulted in the in vitro development of living embryonic cells able to generate pluripotent, self-renewing stem cells of any type, with the potential to cure a range of age-related degenerative diseases. In 2010, AgeX scientists demonstrated the reversal of the developmental aging of human cells using a novel transcriptional reprogramming technology.13 Encouraged by these results, the research group continued investigating genes that could be involved in the reversal of aging, and in 2017 it identified COX7A1 as a marker gene of the embryonic-fetal transition (EFT). Using deep neural network (DNN) ensembles, a form of deep learning, to identify novel markers associated with the mammalian EFT, the AgeX team found EFT molecular markers chiefly responsible for the regulatory mechanisms of normal development, epimorphic tissue regeneration, and cancer.14 In the near future, with its induced Tissue Regeneration (iTR™) platform, AgeX aims to unlock cellular longevity and regenerative capacity in order to eliminate age-related modifications in the body.15
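To make the ensemble idea concrete, here is a minimal sketch of marker discovery in the spirit of (but far simpler than) the AgeX approach: several independently seeded neural networks are trained to separate two cell states from expression profiles, and genes are ranked by their averaged permutation importance. The data, gene indices, and model sizes are all synthetic placeholders, not the published pipeline.

```python
# Minimal sketch: rank candidate marker genes with an ensemble of small
# neural networks. All data here are synthetic; genes 7 and 21 drive the label.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50
X = rng.normal(size=(n_samples, n_genes))       # fake expression matrix
y = (X[:, 7] + 0.5 * X[:, 21] > 0).astype(int)  # fake "embryonic vs. fetal" label

# Train five independently seeded classifiers on the same data.
ensemble = [
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=seed).fit(X, y)
    for seed in range(5)
]

# Average each gene's permutation importance across the ensemble members.
scores = np.mean(
    [permutation_importance(m, X, y, n_repeats=10, random_state=0).importances_mean
     for m in ensemble],
    axis=0,
)
top = np.argsort(scores)[::-1][:5]
print("top candidate marker genes:", top)       # genes 7 and 21 should surface
```

Averaging across the ensemble is what makes the ranking robust: a single small network may latch onto noise, but genes that matter to every member rise to the top. The real study's identification of COX7A1 rests on the same principle at far larger scale.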
"Bank of America has predicted the anti-aging market will balloon to $610 billion by 2025 from an estimated $110 billion currently."
Figure 3: Aging is frequently associated with the shortening of telomeres. Source: Needpix
PROS AND CONS OF ANTI-AGING TECHNOLOGY
Despite the attractiveness of youthful immortality, heated debates about extending the human lifespan continue. One of the most substantial arguments employed by opponents of reversed aging relates to wealth and injustice. Even without sophisticated anti-aging technologies, rich Western countries far outstrip poorer developing nations in providing high-quality health services. One of the major indicators
"Thus, the invention of the 'fountain of youthfulness' could be considered ethically and morally incorrect until people can guarantee and provide every single individual with such a universal cure."
of this is the average life expectancy in different countries. While the average lifespan in a number of poorer countries is less than 40 years, Western developed countries usually enjoy a life expectancy of 70-80 years.16 Unsurprisingly, it is the combination of AIDS and poverty that causes such extremely high mortality rates in some developing countries: about 60% of all HIV patients, some 25 to 26 million people, live in Sub-Saharan Africa.16,17 With such terrifying statistics, one can easily conclude that necessary treatments are not distributed evenly worldwide. How, then, could the ultimate anti-aging technology, probably the most powerful biomedical innovation in human history, be made accessible to everyone? As the renowned British bioethicist John Harris argues in his article “Immortal Ethics”: “if immortality or increased life expectancy is a good, it is doubtful ethics to deny palpable goods to some people because we cannot provide them for all”.18 Thus, the invention of “the fountain of youthfulness” could be considered ethically and morally incorrect until every single individual can be guaranteed access to such a universal cure. From economic and societal perspectives, the victory over death could also lead to fundamental problems like overpopulation. Indeed, since anti-aging technology would rapidly increase the elderly, and thus the overall, population, a population crisis may seem inevitable and unmanageable. But as the British biomedical gerontologist and longevity advocate Aubrey de Grey points out, “it would be many decades, if ever, before even a complete cure for aging exerted an effect on global population that compared with acknowledged uncertainties inherent in our ignorance of future birth rates”.19 And according to statistics provided by the United Nations, “by 2050 the effect of a total cessation of death of those over 60—even starting today!—would be under two billion”.19,20 This figure rises rapidly, of course, for more distant projections, but so does the uncertainty in birth rates.19 On the other hand, a future without aging still sounds promising and has many advantages that could outweigh the cons in the long run. One of the most obvious outcomes of prolonged lifespans is growth in the number of people capable of contributing to the economy. With no one disabled by age-related disease, such a world could shed the costs of senior health care and insurance, freeing resources for the even distribution of novel anti-aging technologies around the world. In addition, as
Dr. Michael Hufford, Chief Executive Officer of LyGenesis, has said, tissue-regenerative innovations can solve “the problem of the imbalance between organ supply and demand. Instead of one donated organ treating just one patient, one donated organ could serve as the seed to treat dozens of patients simultaneously, while the procedures associated with their development would not require major transplantation surgery but would instead use a minimally invasive outpatient endoscopy, decreasing costs while enabling patients considered too sick or having too many comorbidities to qualify for traditional organ transplantation to receive treatment”.10 And the advantages of biological immortality discussed above are just the beginning of a long list of possible future benefits that reversed-aging technologies could bring to the world.
CONCLUSION
The speed with which anti-aging research continues to accelerate is astonishing, and the field has great potential to disrupt not only the long-established institutions on which humanity relies but also our whole understanding of morality, ethics, and individuality. And yet, with the current evidence from the successful slowing of biological aging in animals, in model organisms, and even in a small group of human trial participants, this future of immortality seems all but inevitable, leaving people no better option than to prepare for the great changes and challenges it will create.21,22 One such initiative to get ready for this scientific revolution is led by Professor David Sinclair of Harvard Medical School's Department of Genetics: he and a group of 15 researchers from Harvard, MIT, and other universities in the US and Europe launched the nonprofit Academy for Health and Lifespan Research. The institution's main goals are to promote future work, ease collaborations between scientists, and ensure that governments and corporations make decisions based on the latest facts rather than rumor, speculation, or hype.23 It also fosters the engagement of young researchers and bioethicists with reversed-aging issues and aims to chart the safest and most efficient way to achieve biological immortality. Overall, just as previous disruptive technologies were first criticized and underestimated before eventually being widely recognized and praised, anti-aging technology seems destined to walk the same thorny path: first regarded as a threat to the world, and eventually becoming a bridge between us and an ocean of opportunities and answers to our endless questions.
References
[1] Institute of Medicine (US) Committee on Technological Innovation in Medicine. (1990, January 1). Medical Technology Development: An Introduction to the Innovation-Evaluation Nexus.
[2] Dzau, V. J., & Balatbat, C. A. (2018, October 17). Health and societal implications of medical and technological advances.
[3] Chan, K. C. A., Woo, J. K. S., King, A., Zee, B. C. Y., Lam, W. K. J., Chan, S. L., Chu, S. W. I., Mak, C., Tse, I. O. L., Leung, S. Y. M., Chan, G., Hui, E. P., Ma, B. B. Y., Chiu, R. W. K., Leung, S.-F., van Hasselt, A. C., Chan, A. T. C., & Lo, Y. M. D. (2017). Analysis of plasma Epstein–Barr virus DNA to screen for nasopharyngeal cancer. New England Journal of Medicine, 377, 513–522.
[4] Dieleman, J. L., Templin, T., Sadat, N., Reidy, P., Chapin, A., Foreman, K., Haakenstad, A., Evans, T., Murray, C. J. L., & Kurowski, C. (2016). National spending on health by source for 184 countries between 2013 and 2040. Lancet, 387, 2521–2535.
[5] Moses, H., III, Matheson, D. H. M., Dorsey, E. R., George, B. P., Sadoff, D., & Yoshimura, S. (2013). The anatomy of health care in the United States. JAMA, 310, 1947–1964.
[6] Medawar, P. B. (1952). An Unsolved Problem of Biology. London: H. K. Lewis.
[7] Comfort, A. (1964). Ageing: The Biology of Senescence. London: Routledge & Kegan Paul.
[8] Sinclair, D. A., LaPlante, M. D., & Delphia, C. L. (2019). Lifespan: Why We Age - and Why We Don't Have To. New York: Atria Books.
[9] Grover, N. (2019, August 20). Healthier, longer lifespans will be a reality sooner than you think, Juvenescence promises as it closes $100M round.
[10] Jefferson, R. S. (2019, August 28). 'Extraordinary' Breakthroughs In Anti-Aging Research 'Will Happen Faster Than People Think'. Forbes. Retrieved from https://www.forbes.com/sites/robinseatonjefferson/2019/08/26/how-extraordinary-breakthroughs-in-anti-aging-research-will-happen-faster-than-people-think/#41d81c8033dd
[11] Series B Financing - Overview, How It Works, Participants. (n.d.). Corporate Finance Institute.
[12] Saigol, L. (2019, August 24). Companies race to find the key to eternal life. MarketWatch.
[13] Vaziri, H., Chapman, K. B., Teichroeb, J., Lacher, Sternberg, H., Wheeler, J., … Funk, W. D. (2010, May 10). Spontaneous reversal of the developmental aging of normal human cells following transcriptional reprogramming. Future Medicine.
[14] West, M. D., Labat, I., Sternberg, H., Larocca, D., Nasonkin, I., Chapman, K. B., … Zhavoronkov, A. (2017, December 28). Use of deep neural network ensembles to identify embryonic-fetal transition markers: repression of COX7A1 in embryonic and cancer cells.
[15] Technology. (n.d.). Retrieved from https://www.agexinc.com/technology/
[16] Pijnenburg, M. A. M., & Leget, C. (2007, October). Who wants to live forever? Three arguments against extending the human lifespan.
[17] HIV and AIDS in East and Southern Africa regional overview. (2019, October 31). Avert.
[18] Harris, J. (2004, June). Immortal ethics. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/15247080
[19] de Grey, A. D. N. J. (2003, September). The foreseeability of real anti-aging medicine: focusing the debate. Retrieved from https://www.ncbi.nlm.nih.gov/pubmed/12954478
[20] Annexes. (n.d.). Statistical Papers - United Nations (Ser. A), Population and Vital Statistics Report: World Population Ageing 2015. doi:10.18356/b997152e-en
[21] Singh, B., Schoeb, T. R., Bajpai, P., Slominski, A., & Singh, K. K. (2018). Reversing wrinkled skin and hair loss in mice by restoring mitochondrial function. Cell Death & Disease, 9(7). doi:10.1038/s41419-018-0765-9
[22] Lanese, N. (2019, September 10). Could a Drug Cocktail Reverse Biological Aging? Live Science.
[23] Powell, A. (2019, April 1). Anti-aging research: 'Prime time for an impact on the globe'.
DUJS
Dartmouth Undergraduate Journal of Science ESTABLISHED 1998
ARTICLE SUBMISSION

What are we looking for?
The DUJS is open to all types of submissions. We examine each article to see what it potentially contributes to the Journal and our goals. Our aim is to attract an audience diverse in both its scientific background and interest. To this end, articles generally fall into one of the following categories:

Research
This type of article parallels those found in professional journals. An abstract is expected in addition to clearly defined sections of problem statement, experiment, data analysis and concluding remarks. The intended audience can be expected to have interest and general knowledge of that particular discipline.

Review
A review article is typically geared towards a more general audience, and explores an area of scientific study (e.g. methods of cloning sheep, a summary of options for the Grand Unified Theory). It does not require any sort of personal experimentation by the author. A good example could be a research paper written for class.

Features (Reflection/Letter/Essay/Editorial)
Such an article may resemble a popular science article or an editorial, examining the interplay between science and society. These articles are aimed at a general audience and should include explanations of concepts that a basic science background may not provide.

Guidelines:
1. The length of the article should be under 3,000 words.
2. If it is a review or a research paper, the article must be validated by a member of the faculty. This statement can be sent via email to the DUJS account.
3. Any co-authors of the paper must approve of submission to the DUJS. It is your responsibility to contact the co-authors.
4. Any references and citations used must follow the Science Magazine format.
5. If you have chemical structures in your article, please take note of the American Chemical Society (ACS)'s specifications on the diagrams.

For more examples of these details and specifications, please see our website: http://dujs.dartmouth.edu
For information on citing and references, please see: http://dujs.dartmouth.edu/dujs-styleguide
Dartmouth Undergraduate Journal of Science Hinman Box 6225 Dartmouth College Hanover, NH 03755 dujs@dartmouth.edu
ARTICLE SUBMISSION FORM* Please scan and email this form with your research article to dujs@dartmouth.edu
Undergraduate Student: Name:_______________________________ School _______________________________
Graduation Year: _________________
Department _____________________
Research Article Title: ______________________________________________________________________________ ______________________________________________________________________________ Program which funded/supported the research ______________________________ I agree to give the Dartmouth Undergraduate Journal of Science the exclusive right to print this article: Signature: ____________________________________
Faculty Advisor: Name: ___________________________
Department _________________________
Please email dujs@dartmouth.edu with comments on the quality of the research presented and the quality of the product, as well as whether you endorse the student's article for publication. I permit this article to be published in the Dartmouth Undergraduate Journal of Science: Signature: ___________________________________
*The Dartmouth Undergraduate Journal of Science is copyrighted, and articles cannot be reproduced without the permission of the journal.
Visit our website at dujs.dartmouth.edu for more information