07 Global Issues, Local Solutions: A Conversation with Dr. Turhan Canli
10 Robotic Surgery: Application of the Da Vinci System in Modern Medicine
31 Look Here! Cognitive and Behavioral Correlates of Selective Attention in Adults with Autism Symptoms
Spring 2018 Volume 10
Staff 2017-2018
Editor-in-Chief: Sahil Rawal '19
Layout Editor-in-Chief: Dana Espine '18
Head of Cabinet: Peter Alsaloum '19
Managing Editors: Rachel Kogan '19, Jenna Mallon '18
Assistant Layout Chief: Dahae Jun '19
Cabinet: Benjamin Kerner '18, Jesse Pace '20, Jerin Thomas '19
Associate Editors: Stephanie Budhan '21, Nina Gu '21, Samara Khan '19, Bridgette Nixon '21, Lillian Pao '18, Anna Tarasova '19
Layout Editor: Lauren Yoon '21
Webmaster: Ronak Kenia '18
Faculty Advisors: Dr. Peter Gergen, Dr. Laura Lindenfeld
Graduate Advisor: Amanda Ng
Copy Editors: Nomrota Majumder '21, Aaradhana Natarajan '20, Caleb Sooknanan '20, Daniel Walocha '19, Nita Wong '21
Writers: Fatin Chowdhury '19, Christopher Esposito '19, Muhammad Hussain '21, Meenu Johnkutty '21, Matthew Lee '21, Neomi Lewis '21, Maryna Mullerman '20, Marcia-Ruth Ndege '21, Rideeta Raquib '19, Lee Ann Santore '19, Alexis Scida '18, Kunwar Ishan Sharma '20, Anna Tarasova '19, Vivekanand Tatineni '18, Ruhana Uddin '19, Elizabeth Varghese '21, Daniel Walocha '19, Gene Yang '19, Gabriela Zarankov '19
LETTER FROM THE STAFF

Stony Brook Young Investigators Review is proud to celebrate our ten-year anniversary and the release of the tenth issue of our biannual publication. We have worked tirelessly to provide an outlet for student research and to increase our presence on campus. To celebrate the release of each issue, we host an honorary speaker at our colloquium; past speakers have included Dr. Mildred Dresselhaus, the "Queen of Carbon Science"; Dr. Sylvia Earle, a "Hero of the Planet"; Dr. Donald Ingber, Founding Director of the Wyss Institute for Biologically Inspired Engineering at Harvard University; and Dr. Arthur Horwich, whose research at Yale University led to the discovery of chaperonin proteins.

In this issue, readers will find articles on topics ranging from dermatitis and Parkinson's disease to an interview with Dr. Turhan Canli, a Stony Brook researcher and the Director of the Graduate Program in Genetics. We also present articles on the latest cancer detection methodologies, as well as the development of novel robotic surgeries. Readers will also have the opportunity to delve into primary research completed by undergraduates at the university, as Lee Ann Santore and Christopher Esposito present their findings on selective attention in adults with autism symptoms.

Our ten-year anniversary will be marked by hosting Dr. Joshua Willis, a Project Scientist at NASA's Jet Propulsion Laboratory. His research focuses on warming ocean temperatures around Greenland and how they affect the melting of its ice. His work in the field of climate change has been revolutionary, and our student body is thrilled to see what his research will reveal next.

None of this would be possible without the help of our staff members and writers, who worked countless hours to create this publication. We would also like to thank our partners at the Alda Center for Communicating Science, as well as our faculty advisors, Dr. Peter Gergen and Dr. Laura Lindenfeld, and our graduate advisor, Amanda Ng, for their guidance. This issue could not have been published without the help of our generous donors, so we also thank the Departments of Undergraduate Biology and Undergraduate Biochemistry. Welcome to SBYIR. We sincerely hope you enjoy.
TABLE OF CONTENTS
Spring 2018, Volume 10

Interviews
07 Global Issues, Local Solutions: A Conversation with Dr. Turhan Canli
By Anna Tarasova '19
10 Getting to the Heart of the Matter: A Conversation with Dr. Roy Lacey
By Muhammad Hussain '21

Research Reviews
12 Biologics and Atopic Dermatitis: When B.A.D. is Finally Good
By Kunwar Ishan Sharma '20
15 H3N2: An Overview of the 2018 Flu Outbreak
By Rideeta Raquib '19
18 The Future of the Battery
By Alexis Scida '18
20 The Role of the LRRK2 Mutation in the Development of Parkinson's Disease
By Vivekanand Tatineni '18
22 Biomarkers: The Future of Medicine
By Elizabeth Varghese '20
24 Leading Non-Invasive Procedures in Cancer Diagnosis
By Gabriela Zarankov '19
28 Robotic Surgery: Application of the Da Vinci System in Modern Medicine
By Ruhana Uddin '19

Primary Research
31 Look Here! Cognitive and Behavioral Correlates of Selective Attention in Adults with Autism Symptoms
By Lee Ann Santore '19 and Christopher M. Esposito '18
Image retrieved from https://pxhere.com/en/photo/20088
RESEARCH HIGHLIGHTS
Find more news online! Visit us at sbyireview.com
The Potential of Nanomaterials in Cancer Immunotherapy
By Daniel Walocha '19
Immunotherapy is a promising treatment because it uses the patient's own immune system to kill cancer cells. This quality, not usually present in radiation or chemotherapy, offers the possibility of a durable treatment that limits metastasis and future recurrences. Since cancer cells rely heavily on immune evasion or suppression to avoid cell death, targeting these mechanisms makes the cancer cells vulnerable to the host's own immune cells. Previous trials with ipilimumab, a monoclonal antibody that inactivates the "off" switch of T cells, appear promising for immunotherapy. Wantong Song, Ph.D., and a team of researchers from the UNC Eshelman School of Pharmacy described the potential of nanomaterials in delivering cancer immunotherapies. Nanomaterials that encapsulate the desired immunotherapy are small enough to avoid filtration by the kidney and can therefore remain in the host's system for a longer time, increasing the duration and effectiveness of treatment. The nanomaterial itself may also have anti-tumor properties, including promotion of the immune response and antigen presentation. Copolymers containing tertiary amines such as PC7A, for example, were shown to induce the stimulator of interferon genes (STING) pathway in addition to performing their primary function of delivering tumor antigens. The tumor antigens were able to condition T cells to recognize glycoproteins that are expressed more readily on tumor cells than on normal cells (tumor-associated antigens). Peptide vaccines that present antigen proteins must accumulate near the lymphoid organs to elicit a better immune response; encapsulating the contents in nanodiscs allows for better delivery. The possibilities for nanomaterial-mediated delivery are still being explored. Extensive clinical trials and better evaluation of anti-tumor efficacy will allow for better immunotherapy treatments.
References
1. W. Song, et al., Nanomaterials for cancer immunotherapy. Biomaterials 148, 16-30 (2017). doi: 10.1016/j.biomaterials.2017.09.017.
2. Image retrieved from: http://cisncancer.org/research/new_treatments/nanotechnology/promise.html
Figure 1 Nanomaterials have the potential to effectively deliver immunotherapy treatments to target cancer cells.
As Plastic Spreads, Diseased Coral Reefs Follow
Figure 1 Coral reefs provide $375 billion worth of goods and services to people in the form of fisheries and tourism every year.
By Gene Yang '19
An estimated 275 million people rely on coral reefs for food, coastal protection, and income from tourism. However, as plastic waste continues to spread throughout the ocean, it carries disease to these same coral reefs. In the first large-scale study to evaluate the impact of plastic on corals, Dr. Joleah Lamb's team from Cornell University, along with other research teams and institutions around the globe, surveyed 124,884 corals in 159 coral reefs in the Asia-Pacific region, which contains 50% of the world's coral reefs. By visually examining coral colonies, the researchers found that the probability of disease increased from 4% in plastic-free corals to 89% in corals exposed to plastic waste. In particular, plastic debris was 8 times more likely to affect structurally complex corals. While very large coral colonies were the least likely to come into contact with plastic waste, they showed the greatest increase in disease risk, up to 98%, when exposed to plastic. Three particularly lethal coral diseases that increased in the presence of plastic were also identified: skeletal eroding band disease (24% increase), white syndrome (17%), and black band disease (5%). Although the exact biological mechanism by which plastic increases disease risk is a topic of ongoing research, it has been determined that plastic waste can cause physical injury to coral tissue, opening the way for pathogen invasion. The researchers also used their data to estimate, with 95% confidence, that 11.1 billion plastic items are currently spread across Asia-Pacific coral reefs. Notably, this is likely an underestimate, as it does not include regions near China or Singapore. By 2025, the number is expected to increase to 15.7 billion, which will have a noticeable impact on the $375 billion worth of goods and services provided by coral reefs annually. The results of this study highlight the growing need for improved waste management to preserve coral reef ecosystems.
References
1. J. Lamb, et al., Plastic waste associated with disease on coral reefs. Science 359, 460-462 (2018). doi: 10.1126/science.aar3320.
2. Image retrieved from: https://cdn.pixabay.com/photo/2017/08/19/19/28/fish2659613_1280.jpg
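For readers who want to see how such a risk comparison is quantified, the short sketch below computes disease prevalence and a relative risk from a small set of counts. It is a minimal illustration only; the counts, and the use of Python, are assumptions rather than the study's actual survey data.

```python
# Illustrative sketch only: disease prevalence and an approximate relative risk
# for corals surveyed with and without plastic debris. The counts below are
# hypothetical and are NOT data from Lamb et al. (2018).

def prevalence(diseased, total):
    """Fraction of surveyed corals showing disease."""
    return diseased / total

# Hypothetical survey counts (corals with disease, corals surveyed)
diseased_with_plastic, surveyed_with_plastic = 89, 100
diseased_no_plastic, surveyed_no_plastic = 4, 100

p_plastic = prevalence(diseased_with_plastic, surveyed_with_plastic)   # 0.89
p_no_plastic = prevalence(diseased_no_plastic, surveyed_no_plastic)    # 0.04

relative_risk = p_plastic / p_no_plastic  # roughly a 22-fold higher disease likelihood
print(f"Prevalence with plastic: {p_plastic:.0%}; without plastic: {p_no_plastic:.0%}")
print(f"Relative risk: {relative_risk:.1f}x")
```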
Social Media and Academic Achievement: Connections Explored
By Meenu Johnkutty '21
While logging onto Facebook during a study session may be one of your guilty pleasures, new research shows that the link between social media use and academic performance may be more complicated than one might think. Studies on the relationship between academic performance and social media use often report contradictory results, with some citing social media use as positive and others deeming it distracting. Researchers led by Dr. Markus Appel of the Julius Maximilians University in Bavaria, Germany, conducted a meta-analysis of 59 studies that examined the link between social media use and academic performance, and then drew conclusions from the pooled results. The researchers' first conclusion was that students who used social media to communicate about school-related topics reported higher academic performance; this result was expected. Second, the researchers found that students who used Instagram and other forms of social media while doing homework or studying did slightly worse academically than students who did not. Further analysis suggested that students whose use of social media can be categorized as "intense" (i.e., posting photos and commenting) reported slightly lower grades. Lastly, the researchers discovered that students who used social media did not spend less time studying and did not report poorer grades. Taken together, these findings led the researchers to conclude that using social media does not, on its own, significantly affect academic performance. Nonetheless, the researchers did advise that parents consistently monitor their children's social media use, and they advocated an open-minded perspective that would facilitate communication between parents and their children.
References
1. C. Marker, et al., Active on Facebook and failing at school? Meta-analytic findings on the relationship between online social networking activities and academic achievement. Educational Psychology Review 30, 1-27 (2017). doi: 10.1007/s10648-017-9430-6.
2. Image retrieved from: https://www.pexels.com/photo/apple-applications-apps-cell-phone-607812/
Figure 1 Common applications present on smartphones.
An Evolutionary Arms Race: Speed and Hunting in The African Savannah
Figure 1 Researchers analyzed various performance characteristics of savannah predator-prey pairs.
By Maryna Mullerman '20
Prey must outrun predators to avoid being killed, while predators must overwhelm prey to avoid starving. While numerous studies have been published on predator-prey relationships, little research has analyzed high-speed locomotion in savannah animals. Dr. Alan M. Wilson and researchers from the Royal Veterinary College at the University of London aimed to analyze the locomotor characteristics of two predator-prey pairs: the lion-zebra and the cheetah-impala. The researchers hypothesized that predators had to be more athletic than their prey in order to follow the unpredictable movements of the prey. The animals were studied in Botswana, in southern Africa, without a predetermined sample size. All animals were sedated adults: nine lions, five cheetahs, seven zebras, and seven impalas. Front leg, hind leg, and body lengths were measured, and biopsies were taken from the biceps femoris muscles. Because muscle power is highly temperature-dependent, fibers were activated using controlled temperature changes, and fiber power was measured by exposing samples to four different force-control events. Performance characteristics such as maximum muscle power output, acceleration and deceleration, turning ability, and stride frequency of prey were compared to those of their predators. Global Positioning System/Inertial Navigation System (GPS-INS) processing was used to track live animal activity, position, and velocity for the navigational equations. The resulting model produced possible position profiles for predators and prey in a two-stride chase. The initial hypothesis was supported by the fiber analysis, which revealed that predators had 20% higher muscle fiber power, 37% greater acceleration capacity, and 72% greater deceleration capacity than their prey. No significant differences were found in velocity or stress at peak power. The researchers concluded that the lower-power muscle fibers found in prey species were more economical and advantageous for survival, while running at very high speed resulted in a higher probability of capture. Slower-moving herbivores were able to make unpredictable turns and use more escape options. The data supported the researchers' hypothesis and emphasized the importance of predator-prey relationships for the relative equilibrium of the ecosystem. Future studies should use a larger sample size and analyze additional predator-prey pairs, such as the leopard-gazelle pair, as well as other features of predator-prey relationships that contribute to coexistence.
References
1. A. Wilson, et al., Biomechanics of predator-prey arms race in lion, zebra, cheetah and impala. Nature 554, 183-188 (2018). doi: 10.1038/nature25479.
Using Stars to Find Dark Matter
By Neomi Lewis '21
Astronomers and cosmologists continue the elusive mission to understand the nature and existence of dark matter, a substance believed to be pervasive throughout the universe and near our own planet. Work by a team led by Jonah Herzog-Arbeitman, an undergraduate at Princeton University, suggests that researchers may be able to better estimate the speed of dark matter moving near Earth by using the speeds of old stars as guides. Dark matter is believed to make up around 26.8% of the mass-energy content of the universe, but it remains a hypothetical substance that is not observable using traditional means of analysis, such as electromagnetic radiation. It has been hypothesized to play a significant role in several astronomical phenomena and to provide important keys to various mysteries, including determining the age and ultimate fate of the universe. As its mysterious name suggests, dark matter is still relatively unknown, and its very existence is still disputed by some. However, there are several theories on how to corroborate the properties of dark matter. Herzog-Arbeitman suggests using the oldest known stars for clues, stating, "Our hypothesis was that there's some subset of stars that, for some reason, will match the movements of the dark matter." Herzog-Arbeitman's team used Eris, a computer simulation that uses supercomputers to replicate the physics of the galaxy. The team was able to match the velocities of groups of dark matter with those of stars with low metallicity (because heavier elements take more time to form, stars composed mostly of lighter elements are generally older). To describe this, Lina Necib from the California Institute of Technology said: "The dark matter and these old stars have the same initial conditions: they started in the same place and they have the same properties, so at the end of the day, it makes sense that they're both acted on only through gravity." Being able to approximate the speed of dark matter is especially important because it helps corroborate the validity of other major dark matter experiments. One primary way dark matter is being sought is through "direct detection," a process in which physicists try to force dark matter to interact directly with very dense materials like xenon. These efforts have not yet yielded strong evidence, and the researchers believe this may be because dark matter does not have the right kind of velocity and kinetic energy to cause significant reactions. Although the data produced by the Eris simulation are novel, they have not yet been verified by real-world astronomy, so there is still uncertainty in the relevant velocities. This verification process is expected to take years: the European Space Agency's Gaia telescope has been recording information about the Milky Way since July 2014, yielding massive amounts of data to sift through. The wealth of data on nearly a billion stars will hopefully help take this exciting theory forward.
References
1. J. Herzog-Arbeitman, et al., Empirical determination of dark matter velocities using metal-poor stars. Physical Review Letters 120, (2018).
2. Image retrieved from: https://commons.wikimedia.org/wiki/File:Dark_Matter_Cloud.png
Figure 1 Our general perception of the Milky Way in space.
The Diagnosis of Necrotizing Soft Tissue Infections via CT Scans
Figure 1 Computed tomography scans of the human brain.
By Fatin Chowdhury '19
Imaging technology has become increasingly useful in preventing unnecessary surgery and aiding its success. Technologies such as computed tomography (CT) scans have expanded the breadth of diagnostic methodologies available to physicians. Dr. Myriam Martinez of Boston, Massachusetts, sought to determine how useful CT imaging is for diagnosing necrotizing soft tissue infections (NSTIs). The researchers found that NSTIs could be evaluated effectively with contrast-enhanced CT scans. The researchers analyzed medical record data for 184 patients admitted with a suspected NSTI. The patients underwent CT scans with intravenous contrast, and the scans were examined for NSTI markers: gas in soft tissue, multiple fluid collections, tissue enhancement, and connective tissue inflammation. STATA software was used to perform t-test analyses. Positive CT scans occurred in 9% of the selected patients, and following surgery, 76% of these were found to have an NSTI. Of the patients with negative CT scans, 23% underwent surgery because an NSTI was suspected, but these patients turned out to have non-necrotizing infections; the remaining 77% were treated without any operations. Two variables that differed significantly between NSTI-diagnosed and non-NSTI-diagnosed patients were white blood cell counts and hemoglobin levels. Although the sample size was small, the researchers suggested that CT imaging is an effective supplement to surgery-related decisions concerning suspected NSTIs. They noted that NSTI symptoms are often missed or indistinguishable from those of other diseases, leading to high mortality rates that could be reduced with CT technology. In the future, computed tomography could be refined into a non-invasive way of detecting NSTIs, with surgeons trained to identify them.
References
1. M. Martinez, et al., The role of computed tomography in the diagnosis of necrotizing soft tissue infections. World Journal of Surgery 42, 82-87 (2018). doi: 10.1007/s00268-017-4145-x.
2. Image retrieved from: https://www.twenty20.com/photos/bab509ff-475c-4602-8dde-eedc4b866bc7
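As a rough illustration of the kind of group comparison described above, the sketch below runs a two-sample t-test on made-up white blood cell counts. The study itself used STATA on real patient records; the group sizes, values, and use of Python/SciPy here are assumptions for illustration only.

```python
# Illustrative sketch only: a two-sample comparison like the t-test analyses
# described above. The study used STATA on real patient data; the values,
# group sizes, and use of Python/SciPy here are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical white blood cell counts (10^3 cells/uL) for the two groups
wbc_nsti = rng.normal(loc=18.0, scale=5.0, size=40)        # confirmed NSTI
wbc_non_nsti = rng.normal(loc=12.0, scale=4.0, size=120)   # non-necrotizing infection

# Welch's t-test, which does not assume equal variances between groups
t_stat, p_value = stats.ttest_ind(wbc_nsti, wbc_non_nsti, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```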
Odor from a Rotten Egg Could Combat Hyperglycemia
By Matthew Lee '21
Modern industrialized countries are plagued by diseases that usually manifest severe symptoms only after many years. One such condition is hyperglycemia, in which chronically high blood glucose can contribute to diabetes and atherosclerosis. Dr. Jiaqiong Lin of Guangdong General Hospital and a team of researchers investigated how hydrogen sulfide (H2S) could protect human umbilical vein endothelial cells (HUVECs) against injury from high glucose. The team paid particular attention to how H2S affected the expression of receptor-interacting protein 3 (RIP3), which promotes necroptosis, a harmful consequence of hyperglycemia. Necroptosis is a mechanism of cell death, alongside apoptosis, necrosis, and autophagy. Administering 40 mM glucose as the high-glucose (HG) condition was found to cause RIP3 expression in HUVECs to peak 9 hours after delivery (a RIP3/GAPDH ratio of ~1.2). The increased RIP3 was indicative of necroptosis. The team then found that adding exogenous H2S prior to HG treatment had cytoprotective effects: cell viability was highest at 400 µM H2S (~76%, compared to 45% with HG alone). These effects were confirmed by the expression level of RIP3, which decreased from a ratio of ~0.9 with HG alone to ~0.7 with HG plus H2S treatment. In addition, H2S with HG was found to mitigate the loss of mitochondrial membrane potential, another indicator of cell death, more than HG alone. However, H2S alone, without HG, did not inhibit necroptosis or alter the mitochondrial membrane potential; its protective effects were apparent only in combination with HG. Understanding how cells die in response to hyperglycemic conditions is critical to developing future treatments. Future studies could examine the benefits of cell exposure to hydrogen sulfide beyond the inhibition of cell death. Investigating this treatment could have wider implications as well, considering that this mechanism of cell death has recently been linked to other conditions, such as atherosclerosis.
References
1. J. Lin, et al., Exogenous hydrogen sulfide protects human umbilical vein endothelial cells against high glucose-induced injury by inhibiting the necroptosis pathway. International Journal of Molecular Medicine 41, 1477-1486 (2018). doi: 10.3892/ijmm.2017.3330.
2. Image retrieved from: https://upload.wikimedia.org/wikipedia/commons/9/99/Necroptosis_Pathway_Diagram.png
Figure 1 A detailed necroptosis pathway featuring RIP3, a key necroptosis mediator.
Synthetic Bioluminescence Allows Scientists to See Deep Tissues Using Cameras
Figure 1 Luciferase can be found in organisms such as fireflies, jellyfish, click beetles, and bacteria.
By Marcia-Ruth Ndege '21
Bioluminescence is the ability of a living organism to produce light. It is made possible by luciferase, an enzyme found in fireflies that catalyzes the oxidation of a substrate known as D-luciferin. Dr. Atsushi Miyawaki and his team of researchers from the RIKEN Brain Science Institute in Tokyo are currently using luciferase's abilities to create bioluminescence in mammals in order to capture deep-tissue images. Miyawaki and his team first created a variation of luciferin that is hundreds of times stronger than the natural form. Previous research had shown that AkaLumine-HCl, a synthetic luciferin, is able to penetrate the blood-brain barrier and produce a reddish glow. This synthetic luciferin, however, is incompatible with the natural luciferase; to account for this, the researchers mutated the natural enzyme. The resulting combination, AkaBLI, produced a bioluminescence signal that is stronger than the natural signal and can be used in vivo. While bioluminescence can be induced in animals simply by infusing their drinking water with the substrate, the researchers were able to achieve a greater light intensity by delivering it via injection. When tested in mice, AkaBLI produced a bioluminescence signal 1,000 times stronger than that produced by natural luciferase-luciferin reactions. This intensity allowed scientists to track cancer cells, which may have major implications for cancer treatment research. AkaBLI even allowed researchers to track deep-brain neurons in a marmoset monkey and thereby non-invasively observe how brain activity and structures change with behavior over time; scientists were able to capture images of the tissues using light-sensitive cooled charge-coupled device (CCD) cameras. Miyawaki concluded that bioluminescence has enormous potential to contribute to scientists' understanding of neural circuitry.
References
1. S. Iwano, et al., Single-cell bioluminescence imaging of deep tissue in freely moving animals. Science 359, 935-939 (2018). doi: 10.1126/science.aaq1067.
2. Image retrieved from: https://pixabay.com/en/animal-blue-creature-danger-dark-21649/
Global Issues, Local Solutions: A Conversation with Dr. Turhan Canli By Anna Tarasova ’19
Ever since he arrived at Stony Brook as an Assistant Professor in 2001, Dr. Turhan Canli has both broadened the scope of his work and increased its depth. His research and outreach initiatives connect many fields, including psychology, neuroscience, neuroethics, and molecular biology. Dr. Canli earned his B.A. in Psychology from Tufts University and his Ph.D. in Psychobiology from Yale University. He has published over 50 peer-reviewed journal articles, has written several textbooks on neuropsychology, and has served on the editorial boards of several journals. In addition to his position as an Associate Professor in the Departments of Psychology and Radiology, Dr. Canli currently serves as the Director of the Graduate Program in Genetics and the Director of the Social, Cognitive, and Affective Neuroscience (SCAN) Center; he is a co-founder of the International Neuroethics Society and is involved in several other organizations. He is also a mentor to many graduate and undergraduate students.

To start off, could you talk a little bit about your career path? How did you initially decide to do research?

I started very early, when I was in high school. In eighth grade, I started doing research projects and entering them into research competitions. I grew up in Germany and went to school there. At age 14, I decided I wanted to become a neuroscientist. There was no such thing as studying neuroscience in Germany at the time, so I decided to come to the United States to study. I went to Tufts as an undergraduate and was a biopsych major there – I loved it. I was involved in research from almost the first day that I started there. Then, I went to grad school at Yale and did my Ph.D. At the time, undergrad and grad were all animal-research-based; a lot of studies used learning and memory in rats and rabbit eye-blink conditioning, which were classic behavioral methods. I finished my Ph.D. in 1993, just around the time fMRI was invented. After a brief two-year period of doing a postdoc with more animal research, I decided it was time to switch. The standard career path would have been that with a Ph.D. under your belt and two years of postdoc experience, you would go and apply for an assistant professorship somewhere. Instead, I reinvented myself to be a human neuroscience researcher using fMRI at a time when that was brand-new. It was a little bit of a gamble because a lot of people at the time thought that brain imaging was going to be nothing more than a fad and that it would not really contribute anything useful or important to science. With that I began a second postdoc at Stanford. That is how I got to know all of the leading researchers in the world of emotion, personality, and social psychology.

How did you initially decide to pursue neuroscience?

I was a very geeky kid, so I was really into the philosophy of the mind and consciousness. I grew up in Germany, but we had a very close family friend of my mother's who at the time lived in LA with her two sons. We visited them on a summer break. Her older son, who was probably two years ahead of me, was in the process of applying to colleges. There were all kinds of college brochures lying around the place and one was from UCLA, which was a local school for him. There was the Brain Research Institute [at UCLA], and I'd never heard of anything like that before. Once I saw that, I knew that's what I wanted to do - I wanted to become a neuroscientist and I wanted to move to America. I was 14 at the time. Everything I did in high school was geared towards that. This was in the '80s so there was no internet then; you couldn't just Google or download some form, so you had to write letters. You had to call international offices somewhere for information. You still had to figure out how to do the SATs.
The SATs were set up in Germany because there were a lot of military families living there. You would go to one of those big military bases and they would have their American high schools. You would show up on a Saturday morning and take your SATs just like everybody else. That's how I got started.

How did you end up at Stony Brook? How have your interests changed since you got here?

I came to Stony Brook in 2001 after my postdoc. I continued to do brain imaging, but I also became very interested in genetics. I joined the graduate program in genetics, eventually becoming the director of that program for three years. I was in charge of the Ph.D. program for genetics, and through that I got to know a whole bunch of other researchers who were affiliated with the program. In recent years, I've become increasingly interested in trying to have a policy component or global affairs component to my research. I've always been interested in those topics, dating all the way back to college.

How does the International Neuroethics Society fit into your research interests?

The International Neuroethics Society is an organization concerned with the applications of neuroscience in the real world - the intersection of brain research, public policy, ethics, and legal and social issues. We founded the institution back in 2006. Even more recently, I've become interested in global mental health issues and human rights issues related to mental health. By background, I'm half-German, half-Turkish, and what's happening in Syria is something that has affected much of Europe because of the large number of refugees that went to Europe, mostly through Turkey or Greece. I applied and got into a certification program run through Harvard on refugee trauma, which was a half-year course that you do on the side. Through that course, I built this global network. Now I'm in the phase where I'm trying to attract funding through research grants that help me bring resources to some of these places. In parallel, I joined an organization called the SPSSI - the Society for the Psychological Study of Social Issues. I will join them in the fall for a period of three months in a sort of advisory function.

What is the Mind/Brain Center on War and Humanity, and why did you create it?

It's an umbrella structure that I use to bring all of the different interests I have under one roof. It's not a physical center - it's more of a network. I try to bring in local researchers and colleagues from Stony Brook, as well as many of the people that I met during that Harvard course or others I've met since then. Right now it has no funding, so everything that's been done is based on activities that don't cost me resources other than my time and energy. I consider teaching global issues classes and other seminars a part of the contributions made by that center. I also spend a lot of my own money to travel and to give talks elsewhere. In the spring of last year, I went to Maastricht University in Holland, which was partnering with the UN, to give a talk about refugee mental health. I've done a similar talk in Toronto, and I credit the center with those kinds of things. Like I've said before, now I'm trying to get some research funding, and ultimately I want some funding for the center itself.
Although it doesn't have a physical dimension to it, the center is real; I can point to a lot of things that I give the center credit for. Fellows contribute to the intellectual life of the center. There are also clinicians who are working in the field. Later this spring, probably in May, I will go to Gaziantep on the Turkish-Syrian border, which is where volunteer rescuers are trained. I have a couple of colleagues that I'm going to meet that are working in mental health and psychiatry. I am going to bring some resources to them - I will offer them a training workshop so that they know the cutting edge of science. They will be part of the research team, if and when we get funding, to proceed in that area. It's useful to have an organizational structure under which we can put those kinds of things together - it's research, it's clinical outreach, it's education, public speaking, student opportunities, etc. I have a little bit of that already going. And maybe policy as well, to the extent that I can play an advisory role with the UN or some other bodies.

You explained the "war" aspect, but how does neuroscience fit into this concept?

What's the mind/brain part of all of that, right? That's where the neuroscience comes in, because in the end, I still want to contribute towards neuroscience discoveries. One of the ways in which I conceptualize that is by talking about war and humanity. Let's say we focus on the war part of that - we could talk about the traumatic experiences that people are exposed to during and after war. I want to conduct a line of work that investigates how individual differences come about. How do people respond differently to trauma? Who are the ones that end up being resilient? What is it that makes them resilient? Is it a psychological coping skill, is it a biological predisposition that makes them particularly strong? Is it social or cultural elements that protect them in some way, social support, things like that? Is it a combination? Those are questions that have to do with the mind and the brain, because the mind is what absorbs the trauma, but the brain is what then processes the information and generates behavior and individual differences.

What about humanity? How does that tie in?

I don't want everything to be about these themes that bring you down, so that's why I added the humanity part. When I was taking the Harvard course, what I saw really shook my faith in humanity. We were exposed to a lot of things that would break your heart and break your spirit, but what restored it at the same time were the relief workers, because they're all really special people doing their part. What does that mean in terms of neuroscience? Well, there is research on topics like empathy, forgiveness, and trust. Some people are capable of forgiveness and some aren't – most of us are somewhere in between. I want to better understand where those differences come from, both from the biological and psychological perspectives. Also to understand political thinking – in the current political scene, everything is very polarized. It seems that people are less and less capable of understanding how the other side thinks. My newest interest, which can be called political neuroscience, involves using the techniques that people have applied in cognitive psychology, cognitive neuroscience, and affective neuroscience to understand the decision-making process or information processing. The political aspect is that we're looking at the processing of political information, opinions, belief systems, and biases. We're beginning to collect some pilot data for studies on that topic. This, too, fits into the framework of war and humanity - political strife and dissent could lead to war. After a war has been fought out, there is a process of post-war healing. I find it very interesting to better understand where individual differences in healing come from. The mind/brain aspect is a reminder that we're trying to understand not just social issues, because then this would be a social psychology or political science exercise. I wanted to go beyond that. I wanted to ask: What is human nature? What is the core of human nature? How come we are the way we are? The mind/brain center is a very broad title for this concept that I came up with, but I think it's a very fitting title because it gives me a lot of flexibility in how I want to apply it in the future as new ideas come up.

Are there any short-term goals that you hope to achieve, both in your research and in the organizations you're involved in?

Oh, definitely. I mean, it becomes very concrete once you're dealing with people one-on-one or if you can think of specific places. Again, through that Harvard course, I became friends with a clinical psychologist who originally is from Uganda. He lives in Milwaukee now; he's been living there for 30 years, but he regularly goes back home and tries to do mental health outreach work. Through him I was introduced to someone who works in a refugee camp on the northern border of Uganda. I visited the camp and spent some time with the staff. It was a very short visit, unfortunately, and I didn't get to meet the refugees because we were on a very tight itinerary. I had the opportunity to talk to the staff and ask what their needs were in terms of their resources. I also learned a little bit about some curious mental health issues that they've observed in the camp, which is how these issues became very real for me. Part of what we ended up doing was giving Stony Brook students an opportunity to participate in the Pen Pal project. Students and refugees are writing to each other using modern communication based on text messaging. They text each other back and forth and each side gets to experience what the other side's life is like. Apparently, that is helping, because people feel that there is somebody out there in the world who knows about them. This seems to improve morale. That could become an actual research project if we were quantifying the degree to which people's well-being improves, for instance. A student of mine also started a student organization called The Advocate, which is involved with fundraising. I will definitely rely on her to help me do some fundraising or maybe some toy drives to collect a whole bunch of things and send them over to this camp in Uganda. So those are some short-term, concrete things that have nothing to do with neuroscience or me being a professor or anything like that. It's nice to be able to help whenever you can.

Going back to your research, how would you say the field has changed since you started?

That's a good question. The technologies keep improving; imaging remains a critical aspect of a lot of the research that's moving the field forward, especially in psychology. There are intense efforts to improve the rigor of science these days. Psychology, genetics, neuroscience, as well as all of the biomedical fields, have begun to recognize that the level of replication across studies isn't as good as it should be. There's been a lot of soul-searching happening in the field about why that is: is it a structural issue? Is there something wrong with the way we do science? The way we fund projects? The way we reward scientific accomplishments? What are the ways we can fix it? That seems to be the prevailing zeitgeist in science right now. In terms of technology, the big breakthroughs of recent years have been on the genetics front, specifically in optogenetics, which is a way to manipulate individual neurons at a very high level of specificity, either exciting them or inhibiting them. That is a technology that came into existence between 2006 and 2010. After that, another technique came about called CRISPR, a gene editing method that might be even more impactful. It may be used even more widely than optogenetics. I assume that we'll continue pushing the boundary of what you can do with manipulating molecules, to the point that things like synthetic biology would be a new frontier as well. This would involve designing life forms from scratch, putting together genomes of organisms that actually don't exist yet. We can get into speculating where that's going to lead us and what the ethics of that are, but synthetic biology will be a big field in the coming years. Artificial intelligence as well.

Do you have any advice for students doing research at Stony Brook, or someone who is not really sure what they want to do?

Well, some people know exactly what they want. Good for you if you fall into that category. I was lucky in the sense that I did. If you don't know what you want to do, I would say don't sweat it. Just try things out. Play around with ideas, and play around with experiences. One of the reasons why I wanted to study in the United States and not in Germany was that my way into neuroscience in Germany would've been to get a medical degree and then go into a neurology-related field. That meant that I would've gone straight from high school onto a medical track. I had the grades to do it, but there would've been nothing else for me to study, and I didn't want that. I would also say to go out there and do things and stay engaged and interested. As you see in my example, I also reinvent myself every 7 years - and sometimes that means going back to school and getting a degree or a certificate. That never really stops. That's my piece of advice - to not freak out if you don't have it all figured out. Instead, make that an advantage. Sample things and try things out. An undergraduate has the benefit of being young enough to make mistakes. You can afford to lose a couple of years to something that ended up not being what you thought it was. Later on, it becomes a lot harder to do that because you have more and more responsibilities that you can't pull yourself away from. You have the benefit of time, so that's a pretty big advantage. If you know what you want, then go for it, and good luck with it.
References 1. A. Tarasova, Interview With Dr. Turhan Canli. Rec January 2018. MP3.
Getting to the Heart of the Matter: A Conversation with Dr. Roy Lacey By Muhammad Hussain ’21
Dr. Roy Lacey, a professor in Stony Brook's Department of Chemistry, is a leading researcher in the field of nuclear chemistry. Dr. Lacey works with other scientists around the world to investigate the behavior of nuclear matter under extreme conditions. He earned his B.S. in Chemistry from the University of the West Indies and his Ph.D. from Stony Brook University. Following graduate school, Dr. Lacey served as a research fellow at the Commissariat à l'Energie Atomique (CEA), the Centre d'Etudes Nucléaires de Saclay (CNRS) in France, and the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University. While he primarily serves as a professor, he is also a mentor to many students.

How did you achieve your current position?

I am originally from Jamaica. It was unlikely that I would become a scientist, but I believe the driving force behind my career path was my passion for science. I was fascinated by it and devoted a significant amount of time to it during high school. I then went on to study chemistry at the University of the West Indies. With a scientific career in mind, I knew that in order to further my education, I needed to study abroad. It would've been difficult to accomplish my academic goals in an environment with so few opportunities. As many of my siblings came to study in the US, I considered this the best option and continued my fellowship here.

How did you develop an interest in nuclear chemistry?

That was pure coincidence. As I went through graduate school, I was introduced to several different fields. I spent a summer exploring nuclear science with John Alexander (a world-renowned professor of chemistry), and found the field to be really captivating.
How would you describe your current research focus?

The nucleus of an atom has protons and neutrons. However, protons and neutrons are not themselves fundamental particles; they are made of fundamental particles called quarks. Since quarks are confined inside protons and neutrons, it is not easy to reach and examine them. That is why we use an accelerator facility at Brookhaven National Lab (BNL). First, we strip the electrons from an atom so it becomes fully charged. It is then possible to accelerate a charged particle and induce a collision with another charged particle. When the collision occurs, for a brief instant, the nucleons, the protons and neutrons, melt away to give the quarks and gluons. This is the material that we are trying to study. Thus, the first step of our work is to create the substance, and the second is to study its properties. We accomplish this by drawing a phase diagram for the substance. We vary the temperature and pressure conditions and plot its phase behavior. Essentially, we use these collisions as a proxy for extreme temperatures and pressures. We can then navigate the diagram and map out the phases of the substance.

You mentioned an accelerator at BNL. Could you talk a little bit about that?

BNL has an accelerator complex. This consists of a larger ring and two smaller accelerators that inject the particle of interest. When the particle is at its maximum energy level, it is very, very close to the speed of light, so it becomes a particle beam. There are multiple stages of acceleration. First, ions are boosted up in energy. They are injected into a small synchrotron, and then into the bigger ring. The beam is then split into two groups. One set goes clockwise in one ring and the other goes counterclockwise. Then, at particular pre-designed regions, the beams are forced to cross so that the particles can collide. When the collisions occur, the fundamental particles are produced as I described earlier.

Where would you like to see your research in 10 years?

We've been doing this kind of research for quite a number of years. It started even before I came to Stony Brook in 1991. We have a pretty good track record and are getting closer to mapping out phase diagrams for nuclear matter. In the next several years, the hope is that we can refine the process. Successfully characterizing these diagrams is fairly work-intensive. Suppose we have water and want to map out its diagram. We need to determine its viscosity, heat capacity, conductivity, and several other measures to successfully identify the phase. When we try to map out the phase diagram, not only do we have to identify that different phases exist, we also want to characterize their properties. That is what makes it a fairly rigorous undertaking.
“
“
Scientific advancement is ultimately more important than personal success.
Do you have any advice for students who are interested in doing research?

Your most important task as an undergraduate is to explore and look for something that genuinely interests you. Research seems refreshing, but what most people don't realize is that there will be slow periods along the way. If you don't like what you're studying, those periods can be brutal. However, if the topic is something that excites you, every time you encounter a new obstacle you treat it as an exercise in finding a solution. In contrast, if you're not interested in the topic, then you will be disappointed time after time. Genuine interest is a very important aspect of choosing your career path, because finding your passion means you will end up conducting additional investigations out of interest that are not necessarily required of you. Of course, there is no guarantee that you're going to explore and immediately find something that attracts your attention. This has to be balanced with the timelines of undergraduate and graduate education. What I would recommend is that you don't go through fields arbitrarily. For instance, if I were trying to find research activity in a particular department, I would look at the full portfolio of research going on in that department and read about it in detail. That way you will be able to narrow down to a few researchers and explore in that direction. There is nothing wrong with talking to several different people so you can get a better sense of what they're doing and whether or not that matches your expectations - many professors are open to that. It's important to go into research knowing what to expect.

In addition to being a researcher, you're also a mentor to many students. What do you enjoy most about being a mentor?

Mentoring involves knowing when to provide advice and when to encourage your students. The most rewarding aspect of being a mentor is seeing your students flourish. After they complete their work in your lab, they go on to do bigger and better things and you feel that you provided some enrichment to them. The biggest honor is when these students then go on to make important scientific contributions. That is the most important reason to pursue a career in science: to advance the frontiers. Sometimes, advancing the frontiers does not necessarily mean that you will be the one to achieve it, but mentoring can help you facilitate that advancement. Scientific advancement is ultimately more important than personal success.

What are the advantages of pursuing a field that you are passionate about?

Society tends to emphasize a particular direction, primarily because some graduate educations, particularly medical school, tend to be regarded as providing a more stable range of career opportunities. That is not to say that it will be an easier path, but it certainly comes with a good salary and a lot of respect. However, some people do not realize that a research career is also quite respectable. Another important aspect of research is that it is a publicly deliverable profession. Medicine requires the fundamentals of research because the labs are where new medications will be developed and understood. By developing the basics, the people who interact with the public at large, the doctors, will be able to do a better job. Any society requires a balance between basic and applied science to advance. This is also relevant in that basic science research is frequently the basis of technology that then becomes common - for example, cell phones.

For students who think they have a good idea of what they would like to do, would you suggest more exploration of the academic opportunities available?

Different students tend to gravitate toward different disciplines, but the reality is that it only takes one course to pique your interest in something else. The problem that I see as a professor in most instances is that many students are pressured by family and friends into something they might not necessarily want to pursue. If you're doing something highly valued by that circle, even if you like something else, you tend to want to go in that direction because it is comfortable. However, there is a flip side to that. For example, many students know from the start that they want to go into the basic sciences, and they are not apologetic about it. Fortunately, I was one of those people. I remember back in the day when I was asked what I wanted to do, I would give a quick explanation of nuclear chemistry. The next question would be: what is it good for? That would be equivalent to saying to the people who developed the mathematics that we need and use to launch communication satellites into space: why do that math? Research is important because basic science fundamentals have many practical applications. There needs to be a balance between the number of engineers and the number of people doing engineering research, the same way you have good doctors at the hospital but you also want doctors who are doing research in treatment or technique. This reduces the amount of trial and error in practice. That is why it is important to encourage students who are interested in pursuing basic science, instead of pressuring them into other fields.

References
1. M. Hussain, Interview with Dr. Roy Lacey. Rec February 2018. MP3.
Biologics and Atopic Dermatitis: When B.A.D. is Finally Good By Kunwar Ishan Sharma ’20
Image retrieved from https://commons.wikimedia.org/wiki/File:Lichen_planus_intermed_mag.jpg
Introduction
Atopic dermatitis (AD), commonly referred to as eczema, is a chronic skin condition characterized by itchy, flaky, and scaly skin (1). Skin inflammation, a trademark of atopic dermatitis, is caused by a hyperactive immune system. Within the immune system, interleukins, which help the body respond to invading pathogens, cause inflammation throughout the body. While this is normally a healthy immune response, in individuals with atopic dermatitis, interleukins 4 and 13 are overactive and target normal skin cells, leading to chronic inflammation (2). This inflammation is often exacerbated by allergies, exercise, and stress. Atopic dermatitis can range in severity from mild to moderate to severe. While a precise cause of atopic dermatitis has not yet been determined, genetic predisposition is known to play a large role in its manifestation across generations. Currently, up to 20% of children around the world are affected by atopic dermatitis, and 3% of all adults have the condition (3). Given that there is no known cure for atopic dermatitis, scientists and doctors alike have proposed treatments over the years in an effort to stabilize or diminish the disease.

Prior Treatment Methods
One of the most common treatment methods is the use of lotion to maintain epidermal moisture (4). Unfortunately, atopic dermatitis can often be too severe for lotion to manage, so doctors and patients are looking towards other methods to assuage the illness. Prescription topical steroids, in the form of ointments and creams, act as anti-inflammatory agents against the patients' overactive immune systems. These topical steroids' potency levels vary based on the severity of the individual's lesions or rashes. While topical steroids are known to alleviate inflammation in many patients, they can cause serious side effects, such as Steroid Withdrawal Syndrome (SWS) (5). An individual with SWS experiences improvement under steroid treatment, but once they stop using the topical treatment, the condition worsens significantly.
A more invasive treatment for atopic dermatitis involves the prescription of immunosuppressants. These can be injected or ingested and work by suppressing the immune system to reduce inflammation and itching (4). While this can alleviate significant lesions or discomfort in the short term, such medicine is often dangerous because it suppresses the immune system's ability to fight off infections. In addition, immunosuppressants have been linked to liver damage and stunted growth in some children (6). While all of these medications have helped alleviate discomfort in AD patients over the years, long-term worsening of symptoms and other side effects have deterred doctors and patients alike. For this reason, scientists over the past few years have been working towards a new class of treatments, known as biologics, intended to diminish side effects and increase symptom relief.

Biologics
Unlike many conventional drugs, such as the aforementioned topical steroids and immunosuppressants, biologics are medications that are not chemically synthesized. Biologics can be administered either as therapeutic proteins or as vaccines, and their makeup can vary across mixtures of nucleic acids, sugars, and proteins (7). In the past year, two new treatments have been introduced to the market: Eucrisa™ (crisaborole) and Dupixent® (dupilumab). While both target atopic dermatitis, they differ in the age ranges for which they are intended and in the severity of atopic dermatitis they treat. The trials and means of administration for the two drugs differ as well. Crisaborole, marketed as Eucrisa™, is applied topically as an ointment and is meant for individuals over 2 years old with mild to moderate atopic dermatitis (8). Dupilumab, marketed as Dupixent®, is given by subcutaneous injection and is intended for anyone 18 years or older with moderate to severe atopic dermatitis (8).
While crisaborole targets PDE-4, dupilumab inhibits interleukin-4 receptors, resulting in decreased downstream activity of inflammation-inducing interleukin-13 and type 2 helper cells (8, 9).

Trials
In order to determine crisaborole's effectiveness, two randomized trials were conducted by a team of researchers led by Dr. Amy S. Paller at the Feinberg School of Medicine of Northwestern University (10). In these trials, one subset of individuals was given the medication while the other was provided with a vehicle containing no medication. Both the medication and vehicle were applied twice daily (morning and evening) to the patients' lesions. After the first application, patients returned to the trial facility at 1-, 3-, 6-, and 12-month intervals so the researchers could ask them questions and document their progress (11). To determine how a patient progressed with the treatment, the Investigator's Static Global Assessment (ISGA) was used (8). The ISGA rates the severity of a patient's atopic dermatitis on a scale from 0 to 4, with lower scores representing clear or nearly clear skin and higher scores indicating more severe disease. In both trials, there was a reduction of at least two points from the start of the study to the end of the study, indicating a significant decrease in affected skin in patients receiving the medication. In addition, both trials showed a statistically significant difference (p < .05) in disease reduction between the vehicle and the medication, favoring crisaborole (8). Aside from these beneficial results, there is only one known drawback: temporary pain at the medication's application site.
In order to demonstrate dupilumab's effectiveness, 4 studies and 2 trials were conducted by a team of researchers led by Dr. Lisa Beck at the University of Rochester Medical Center (9, 12). All six of the studies and trials were randomized, placebo-controlled, and double-blind. The first two studies spanned 4 weeks during which the medication was administered to establish its short-term safety. The third study measured the effectiveness of the treatment, and the fourth evaluated the possibility of adverse events occurring when the treatment was coupled with steroids (9). After these four studies were performed, two major trials were conducted, which allowed for drug approval by the FDA. Over the span of 16 weeks, the individuals involved in the two trials were given injections of a placebo or the medication either weekly or bi-weekly (8). In trial 1, 37% and 38% of the 671 patients who were given weekly and bi-weekly administration, respectively, ended up with the lowest possible ISGA scores of 0 or 1. In trial 2, 36% of the 708 patients ended up with ISGA scores of 0 or 1 (9). This indicates that, in both trials, those individuals had little to no skin affected by atopic dermatitis after the trial. In both trials, there was also a reduction of at least 2 points in patients who received the actual medication (8). All in all, there was improvement in at least 75% of the patients. Patients not only reported a decrease in skin-related markers of atopic dermatitis, but they also reported improvements in mental health and quality of life (9).
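To make the responder-rate comparisons above concrete, the sketch below tests whether two proportions differ using a chi-squared test. Only the 37% treatment responder rate echoes a figure reported in the text; the placebo rate, arm sizes, and the use of Python/SciPy are assumptions for illustration, not the published trial data.

```python
# Illustrative sketch only: comparing the share of patients reaching ISGA 0/1
# in a treatment arm versus a placebo arm. The 37% treatment rate echoes the
# figure reported for trial 1, but the placebo rate and arm sizes are invented.
import numpy as np
from scipy.stats import chi2_contingency

treatment_responders, treatment_total = 83, 224   # ~37% responders (hypothetical arm size)
placebo_responders, placebo_total = 23, 224       # ~10% responders (hypothetical)

# 2x2 table: rows are treatment/placebo, columns are responders/non-responders
table = np.array([
    [treatment_responders, treatment_total - treatment_responders],
    [placebo_responders, placebo_total - placebo_responders],
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4g}")  # p < .05 would mirror the reported significance
```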
improvements in mental health and quality of life (9). This is a vital aspect of the disease, as atopic dermatitis often takes a serious toll on AD patients’ lives. Alongside these beneficial results, only two possible side effects were identified: conjunctivitis (pink eye) and irritation at the injection site (8).
Conclusion
While these drugs have had a positive impact in trials and for individuals who have acquired the medications, further research must be performed. Firstly, since these drugs have been on the market for less than a year and only short-term trials have been conducted, very little is known about the long-term effects of the medications. Thus, more trials and more time are needed to determine whether any long-term side effects exist. Additionally, the medications are expensive, as creating a biologic is more difficult than creating a chemically synthesized medication. Currently, the cost of the medicine is $600 a tube for crisaborole and $38,000 a year for dupilumab (8). This is a major limitation for many individuals and their insurance companies. In the meantime, individuals who are able to obtain the medication will be able to live more comfortably with a disease they may have suffered from for decades. While the impact on AD patients is yet to be seen on a national scale, these medications have given many patients not only relief but, more importantly, hope that one day they can live an atopic dermatitis-free life.
References
1. Pimecrolimus for the treatment of adults with atopic dermatitis, seborrheic dermatitis, or psoriasis: a review. Canadian Agency for Drugs and Technologies in Health, (2017). 2. L. Roesner et al., The adaptive immune system in atopic dermatitis and implications on therapy. Expert Review of Clinical Immunology 12(7), 787-96 (2016). doi: 10.1586/1744666X.2016.1165093. 3. S. Nutten, Atopic dermatitis: global epidemiology and risk factors. Annals of Nutrition and Metabolism 66(1), 8-16 (2015). doi: 10.1159/000370220. 4. J. Pavlis, Y. Gil, Management of Itch in Atopic Dermatitis. American Journal of Clinical Dermatology, (2017). doi: 10.1007/s40257-017-0335-4. 5. T. Hajar et al., A systematic review of topical corticosteroid withdrawal (“steroid addiction”) in patients with atopic dermatitis and other dermatoses. Journal of the American Academy of Dermatology 72(3), (2015). doi: 10.1016/j.jaad.2014.11.024. 6. Atopic Dermatitis Found To Be an Immune-Driven Disease. National Eczema Association, 6 (2015). 7. What Are “Biologics”? Questions and Answers. U.S. Food & Drug Administration, (2018). 8. M. Maeda, Atopic Dermatitis: Clinical Refresh. Pharmacy E-News, (2017). 9. R. Vangipuram, S. Tyring, Dupilumab for moderate-to-severe atopic dermatitis. Skin Therapy Letter 23(1), (2018). doi: 10.1016/j.jdermsci.2018.01.016. 10. A. Paller et al., Efficacy and safety of crisaborole ointment, a novel, nonsteroidal phosphodiesterase 4 (PDE4) inhibitor for the topical treatment of atopic dermatitis (AD) in children and adults. Journal of the American Academy of Dermatology 75(3), (2016). doi: 10.1016/j.jaad.2016.05.046. 11. Crisaborole (Eucrisa) for Atopic Dermatitis. The Medical Letter on Drugs and Therapeutics 59(1515), (2017). 12. L. Beck et al., Dupilumab Treatment in Adults with Moderate-to-Severe Atopic Dermatitis. New England Journal of Medicine 371, (2014). doi: 10.1056/NEJMoa1314768.
H3N2: An Overview of the 2018 Flu Outbreak By Rideeta Raquib '19
Figure 1 H3N2 virus has caused severe flu activity in the United States during the 2017/18 season. Image retrieved from: https://commons.wikimedia.org/wiki/File:Steven_H3N2_Flu_ET.jpg
Overview
Viruses are nucleic acids enclosed in a protein coat called a capsid. In addition to a capsid, viruses may have an additional lipid bilayer envelope with embedded proteins. Flu virus envelopes also contain protein protrusions called spikes, which attach to host receptors and allow the virus to infect host cells. The flu is mainly attributed to influenza viruses, particularly type A viruses, which have caused numerous pandemics. Influenza A is enclosed in an envelope with the glycoproteins hemagglutinin (HA) and neuraminidase (NA) embedded within it. The HA and NA components enable the different influenza strains to be grouped into subtypes via phylogenetic and antigenic analysis (1). One of the earlier influenza A pandemics occurred in Hong Kong in 1968. Attributed to the subtype H3N2, the pandemic lasted about a year and killed 3 million people (2). In 2009, another subtype, H1N1, emerged in a global outbreak. Recently, the H3N2 subtype has resurfaced, and this time it is even more virulent than before (3). The current flu vaccine has been deemed ineffective, prompting scientists to develop a new one to combat this strain.
Evolution and Phylogenetic Analysis
The H3N2 virus strain has evolved since its emergence in 1968; over the last 50 years, it has undergone changes in receptor-binding properties. This strain has seven clades with numerous new subclades, which poses a challenge to vaccine development. Currently, clade 3 is the most prominent subset; in a phylogenetic analysis conducted in 2017 at the Francis Crick Institute by Dr. Yipu Lin, this specific clade displayed greater changes in amino acid sequences in Madin-Darby canine kidney (MDCK) cells than the other clades. Virus evolution can result from antigenic drift or antigenic shift. Antigenic drift is the gradual accumulation of mutations in the viral genome. This can be the result of natural selection, during which certain strains with greater fitness outcompete other strains. Antigenic shift, on the other hand, is a much more abrupt process that involves a sudden change in the HA and/or NA proteins. HA is a glycoprotein found on the surface of influenza viruses. It aids in binding to the sialic acid component of host cell membranes, which allows the viruses to enter via endocytosis. Specifically for H3N2, there have been substitutions at two residues (positions 222 and 225) of the HA1 polypeptide (4). Mutations in the HA protein enable H3N2 to escape detection by antibodies, causing increasing resistance to vaccination. In a 2017 study conducted by Dr. Nicholas Wu at the University of Pennsylvania, the L149P amino acid substitution in HA was found to be common in certain H3N2 strains cultured in chicken eggs and had an impact on antibody effectiveness. The HA head domain has a receptor binding site (RBS), where antibodies specifically bind. The study examined the crystal structures of HA proteins from 194 H3N2 human isolates and discovered variations in the amino acids at antigenic site B arising from the L149P substitution, which
decreases antibodies’ binding affinity for the RBS (5). Because antibodies cannot bind the RBS of the HA head domain effectively, it is difficult to detect the virus and foster an appropriate immune response via vaccination. NA is an enzyme that cleaves the glycosidic bonds of neuraminic acid in glycoproteins, which aids in releasing the virus from the host cell to infect the rest of the body. In a 2016 study by Chaudhry et al., a Canadian patient with H3N2 was found to be resistant to oseltamivir, commonly known as Tamiflu, the most frequently prescribed antiviral drug. The resistance was caused by an R292K substitution in the NA protein that is present in certain H3N2 strains (6). Researchers have been analyzing viral genomes across the past few viral epidemics. During the H3N2 outbreak in 1968, the prominent strain (A/Hong Kong/1/1968 H3N2) was found to have a polymerase subunit, PB1, derived from an avian virus precursor (2). According to a 2017 study conducted by Dr. Julie McAuley at the University of Melbourne, the PB1-F2 protein is encoded by an alternate reading frame of the PB1 gene. A reading frame describes how a sequence of nucleic acids is grouped. These groupings, in sets of three called codons, facilitate translation into amino acids. Thus, if a reading frame is altered, it can produce a different polypeptide. The full-length, 90-amino-acid PB1-F2 protein is encoded in all influenza A isolates, but the 2009 H1N1 pandemic virus carried a truncated version of PB1-F2 with only 57 amino acids. An even more truncated version with 34 amino acids has emerged in the recent H3N2 outbreak. Researchers speculate that this smaller protein might lend some advantage in terms of viral fitness.
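To make the idea of reading frames and truncation more concrete, the short Python sketch below uses a made-up toy sequence rather than a real PB1 gene and reports, for each of the three forward reading frames, how many codons are translated before a stop codon is reached; a point mutation that introduces an early stop shortens the product, analogous to the truncated PB1-F2 variants described above.

```python
# Illustrative sketch: how reading frames and premature stop codons change
# the length of an encoded protein. The sequence below is a made-up toy
# example, not a real influenza PB1 gene.
STOP_CODONS = {"TAA", "TAG", "TGA"}

def codons(seq: str, frame: int):
    """Split a nucleotide sequence into codons, starting at offset `frame` (0, 1, or 2)."""
    trimmed = seq[frame:]
    return [trimmed[i:i + 3] for i in range(0, len(trimmed) - 2, 3)]

def peptide_length(seq: str, frame: int) -> int:
    """Number of codons translated before the first stop codon in this frame."""
    length = 0
    for codon in codons(seq, frame):
        if codon in STOP_CODONS:
            break
        length += 1
    return length

toy_gene = "ATGGCTAAGGCTCATCCTGGAACTTTTGGATCAAAACCCTAAGGG"

for frame in range(3):
    print(f"frame {frame}: {peptide_length(toy_gene, frame)} codons before a stop")

# A single substitution that creates an early stop codon truncates the
# frame-0 product, analogous to a shortened PB1-F2 protein.
mutant = toy_gene[:12] + "TAA" + toy_gene[15:]
print(f"mutant, frame 0: {peptide_length(mutant, 0)} codons before a stop")
```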
Figure 2 PCR is one of the most common detection and sequencing tools for bacteria and viruses.
Image retrieved from: https://www.flickr.com/photos/madlabuk/7090137881
Symptoms and Detection
The symptoms that arise from the H3N2 strain are similar to those associated with any influenza A subtype and the seasonal flu. These signs include fever, headache, cough, malaise, and discomfort. Any age group can be affected, but the likelihood of catching the H3N2 strain is higher among individuals younger than 25 and individuals above 65 due to their more vulnerable immune systems. According to the Centers for Disease Control and Prevention (CDC), influenza viruses tend to target the respiratory system, and as the immune system fights back, inflammation occurs. The degrading effects on the respiratory tract lead to coughing and a sore throat (3).
A study conducted from 2010 to 2011 by Dr. Naoki Kawai at the Japan Physicians Association examined the differences in symptoms between the H1N1 and H3N2 strains. These two strains were compared because both H1N1 and H3N2 have caused pandemics in the past. The study involved family doctors and physicians from 13 clinics who reported the symptoms exhibited by patients who tested positive for either the H1N1 or the H3N2 virus. Most H1N1 patients had body temperatures greater than 99.5 degrees Fahrenheit and showed symptoms of rhinorrhea, sore throat, cough, general fatigue, loss of appetite, or headache. The patients with H3N2 exhibited the same symptoms with the exception of appetite loss (7). There are various ways in which influenza viruses are detected, including viral cell cultures, rapid cell cultures, immunofluorescence, reverse transcriptase polymerase chain reaction (RT-PCR), and rapid influenza diagnostic tests (RIDTs). The most common methods are RIDTs and RT-PCR. RIDTs are the more frequently employed of the two because they can yield results within approximately 15 minutes; however, RIDTs may produce false negative results, in which the viral antigen goes undetected in infected patients. RT-PCR, which is more sensitive and specific, is therefore necessary to verify the absence or presence of the virus (3). RIDTs, particularly immunoassays of respiratory particles from nasal swabs and other antigen-based tests, are administered at hospitals and utilize a nucleoprotein antigen to identify the presence of influenza. The immune system contains influenza A virus-specific cytotoxic T lymphocytes (CTLs) that can lyse cells infected with any influenza virus. The viral nucleoprotein (NP), an internal structure of the influenza A virus, serves as an antigen for CTLs and antibodies (8). In the clinic, when a patient reports symptoms of the flu, nasal or throat swabs are taken, and a number of immunoassays, such as the hemagglutination inhibition assay (HIA) and enzyme immunoassay (EIA), are utilized to check for the presence of antibodies released in response to the infection (9). HIA takes advantage of the HA protein present on the influenza virus. The assay involves making serial dilutions of the patient sample, mixing them with red blood cells, and loading them into the wells of a transparent plate. Red blood cells that are not bound by virus sink to the bottom of the wells, while virus-bound cells form a lattice across the well. PCR is a common method for DNA amplification and sequencing that takes advantage of the complementary base pairing of DNA to synthesize new strands from a single template. The double helix is first denatured into single-stranded templates, and the DNA polymerase enzyme carries out synthesis and elongation with the addition of primers and free nucleotides. RT-PCR works by a similar mechanism, but involves extracting viral RNA, using it as the template, and transcribing it into cDNA via the reverse transcriptase enzyme (10). This amplified cDNA is then coupled with various dyes to enable fluorescent detection. The amplification allows for cDNA sequencing to determine the differences between particular viral strains.
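To give a rough sense of why PCR-based detection is so sensitive, the sketch below is a simplified model rather than a laboratory protocol: it assumes each thermal cycle copies templates with a fixed efficiency and asks how many cycles a given number of starting cDNA molecules needs before crossing a hypothetical detection threshold. The cycle at which this happens is the intuition behind the Ct value reported by real-time RT-PCR instruments.

```python
# Simplified model of PCR amplification: each thermal cycle multiplies the
# number of template copies by (1 + efficiency). The starting copy numbers,
# efficiency, and detection threshold are illustrative assumptions, not
# values from any particular assay.
from typing import Optional

def cycles_to_detection(start_copies: float,
                        efficiency: float = 0.95,
                        threshold: float = 1e9,
                        max_cycles: int = 45) -> Optional[int]:
    """Return the first cycle at which the copy number crosses the threshold."""
    copies = start_copies
    for cycle in range(1, max_cycles + 1):
        copies *= 1 + efficiency
        if copies >= threshold:
            return cycle
    return None  # threshold never reached within max_cycles

for start in (10, 1_000, 100_000):
    print(f"{start:>7} starting copies -> detectable at cycle {cycles_to_detection(start)}")
# Fewer starting copies require more cycles, which is why a late detection
# cycle corresponds to a small amount of viral RNA in the original sample.
```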
Figure 3 Seasonal flu vaccinations are deemed less effective against H3N2 due to mutations in the HA component where antibodies bind.
Image retrieved from: http://www.cbc.ca/news/canada/montreal/ineffective-flu-vaccine-partlyresponsible-for-spike-in-flu-deaths-1.3067530
Current Treatment and Vaccination
The immune system’s enhanced recognition and neutralization of the influenza virus, as well as the direct inhibition of viral proteins by certain compounds, have been the subjects of extensive research. In terms of vaccination, live, weakened influenza virus vaccines are preferentially utilized in order to induce T-cell and humoral responses against the external glycoproteins (5). The current influenza vaccine aims to elicit an immune response that promotes the production of antibodies targeting the globular head domain of the HA. Other local respiratory immune responses include the release of virus-specific IgA and IgG antibodies detectable in nasal washes. These immunoglobulins are antibodies produced by plasma cells to neutralize pathogens. Currently, antiviral drugs such as oseltamivir and zanamivir are prescribed to treat H3N2. Both of these drugs are neuraminidase inhibitors licensed in 1999, and as H3N2 cases rise in the United States in 2018, oseltamivir remains the go-to prescription among hospitals. In clinical trials, it has been observed that despite oseltamivir promoting greater viral resistance, the drug is effective in reducing the symptoms and duration of influenza illness, as well as mortality rates. The current trend of increasing influenza strength, particularly in H3N2, suggests that these viral strains are becoming more difficult to detect via antibodies due to mutations in the HA components. Despite these challenges, researchers are constantly developing and designing improved drugs and vaccines to effectively combat the new mutations and potentially catch future ones early to prevent further influenza pandemics.
References
1. D.M. Tscherne, A. García-Sastre, Virulence determinants of pandemic influenza viruses. The Journal of Clinical Investigation 121(1), 6–13 (2011). doi: 10.1172/JCI44947. 2. J. McAuley, et al., Rapid evolution of the PB1-F2 virulence protein expressed by human seasonal H3N2 influenza viruses reduces inflammatory responses to infection. Virology Journal 14(1), 141-146 (2017). doi: 10.1186/s12985-017-0827-0. 3. Influenza A (H3N2) Variant Virus. Centers for Disease Control and Prevention, (2016). 4. Y. Lin, et al., The characteristics and antigenic properties of recently emerged subclade 3C.3a and 3C.2a human influenza A(H3N2) viruses passaged in MDCK cells. Influenza and Other Respiratory Viruses 11(3), 263-274 (2017). doi: 10.1111/irv.12447. 5. N.C. Wu, et al., A structural explanation for the low effectiveness of the seasonal influenza H3N2 vaccine. PLoS Pathogens 13(10), 1-17 (2017). doi: 10.1371/journal.ppat.1006682. 6. A. Chaudhry, et al., Oseltamivir resistance in an influenza A (H3N2) virus isolated from an immunocompromised patient during the 2014-2015 influenza season in Alberta, Canada. Influenza and Other Respiratory Viruses 10(6), 532-535 (2016). doi: 10.1111/irv.12415. 7. N. Kawai, et al., Increased symptom severity but unchanged neuraminidase inhibitor effectiveness for A(H1N1)pdm09 in the 2010-2011 season: comparison with the previous season and with seasonal A(H3N2) and B. Influenza & Other Respiratory Viruses 7(3), 448-455 (2013). doi: 10.1111/j.1750-2659.2012.00421.x. 8. J.W. Yewdell, et al., Influenza A virus nucleoprotein is a major target antigen for cross-reactive anti-influenza A virus cytotoxic T lymphocytes. Proceedings of the National Academy of Sciences of the United States of America 82(6), 1785–1789 (1985). doi: 10.1073/pnas.82.6.1785. 9. D.K. Kim, B. Poudel, Tools to detect influenza virus. Yonsei Medical Journal 54(3), 560–566 (2013).
doi: 10.3349/ymj.2013.54.3.560. 10. S.V. Vemula, et al., Current Approaches for Diagnosis of Influenza Virus Infections in Humans. Viruses 8(4), 96 (2016). doi: 10.3390/v8040096.
The Future of the Battery By Alexis Scida '18
Figure 1 Schematic showing the fundamental components and basic function of the common battery. Image retrieved from: https://www.edn.com/Home/PrintView?contentItemId=4458054
Introduction
The worldwide challenge of developing alternative energy sources continues to perplex scientists and engineers. Fossil fuels fall into three categories: coal, petroleum, and natural gas. Together, they make up over 80% of the energy consumed in the U.S. (1). Societal dependence on fossil fuels has been detrimental to the Earth, and will continue to have negative impacts on animal and human health, the environment, and the climate. In an effort to decrease reliance on fossil fuels, researchers have fostered novel developments in battery technology. The successive development of the lithium-ion (Li-ion) battery, the lithium-sulfur (Li-S) battery, and the molten metal battery underscores this progressive evolution.
Background
Batteries share the same main components: an anode, a cathode, a separator, and an electrolyte. The anode and cathode serve as the negative and positive sides, respectively, and work to store ions. The separator is a permeable membrane that prevents the movement of electrons directly through the battery (2). Without it, the anode and cathode are free to contact each other, making the battery nonfunctional. The electrolyte carries ions from the anode to the cathode. This movement creates an electric current that flows from the positive collector, through an external device, to the negative collector. The battery’s energy density is the relationship between its size and its performance: the higher a battery’s energy density, the more energy it can store in a small volume.
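As a rough worked example, gravimetric energy density can be estimated as capacity times nominal voltage divided by mass; the sketch below compares two hypothetical cells whose capacities, voltages, and masses are invented for illustration and are not measurements of any commercial product.

```python
# Back-of-the-envelope energy density comparison. The capacities, voltages,
# and masses below are illustrative assumptions, not measured values for
# any real cell.
def energy_density_wh_per_kg(capacity_ah: float, nominal_voltage_v: float, mass_kg: float) -> float:
    """Gravimetric energy density in watt-hours per kilogram."""
    energy_wh = capacity_ah * nominal_voltage_v  # stored energy in Wh
    return energy_wh / mass_kg

cells = {
    "hypothetical Li-ion cell": (3.0, 3.7, 0.048),  # capacity (Ah), voltage (V), mass (kg)
    "hypothetical Li-S cell":   (3.0, 2.1, 0.018),
}

for name, (capacity, voltage, mass) in cells.items():
    print(f"{name}: {energy_density_wh_per_kg(capacity, voltage, mass):.0f} Wh/kg")
# A cell that stores comparable energy in less mass has the higher energy
# density, which is the figure of merit discussed above.
```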
The Lithium-ion Battery
Li-ion batteries store lithium ions in the anode and cathode and are currently used to power devices such as cell phones, laptops, and electric vehicles. They have improved in their charge-holding abilities since they were first introduced, and they now hold nearly twice the charge they did initially (3). In general, Li-ion batteries surpass nearly all of the current battery technology available on the market, including but not limited to nickel-metal hydride, nickel-cadmium, and lead-acid batteries. However, limitations in their performance fuel the continuing search for materials to enhance a battery’s ability to power electric vehicles, power grids, and more. As the battery undergoes more charge-discharge cycles, the amount of buildup along the ion host increases. Buildup of reduced ions, dendrites, and unreacted gases can decrease the reactivity and functionality of the Li-ion battery over time (3). In addition, a large space is required in Li-ion batteries for layered graphite, which serves as an electrode to receive lithium ions. This insertion and removal of ions into and out of a layered structure is called intercalation, a crucial mechanistic step in the battery’s operation. Although space for the graphite electrode is vital to the Li-ion battery’s function, it limits the battery’s efficiency. Without the graphite electrode space, Li-ion batteries would be able to power a cell phone effectively for days, or a car as effectively as gasoline. Despite these limitations, the Li-ion battery still has a relatively high energy density in comparison to previously developed batteries. Additionally, there are still numerous possibilities for advancement of the Li-ion battery, including
altering metal combinations to increase energy density and utilizing alternative components to decrease the cost of production.
The Lithium-Sulfur Battery
To further improve on the Li-ion battery, both in its size-to-output ratio and in its capacity over time, experiments have been conducted to find components that outperform those used in Li-ion cells. Based on work performed in 2014 by chemical engineer Dr. Elton Cairns at Lawrence Berkeley National Laboratory, the Li-S combination surpasses the Li-ion battery (3). In the Li-S battery, the space taken up by layered graphite is instead replaced with a thin layer of pure lithium metal, which heightens the battery’s performance. The lithium metal layer acts as both a source of Li ions and an electrode (4). Without the excess space required by the graphite component of the Li-ion battery, the Li-S battery’s size is drastically reduced, increasing its overall energy density. Li-S batteries also come with limitations, including issues of longevity similar to those of Li-ion batteries. Additional problems involve the precipitation of Li-S products, specifically Li2S and Li2S2. These precipitated products clog the cathode, hindering the battery’s ability to charge. Additionally, Li2S easily undergoes an irreversible reduction reaction, decreasing the longevity of the material (4). To diminish this issue, the amount of reactive sulfur used can be decreased and the amount of electrolyte increased; however, this eliminates the appeal of using affordable and plentiful elemental sulfur. Overall, the Li-S battery offers improvements in both energy-to-size ratio and affordability, due to the low cost of sulfur. Scientists believe that the issues associated with using sulfur as a main component of the battery can be resolved with further experimentation and with advancements in cathode and electrolyte design.
The Molten Metal Battery
In addition to the ongoing challenge of searching for a small battery able to provide a large amount of power, researchers are also working on creating a battery, of any size, that can support a network of buildings. Battery use for grid storage increases system durability and allows for more efficient electricity restoration during power outages (5). Many of the complications associated with battery technology in cars, namely the creation of a battery with immense power density, are not applicable to batteries utilized for grid energy storage. These batteries are not limited by size constraints; instead, they must be able to store large amounts of energy at a single time and be durable enough to recharge thousands of times (3). Additional considerations in this field include the reliability of the batteries (e.g., during inclement weather) as well as safety precautions. In 2014, a team at the Massachusetts Institute of Technology developed a battery using molten metal layers and molten salt layers as electrodes and electrolytes, respectively, a design capable of storing large amounts of energy for extended periods of time (3). Although there are risks associated with maintaining metals and salts in a molten state, the payoff is an indefinite number of discharge and recharge cycles. In addition, earth-abundant and inexpensive elements may be used. Since these elements are maintained in a liquid-like
phase, degradation and unwanted byproducts are not considerable negative factors (6). Because the components are molten, the involved ions are continuously mixed, which aids in maintaining purity and decreasing buildup of inoperative ions. These molten battery packs are currently being tested at military bases and have prospects of being utilized for grid power in metropolitan areas (3). Unlike Li-ion and Li-S batteries, molten metal batteries need not have a large energy density, as their main purpose is to store and discharge existing energy. This energy may be generated by water, solar, or wind power, which is readily available in comparison to the fossil fuels currently required to produce most energy.
Conclusion
Our dependence on fossil fuels will continue to have detrimental effects on the planet and the organisms that inhabit it, but battery technology offers a possible solution to this energy crisis. Even though the most recent batteries are not particularly economical, improvements to decrease cost, such as incorporating varying metal combinations and utilizing nanomaterials, are underway. Over the last few years, variations in the metals, electrolytes, and electrodes have been explored in an effort to find an ideal combination, offering hope that additional innovation and discovery will one day make progressive battery technology readily accessible to the general public.
Figure 2 Graph comparing the evolution of various batteries’ energy density capabilities over time.
Image retrieved from: https://www.nature.com/polopoly_fs/1.14815!/menu/main/topColumns/ topLeftColumn/pdf/507026a.pdf
References
1. Fossil fuels have made up at least 80% of U.S. fuel mix since 1900. U.S. Energy Information Administration, (2015). 2. How does a lithium-ion battery work? Office of Energy Efficiency & Renewable Energy, (2017). 3. R. Van Noorden, The rechargeable revolution: a better battery. Nature 507, 26-28 (2014). doi: 10.1038/507026a. 4. L. Nazar, et al., Lithium-sulfur batteries. Materials Research Society Bulletin 39, 436-442 (2014). doi: 10.1557/mrs.2014.86. 5. Renewable electricity-to-grid integration. National Renewable Energy Laboratory. 6. N. Stauffer, A battery made of molten metals. Massachusetts Institute of Technology News, (2016).
Image retrieved from https://www.flickr.com/photos/emsl/4704802544
The Role of the LRRK2 Mutation in the Development of Parkinson’s Disease By Vivekanand Tatineni '18
Background
Parkinson’s disease (PD) is the second most common neurodegenerative disorder, affecting nearly seven million people worldwide, especially those over 65 years of age (1). Patients with PD have neurological deficiencies with respect to cognition, as well as severe motor impairment, including tremors, difficulty with walking, and lack of movement. As a result, early screening and detection are paramount to helping those with the disease (1). Although the causes of PD are not exactly known, they are thought to be a combination of environmental and genetic factors. Although several risk factors for this disease have been proposed, age is considered one of the main contributors to the development of PD. The mean age of PD onset is 60; around 1% of people in this age group have the disease, and this figure increases to 4% when considering only those over 80 years old. Family history of the disease also plays a role; 15% of those with PD have a close relative with the disease (2). As a result of this correlation, researchers have considered the genetic factors that may increase an individual’s risk of developing PD. Several genes have been implicated in the development of PD, but mutations in the leucine-rich repeat kinase 2 (LRRK2) gene are among the most common causes of familial Parkinson’s disease, albeit with different levels of occurrence in various populations.
How the LRRK2 Mutation Works
The LRRK2 gene is thought to be a significant contributor to the genetic basis of Parkinson’s disease. The gene codes for one of the largest kinases in humans. Kinases add phosphate groups to other molecules to regulate them, either activating or inactivating these molecules (3). The LRRK2 mutation has also been implicated in cell toxicity, which ultimately leads to apoptosis, or programmed cell death (4). This apoptosis overwhelmingly occurs in the dopaminergic neurons of the substantia nigra, leading to the neurodegeneration associated with PD. With this information, researchers can create molecules designed to inhibit the activity of the LRRK2 protein and thereby prevent the neuronal degeneration that leads to PD. This may slow down the progression of PD or even prevent its onset (5). Preventing cell death in the substantia nigra is crucial because levodopa medications, commonly prescribed for PD, become less effective over time, and patients are forced to increase the dosages and number of medications they take. One of the most common LRRK2 mutations, the G2019S mutation, displays dominant inheritance and makes the gene product hyperactive. Some populations tend to have a high frequency of the mutation while other populations have very
few patients with the mutation, a phenomenon known as the founder effect.
The Founder Effect
The founder effect is seen when a small segment of a population migrates and establishes a separate colony. The genetic makeup of that colony exhibits much less diversity than that of the larger population from which the founders came. The G2019S mutation is considered a founder mutation and is found with high frequency in some populations, particularly the Ashkenazi Jewish and North African populations; however, it is a rare mutation in other populations, such as French Canadian, Serbian, and Campanian Italian populations (6). Extensive research has been conducted to find the rate of occurrence of the G2019S mutation in various populations, along with other mutations in the LRRK2 gene. Using chromosomal microarray analysis, researchers found that while the G2019S mutation in LRRK2 was not especially prevalent in Moroccan populations, 45% of consanguineous Moroccan PD patients had pathogenic mutations that led to their PD, many of which were found in other genes, namely PRKN and PINK1 (7). They also found that two related juvenile PD patients carried a pathogenic homozygous mutation, which could have resulted from a founder effect. This shows that PD does have a strong genetic basis in North African populations, and that many PD patients are affected by it. Research conducted in Campania, Italy, using genetic screening of over 500 unrelated PD patients showed that the G2019S mutation was rare, and that all of those who carried mutations were from the province of Naples (8). The Norwegian PD population, however, showed a significant number of patients testing positive for mutations in the LRRK2 region, which could signify a founder mutation there and explain why such a high proportion of the population carries an LRRK2 mutation (9).

Population | % containing LRRK2 mutation
Ashkenazi Jew | 18.00
North African | 30.00
French Canadian | 0.00
Serbian | 1.23
Moroccan | 45.00
Campania (Italy) | 0.80
Norwegian | 40.00
Table 1 The occurrence of the G2019S mutation of the LRRK2 gene across various populations ranging from Europe to Africa and North America.
Genetic PD Leads to Faster Motor Deterioration
In recent years, genetic testing for PD has become increasingly important because those with common pathogenic mutations, such as those in LRRK2, tend to have an accelerated disease progression (10). Additionally, PD without a defined genetic cause tends to have an earlier onset, but the disease progresses significantly more slowly than it does in those who carry LRRK2 mutations. Furthermore, those with the LRRK2 mutation tend to lose control of their motor coordination at a faster rate than those whose PD was not linked to a mutation in the LRRK2 gene. This means that patients with these mutations need more immediate intervention, such as dopamine replacement therapy, in order to slow the disease and live longer, healthier lives. Other studies have shown that, in addition to the faster neurodegeneration, patients with genetically based PD may require earlier and more intensive intervention. In a study of Norwegian PD patients, those who were positive for LRRK2 mutations needed, on average, a significantly higher dose of levodopa to improve symptoms than those who tested negative (9). This again highlights the importance of better understanding the genetic basis of PD in order to help those with the disease.
Future Directions
Although research on the genetic basis of PD has come a long way in recent decades, there is still work to be done. Aside from genetics, studies in Norwegian populations have analyzed the levels of protein in urine and cerebrospinal fluid to determine whether biomarkers for the LRRK2 mutation can be found (9). They found a downstream marker that can be used in males as a biomarker for PD. Other early detection methods can be unlocked by further studying the genetics of PD, especially in founder populations that have high proportions of certain mutations. These biomarkers can be used for earlier detection of pathogenesis and can help patients a great deal, whether by enabling early intervention or by providing a faster, more reliable diagnosis. A faster diagnosis means less uncertainty for patients and gives them a chance to prevent or slow down the progression of the disease before it begins to destroy their mental and motor faculties, leading to a better prognosis and a higher quality of life. Better understanding the genetic basis of PD may allow scientists to create better methods for diagnosing PD and increase awareness among those who may be more susceptible, such as those who belong to the Ashkenazi, North African, or Norwegian populations. Simultaneously, we can begin looking elsewhere to find causes of PD among patients whose disease is not highly influenced by genetic factors.
References
1. D. LM, et al., Epidemiology of Parkinson’s disease. The Lancet Neurology 6, 525-535 (2006). doi: 10.1016/S1474-4422(06)70471-9. 2. A. Samii, et al., Parkinson’s disease. The Lancet Neurology 363, 1183-1193 (2004). doi: 10.1016/S0140-6736(04)16305-8. 3. N. Dupre, et al., LRRK2 is not a significant cause of Parkinson’s disease in French-Canadians. Canadian Journal of Neurological Sciences 34, 333-335 (2007). doi: 10.1017/S0317167100006776. 4. S. Biskup, et al., Zeroing in on LRRK2-linked pathogenic mechanisms in Parkinson’s disease. Biochimica et Biophysica Acta 7, 625-633 (2009). doi: 10.1016/j.bbadis.2008.09.015. 5. Z. Liu, et al., Unique functional and structural properties of the LRRK2 protein ATP-binding pocket. Journal of Biological Chemistry 289, 32937-32951 (2014). doi: 10.1074/jbc.M114.602318. 6. M. Janković, et al., Identification of novel variants in LRRK2 gene in patients with Parkinson’s disease in Serbian population. Journal of the Neurological Sciences 353, 59-62 (2015). doi: 10.1016/j.jns.2015.04.002. 7. A. Bouhouche, et al., Mutation analysis of consanguineous Moroccan patients with Parkinson’s disease combining microarray and gene panel. Frontiers in Neurology 8, 567 (2017). doi: 10.3389/fneur.2017.00567. 8. A. Rosa, et al., Genetic screening for the LRRK2 R1441C and G2019S mutations in parkinsonian patients from Campania. Journal of Parkinson’s Disease 4, 123-128 (2014). doi: 10.3233/JPD-130312. 9. S. Wang, et al., Elevated LRRK2 autophosphorylation in brain-derived and peripheral exosomes in LRRK2 mutation carriers. Acta Neuropathologica Communications 5, 86 (2017). doi: 10.1186/s40478-017-0492-y. 10. F. Nabli, et al., Motor phenotype of LRRK2-associated Parkinson’s disease: a Tunisian longitudinal study. Movement Disorders 2, 253-258 (2015). doi: 10.1002/mds.26097.
Biomarkers: The Future of Medicine By Elizabeth Varghese '21
Figure 1 Biomarker detection techniques have improved in the past few decades, allowing for better methods of diagnosis.
Image retrieved from https://cmmedia.hs.llnwd.net/v1/phrmadotorg/images/dmImage/SourceImage/biomarker_1000x775_Large4.jpg
Introduction
Biomarkers are measurable indicators of a biological state or condition that can be used to measure normal bodily processes, pathogenic processes, or responses to treatments. These substances are highly specific, and can allow for earlier detection of conditions such as bladder cancer and bone metastasis (1). Biomarkers can also be used for drug detection purposes, making them important in many aspects of diagnostic medicine (2).
Biomarkers for Bladder Cancer
Bladder cancer is an aggressive disease that occurs when the urothelial cells, the cells that line the urinary tract, become malignant and grow at an abnormal rate. These cells can reproduce rapidly and metastasize to other vital organs. Stage four bladder cancer has a 15% survival rate, while stage one has an 88% survival rate. Because of this, early detection of bladder cancer is extremely important for patient survival (3). Current detection methods for bladder cancer do not allow for early diagnosis. One such analysis, known as cytology, examines cell clusters taken from the urinary tract. The results from this examination depend on how the doctor chooses to interpret the images taken, adding a bias to the diagnosis (3). Additionally, positive cytology tests usually indicate very late stages of bladder cancer. Overall, cytology cannot successfully identify the presence of stage one cancer, and it is highly ambiguous and invasive. Because of the lack of non-invasive, accurate tests for early stage bladder cancer, researchers are currently working to optimize methods that identify biomarkers present in the disease’s first stage. One of the biomarkers currently analyzed is fibroblast growth factor receptor 3, or FGFR3 (4). Researchers have determined that an abnormality in FGFR3, a protein found on the surface of urothelial tumor cells, indicates the presence of stage one cancer. In its unmutated form,
the FGFR3 protein regulates cell division and proliferation. However, a mutation in the gene that encodes FGFR3 causes rapid, unregulated cell growth, which eventually leads to a tumor. Current studies are focusing on developing methods to detect the mutated form of this protein (3). One method of mutated FGFR3 detection currently being explored is urine filtration. Researchers headed by Dr. Andersson at Copenhagen University developed a filter that could extract tumor cells containing mutated FGFR3 out of urine. To optimize this filter for early detection, the researchers evaluated its ability to capture tumor cells at low concentrations. Samples were diluted to a concentration of 1000 cells per 100 milliliters of urine to replicate the conditions of stage one bladder cancer. It was found that the filter could capture 70% of the tumor cells, indicating that the device can be used as an effective early detection strategy (3). Researchers are also looking to expand their work beyond FGFR3 and are currently exploring other biomarkers. The two main biomarkers being explored for diagnostic purposes are BTA and NMP22. BTA, or bladder tumor associated antigens, are substances present on bladder tumor cells that can induce immune responses (5). Because these antigens are specific to bladder cancer, an antibody specific to BTA can identify their presence in urine. When researchers analyzed bladder cancer patients’ urine, they found that 84% of patients had BTA present in the samples, suggesting that the presence of this biomarker is a plausible indicator for bladder cancer. Further analysis needs to be done to determine whether BTA is a viable indicator for stage one bladder cancer. Researchers then analyzed the patients’ urine for the presence of nuclear matrix protein 22 (NMP22), a protein that indicates an abnormality in cell division. It was found that only 53% of individuals exhibited this tumor marker, indicating that further analysis is needed before NMP22 levels can be used as a viable diagnostic test (5).
Biomarkers for Bone Metastasis
Bone metastasis occurs when cells from a primary cancerous tumor relocate to the bone. The tumor invasion hinders osteocyte (bone cell) proliferation. Because of this, the rate of bone resorption is faster than the rate of bone development, causing problems related to bone function. Thus, a patient experiencing bone metastasis will likely have substantial issues with mobility (6). Bone metastasis indicates that the cancer present in a patient’s body is in its advanced stages. Because of this, it is important that the presence of this metastasis is detected quickly and accurately to ensure proper patient care. Imaging methods such as X-rays cannot pick up sudden changes in bone activity, and can only detect bone metastasis when substantial tumors are present. Because of this, researchers are exploring two biomarkers for bone metastasis: pyridinoline and deoxypyridinoline (7). High concentrations of pyridinoline and deoxypyridinoline, molecules that are cross-linked to collagen in bone tissue, indicate an increased rate of bone resorption. High levels of these biomarkers often indicate bone metastasis (6). Researchers have developed a diagnostic tool called the Bone Scan Index (BSI) that tests for the presence of bone metastasis via imaging. The BSI uses a technique known as scintigraphy to test for the presence of tumor growths in the bone. In scintigraphy, radioactive compounds are ingested and subsequently travel to a specific organ or tissue. These radioactive compounds emit gamma radiation that can be picked up by an imaging device to create a 2D image. Dr. Delmas and his team of researchers found that high BSI scores indicate the presence of bone metastasis, as well as high concentrations of pyridinoline cross-linked to the bone tissue. This indicates that pyridinoline is a viable indicator for the presence of bone metastasis (7). Researchers have also found that treatment with a class of drugs called bisphosphonates prevents the loss of bone density and lowers pyridinoline levels.
Biomarkers for Drug Detection
Currently, the most common type of drug test is urinalysis, which analyzes an individual’s urine for traces of drugs. However, urine tests often provide inaccurate results due to issues with the sample’s filtration. Because of this, researchers are exploring alternative methods of testing that make use of biomarkers specific to certain drugs. Researchers in Beroendecentrum, Stockholm, have developed a device that identifies microparticles present in breath. When drugs are ingested, microparticles contaminate the fluid lining the airway. When an individual exhales, these microparticles are released with each breath. In this device, such microparticles are collected on a polymer filter. The contents collected on the filter represent the contaminants in the airway lining, and can be analyzed to determine which specific drugs were used (8). The study analyzed samples from 38 males and 9 females recruited from an addiction clinic in Stockholm. Each patient was asked to breathe into the device, which allowed the filtration process to occur. The microparticles on the filter were then investigated for analytes such as amphetamine, cocaine, opioids, and tetrahydrocannabinol (THC) (8). The results demonstrated that the filter could effectively trap microparticles. Amphetamines were detected in breath 100% of the time, even in cases where urine and plasma levels were low. Cocaine was detected 100% of the time, and opiates were detected 78% of the time; 89% of the cases involving THC, the active component of marijuana, also yielded positive results.
Figure 2 Schematic of the lower urinary system and a tumor in the bladder lining.
Image retrieved from https://upload.wikimedia.org/wikipedia/commons/d/d4/Bladder_Cancer_%2827785800576%29.jpg
Results were confirmed using a more common analysis method, urinalysis. About 79% of the tests were supported by analytical findings in the urine, indicating that the breathalyzer test was effective in detecting drug traces (8).
Conclusion
Methods of diagnostic testing for conditions such as bladder cancer and bone metastasis, and for the presence of drugs in the body, are often inaccurate, inefficient, or too invasive. The exploration of biomarkers is opening up new methods of testing that allow diseases to be diagnosed earlier and more accurately. This will lead to an overall improvement in the health and wellbeing of individuals around the world. The further exploration of biomarkers for other diseases could ultimately lead to more tests that allow conditions to be diagnosed earlier, leading to drastic improvements in diagnostic medicine.
References
1. Survival Rates for Bladder Cancer. American Cancer Society, (2018). 2. How much? The Wall Street Journal, (2015). 3. E. Andersson, et al., Filtration device for on-site collection, storage and shipment of cells from urine and its application to DNA-based detection of bladder cancer. PLoS One 10, 1-12 (2015). doi: 10.1371/journal.pone.0131889. 4. FGFR3 gene. U.S. National Library of Medicine, (2018). 5. C. Guttman, Biomarkers for bladder cancer: current and future: uptake of protein-, cell-based tests likely to rise with AUA/SUO guidance. Urology Times, S14-S17 (2017). 6. L. Chiu, et al., Use of urinary markers in cancer setting: a literature review. J Bone Oncol 4, 18-23 (2015). doi: 10.1016/j.jbo.2015.01.002. 7. P.D. Delmas, et al., Urinary excretion of pyridinoline crosslinks correlates with bone turnover measured on iliac crest biopsy in patients with vertebral osteoporosis. J Bone Miner Res, (1991). 8. Overdose death rates. National Institute on Drug Abuse, (2017).
Leading Non-invasive Procedures in Cancer Diagnosis By Gabriela Zarankov ’19
Image retrieved from https://pixabay.com/en/cancer-cells-cells-scan-541954/
Introduction
Cancer is currently the second leading cause of death worldwide, claiming millions of lives every year. This has placed great emphasis on cancer detection and monitoring and on the administration of effective treatments. However, current diagnostic procedures may be invasive and uncomfortable for a patient (1). Invasive procedures often involve specialized needles and other medical equipment being inserted into the body, sometimes breaking through the skin. These procedures can be painful and often require anesthesia. The biopsy site needs to be cared for after the procedure and will most likely scar. As a result, research is currently underway to find alternative procedures that could detect cancer noninvasively. A large amount of that research is focused on implementing DNA sequencing, which may improve both the accuracy and the noninvasive quality of blood and urine tests used to detect cancer (1).
Liquid Biopsies
A traditional biopsy involves extracting tissue samples for diagnostic purposes. Most modern-day biopsies are conducted using a specially designed needle, although imaging equipment, endoscopes, and sharp tools are also used depending on the type of biopsy. The sample taken from the patient is analyzed in a lab, and results can take anywhere from a couple of days to over a week. Biopsies have become notorious for being expensive, painful in the absence of anesthesia, and, most importantly, invasive. New detection methods are being evaluated to identify less invasive alternatives.
Figure 1 Schematic of a bone marrow biopsy being performed.
Image retrieved from: https://commons.wikimedia.org/wiki/File:Bone_biopsy.jpg
Liquid biopsies are currently being developed as potential noninvasive alternatives for cancer detection. A liquid biopsy involves collecting a blood sample and subsequently sequencing and analyzing its contents. Blood samples are typically analyzed for cell-free tumor DNA, circulating tumor cells, and other materials expelled by the tumor into the bloodstream. Liquid biopsies can be used to detect different cancers and to monitor their progression. Although this procedure is still being tested, researchers are already investigating the ways in which liquid biopsies can be incorporated into current cancer detection and monitoring methods (1). One such researcher is Dr. Paul Hofman from the University of Nice Sophia Antipolis, who believes that liquid biopsies could assist in the treatment of certain cases of non-small cell lung carcinoma, a type of lung cancer (2). There are more than 200,000 cases of non-small cell lung cancer a year, and lung cancer has the highest mortality rate among all cancers. In some cases, the anaplastic lymphoma kinase (ALK) gene is mutated. More specifically, the ALK gene seems to undergo gene rearrangement, or the breaking and reattaching of a gene at a different place in the genome. The ALK gene encodes the enzyme known as ALK receptor tyrosine kinase, which participates in signaling pathways that contribute to cell proliferation and differentiation. Thus, when this gene is rearranged, it produces a fusion protein, EML4-ALK, which promotes tumor growth. Patients with advanced non-small cell lung cancer who carry the ALK gene rearrangement typically receive ALK inhibitor treatment. ALK inhibitors are anti-cancer drugs that target tumors with the ALK gene rearrangement and inhibit ALK functioning (2). Unfortunately, ALK inhibitors are only effective for a short period of time, as new mutations can arise in the gene and cause tumors to become resistant to the inhibitors within one year. Hofman believes that liquid biopsies can be used not only to detect the ALK rearrangement, but also to monitor the effectiveness of inhibitor treatment by detecting new mutations that may arise in the sequence (2). Similar procedures for cancer screening via liquid biopsy have been proposed for gastrointestinal cancer, metastatic colorectal cancer, prostate cancer, breast cancer, and many others. The apparent versatility, simplicity, and noninvasive quality of this procedure has resulted in increased interest in and experimentation with liquid biopsies. Despite these
promising ideas, more research needs to be done regarding the effectiveness of liquid biopsies. For instance, the samples collected with liquid biopsies often lack a sufficient amount of tumor material, which can complicate the analysis (3). Nevertheless, liquid biopsies for cancer detection and monitoring have much potential and many applications. Although liquid biopsies are currently confined to testing blood samples, the technique could be extended to other bodily fluids such as urine, saliva, and cerebrospinal fluid (3).
DNA Sequencing
Urinary bladder cancer, uncontrollable cell growth in the bladder, typically begins to develop within the inner lining of the bladder, known as the urothelium or transitional epithelium. The more invasive forms of bladder cancer are those that develop in deeper layers of the bladder wall. For example, muscle-invasive bladder cancer grows into the detrusor muscle of the bladder. In 2018 alone, over eighty thousand new cases of bladder cancer have been diagnosed, and over seventeen thousand deaths have occurred due to bladder cancer. While the five-year survival rate is approximately 77%, this statistic declines to 65% survival at fifteen years. The current and most common diagnostic examination for bladder cancer is a flexible cystoscopy followed by urine cytology. A flexible cystoscopy consists of inserting a thin, bendable instrument known as a cystoscope into the urethra and guiding it toward the bladder. The camera attached to the cystoscope exports photos for analysis. Sometimes a biopsy or blood sample analysis is conducted as well. During a urine cytology test, the patient provides a urine sample, sometimes collected via a catheter, a tube that is inserted through an opening in the body and drains liquid from the cavity it reaches. The sample is then placed under a microscope and analyzed for potential signs of cancer. Although effective, this dual procedure for detecting bladder cancer is considered very invasive and costly.
Figure 2 Image of a cystoscopy being performed on a male and a female.
Image retrieved from: https://commons.wikimedia.org/wiki/File:Diagram_showing_a_cystoscopy_for_a_man_and_a_woman_CRUK_064.svg00
Recently, scientists have sought an alternative, noninvasive procedure for cancer detection in DNA sequencing. Biological markers, or biomarkers, are measurable indicators of medical conditions such as disease and infection. Research into genetic biomarkers, which use DNA sequences as indicators of disease, has previously been unsuccessful for bladder cancer detection. However, a recent study analyzed the potential of the gene sequences known as TERT and FGFR3, which are two of the most frequently mutated genes in bladder cancer. Across bladder cancer cases, 65% of tumors contained a mutation in the TERT gene (4). Moreover, genetic testing has shown that mutations in this gene can be detected with both high sensitivity (the mutation is almost always detected when present) and high specificity (it is rarely reported when absent). Mutations in the FGFR3 gene have similarly been shown to indicate cancer with high specificity. In this study, researchers collected urine samples from participants who had bladder cancer at varying stages. They subsequently collected DNA from the cell pellets in the samples. The DNA samples were then analyzed with a technique called polymerase chain reaction (PCR) amplification. PCR analyzes short fragments of DNA by replicating them in a precise and relatively quick manner (4); it was used to amplify the parts of the genome where the biomarkers would be present. The copies produced by the PCR were then cleaned, processed, and compared against a database. Sanger sequencing, or the chain termination method, is another DNA sequencing technique that was used to confirm whether mutations were present in the genes of interest (4). The results of the experiment showed that mutations in the TERT promoter were present in approximately 55% of subjects with bladder cancer (4). There were also mutations detected in 1 out of the 20 cancer-free patients and in 1 out of the 89 postoperative, cancer-free patients. Mutations in the FGFR3 sequence were found in almost 30% of the subjects with bladder cancer. Overall, there was a 70% sensitivity for detecting the cancers using the biomarkers TERT and FGFR3 (4). These are promising results which indicate that, with more research into which biomarkers signal the presence of bladder cancer, this alternative detection procedure could become the primary diagnostic tool (4).
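Sensitivity and specificity can be made concrete with a small calculation. The sketch below uses invented counts, not data from the study cited above, to show how the two quantities are computed from a simple two-by-two table of test results against true disease status.

```python
# Toy sensitivity/specificity calculation. The counts are invented for
# illustration and are not taken from the bladder cancer study.
def sensitivity_specificity(true_pos: int, false_neg: int,
                            true_neg: int, false_pos: int):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = true_pos / (true_pos + false_neg)  # detected among the diseased
    specificity = true_neg / (true_neg + false_pos)  # cleared among the healthy
    return sensitivity, specificity

# Hypothetical screen: 100 patients with cancer, 100 without.
sens, spec = sensitivity_specificity(true_pos=70, false_neg=30,
                                     true_neg=95, false_pos=5)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
# A marker can be highly specific (few false positives) yet still miss a
# substantial fraction of cancers if its sensitivity is modest.
```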
Sampling Device
Esophageal cancer develops when malignant cells grow uncontrollably on the lining of the esophagus. Adenocarcinoma is the most common type of esophageal cancer and has an 18.8% five-year survival rate (5). There were almost seventeen thousand new cases and fifteen thousand reported deaths due to esophageal cancer in 2017. Barrett’s esophagus is a condition that results from chronic gastroesophageal reflux disease. This condition causes damage to the tissue lining the esophagus. Having this condition may also increase one’s chances of developing esophageal adenocarcinoma. Although less than 1% of people with Barrett’s esophagus actually develop esophageal adenocarcinoma, this condition is currently the only known precursor for esophageal adenocarcinoma, and therefore is an important condition to test for (5). The current diagnostic method for Barrett’s esophagus is an esophagogastroduodenoscopy. This procedure involves inserting a scope into the esophagus, stomach, and duodenum. The lining of these bodily structures is studied by physicians and, if needed, the scope is also capable of taking a biopsy (5).
Using these endoscopies to screen for Barrett’s esophagus is not recommended due to their high cost and invasiveness. Thus, new testing methods are already being considered. A recent experiment created a small, balloon-like sampling device that collects biological material that can be tested for specific biomarkers. Past experimentation has shown that the methylated vimentin gene (mVIM) has reportedly been found in 90% of patients with Barrett’s esophagus, making it an effective and highly sensitive biomarker for the condition (5). The biomarker has also been detected using an esophageal brushing procedure, which is considerably less invasive than a biopsy. During the experiment, mCCNA1 appeared to be a highly sensitive biomarker as well, and together, the two biomarkers had over 90% specificity for detecting Barrett’s esophagus and the cancers (5). The device was encapsulated and was approximately 16 x 9 mm in size, making it smaller than some orally ingested pills. It was attached to a catheter, 2.15 mm in diameter, and swallowed by the test subjects. Once the device reached a subject’s stomach, air was pumped into the device through the catheter, inflating the balloon-like device (5). The device was then slowly brought back up to collect samples along the surface of the esophagus. Finally, the device was inverted back into the capsule to protect the samples and was pulled back up through the mouth. The DNA in the samples was then analyzed in search of the biomarkers (5).
Out of the 156 patients, 28 were unable to swallow the sampling device, but of those who were successful, over 90% agreed that they would do it again or recommend the procedure. The biomarkers mVIM and mCCNA1 were tested separately but yielded the highest sensitivity when the two were jointly tested for each sample (5). In their detection of Barrett’s esophagus, dysplasias, and the limited cancer cases studied, the two biomarkers had a greater than 88% sensitivity. Although this device is still new and in the testing stages, it shows very promising results (5).
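A brief illustration of why testing two biomarkers together can raise sensitivity: the sketch below assumes hypothetical per-marker sensitivities and treats the markers as missing cases independently, which is a simplification, and then applies an either-positive rule.

```python
# Illustration of combining two biomarkers with an either-positive rule.
# The individual sensitivities are invented, and independence between the
# markers is assumed for simplicity.
def combined_sensitivity_or_rule(sens_a: float, sens_b: float) -> float:
    """Sensitivity when a sample is called positive if either marker is positive."""
    miss_both = (1 - sens_a) * (1 - sens_b)  # probability both markers miss a true case
    return 1 - miss_both

sens_marker_a, sens_marker_b = 0.80, 0.75  # hypothetical individual sensitivities
print(f"marker A alone: {sens_marker_a:.0%}")
print(f"marker B alone: {sens_marker_b:.0%}")
print(f"joint (either positive): {combined_sensitivity_or_rule(sens_marker_a, sens_marker_b):.0%}")
# The trade-off is specificity: calling a sample positive on either marker
# also admits more false positives, so both quantities must be tracked.
```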
Conclusion
With the costly and invasive nature of biopsies, the development of new alternatives for detecting cancers or their known precursors has become a dominant field of medical research. Liquid biopsies, genetic sequencing of DNA found in urine samples, and an esophageal sampling device are all new detection methods that are currently being researched and have the potential to improve cancer detection. Although these procedures could bring about major change in cancer detection, more research must be done to improve the accuracy of their detection capabilities, so that these procedures can be noninvasive, effective, and able to improve the standard of cancer detection and diagnosis overall.
ach, and duodenum.
Image retrieved from: https://en.wikipedia.org/wiki/Esophagogastroduodenoscopy
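The benefit of reading the two markers together can be seen with a small back-of-the-envelope calculation. The sketch below is only a simplified model: it treats the markers as statistically independent and calls a sample positive if either marker is detected; the 90% figure for mVIM comes from the text, while the mCCNA1 value is made up, so it does not reproduce the study's reported numbers.

```python
# Simplified illustration of why an "either marker positive" panel can be
# more sensitive than each marker alone. The mCCNA1 sensitivity below is a
# hypothetical placeholder, and independence between markers is an assumed
# simplification; the study's per-sample data would be needed for the real
# figure (5).

def panel_sensitivity(sens_a: float, sens_b: float) -> float:
    """Probability that at least one of two independent markers is detected
    in a sample that truly carries the condition."""
    return 1.0 - (1.0 - sens_a) * (1.0 - sens_b)

mvim_sens = 0.90    # reported detection rate of mVIM in Barrett's esophagus
mccna1_sens = 0.70  # hypothetical sensitivity of mCCNA1 alone

print(f"mVIM alone:   {mvim_sens:.2f}")
print(f"mCCNA1 alone: {mccna1_sens:.2f}")
print(f"Joint panel:  {panel_sensitivity(mvim_sens, mccna1_sens):.2f}")  # ~0.97
```

The same logic, applied to the study's actual per-patient data, is why the combined mVIM/mCCNA1 panel outperformed either marker alone.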
Conclusion
Given the costly and invasive nature of biopsies, the development of new alternatives for detecting cancers or their known precursors has become a dominant field of medical research. Liquid biopsies, genetic sequencing of DNA found in urine samples, and an esophageal sampling device are all new detection methods currently being researched that have the potential to improve cancer detection. Although these procedures could bring major change to cancer detection, more research must be done to improve the accuracy of their detection capabilities so that they can be noninvasive, effective, and improve the overall standard of cancer detection and diagnosis.
References
1. J. Wan, et al., Liquid biopsies come of age: towards implementation of circulating tumour DNA. Nature Reviews Cancer 17, 223–238 (2017). doi: 10.1038/nrc.2017.7.
2. P. Hofman, ALK status assessment with liquid biopsies of lung cancer patients. Cancers 9, 1-9 (2017). doi: 10.3390/cancers9080106.
3. E. Heitzer, et al., The potential of liquid biopsies for the early detection of cancer. npj Precision Oncology 1, (2017). doi: 10.1038/s41698-017-0039-5.
4. D. Ward, et al., Multiplex PCR and next generation sequencing for the non-invasive detection of bladder cancer. PLOS ONE (2016). doi: 10.1371/journal.pone.0148756.
5. H. Moinova, et al., Identifying DNA methylation biomarkers for non-endoscopic detection of Barrett’s esophagus. Science Translational Medicine 10, (2018). doi: 10.1126/scitranslmed.aao5848.
Robotic Surgery: The Use of the Da Vinci System in Modern Medicine
By Ruhana Uddin '19
Figure 1 The Da Vinci enables the surgeon and the two assistants who are present to obtain better visibility of the field. The surgeon is seated at the console and the assistants are placed near the Da Vinci to exchange instruments if needed.
Image retrieved from: https://commons.wikimedia.org/wiki/File:Cmglee_Cambridge_Science_Festival_2015_da_Vinci.jpg
Introduction
Since the early 2000s, the use of robotic surgery, the employment of robots or related technology to perform surgical procedures and maximize their efficiency, has been gaining popularity in orthopedics, neurology, and cardiology (1). Although the field has carried some negative connotations because of procedural costs and the level of experience required of doctors, the use of robots is innovative and improves a surgeon’s accuracy and precision, both of which are critical to the wellbeing of patients undergoing a procedure. The use of robotic surgical systems has steadily increased, drawing the attention of both doctors and patients. Utilizing a robot is a less invasive approach than traditional surgical procedures that require large incisions, because the robot is attached through four smaller incisions. Additionally, the robot can be used for surgeries that differ in complexity and field, making it far more versatile. This technology has revolutionized the lives of patients by allowing surgeons to operate more accurately, improving surgeons’ ergonomics, and giving patients the option of minimally invasive procedures (2). A wide range of surgeries has been conducted with the use of a robot, especially in the fields mentioned previously. For example, in neurosurgery, robots have been used to assist image-guided procedures: the surgeon can visualize a lesion in the brain while guiding the instrument into the lesion and viewing the image simultaneously (1). Meanwhile, in orthopedic surgery there is even more versatility, as the surgical robot has been applied to femoral and acetabular preparation in hip surgery, knee surgery, and spine surgery.
Furthermore, in general surgery, the use of robotics has been adopted in gynecology, urinary surgery, and gastrointestinal surgery. One creative procedure is the BABA surgery, which takes an innovative approach to thyroidectomy through the use of the more recent and efficient Da Vinci method. For thoracic surgeries in particular, surgeons take full advantage of the dexterity of the robot’s arms, using them to maneuver instruments behind anatomical structures, which can be useful in heart bypass procedures.
Previous Models for Robotic Surgery
Robotic surgery has a history of evolving toward more efficient surgical techniques. The application of robots in surgical procedures began in the 1970s and was pursued by NASA for a military project (3). The goal of the project was to eliminate the need for a surgeon’s physical presence when providing medical care to soldiers or astronauts. Since then, robotic surgery has come a long way, but the presence of a surgeon is still crucial. After many models were adjusted and replaced, two robotic surgical systems had gained FDA approval by 2003: the Zeus system and the Da Vinci system. While the two systems were very similar, they had some minor differences. Each consisted of robotic arms that were attached to the patient through small incisions; the surgeon attached instruments to the arms and carried out the procedure through them. Structurally, the Zeus system contained three arms while the Da Vinci system contained four (4). Additionally, each system had a small camera used to visualize the field inside the patient’s body. The Zeus system, in particular, featured a voice-controlled camera; however, it did not give the surgeon an accurate perception of depth.
The Zeus system was eventually discontinued in favor of the Da Vinci system, which remains in use today. The Da Vinci currently provides surgeons with greater benefits when carrying out surgeries, which ultimately helps the patient.
The Da Vinci Surgical System
Since 2003, the Da Vinci Surgical System has been used to carry out the majority of robot-assisted surgical procedures, and it has given surgeons a wider range of motion and greater visualization of the area they are working on. All models of the Da Vinci Surgical System consist of four arms; three of these arms carry instruments for the surgeon to use, and the fourth contains a dual camera to visualize the field (2). The dual camera provides the surgeon with a three-dimensional view of the field, while the instrument-carrying arms are inserted into the patient through incisions that are one to two centimeters long (1). One advantage of the Da Vinci is that it avoids large incisions. In one study, Dr. Jason D. Wright and a group of researchers from Columbia University analyzed over 200,000 robot-assisted hysterectomies performed over the span of three years in the United States. They determined that this method avoided making abdominal incisions to remove the uterus, making the overall procedure less invasive (5). Additionally, they concluded that robot-assisted hysterectomies were easier to learn and allowed doctors to handle complicated cases. Using the robot allowed surgeons to overcome obstacles that arose during surgery; without the robot, the traditional approach would have been used, and if the doctor encountered a problem, an abdominal incision would have been made. The study showed that the Da Vinci had the potential to make procedures less invasive compared with their traditional versions. During a procedure, once the arms are placed, the surgeon sits at a console and manipulates and controls the arms of the robot (2). The robot is not programmed to carry out any actions on its own; everything the robot does is controlled solely by the surgeon performing the procedure. Furthermore, the use of the robot allows for the development of different solutions to a task during surgery. For example, Dr. Andrea Cestari and a team of researchers from the Scientific Institute of Milan, Italy, demonstrated that the side-docking technique with the Da Vinci was more reliable than the traditional docking method. Docking refers to the angle and position at which the arms are placed. The traditional docking method involved raising the patient’s legs during a prostatectomy, which could cause issues with the patient’s hips, whereas side docking anchored the arms of the robot on the left side of the patient (6). Side docking has been beneficial for some robotic surgeons, as it prevented collisions with other instruments and reduced the time needed to dock the instruments. This demonstrates the potential of the Da Vinci to spark creativity in the field of surgery.
BABA Surgery: A Closer Look
One procedure that has been adapted to the Da Vinci system is the open thyroidectomy, performed when portions of the thyroid gland need to be removed. This may be due to nodules or goiters, or to hormone imbalances caused by the thyroid or parathyroid glands.
Traditionally, an eight-centimeter-long incision is made near the collar and then stretched to remove the thyroid gland (7). In both the traditional and the modern approach, the vagus nerve is continuously tested for activity, as it is located close to the thyroid gland. In one study, Dr. Robert H. Howland and his team of researchers from the University of Pittsburgh Medical Center analyzed the important role that the vagus nerve plays in the autonomic nervous system and concluded that it has a key part in controlling heart rate and respiration (8, 9). Because of the importance of the vagus nerve, it is tested periodically throughout the BABA surgery to ensure that it is not damaged by heat from the instruments used. Beginning in 2007, Dr. June Young Choi and his team from the Seoul National University College of Medicine took a different approach, known as the bilateral axillo-breast approach (BABA), to remove the thyroid remotely so that the incisions remain hidden. His team analyzed 512 thyroidectomies performed via the BABA approach (7). The approach involves four incisions, each two centimeters in length: one on each areola and one near each armpit. These four incisions are used to attach the arms of the robot to the patient. A working space is then created in the neck area, near the middle of all the incisions, and from there the surgeon uses the Da Vinci to perform the thyroidectomy via one of the incision sites in the armpit. While this approach is novel and more complicated than the traditional thyroidectomy, Dr. Choi found that it has benefits for both the patient and the surgeon. Patients may prefer to undergo a BABA because the surgery does not produce a visible scar (10). The stitches near the neck area are placed underneath the skin with the robot and are not visible at the surface, producing a hidden scar, while the scars from the incisions at the armpits and areolas are hidden in the creases of the skin. Thus, at the end of surgery there are no openly visible scars, which allows patients to keep their surgery private. Patients also experienced less pain than with an open thyroidectomy. For the surgeon, this approach allows greater visualization of the thyroid and more accurate maneuvering of the instruments. This is especially helpful if the patient has Hashimoto’s disease, which makes the tissue surrounding the thyroid stickier and harder to handle (11). Moreover, the recovery time for patients, only three to seven days, was shorter than it would have been with traditional surgery. Although there are many benefits to this new method, there are also drawbacks. The BABA surgery is more expensive than an open thyroidectomy, and the procedure typically takes 1.3 to 2.4 times longer, which may be a concern because the patient is under anesthesia for a much longer period (7). Regardless, it is seen as a feasible and innovative procedure, as it delivers the cosmetic results patients are looking for as well as lower postoperative complication rates (10).
The Future for Surgeons
While the Da Vinci system is a novel approach to performing surgeries, many are still confused about what role surgeons play and how this method affects them.
Traditionally, when performing surgery, a surgeon would be hunched over the patient, directly working on a specific portion of the body. Other assistants may be present to help the surgeon with retraction to obtain better visualization, to provide suction, or to change out instruments. In contrast, a surgeon using the robotic method is located at a distance from the patient and operates remotely, with the arms of the robot carrying out each action. However, the surgeon remains in control of every action that the robot carries out (2). Additionally, using a robot gives the surgeon greater dexterity. The arms of the robot are able to move 360 degrees, giving the surgeon a wider range of motion. The system is also able to remember the last position the surgeon was in, which can be crucial if the surgeon needs to rest his or her hands during a long procedure (3); this ensures that the surgeon can start right where he or she left off. Moreover, the surgeon gains better precision through the use of the robot because of the high magnification and resolution produced by the camera. The robot can also be calibrated to scale down each movement the surgeon makes by a set ratio, which allows for higher surgical precision (2).
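The idea of calibrated, scaled movement can be made concrete with a small sketch of how a console input might be mapped to instrument motion. This is a conceptual illustration only: the smoothing and 5:1 scaling parameters are invented for the example and do not describe the Da Vinci's actual control software.

```python
# Conceptual sketch of motion scaling in a surgeon-console system: each hand
# displacement is lightly smoothed (a crude stand-in for tremor filtering)
# and then scaled down before being applied to the instrument tip. All
# parameters are illustrative and are not taken from any real system.

def scale_motion(hand_deltas, scale=0.2, smoothing=0.5):
    """Map raw hand displacements (mm) to instrument-tip displacements (mm)."""
    instrument_deltas = []
    prev = (0.0, 0.0, 0.0)
    for delta in hand_deltas:
        # Exponential smoothing damps small, rapid oscillations (hand tremor).
        smoothed = tuple(smoothing * p + (1.0 - smoothing) * d
                         for p, d in zip(prev, delta))
        prev = smoothed
        # A 5:1 ratio (scale = 0.2) turns a large hand motion into a few mm at the tip.
        instrument_deltas.append(tuple(scale * c for c in smoothed))
    return instrument_deltas

# A repeated 10 mm hand movement along x becomes a 1.0 mm then 1.5 mm tip movement.
print(scale_motion([(10.0, 0.0, 0.0), (10.0, 0.0, 0.0)]))
```

In a real system the filtering and scaling run continuously at high frequency, but the basic mapping from large hand motions to small, steadier tip motions is the source of the added precision described above.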
The use of the robot does have a large learning curve for surgeons; tasks such as suturing were associated with longer acquisition times. However, due to the dexterity and mobility of the arms of the robot, mastering these techniques sped up the learning curve for other skills required for specific surgeries (2). Overall, using the Da Vinci allows the surgeon to perform procedures elegantly and efficiently, making the job a little easier and leading to better outcomes for patients.
Conclusion
While robotic surgery is still a fairly new field, it is becoming widely popular due to the benefits it brings to both the patient and the surgeon. With this approach, patients are able to choose minimally invasive options, and surgeons are able to perform such procedures more efficiently and precisely. The use of the Da Vinci has also sparked creative ways of performing traditional procedures, such as the BABA thyroidectomy. This has allowed doctors to adjust traditional surgeries and increase their effectiveness through smaller incision sites, a magnified visual field with increased visualization, and shorter recovery periods for patients. With increased usage of the Da Vinci system, robotic surgery has great potential to expand and to deliver more efficient and beneficial outcomes for surgeons and patients.
Figure 2 Surgeon peering into the Da Vinci system and controlling the movements mechanically.
Image retrieved from https://media.defense.gov/2017/Mar/29/2001724085/-1/-1/0/170328-FDN643-064.JPG
References
1. R.D. Howe, et al., Robotics for surgery. Annual Review of Biomedical Engineering 1, 211-240 (1999). doi: 10.1146/annurev.bioeng.1.1.211.
2. D.M. Herron, et al., A consensus document on robotic surgery. Surgical Endoscopy 22, 313-325 (2008). doi: 10.1007/s00464-007-9727-5.
3. M. Diana, et al., Robotic surgery. British Journal of Surgery 102, 15-28 (2015). doi: 10.1002/bjs.9711.
4. A.R. Lanfranco, et al., Robotic surgery: a current perspective. Annals of Surgery 239, 14-21 (2004). doi: 10.1097/01.sla.0000103020.19595.7d.
5. J.D. Wright, et al., Robotically assisted versus laparoscopic hysterectomy among women with benign gynecologic disease. The Journal of the American Medical Association 309, 689-698 (2013). doi: 10.1001/jama.2013.186.
6. A. Cestari, et al., Side docking of the Da Vinci robotic system for radical prostatectomy: advantages over traditional docking. Journal of Robotic Surgery 9, 243-247 (2015). doi: 10.1007/s11701-015-0523-2.
7. Y. Choi, et al., Endoscopic thyroidectomy via bilateral axillo-breast approach (BABA): review of 512 cases in a single institute. Surgical Endoscopy 26, 948-955 (2012). doi: 10.1007/s00464-011-1973-x.
8. R.H. Howland, et al., Vagus nerve stimulation. Current Behavioral Neuroscience Reports 2, 64-73 (2014). doi: 10.1007/s40473-014-0010-5.
9. B.B. Lorincz, et al., Automatic periodic stimulation of the vagus nerve during single-incision transaxillary robotic thyroidectomy: feasibility, safety and first cases. Journal of the Science and Specialties of the Head and Neck 38, 482-485 (2015). doi: 10.1002/hed.24259.
10. J.H. Choe, et al., Endoscopic thyroidectomy using a new bilateral axillo-breast approach. World Journal of Surgery 31, 601-606 (2007). doi: 10.1007/s00268-006-0481-y.
11. J.M. Lee, et al., Thyroidectomy for Hashimoto’s Thyroiditis: Complication and Associated Cancers. Thyroid 18, 729-734 (2008). doi: 10.1089/thy.2007.0384.
Look Here! Cognitive and Behavioral Correlates of Selective Attention in Adults with Autism Symptoms
Lee Ann Santore ’19, Christopher M. Esposito ’18, & Matthew D. Lerner, Ph.D.
ABSTRACT
Individuals with Autism Spectrum Disorder (ASD) exhibit deficits in attention to faces, a characteristic critical to successful social functioning. These deficits can be measured cognitively and behaviorally via the P100 event-related potential and reaction time (RT) to socioemotional stimuli, respectively. This study observed the relationships between the properties of the P100 and reaction time to emotionally salient faces, and the severity of ASD symptoms in adults with and without ASD. It was discovered that slower RTs related to greater ASD symptom severity and social skills deficits, and that P100 amplitudes, but not P100 latencies, related to faster RT. These results suggest that ASD symptoms may lead to slower RT and increased neural activity when encountering socioemotional stimuli.
INTRODUCTION
Individuals with Autism Spectrum Disorder (ASD) demonstrate deficits in attention to highly emotional, salient stimuli, a facet of executive functioning that is crucial for successful social functioning (1, 2). It is believed that attention deficits in ASD (e.g., hypersensitivity, over-arousal, hyperfocus on off-topic stimuli, and difficulty shifting focus) affect incoming information, suppressing areas of the brain that support executive functions (3). Executive functioning impairments have been argued to influence, or possibly cause, emotion understanding deficits in individuals with ASD (4, 5, 6). Such deficits in executive functioning can be revealed by slower reaction times, which can cause individuals with ASD to take longer when making inferences about the emotions of others, compared to typically developing individuals (7). For example, it may take an individual with ASD longer to realize that a face is happy than it would a typically developing individual. Executive functioning dysfunction can be measured behaviorally via reaction time during laboratory tasks and physiologically via event-related potentials (3, 7). Electroencephalography (EEG) is a commonly used, non-invasive technique that records the electrical activity of the brain through scalp electrodes, producing real-time traces of brain waves. The P100 wave, an event-related potential marker derived from EEG, is a measure of early face processing that is influenced and modulated by selective attention, or reacting to only select stimuli (8, 9, 10). EEG is a reliable measure of speed and intensity when researching discrete stages of cognitive processing tied to distinct events (11). This procedure can reveal early, rapid processing of simple and complex stimuli, and the salience of faces modulates the latency of the P100 (12, 13). The P100 has been observed in both ASD and typically developing populations. The literature indicates that, compared to typically developing controls, the P100 is delayed in adults with ASD; this may mean that it takes individuals with ASD longer to realize that a face is present and to begin interpreting its emotions. The Diagnostic Analysis of Nonverbal Accuracy-2 (DANVA-2) is an age-normed, standardized paradigm used to assess emotion recognition of child and adult faces across emotional intensities (14, 15). Research using the DANVA-2 has shown that certain event-related potential latencies of facial processing are correlated with reaction time (11). However, researchers have yet to investigate the relationship between the P100 and reaction time on facial recognition tasks. To evaluate the cognitive and behavioral indices of early attention to salient social stimuli, this study examined the relationships between P100 latency and reaction time to emotionally salient faces, and the severity of ASD symptoms in adults with and without ASD.
It was hypothesized that in individuals with ASD, longer P100 latencies would be associated with slower reaction times.
METHODS
Sample
Fifty-eight adults aged 18 and older (mean age = 22.83 years, SD = 6.20; 27 male) were recruited from the greater Long Island area. Recruitment took place via flyers, community recruitment events, and an online study pool system offered to Stony Brook University psychology students. Of these 58 participants, 19 were diagnosed with ASD and the rest were typically developing (TD). No participants had significant cognitive impairment as determined via IQ testing (mean IQ = 101.91, SD = 13.17).
Procedure
Participants who reported having ASD attended an initial visit during which their diagnosis was confirmed with the gold-standard diagnostic assessment, the Autism Diagnostic Observation Schedule – Second Edition (16). The assessment was administered by trained and reliable professionals (e.g., clinicians and doctoral students). ASD participants also completed the Kaufman Brief Intelligence Test, administered by trained research assistants (e.g., undergraduate and graduate students) (17). Participants who scored below 70 were excluded from the study to ensure that all participants could competently perform the tasks. TD participants were exempt from the initial visit but still completed the intelligence test at the second visit. At the second visit, both ASD and TD participants completed a measure of ASD symptom severity (18). During this visit, participants were also administered the Diagnostic Analysis of Nonverbal Accuracy – Second Edition, a facial and vocal emotion recognition task, while being monitored by electroencephalography (EEG) (14). During EEG data acquisition, event-related potentials were recorded, particularly the latency and amplitude of the P100, and participant reaction times were extracted.
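As a rough illustration of how the P100 measures described above can be pulled from an averaged ERP waveform, the sketch below locates the most positive peak in a typical P100 search window. The 80-140 ms window, sampling rate, and synthetic waveform are assumptions chosen for demonstration; they are not the parameters or data used in this study.

```python
# Illustrative sketch: extracting P100 amplitude and latency from an averaged
# ERP epoch. The 80-140 ms search window, 500 Hz sampling rate, and synthetic
# waveform are assumptions for demonstration, not this study's parameters.

import numpy as np

def p100_peak(erp, srate, window_ms=(80.0, 140.0)):
    """Return (latency_ms, amplitude_uV) of the largest positive deflection
    inside the P100 search window. Time zero is stimulus onset."""
    times_ms = np.arange(erp.size) / srate * 1000.0
    mask = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    idx = np.argmax(erp[mask])                 # most positive sample in window
    return times_ms[mask][idx], erp[mask][idx]

# Synthetic single-channel average: a positive bump peaking near 100 ms.
srate = 500.0
t = np.arange(0.0, 0.4, 1.0 / srate)                        # 0-400 ms epoch
erp = 5.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.01 ** 2))    # microvolts

latency, amplitude = p100_peak(erp, srate)
print(f"P100 latency ~{latency:.0f} ms, amplitude ~{amplitude:.1f} uV")
```

In practice, EEG toolkits such as MNE-Python automate epoching, averaging, and peak extraction, but the underlying measurement is the same window-restricted peak search.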
RESULTS
Bivariate correlations revealed that slower reaction times correlated with ASD symptoms as measured by the Autism Quotient (r = .341, p < .01; Figure 1), as did the social skills (r = .271, p < .05), attention switching (r = .301, p < .05), communication (r = .266, p < .05), and imagination (r = .318, p < .05) deficits subscales. Slower P100 latencies correlated with the social skills deficits subscale (r = .266, p < .05; Figure 2), but not with overall ASD symptoms (r = .213, p = .108). Additionally, P100 latency did not significantly correlate with reaction time (r = .122, p = .367); however, larger P100 amplitudes were associated with faster reaction times (r = -.291, p < .05; Figure 3).
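For readers unfamiliar with the statistics reported above, the snippet below shows how a bivariate Pearson correlation of this kind is computed; the AQ scores and reaction times in it are invented solely to illustrate the calculation and are not the study's data.

```python
# Minimal sketch of the kind of bivariate (Pearson) correlation reported
# above: r measures the linear association between two variables and p its
# significance. The data below are made up for illustration only.

from scipy.stats import pearsonr

aq_scores      = [12, 18, 22, 25, 30, 33, 35, 40]          # hypothetical AQ totals
reaction_times = [410, 430, 455, 470, 500, 520, 515, 560]  # ms, hypothetical

r, p = pearsonr(aq_scores, reaction_times)
print(f"r = {r:.3f}, p = {p:.4f}")  # a positive r: higher AQ, slower RT
```

A positive r with p below .05, as in this toy output, corresponds to the pattern reported for AQ totals and reaction time.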
Figure 1 Total Autism Symptoms, as measured by the AQ, correlated significantly with average reaction time to facial stimuli from the DANVA-2. Greater Autism Symptoms related to longer reaction times.
Figure 2 Total Social Skills, as measured by the AQ, correlated significantly with P100 Latency to facial stimuli presented by the DANVA-2. Greater social skills deficits correlated positively with increased P100 latency.
Figure 3 Average reaction time to facial stimuli presented by the DANVA-2 correlated significantly with P100 amplitude to faces, such that faster reaction times related to larger P100 amplitudes.
Discussion
Behavioral and neural evidence converges to suggest that individuals with social skill deficits and increased ASD symptoms demonstrate decreased attention and efficiency of response to socially relevant stimuli. This is the first study to demonstrate that social deficits relate to an early neural marker of attention (the P100). Results suggest that P100 latency, a neural index of selective attention efficiency, is not related to reaction time, a behavioral indicator of executive functioning. However, P100 amplitude, which is thought to reflect recruitment of neural resources, was related to faster reaction time. This suggests that in both ASD and typically developing individuals, recruitment of neural resources during selective attention is more closely related to executive functioning during appraisal of emotional faces than is processing efficiency. P100 latency is not a good indicator of reaction time to emotionally salient stimuli; instead, P100 amplitude relates to reaction time to such stimuli. These findings indicate that the P100 event-related potential may serve as a viable indicator of neural activity when attending and reacting to socioemotional stimuli. Future studies should utilize the P100 to further investigate and localize areas of neural abnormality that might contribute to the attention and social deficits typically observed in individuals with ASD, with the aim of improving intervention and treatment. Additionally, future research should use the P100 as a neural marker to investigate the neural development of selective attention to emotionally salient stimuli in individuals with and without ASD. This could also serve to measure improvements in social responsivity in individuals with ASD.
References
1. M. Batty, M. Taylor, The development of emotional face processing during childhood. Developmental Science 9(2), 207-220 (2006). doi: 10.1111/j.1467-7687.2006.00480.x.
2. R. Philip, et al., Deficits in facial, body movement and vocal emotional processing in autism spectrum disorders. Psychological Medicine 40(11), 1919-1929 (2010). doi: 10.1017/S0033291709992364.
3. E. Sokhadze, et al., Event-related potential (ERP) study of facial expression processing deficits in autism. Journal of Communications Research 7(4) (2015).
4. S. Ozonoff, et al., Executive function deficits in high-functioning autistic individuals: relationship to theory of mind. Journal of Child Psychology and Psychiatry 32(7), 1081-1105 (1991). doi: 10.1111/j.1469-7610.1991.tb00351.x.
5. J. Perner, The meta-intentional nature of executive functions and theory of mind. Language and Thought: Interdisciplinary Themes, 270 (1998). doi: 10.1017/CBO9780511597909.017.
6. E. Russell, Autism as an executive disorder. Oxford University Press (1997).
7. N. Kaland, et al., Response times of children and adolescents with Asperger syndrome on an 'advanced' test of theory of mind. Journal of Autism and Developmental Disorders 37(2), 197-209 (2007). doi: 10.1007/s10803-006-0152-8.
8. S. Hillyard, et al., Sensory gain control (amplification) as a mechanism of selective attention: electrophysiological and neuroimaging evidence. Philosophical Transactions of the Royal Society B: Biological Sciences 353(1373), 1257-1270 (1998). doi: 10.1098/rstb.1998.0281.
9. S. Luck, et al., Event-related potential studies of attention. Trends in Cognitive Sciences 4(11), 432-440 (2000).
10. G. Mangun, Neural mechanisms of visual selective attention. Psychophysiology 32(1), 4-18 (1995).
11. M. Lerner, et al., Multimodal emotion processing in autism spectrum disorders: an event-related potential study. Developmental Cognitive Neuroscience 3, 11-21 (2013). doi: 10.1016/j.dcn.2012.08.005.
12. H. Shen, et al., An event-related potential study on the perception and the recognition of face, facial features, and objects in children with autism spectrum disorders. Perceptual and Motor Skills 124(1), 145-165 (2017). doi: 10.1177/0031512516681694.
13. M. Taylor, et al., Eyes first! Eye processing develops before face processing in children. Neuroreport 12(8), 1671-1676 (2001). doi: 10.1097/00001756-200106130-0003.
14. S. Nowicki, M. Duke, Individual differences in the nonverbal communication of affect: the diagnostic analysis of nonverbal accuracy scale. Journal of Nonverbal Behavior 18(1), 9-35 (1994). doi: 10.1007/BF02169077.
15. S. Nowicki Jr, J. Carton, The measurement of emotional intensity from facial expressions. The Journal of Social Psychology 133(5), 749-750 (1993).
16. C. Lord, et al., Autism Diagnostic Observation Schedule – Second Edition (ADOS-2). Los Angeles: Western Psychological Services (2015). doi: 10.1007/s10803-014-2080-3.
17. A. Kaufman, N. Kaufman, Kaufman Brief Intelligence Test. John Wiley & Sons, Inc. (2004).
18. S. Baron-Cohen, et al., The autism-spectrum quotient (AQ): evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. Journal of Autism and Developmental Disorders 31(1), 5-17 (2001). doi: 10.1023/A:1005653411471.
Stony Brook Young Investigators Review would like to give a special thank you to all of our benefactors. Without your support, this issue would not have been possible.
Find out more: sbyireview.com www.facebook.com/sbyireview Follow us on twitter @sbyir