Spring 2020 Volume 14
STAFF 2019-2020
Editor-in-Chief: Stephanie Budhan ’21
Head of Cabinet: Priya Aggarwal ’21
Layout Chief: Lauren Yoon ’21
Managing Editors: Nomrota Majumder ’21 Shrey Thaker ’22
Cabinet: Rohan Dayal ’23 Jessica Hui ’22 Debolina Chanda ’22 Hannah Philipose ’23
Layout Editors: Priya Aggarwal ’21 Matthew Ng ’22 Komal Grewal ’23
Associate Editors: Nina Gu ’21 Nita Wong ’21 Gabriela Zanko ’23 Elen Deng ’21
Copy Editors: Caleb Sooknanan ’20 Claire Garfield ’20 Rohan Shah ’21 Riya Gandhi ’22 Kimberly Lu ’22 Farah Hasan ’23
Webmaster: Rohan Dayal ’23
Faculty Advisors: Dr. John Peter Gergen Dr. Nicole Leavey
Writers: John McMenimon ’20 Natalie Lo ’21 Travis Cutter ’22 Jasmine Kumar ’22 Lance Wang ’22 Kailyn Fan ’23 Robert Lum ’21 Terrence Jiang ’23 Shrila Shah ’23 Priyanshi Patel ’22 Panayiota Siskos ’23 Gaurav Sharma ’22 Joyce Chen ’23 Gwenyth Mercep ’21 Simran Kaur ’20 Ashley Goland ’23 Wendy Wu ’22 Ayesha Azeem ’23
LETTER FROM THE EDITOR-IN-CHIEF
Stony Brook Young Investigators Review (SBYIR) is proud to release its 14th biannual publication. As an undergraduate research journal, SBYIR strives to provide a forum for students interested in scientific writing, to make science accessible to students and faculty on campus and far beyond, and to highlight the incredible research being done on campus by both faculty and undergraduates. Despite the unprecedented challenges brought to Stony Brook University by the COVID-19 pandemic, the SBYIR staff has worked tirelessly to publish this issue.

This edition’s theme prompted writers to consider both the great human impact and the complex ethics behind scientific advancements and technologies in a wide variety of fields; each piece therefore integrates science and society. In this issue, readers will find articles that evaluate the safety of self-driving cars, discuss the emergence of anti-CRISPR gene editing technology, and assess the applications of virtual reality in education and healthcare. This issue also features two interviews: one with experimental nuclear physicist Dr. Jan C. Bernauer, and one with esteemed professor and undergraduate biology director Dr. John Peter Gergen.

To celebrate the release of each issue, we host a colloquium at which an honorary speaker is invited to share their work. Past speakers include esteemed biologist Dr. Dany Adams, founding Editor-in-Chief of the new Mary Ann Liebert, Inc. journal Bioelectricity, and, most recently, Dr. Alan Hanash, a physician-scientist who conducts research on hematopoietic stem cells at Memorial Sloan Kettering Cancer Center.

As always, we would like to thank our staff of writers, editors, and cabinet members for their hard work and commitment, without whom none of this would be possible. Additionally, we would like to thank our partners at the Alda Center for Communicating Science for their constant support, as well as our faculty advisors, Dr. Nicole Leavey and Dr. Peter Gergen, for their guidance. Welcome to SBYIR. We sincerely hope you enjoy.
STEPHANIE BUDHAN
TABLE OF CONTENTS Spring 2020 Volume 14

SCIENCE REVIEWS
8 Genetic Engineering With Anti-CRISPR By Jasmine Kumar ’22
10 Human Effects on the Fossil Record Throughout Time By Travis Cutter ’22
12 Evaluating the Safety of Self-Driving Cars By Lance Wang ’22
18 Diverse Applications of Virtual Reality in Education and Healthcare By Kailyn Fan ’23
26 Ethics of Cancer Care: Beyond Just Biology By Shrila Shah ’22
29 Analyzing Hybrid Closed Loop Systems By Robert Lum ’21
31 Build-A-Baby By Terrence Jiang ’22

INTERVIEW
15 The Man Behind Undergraduate Biology: Dr. J. Peter Gergen By Natalie Lo ’21
22 Resolving the Proton Radius Puzzle: the Research of Dr. Jan Bernauer By John McMenimon ’20
COUNTDOWN TO 2050 TO SAVE THE AMAZON
PRIYANSHI PATEL ’22
The number of fires in the Amazon last year has renewed public concern for the future of the region’s forest biome. The concerns date back to the early 1970s, when Brazil built the Transamazon Highway, after which the rate of deforestation increased. One of the principal questions Amazon scientists are asking is how much deforestation and global climate change the Amazon’s tropical biome can handle before rainfall is drastically reduced, consequently releasing vast amounts of forest carbon into the atmosphere. A recent study published in the journal One Earth, conducted by the Departments of Geography and Latin American Studies at the University of Florida and the University of Texas, aims to answer that question. The study found that the Amazon may be turned into a dry savanna in less than 30 years. The Amazon rainforest generates half of its own rainfall through evaporation and transpiration, and as global climate change has intensified, there is greater concern over where the deforestation threshold lies. Earlier estimates put the “tipping point” at 40% deforestation, but more recent estimates suggest that the combined effects of climate change, drought, and wildfires could bring the tipping point closer to between 20% and 25% deforestation. The recent increases in deforestation must be reversed to have any hope of avoiding the tipping point. In the past, Brazil has made efforts to significantly curb deforestation, but the political will to continue doing so is uncertain. If the Brazilian government continues to promote agribusiness expansion, the consequences will be catastrophic for the Amazon and the world. However, continuing pressure from the media and global consumers is likely to create economic incentives persuading the agribusiness sector that it is more profitable to protect the Amazon than to destroy it. 1. R.T. Walker, et al., Avoiding Amazonian catastrophes: prospects for conservation in the 21st century. One Earth 1, 1-14 (2019). doi: 10.1016/j.oneear.2019.09.009 2. Image retrieved from: https://www.pexels.com/photo/ash-blaze-bonfire-burn-266436/
CRYOPRESERVATION OF SEMEN WITHOUT USING EGG YOLK
PANAYIOTA SISKOS ’23
Figure 1 Egg yolk is used to protect sperm from cold shock in semen cryopreservation.
Cryopreservation of semen conserves genetic information and allows fertilization via artificial insemination. Egg yolk is an ingredient of bull semen extender, which buffers sperm from temperature and environmental stressors; however, egg yolk composition varies significantly among producers. Cholesterol is a molecule that strengthens membrane structures: increasing the cholesterol content of sperm plasma membranes increases cryotolerance, allowing sperm to survive freezing temperatures. The amount of cholesterol in sperm plasma membranes can be altered by cyclodextrins, oligosaccharides that bind cholesterol. The main purpose of this study, led by scientists from the University of Saskatchewan, was to cryopreserve bull semen with a novel egg yolk-free extender containing exogenous cholesterol. Another goal was to compare the quality of sperm frozen with the novel extender at varying concentrations of glycerol, a cryoprotectant that protects sperm from freezing temperatures. A third objective was to assess the in-vitro fertilization ability of sperm frozen using the egg yolk-free extender. Semen was first collected from bulls by electroejaculation. Some sperm was diluted in an extender containing egg yolk and glycerol and subsequently frozen. The rest was diluted in an egg yolk-free extender, cooled, and treated with a cholesterol-cyclodextrin extender to incorporate exogenous cholesterol into the sperm membranes. The percentage of total sperm
moving in sperm frozen with the egg yolk-containing extender and the egg yolk-free extender with cholesterol-cyclodextrin, respectively, was similar. However, sperm velocity was higher in the egg yolk-free group than in the egg yolk-containing group. In addition, 9 common proteins were found in post-thaw sperm from the two groups. Fresh sperm and sperm frozen with the egg yolk-free extender shared 19 common proteins, suggesting adequate preservation using the egg yolk-free extender. Additionally, increasing the glycerol concentration in the egg yolk-free extender resulted in increased post-thaw sperm motility. Finally, sperm frozen with the egg yolk extender and the egg yolk-free extender, respectively, had similar cleavage and blastocyst rates, suggesting similar in-vitro fertilizing ability. This study is important because it demonstrated a novel cryopreservation technique using an egg yolk-free extender and exogenous cholesterol, which avoids the biosecurity risks associated with using egg yolk. Future studies could use the cholesterol-cyclodextrin complex diluted in glycerol as an extender to study changes in proteins after cryopreservation, which may allow researchers to isolate membrane proteins and improve freezing ability. In addition, in-vivo fertility trials of semen frozen without egg yolk are currently in progress and will be reported in the future. 1. A. Muhammad, et al., Egg yolk-free cryopreservation of bull semen. PLOS ONE 14, (2019). doi: 10.1371/journal.pone.0223977 2. Image retrieved from: https://cdn-0.goodfreephotos.com/albums/other-photos/eggs-and-cracked-egg.jpg
GENE INACTIVATION IN KIDNEY CANCER DEVELOPMENT
GAURAV SHARMA ’22
The epigenetic components of cancer onset have been of interest for many years, in the hope of developing approaches that could one day delay an individual’s cancer onset. Recently, a tumor suppressor family was discovered, the Ras-Association Domain Family (RASSF), which is epigenetically inactivated in breast, lung, skin, and thyroid cancers. Neither RASSF10’s in-vivo function nor its role in kidney cancer had been studied, but researchers from the Institute of Genetics in Germany suspected that it may also play a role in kidney malignancies. To study the effects of inactivating (silencing) RASSF10 in kidney cancer, these researchers developed the first RASSF10 knockout mouse model. The knockout mice showed no physical difference from regular mice and were still fertile. Double knockouts (p53 and RASSF10, or RASSF1A and RASSF10) were used as well. Researchers observed increased tumorigenesis and spleen enlargement in the double knockout mice; specifically, the p53/RASSF10 double knockout showed neoplasia in the kidney. Survival of the RASSF10 knockout mice decreased, while the activity of KRAS, a protein that signals proliferation or differentiation, and of MYC, a family of transcription factors involved in tumorigenesis, increased. The researchers then looked at human kidney cancer cell lines and found that RASSF10 was inactivated there as well, supporting the hypothesis that RASSF10 inactivation may also contribute to kidney cancer in humans. These findings pose a possible target: researchers could develop an approach to reactivate RASSF10. One methodology suggested by the researchers is epigenetic editing, similar to the method that created the knockout mice. Researchers also now have an approach for finding other genes whose inactivation promotes tumorigenesis: since hypermethylation was the primary sign of RASSF10 inactivation, screening for genes that are hypermethylated in knockouts may reveal more candidate genes. The next step would be uncovering the details of the RASSF10 pathway to see exactly how its inactivation leads to kidney cancer.
1. A. Richter, et al., RASSF10 is frequently epigenetically inactivated in kidney cancer and its knockout promotes neoplasia in cancer prone mice. Oncogene (2020). doi: 10.1038/s41388-020-1195-6 2. Image retrieved from: https://www.rcsb.org/3d-view/6MBT/0
Figure 1 KRAS, a protein involved in tumor growth, shows increased activity in RASSF10 knockout mice.
ROLE OF FACIAL MORPHOLOGY IN FIRST IMPRESSIONS
JOYCE CHEN ’23
When it comes to meeting someone new, a first impression is especially significant. Naturally, humans want to make a lasting impression on others; however, certain facial cues carry drawbacks, one of which is dominance. A study conducted by Laura Clark at the University of Lincoln investigated how people judge others based on facial structure by observing interactions between humans and macaques. Humans tend to be more attracted to “trustworthy” faces than to dominant facial structures. A prime example of this is the “baby schema,” a preference for babies due to their circular faces, rounded cheeks, and wide eyes. The same facial features can apply to adults, especially females. The baby schema thus creates a natural, instinctive bias for humans to willingly approach and interact with others, and to associate cuteness with roundness and femininity. Clark and her team dedicated their research to understanding whether human judgments are based on facial features. The team asked 227 participants to rate 17 monkey expressions on age, sex, trustworthiness, dominance, socialness, cuteness, attractiveness, healthiness, and activity. Afterwards, they asked the participants for a measure of their subjective proximity for the
approach, feeding, and socialization with the macaques. The researchers also measured the monkeys’ face width-to-height ratio (fWHR) and baby schema to provide an accurate facial description of each monkey. The researchers discovered that the participants preferred approaching the monkeys to feed them over taking pictures with them or simply approaching them. Additionally, people were more likely to approach, feed, or take a photo with the monkeys if the monkeys appeared young, social, female, and trustworthy. These traits were negatively correlated with aggression, while dominance was positively correlated with aggression. Clark’s study suggests that humans and primates share a natural, built-in facial signaling system that propels humans to approach friendlier and more trustworthy-looking individuals over dominant ones; this can indirectly prevent them from getting involved in conflict, because such faces look “safer.” Although facial morphology research can be applied to how humans see other human faces, there is still much to be discovered regarding the personality of the individual that initiates the approach, and whether they are bold or neurotic.
1. L. Clark, et al., The importance of first impression judgements in interspecies interactions. Sci Rep 10, (2020). doi: 10.1038/s41598-020-58867-x 2. Image retrieved from: https://images.pexels.com/photos/327540/pexels-photo-327540.jpeg
Figure 1 Humans have a natural attraction towards more trustworthy-looking individuals due to an instinctual facial-signaling system.
DAMAGE & TRAUMA: GET YOUR HEAD OUT OF THE GAME
GWENYTH MERCEP ’21
Figure 1 Debilitating brain damage is linked to youth tackle football.
Chronic traumatic encephalopathy (CTE) is a brain disease associated with exposure to repetitive head impacts, such as those from tackle American football. CTE can cause numerous debilitating early-life symptoms, including behavioral and mood disturbances and, most notably, impulse-control problems and depression. Episodic memory loss, dementia, and other forms of cognitive dysfunction are reported by patients with CTE later in life. CTE can only be officially diagnosed posthumously, by the discovery of abnormal tau protein deposition, which makes clinical diagnosis and treatment difficult. Researchers from the Boston University School of Medicine focused on determining whether the age at which tackle football players started playing the sport exacerbated the severity of their condition or affected the onset of CTE symptoms. The sample included amateur and professional tackle football players whose brains were donated to the Veterans Affairs-Boston University-Concussion Legacy Foundation Brain Bank. Of the 246 participants, 211 were diagnosed with CTE. By contacting living relatives, the researchers collected data on each individual’s age of first exposure to tackle football and the age of cognitive, behavioral, or mood symptom onset. CTE diagnostic severity was graded using a four-stage classification scheme based on the extent of tau deposition. The results concluded that the age of exposure was not associated with CTE severity, but
in the 211 participants with CTE, onset of cognitive dysfunction was reported 2.44 years earlier for each year younger a participant began to play tackle football. Similarly, onset of behavioral and mood symptoms was reported 2.50 years earlier for every year younger an individual started playing. The results also predicted that exposure before age 12 results in cognitive and behavioral/mood symptom onset 13.39 and 13.28 years earlier, respectively. This association between youth tackle football and earlier onset of symptoms of neurological damage is critical because youth exposure is prevalent in our society, with football an integral part of American culture. If efforts to end youth participation in tackle football are not realistic, then raising the age of first exposure could still reduce premature symptom onset. Additionally, because CTE is associated with repeated head trauma generally, all head trauma should be treated with the same level of awareness, regardless of concussion status. With this information and the growing public health concern surrounding tackle football, more prospective studies of former tackle football players are required to further understand the association between youth tackle football exposure and long-term neurobehavioral outcomes.
1. M. Alosco, et al., Age of first exposure to tackle football and chronic traumatic encephalopathy. Annals of Neurology 83, 886-901 (2018). doi: 10.1002/ana.25245 2. Image retrieved from: https://www.pexels.com/photo/helmet-on-the-ground-2862718/
SBU RESEARCH
GENE EXPRESSION NOISE IN DEVELOPING DRUG RESISTANCE
SIMRAN KAUR ’20
Designing drugs is often challenging because genetically identical cells within a specific network will exhibit varying gene expression (noise), which can result in drug resistance. The source of this variation is most often stochastic: accumulations of random fluctuations during transcription, translation, and post-translational regulation. Gene expression noise currently poses one of the greatest barriers to finding cures for cancer and Human Immunodeficiency Virus (HIV). Researchers in this study sought to determine the role of gene expression noise in the evolution of mammalian drug resistance by using two isogenic Chinese Hamster Ovary (CHO) cell lines. Two synthetic gene circuits, a low-noise negative-feedback circuit (mNF)
and a high-noise positive-feedback circuit (mPF), were integrated into the CHO cells. The circuits were engineered to control expression of Puromycin N-acetyltransferase (pac), the gene responsible for resistance to the antibiotic Puromycin. Puromycin concentrations of 0, 10, 22.5, 35, and 50 μg/mL were administered to both cell lines, and population growth curves were created at these concentrations. Using parallel flow cytometry and microscopy, cell growth was observed and quantified. Significant observations included immediate growth with no adaptation at the low Puromycin dose, and rapid growth that increased concurrently with stress at the high dose. Using these results, researchers identified three general steps in the process
of adaptation development: growth suppression, rapid regeneration, and saturation at confluency. With increasing Puromycin concentrations, the duration of the growth-suppression phase became increasingly variable. Gene expression noise plays an important role in cancer because increased noise enables cancer cells to develop resistance against chemotherapy and, more broadly, microbes against antibiotics. This study indicates the significance of variation, specifically cell heterogeneity, in the development of adaptations to drugs. A notable finding is that under high stress, mPF networks facilitate the evolution of resistance, but under low stress they inhibit it. Nevertheless, future studies are necessary to understand the enigma of chemotherapy resistance in mammalian cancer cells. 1. K.S. Farquhar, et al., Role of network-mediated stochasticity in mammalian drug resistance. Nature Communications, (2019).

IMPLICATIONS OF SOLAR GEOENGINEERING
ASHLEY GOLAND ’23
Solar geoengineering is a technology that aims to reflect incoming sunlight away from the Earth to slow the rise of global temperatures; one proposed approach is to send aerosols into the atmosphere. Although this method may seem like a quick, relatively cheap way to delay further climate change, its possible effects on marine and terrestrial organisms are not yet known. In fact, Stony Brook professor Jessica Gurevitch worries that the sudden changes brought about by this technology could threaten species that are not equipped to adjust quickly. Historical data and the Geoengineering Model Intercomparison Project (GeoMIP) allowed testing of future climate scenarios in which solar geoengineering was or was not implemented. Using numerous models to minimize errors, researchers studied cases in which aerosols were injected into the atmosphere annually for 50 years and then stopped for 20 years, and observed the predicted consequences for temperature and precipitation. Rapid geoengineering implementation could stall or reverse global warming effects for a decade or two, but species migrations driven by climate change were observed in simulations both with and without geoengineering. Decreased precipitation in the Earth’s tropical regions would occur, leading to increased risk of forest fires, poorer air quality, and biodiversity loss. Moreover, climate signals were projected to fluctuate with time, putting strain on certain species and leading to possible extinctions. On the tail end of the project, if the application of geoengineering were terminated just as suddenly as it began, the resulting large climate velocities (measurements used to track the direction and speed of climate shifts) could cripple species unable to adapt to major changes in climate. Temperature velocities on land and sea in a post-geoengineering world are expected to be far higher than in a normal scenario, and because the most extreme projected velocities fall in well-known biodiversity hotspots such as tropical oceans and the Amazon Basin, a great number of species would be put at risk. Climate engineering is a viable route to buying time in a changing global climate. However, the outcomes of this research suggest that further consideration is needed to find the best way forward, bearing ecological preservation in mind. Investigations into other geoengineering procedures may allow researchers to develop a safer application of this technology. 1. C.H. Trisos, et al., Potentially dangerous consequences for biodiversity of solar geoengineering implementation and termination. Nature Ecology & Evolution 2, 475-482 (2018). doi: 10.1038/s41559-017-0431-0
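Climate velocity, mentioned above, is commonly estimated as the ratio of a location’s temporal temperature trend to its local spatial temperature gradient. The Python sketch below illustrates that calculation on a synthetic grid; the grid, warming trend, and all numbers are invented for illustration and are not from the GeoMIP simulations.

```python
import numpy as np

# Toy illustration of climate velocity: the ratio of the temporal
# temperature trend (degC per year) to the spatial temperature
# gradient (degC per km). All values are synthetic.

# Synthetic annual-mean temperature field: 100 x 100 cells, 1 km apart,
# cooler toward the "poleward" edge, with a little measurement noise.
lat_profile = np.linspace(25.0, 15.0, 100)
temperature = np.tile(lat_profile[:, None], (1, 100))
temperature += 0.1 * np.random.default_rng(0).standard_normal((100, 100))

trend_per_year = 0.03                         # assumed warming, degC/yr
dT_dy, dT_dx = np.gradient(temperature, 1.0)  # degC per km, both axes
spatial_gradient = np.hypot(dT_dx, dT_dy)     # gradient magnitude

# Climate velocity: km per year an isotherm must shift to stay at the
# same temperature. Flat terrain (tiny gradients) implies huge
# velocities, which is why tropical lowlands are especially exposed.
velocity = trend_per_year / np.maximum(spatial_gradient, 1e-6)
print(f"median climate velocity: {np.median(velocity):.2f} km/yr")
```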
AFTER THE FALL: TIE BETWEEN PTSD AND PROSTATE CANCER
WENDY WU ’22
On the morning of September 11, 2001, the Twin Towers of the World Trade Center (WTC) collapsed. Within minutes, first responders arrived on scene. Amidst debris and smothering dust, they got to work, evacuating citizens and heading into the towers to rescue whomever they could. It was truly a display of heroic bravery and compassion, but unfortunately, many of the responders have suffered for years from chronic illnesses caused by the hazardous environment created by the collapse. In particular, there have been reports of uncommonly high incidence of prostate cancer in male responders. Currently, no study has found a physical connection between the increase in prostate cancer and the fall of the WTC. There is, however, data
showing a relationship between prostate tumor growth and chronic hyperstimulation of the sympathetic nervous system. Sean Clouston, Assistant Professor of Public Health at Stony Brook Medicine, suspected that re-experiencing the traumatic memories of 9/11 may be the cause of high prostate cancer incidence in responders. Clouston studied 6,857 male WTC responders living on Long Island, NY, who were being monitored at clinical centers. Diagnosis of prostate cancer and severity were assessed along with PTSD symptoms. Researchers asked patients to rate the extent to which they were bothered by 17 PTSD symptoms on a scale of 1 (not at all) to 5 (extremely). Clouston and his team focused on five symptoms:
“being bothered by disturbing memories, dreams, reliving the stressful experience, reminders of the event, and having a physical reaction to the reminders.” Clouston found a positive correlation between prostate cancer diagnosis and older age, as well as with increasing severity of PTSD symptoms. For years, researchers have tried to link certain carcinogens or environmental factors to increased illnesses in WTC first responders; Clouston is one of the first to try to link a psychiatric condition with the incidence of prostate cancer. While animal studies have shown that the nervous system directly affects the prostate gland, human studies have been, understandably, limited. Clouston’s work could inspire new ways of looking at causes of prostate cancer in traumatized populations, and perhaps suggest a way of early prevention. 1. S. Clouston, et al., Risk factors for incident prostate cancer in a cohort of world trade center responders. BMC Psychiatry 19, (2019). doi: 10.1186/s12888-019-2383-1

BREAKING DOWN THE OPIOID CRISIS ON LONG ISLAND
AYESHA AZEEM ’23
The United States currently faces a growing opioid poisoning (OP) crisis. Opioid use can lead to significant impairment and distress, social problems, chronic relapsing abuse, and even early death. According to the National Center for Health Statistics, New York is one of the five states with the most opioid drug overdoses. Historically, those affected by nonfatal opioid poisoning have tended to be white males, aged 18-34 years, lower-income, urban-dwelling, and non-private payers. However, as a recent study conducted by Stony Brook University researcher Dr. Elinor R. Schoenfeld suggests, the demographics are expanding to include all ages, more women, those with health insurance, higher-income individuals, and those who live in small cities, suburbs, and rural locations. A Stony Brook University research team led by Dr. Schoenfeld investigated current opioid poisoning trends on Long Island using patient-level New York State (NYS) all-payer hospital data from 2010 to 2016, along with census data. The objective was to determine the geographic, temporal, and sociodemographic factors related to opioid poisoning-related hospital visits among Long Island residents. The team chose patients with an opioid poisoning discharge diagnosis and a NYS home zip code. The study included 3,426,563 patients (1,636,758 in Nassau, 1,789,805 in Suffolk) at any facility treatment location, including emergency rooms and inpatient units. The study found that OP hospital visit rates increased 2.5- to 2.7-fold on Long Island from 2010 to 2016. During this period, Suffolk rates remained higher, and Nassau rates lower, than NYS rates overall. The demographics of OP patients also changed significantly for NYS and Long Island: Nassau and NYS observed changing rates by ethnicity, with an increasing number of Hispanic patients, and the percentage of patients using Medicare instead of self-pay increased in both counties and across NYS. This study develops a fuller understanding of the opioid crisis and helps professionals determine the right prevention tools. With the right information, New York can expand efforts to provide targeted OP interventions where they are needed most, and may be able to keep the crisis from spreading by addressing the factors that lead to opioid poisoning in the first place. 1. E. Schoenfeld, et al., Geographic, temporal, and sociodemographic differences in opioid poisoning. American Journal of Preventive Medicine, (2019). doi: 10.1016/j.amepre.2019.03.020 2. Image retrieved from: Lauren Yoon ’21
GENETIC ENGINEERING WITH ANTI-CRISPR
JASMINE KUMAR ’22
Whether through AP Biology, college labs, or popular headlines, news of CRISPR (clustered regularly interspaced short palindromic repeats) has been widely disseminated in recent years, with glowing reports on its potential, applications, and impact in twenty-first century research. Lesser-known, on the other hand, is the groundbreaking tool anti-CRISPR. In order to fully understand what anti-CRISPR entails, one must understand CRISPR. The CRISPR system is part of the naturally occurring bacterial defense against viruses; its loci contain roughly 30-base-pair repeated sequences interspersed with sequences identical to genome sequences of the bacteriophage (the virus that infects the bacteria) (1). Upon viral infection, these sequences are expressed as crRNA (CRISPR RNA), which, with the help of tracrRNA (trans-activating RNA), forms a complex with the Cas9 enzyme (1). crRNA recognizes a specific DNA sequence in the virus and guides this complex to the target, where Cas9 functions as molecular scissors to cleave the DNA, thus damaging the genome and disabling the virus (1). In research laboratories, CRISPR is used as a gene editing tool, capable of repairing, disabling, or tweaking a gene of interest by simply introducing custom-designed RNA (1).
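As a toy illustration of the targeting logic described above, the following Python sketch scans a DNA string for a 20-nucleotide guide match followed by an NGG PAM and reports where Cas9 would cleave, about 3 bp upstream of the PAM. The sequences and function are hypothetical teaching aids, not a model of any real laboratory protocol.

```python
# Toy sketch of CRISPR-Cas9 target recognition on a DNA string.
# The guide and target sequences are invented; real targeting also
# involves mismatch tolerance, chromatin context, and more.

def find_cut_sites(dna: str, guide: str) -> list[int]:
    """Return indices where Cas9 would cut: ~3 bp upstream of an
    NGG PAM that immediately follows a perfect guide match."""
    sites = []
    glen = len(guide)
    for i in range(len(dna) - glen - 2):
        protospacer = dna[i:i + glen]
        pam = dna[i + glen:i + glen + 3]
        if protospacer == guide and pam[1:] == "GG":  # PAM = NGG
            sites.append(i + glen - 3)                # blunt cut position
    return sites

guide = "GACGTTACCGGATCAAGCTT"         # 20 nt, hypothetical
dna = "ATGCGT" + guide + "TGGAAATTT"   # guide site followed by a TGG PAM
print(find_cut_sites(dna, guide))      # -> [23]
```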
Figure 1 Joe Bondy-Denomy, a researcher who contributed to the discovery of anti-CRISPR.
Anti-CRISPR is a protein that evolved within bacteriophages and increased their immunity against the CRISPR defense mechanism of bacteria, thereby exemplifying a never-ending evolutionary arms race. Joe Bondy-Denomy and his
laboratory colleagues discovered that bacteriophages contain proteins that serve as the “rocks to CRISPR’s molecular scissors” (2). Anti-CRISPR proteins (Acr proteins) either inhibit CRISPR proteins from binding to the target DNA or block the cutting action of the Cas9 enzyme (2). The promising potential of anti-CRISPR has caught the attention of the scientific community. Although anti-CRISPR is currently only used to correct mistakes in the gene editing process, in other words to reverse the action of CRISPR, this application has immense potential for further research. Additionally, given that anti-CRISPR proteins are harvested from viruses that infect bacteria, anti-CRISPR is just as cost-effective and simple to use as CRISPR. Qiong Liu and colleagues published an article describing the two classes of the CRISPR/Cas system (3): class 1 contains type I, III, and IV CRISPR proteins, whereas class 2 contains type II, V, and the lesser-known type VI CRISPR proteins. Many Acr proteins have been identified that inhibit type I, II, III, and V CRISPR/Cas systems. Type I Acrs were first identified in P. aeruginosa by Joe Bondy-Denomy and his colleagues; these proteins bind to the crRNA guide complex and thus prevent CRISPR proteins from binding to the target DNA. Type II Acrs bind directly to the Cas9-sgRNA complex, which inhibits the cutting of DNA. Type III Acrs require the transcription of protospacers, the DNA sequences that follow the target DNA, for Cas9 to function; these Acrs bind to a complex that activates RNase activity of the crRNA complex so that the target DNA cannot bind to CRISPR proteins. This mechanism for type III Acr proteins, however, has not been widely demonstrated. Like CRISPR, anti-CRISPR has been used in multiple research studies. A study by Alexander Hynes et al. compared the effectiveness of AcrIIA6 and AcrIIA5 in inhibiting a range of CRISPR-Cas9 systems in the bacterium Streptococcus thermophilus and in inhibiting genome editing in human cells. Phage DNA was extracted using a DNA kit, and the gene for AcrIIA6 was subcloned into the bacterial vector pETG-20A. Human vectors for the bacterium’s Cas9 and sgRNA (single guide RNA, a guide RNA similar to crRNA) were also provided to inhibit the genome editing process in human cells. Both Acr proteins effectively inhibited the CRISPR-Cas9 systems in bacteria and human cells alike, and AcrIIA5 demonstrated the widest inhibition range by blocking many different DNA targets of Cas9. This study demonstrates the control anti-CRISPR exerts over human and bacterial genome editing. It also raises the following questions: are certain Acr proteins more evolutionarily advantageous for bacteriophages in their arms race against bacteria? If so, what makes them more effective (4)?
Anti-CRISPR proteins can be used not only to inhibit CRISPR-Cas9 function in yeast and bacteria but also, in the near future, for genetic engineering in humans. For example, genetic disorders could potentially be reversed by CRISPR, and anti-CRISPR could further limit the risk of harmful side effects. Anti-CRISPR proteins increase the safety of CRISPR-induced genome editing by limiting the risk of gene translocations that are associated with “heritable disorders or various kinds of cancers, or large deletions” (5). Various research studies have used Acr proteins to inhibit CRISPR-Cas9 genome editing in the context of providing solutions to larger issues like genetic disorders. Jooyoung Lee and colleagues at the RNA Therapeutics Institute used Acr proteins to inhibit CRISPR-Cas9 genome editing in the liver of adult mice. Lee and the team wanted to improve tissue genome editing technology, specifically by using Acr proteins to provide “temporal, spatial, or conditional control” over CRISPR editing. To achieve spatial control, they used tissue-specific miRNA (micro-RNA) to repress Acr expression in a target tissue while the Acr proteins inhibited CRISPR-Cas9 function in ancillary (non-target) tissues. This spatial control was applied to multiple Cas9 orthologs, including the Cas9 proteins and respective Acr proteins of S. pyogenes and Neisseria meningitidis; the N. meningitidis orthologs are known as Nme1Cas9 and Nme2Cas9. Acr proteins were expressed in mice in order to inhibit genome editing by the CRISPR-Cas9 complex, specifically the Nme2Cas9 enzyme. Nme2Cas9 was not inhibited where miR-122, the tissue-specific miRNA of the adult mouse liver, repressed the AcrIIC3Nme protein placed into the mice. Therefore, genome editing proceeded in the liver, the site of miR-122 expression, but was inhibited in ancillary tissues like the heart (5).
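The spatial-control logic described by Lee and colleagues reduces to a small truth table: Cas9 edits only where the tissue’s miRNA has silenced the Acr transcript. A minimal Python sketch of that logic follows; the tissue names and helper function are illustrative stand-ins, not the study’s code.

```python
# Minimal sketch of miRNA-repressible anti-CRISPR control.
# Editing occurs only where the tissue-specific miRNA (e.g. miR-122
# in liver) silences the Acr transcript, freeing Cas9 to act.

def editing_active(mirna_present: bool, acr_encoded: bool = True) -> bool:
    acr_protein_made = acr_encoded and not mirna_present  # miRNA represses Acr
    return not acr_protein_made                           # Acr blocks Cas9

for tissue, has_mir122 in [("liver", True), ("heart", False)]:
    status = "editing" if editing_active(has_mir122) else "inhibited"
    print(tissue, "->", status)
# liver -> editing, heart -> inhibited, matching the reported result
```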
Figure 2 This image is a crystal structure of the Acr protein AcrIIA6.

Similarly, the Erianna Basgall et al. study focused on using AcrIIA2 and AcrIIA4 proteins to inhibit gene editing in budding yeast, S. cerevisiae. A plasmid from the bacterium S. pyogenes that expressed sgRNA (targeting the yeast HIS3 locus) and Cas9 was transformed into the yeast. Both Acr proteins were successful in inhibiting the gene editing process. The significance of this study lies in its potential application to larger gene editing projects, such as combating malaria and other pest-management initiatives (6). If researchers can successfully inhibit the editing process in yeast, then they could potentially do the same in other fungi and in plants. Blocking these processes in crops affected by insect-borne diseases, for instance, may give the crops immunity against those diseases (4). Although anti-CRISPR has enabled many advances in biotechnology, a few harmful physiological effects must be considered. Like the CRISPR/Cas9 system, anti-CRISPR can be prone to rejection by the immune systems of bacteria or humans, in which case it would no longer be effective for genome editing (2). Furthermore, rejection by the immune system can trigger “dangerous, inflammatory reactions in [human] patients” (2). Other tools that inhibit the CRISPR/Cas9 system have been developed and may serve as potentially safer alternatives to “highly potent” anti-CRISPR proteins. For instance, in 2017 Basudeb Maji and colleagues developed small-molecule inhibitors of S. pyogenes Cas9 (SpCas9) that block the enzyme during genome editing, and created a screening assay to better identify such inhibitors. These small molecules are cell-permeable, reversible, stable under physiological conditions, cost-effective, non-immunogenic, and fast-acting. Despite these advantages, there are disadvantages as well: the small-molecule inhibitors can be difficult to make, since their size requires microscopic precision and developing them requires extensive research, whereas Acr proteins are naturally found in bacteriophages and are readily accessible (7). Nevertheless, further advances may yield alternatives to these difficult-to-develop molecule inhibitors or potentially harmful Acr proteins. Anti-CRISPR represents yet another advancement within scientific research, and with advancement come potential harmful effects. Although research is still needed to determine the precise mechanisms of certain Acr proteins and to identify other possible uses, this tool has already progressed genetic engineering in multiple ways.
References
1. K. Chaudhary, et al., Anti-CRISPR proteins: counterattack of phages on bacterial defense (CRISPR/Cas) system. Journal of Cellular Physiology 1, 57-59 (2017). doi: 10.1002/jcp.25877
2. E. Dolgin, Kill switch for CRISPR could make gene editing safer. Scientific American, (2020).
3. Q. Liu, et al., Anti-CRISPR proteins targeting the CRISPR-Cas system enrich the toolkit for genetic engineering. The FEBS Journal 287, 626-636 (2019). doi: 10.1111/febs.15139
4. A. Hynes, et al., Widespread anti-CRISPR proteins in virulent bacteriophages inhibit a range of Cas9 proteins. Nature Communications 9, 1-8 (2018). doi: 10.1038/s41467-018-05092-w
5. J. Lee, et al., Tissue-restricted genome editing in vivo specified by microRNA-repressible anti-CRISPR proteins. RNA 25, 1421-1427 (2019). doi: 10.1261/rna.071704.119
6. E. Basgall, et al., Gene drive inhibition by the anti-CRISPR proteins AcrIIA2 and AcrIIA4 in Saccharomyces cerevisiae. Microbiology 4, 464-472 (2018). doi: 10.1099/mic.0.000635
7. B. Maji, et al., A high-throughput platform to identify small molecule inhibitors of CRISPR-Cas9. Cell 177, 1067-1079 (2019). doi: 10.1016/j.cell.2019.04.009
Images retrieved from:
1. https://pbs.twimg.com/profile_images/757267060074254338/WZDMbL7V_400x400.jpg
2. https://www.researchgate.net/figure/Crystal-structure-of-AcrIIA6-a-Ribbonview-of-the-AcrIIA6-dimer-The-left-monomer-is_fig1_326608008
Graphic created by: Komal Grewal ’23
HUMAN EFFECTS ON THE FOSSIL RECORD THROUGHOUT TIME
BY TRAVIS CUTTER ’22
Figure 1 Early humans used the earliest technology to hunt.
Introduction
Dinosaurs were the apex land animals on Earth for over a hundred million years, and the fossil record has made this dominance abundantly clear. While humans have been at the top of the food chain for only about ten thousand years, their effect on the fossil record is already impressive to behold. Whether by driving various species to extinction or by attempting to resurrect those species, Homo sapiens has already added many creatures to the underground tapestry. The future of the fossil record will also be dominated by human beings, ironically, thanks to the various advancements made throughout the past ten thousand years.
Extinction
Humans, from their earliest days, have apparently played a key role in the extinction of creatures with which they shared an environment. Among these were the Australian megafauna, animals exceeding forty-four kilograms, which were prominent in the late Pleistocene epoch. The reason for their disappearance has been debated: one argument posits that emerging humans were the death of these creatures, another holds that changes in the climate were to blame, and a third claims it was a mixture of the two. A report published in 2019 provides extensive evidence supporting the third, compromise argument. By analyzing pre-existing data detailing human and megafauna chronologies in southeastern Australia, researchers found that in 81% of the examined area, humans and
megafauna coexisted for somewhere between one thousand and eight thousand years. The researchers then tested sixty different models exploring the variables that could have led to extinction, such as precipitation, temperature, and water availability. No model accurately explained the timing of megafauna extinction; however, the best model indicated that the timing of human arrival, in addition to water availability, had the strongest influence. The progenitor of both of these factors was rapid climate shift, specifically ice-age-related temperature and precipitation changes. At this point Australia’s environment was becoming consistently drier, so humans and other animals would have competed for what little freshwater was available. This circumstance gave early humans easy opportunities to hunt, and thus threatened the megafauna on two fronts: either the animals would come closer to humans by drinking, or they would avoid the humans but in doing so lose access to the water, leading to death by dehydration. A limited water source, coupled with increased human-animal interactions in the age of hunter-gatherers, led to the eventual extinction of a vast array of species (1).
Resurrection
On one hand, humans drove other creatures to extinction; on the other, they endeavor to bring such creatures back. Woolly mammoths are perhaps the most well-known extinct creatures, and like many other animals, the mammoth’s downfall can be attributed to humans. While the vast
majority of woolly mammoths were killed in the same timeframe as the Australian megafauna, two populations managed to survive for several thousand more years on two remote islands, St. Paul Island and Wrangel Island. Given the advancements made in human society over these millennia, it is certainly possible that the Wrangel Island mammoths were killed by humans. What is definitive, however, is that humans have made efforts to resurrect these long-lost beasts (2). The idea of resurrecting the woolly mammoth, or extinct creatures in general, has long captivated the general public, and the idea is now closer to reality than ever before. In 2019, researchers reported observing biological activity in millennia-old mammoth nuclei. They accomplished this feat by mincing and centrifuging tissue samples from a 28,000-year-old mammoth specimen; the cell nuclei were subsequently extracted and inserted into mouse egg cells. The researchers then observed various signs of biological activity, such as histone activation and the formation of a mitotic spindle. However, these amalgamated cells were not perfect: no further cleavage was observed, meaning replication would ultimately be impossible (3).
Legacy
It is clear that humans have had a marked indirect effect on the fossil record, but their direct effect may prove larger still: human societal progress will leave the biggest mark on the geological record. Generally speaking, the most widespread species are the ones most prominent in the fossil record, and humans now have the greatest control over which species become so numerous. Presently, the “Sixth Extinction” is occurring, and the species that fall prey to this event will not be as notable in the fossil record. The question, then, is which animals exist in large quantities all around the world. A group of researchers determined the answer: livestock. The populations of cows, sheep, and pigs have increased dramatically since 1900, with the first two species each numbering well over a billion, and pigs just under a billion, as of 2019. These increases correlate strongly with increases in human population due to agriculture; one of the largest jumps occurred in the mid-20th century, with the advent of high-volume farming. Rather than a species’ prominence being dictated by its fitness relative to its environment, it is now dictated by how useful the species is to humans: humans have usurped nature in deciding which species prosper and which are left by the wayside. In addition to the prevalence of a given species, the fossil record also depends on where and how an animal dies. As opposed to wild animals, whose carcasses tend to remain where the animal died, the remains of livestock are heavily processed, with the bones often fragmented. The remains are then either buried en masse, composted, or rendered, wherein they are recycled into other products. Burial and composting can both contribute heavily to the fossil record, as high concentrations of remains are deposited in a small area. One researcher, Karen Koy, dug up the skeleton of a horse that had been buried three years prior and found that bones, teeth, hooves, and soft tissue were very well preserved. Despite this efficient preservation, the bones are not always in their original shape or form. Trampling is a key factor in fossilization, as it can break and crush bone, bury smaller pieces of skeletal remains, and move other pieces around, sometimes over great distances. On the farm, the greatest trampler is the tractor, but other vehicles, even ones as small as a ride-on lawnmower, can cause immense bone damage and transportation. Along with agriculture, the standardized burials invented by humans could radically change how the species is fossilized, in three main ways. First, rather than a chaotic mess of bones, there would be standardized three-dimensional grids, with each set of remains maintaining the same posture and orientation. Second, there is an incredible density of remains in a graveyard compared to natural fossil formations. Third, human remains are incredibly, and purposefully, well preserved. These three factors set human remains apart from those of other animals, meaning their appearance in the fossil record will be distinct. There is, however, a confounding factor, so this is not guaranteed: in contrast to farmland, which on average has a much higher concentration of water, graveyards are too variable in location to accurately gauge their effect on the fossil record. Nevertheless, the time when humans were dominant on the planet will be well marked within the earth (4).
Conclusion
Given the cosmologically insignificant time period in which humans have existed, their effect on the fossil record’s past, present, and future is profound. The geological record is the most definitive history of the planet, and within only a few millennia, humans have contributed to this history in a plethora of ways, for better or for worse, both in cooperation with and in opposition to the natural world. From environmental changes facilitating the hunting of even the most dominant and imposing of beasts, to human machines changing the fate of discarded animal remains, the cost of progress has always been steep for the other species that inhabit the planet. That said, there is a natural benefit that comes with this progress: knowledge. The more that is known about the life and death of these creatures, the easier it will be to preserve the animals that exist today. One day, it might even be possible to restore the ecosystem as it once was, right when humans emerged. There is a balance in nature that humans have interfered with, but there is also the opportunity to restore it, if the knowledge gained is used wisely.
References
1. F. Saltré, et al., Climate-human interaction associated with southeast Australian megafauna extinction patterns. Nature Communications 10 (2019). doi: 10.1038/s41467-019-13277-0
2. E. Fry, et al., Functional architecture of deleterious genetic variants in the genome of a Wrangel Island mammoth. Genome Biology and Evolution (2020). doi: 10.1093/gbe/evz279
3. K. Yamagata, et al., Signs of biological activities of 28,000-year-old mammoth nuclei in mouse oocytes visualized by live-cell imaging. Scientific Reports 9 (2019). doi: 10.1038/s41598-019-40546-1
4. R. Plotnick and K. Koy, The Anthropocene fossil record of terrestrial mammals. Anthropocene 29 (2019). doi: 10.1016/j.ancene.2019.100233
Images retrieved from:
1. https://commons.wikimedia.org/wiki/File:Female_Elephant_Pursued_with_Javelins.jpg
EVALUATING THE SAFETY OF SELF-DRIVING CARS
BY LANCE WANG ’22
Background
Each day, more than 3,000 people die in road crashes, and about one thousand of them are younger than twenty-five. Tens of thousands more are injured or disabled by road injuries per day, and by 2030, vehicle crashes are projected to be the 5th leading cause of death worldwide [1]. It is clear that road safety is a major concern. A possible solution is a remarkable new technology that promises to drastically reduce road fatalities: self-driving cars [2]. Self-driving cars, or autonomous vehicles (AVs), purport to solve the road fatality crisis by removing the human factor. More than 90% of vehicle accidents are caused by human error resulting from distraction, fatigue, intoxication, or other factors [3]. Unlike human drivers, AV systems are not prone to these stresses. Rather than operating under the control of human drivers, they operate by communicating with other vehicles and infrastructure over Vehicular Ad-hoc NETworks (VANETs). In addition to improving safety, their efficiency would also reduce traffic congestion, improve fuel economy, and mitigate pollution [3]. For instance, if a road blockage occurs, authorities can use VANETs to instantly redirect traffic [2]. Although AVs promise to resolve some safety concerns, they also introduce new safety problems: since AV functionality relies heavily on the internet, severed connections pose an incredible threat; cybersecurity is another pressing concern; and there is the ethical dilemma of how algorithms should calculate crash scenarios.
Cybersecurity
Vehicular ad-hoc networks (VANETs) were developed in the early 2000s. These networks enable vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. VANETs were initially intended to improve road safety by enabling driver-assistance technologies, such as collision control and automatic braking. They also collect vehicle traffic data, which are used to optimize traffic control, and they make GPS and real-time weather and traffic updates possible [4]. VANETs are now being used to provide the infrastructure for AVs. However, in order to get AVs on the road, the next step is to secure VANETs against hacking. Among the numerous hacking methods, the most concerning for VANETs is distributed denial of service (DDoS). This method uses neighboring nodes, which are devices
connected to a network (i.e., vehicles or infrastructure), as vectors to target a particular node. These nodes, called zombies, receive malicious commands to overwhelm the target with traffic. Eventually, the target node exhausts its bandwidth and cannot serve the intended user. At this point, the hacker can control the target vehicle’s speed and direction, which directly jeopardizes passengers’ lives [5]. A collaborative effort in India is working to design a model mitigating these DDoS attacks [6]. This model identifies a zombie node by monitoring bandwidth: if a node’s bandwidth passes a certain threshold, the node is isolated from the network, so the zombie can neither reach its target nor zombify other nodes. The model was tested using a simulation program. The simulated attack is vehicle-to-vehicle (V2V), with an attacking node infecting two other nodes. The proposed model beats existing methods across the board: more throughput, less packet loss, and less routing overhead. Throughput measures network performance in packets delivered per unit time: the higher the throughput, the smoother the overall network. Packet loss refers to the number of malicious packets able to reach the target: the lower the packet loss, the less effective the attack. Routing overhead is the number of malicious packets transmitted through the network: less routing overhead means fewer nodes are likely to be zombified. While promising, this approach only addresses DDoS attacks; countless other hacking methods pose risks for AVs. For instance, a hacker may not intend to gain vehicle control but to eavesdrop on personal data, including travel routes, IDs, and business transactions [5]. To keep eavesdroppers at bay, an effective authentication system must be implemented. This is particularly challenging for VANETs, because authentication must be done quickly: many methods proposed in the past involve strenuous verification, and while they may provide optimal cybersecurity, they can also cause long message delays. This is especially problematic given that AVs need to know other vehicles’ whereabouts and traffic status in real time to avoid collisions [7]. An effective authentication system must detect attackers without causing significant network delays. In 2018, a trust-based authentication technique (TBAT) was designed for VANETs [7]. TBATs determine the degree of trust for neighboring nodes by examining past interactions. If a node is verified by the algorithm, it becomes a trusted
node, or a verifier. Verifiers monitor every nearby interaction, and every node’s reputation depends on those monitored interactions: if a node develops a malicious reputation, it is isolated by Certificate Authorities (CAs), organizations that validate entities by assigning them cryptographic keys. TBATs only allow communication between trusted nodes. Once a connection is established, the nodes can send messages to each other. The messages are encrypted with a public/private key pair, ensuring that only the receiver can view them. Both the sender and receiver have their own digital signatures, and a secret key is created after each interaction; when the nodes reconnect, the secret key is reused, which saves vital time in re-establishing the connection. TBATs were demonstrated to be more effective than previous authentication methods in terms of authentication delay, delivery, overhead, and attacker detection. The qualities that make TBATs particularly effective are the incorporation of CAs and secret keys. However, attacker detection accuracy decreased quite significantly when multiple attackers were present. It is clear that VANET cybersecurity is critical for AVs. Over the years, methods for mitigating cyberattacks have improved, and VANET encryption has developed to resolve the delay issue. However, the technology is not quite there yet: mitigating coordinated attacks involving multiple attackers remains a challenge. For AV transport to become widely available, attacker detection needs more development.
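As a rough illustration of the two defenses just described, the Python sketch below quarantines any node whose transmitted bandwidth crosses a threshold (the DDoS-mitigation idea) and accepts messages only from peers whose interaction history yields a sufficient trust score (the TBAT idea). All thresholds, fields, and names are hypothetical; neither published model’s actual parameters appear in this article.

```python
# Hypothetical sketch of two VANET defenses: bandwidth-based zombie
# isolation and trust-based message acceptance. Illustrative only.

from dataclasses import dataclass

BANDWIDTH_LIMIT = 10_000   # packets/sec before a node looks like a zombie
TRUST_THRESHOLD = 0.6      # minimum reputation needed to accept a message

@dataclass
class Node:
    node_id: str
    packets_per_sec: int = 0
    good_interactions: int = 0
    total_interactions: int = 0
    isolated: bool = False

    @property
    def trust(self) -> float:
        if self.total_interactions == 0:
            return 0.5                         # neutral prior for strangers
        return self.good_interactions / self.total_interactions

def monitor(nodes: list[Node]) -> None:
    """DDoS mitigation: isolate any node flooding beyond the limit."""
    for n in nodes:
        if n.packets_per_sec > BANDWIDTH_LIMIT:
            n.isolated = True                  # zombie can no longer relay

def accept_message(sender: Node) -> bool:
    """Trust-based check: only reputable, non-isolated peers get through."""
    return not sender.isolated and sender.trust >= TRUST_THRESHOLD

honest = Node("car-A", packets_per_sec=200,
              good_interactions=9, total_interactions=10)
zombie = Node("car-B", packets_per_sec=50_000)
monitor([honest, zombie])
print(accept_message(honest), accept_message(zombie))  # True False
```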
Signal Loss Another concern with AVs is signal loss from VANETs. To address this, AVs are equipped with offline technologies, including multiple cameras, ultrasonic sensors, and radars that assist in vehicle orientation. These technologies have already been proven to work: Tesla’s newer models incorporate them for the semi-autonomous “autopilot” function [8]. Furthermore, they may be more effective than human drivers at vehicle orientation. For instance, the ultrasonic sensors can detect nearby vehicles in blind spots, which a human driver cannot, and they enable reverse parking in extremely tight spots. All the data from these devices—ultrasound, radar, cameras, and satellite images—are processed by a central chip, the Mobileye EyeQ3. However, these vehicle orientation technologies are not entirely reliable: last March, the highly publicized Tesla Model 3 crash in Florida was attributed to a Mobileye malfunction. There have been recent developments with camera systems to improve reliability. Last July, a team at the Bauman Moscow State Technical University proposed a novel computer-vision algorithm for AV applications [9]. Computer-vision algorithms make sense of the depth depicted by a 2D image, and this particular team’s algorithm can calculate Euler angles—angles that describe the orientation of a rigid body relative to reference structures such as buildings—with unparalleled precision between moving frames, in the span of milliseconds. However, the method is only effective in dense urban environments, where many structures provide orthogonal lines for reference; in open spaces, such as stretches of desert, the algorithm struggles to recognize the environment. AVs may also use computer vision to recognize road signs and gauge speed limits from them, given the lack of ubiquitous digital infrastructure [10]. These vehicle orientation technologies are far from perfect. AVs may rely on them temporarily, but they still require a consistent internet connection to function, and locations must have 5G networks to prevent frequent connection loss [10]. This limits AV ubiquity to wealthy urban environments that can implement the necessary infrastructure.
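For readers unfamiliar with the term, here is a small numerical illustration of what “Euler angles between moving frames” means (this is not the Bauman group’s algorithm): each frame-to-frame rotation of the camera can be summarized by a few angles, and successive rotations compose by matrix multiplication.

# Not the Bauman group's algorithm -- just a small numpy illustration of how a
# vehicle's heading change across video frames is tracked with rotation angles.
import numpy as np

def rotation_z(yaw):  # yaw about the vertical axis, in radians
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Orientation after two video frames that each turn the vehicle 2 degrees left:
step = rotation_z(np.radians(2.0))
orientation = step @ step            # rotations compose by matrix product
recovered_yaw = np.degrees(np.arctan2(orientation[1, 0], orientation[0, 0]))
print(round(recovered_yaw, 1))       # 4.0 -- total heading change in degrees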
Public Perception The public still seems wary of AVs: 78% of Americans fear riding in AVs, while 84% of Europeans do not trust their loved ones with the technology [2]. However, most of those surveyed had likely never ridden in an AV. The unfavorable perception of AVs is attributed to publicized accidents and negative media coverage [10]. There were even cases in Phoenix, Arizona, where locals harassed Waymo prototype vehicles by throwing rocks and slashing tires [10]. Yet after Waymo deployed robo-taxis, devoid of drivers, cruising around Phoenix at 45 mph, the taxis became so familiar that hardly anyone stares at them in astonishment anymore [11]. Public perception also seems to be shifting: 70% of Waymo’s trips were rated 5 stars [11]. Another AV transport service, the EasyMile EZ-10, is deployed in 200 locations across 25 countries. Two years ago, this driverless bus service was made available in Vantaa, Finland. A study evaluated the service by collecting customer ratings of the bus’s traffic safety, in-vehicle security, and emergency management on a five-point scale [2]. A rating of four indicated that the driverless bus experience was equal to traveling on a conventional bus, whereas a five indicated that it was superior. On average, traffic safety received a 3.12 rating, in-vehicle security received a 2.35, and
Figure 1 The Tesla Model 3, which utilized Mobileye sensing, was the subject of the highly publicized Florida crash.
Figure 2 Public perception towards robo-taxis like the Waymo Pacifica has become more positive in recent years.
emergency management received a 2.50. While the ratings are not remarkably high, they are a far cry from the nearly unanimous fear of AVs documented in previous surveys. In addition, 37% gave a five rating for traffic safety. However, customers agreed that the driverless bus was worse in the other two categories. The study attributes the disapproval of in-vehicle security to the cramped bus design: the EasyMile buses accommodate only up to ten passengers, six seated and four standing. The study speculates that ratings might improve if the buses were larger and perhaps legitimized with in-bus business vendors. As for emergency management, the ratings are attributed to spontaneous stops: the buses stopped 1.7 times per 100 km, mostly due to software freezes and partly due to loss of location. Government Intervention The public’s ambivalence regarding AVs is justified, because frequent malfunctions still leave passengers helpless. Governments around the world have started to regulate AV standards, which fall into four broad categories: safety, liability, privacy, and cybersecurity [3]. Most governments, including the US, UK, Australia, EU, and China, favor light control of AV safety: they would let the free market decide which AV models are safest for customers. For liability, light control is also the favored approach: liability does not fall entirely on manufacturers, but also on third-party software designers and perhaps the passengers as well. Some countries, such as Japan and Australia, encourage the implementation of black boxes to provide evidence in liability lawsuits. Since passengers are still liable for accidents, governments are evidently hesitant to permit entirely autonomous vehicles without steering wheels: until AV navigation systems are proven fully reliable, passengers are expected to take the wheel whenever the autopilot fails. Regarding privacy, many countries favor heavy regulation: personal data cannot be sold to third parties, and manufacturers can only access data relating to safety [3]. Cybersecurity legislation, however, is not yet established. Many countries, including Australia, Germany, Japan, and South Korea, have not addressed the issue; those that have favor heavy regulation. For instance, the US SPY Car Act requires manufacturers to be transparent about their AV cybersecurity; all attack attempts must be instantly mitigated and reported. In China, cybersecurity products must pass national reviews before being sold. The UK aims to be a leader in cybersecurity by 2021; it launched the National Cybersecurity Strategy in 2016, which promotes cybersecurity research and addresses AV security concerns.
Figure 3 Passengers found the EasyMile buses safe, but cramped.
Conclusion There are still many unaddressed issues with AVs, according to a 2019 study examining the trajectory of AV development up to 2025 [10]. For instance, there is limited research and policy on algorithms for crash scenarios: programmers struggle with whether to prioritize the safety of the passengers or the safety of pedestrians and cyclists. Beyond safety issues, AVs have vast economic implications. AV technology and infrastructure are extremely expensive—only large manufacturers can afford to invest in this new market. This is concerning because the AV market could be dominated by oligopolies. AVs also threaten to eliminate traditional driving professions, such as truck drivers, taxi drivers, parking attendants, and traffic officers. Countless industries, not just car manufacturers, are implementing automation, and its rapid rise poses another ethical issue: while automation optimizes efficiency and improves quality of life, it would replace more jobs than it creates. Nonetheless, in the foreseeable future, AVs would most likely be used for public transport in wealthy urban environments—where digital infrastructure is easiest to implement and vehicle orientation technologies, such as computer vision, are most effective. Unfortunately, the lack of networks in rural areas would limit AV transport to cities. Even if AVs never become entirely ubiquitous, they would dramatically reduce road fatalities wherever they are implemented. At the same time, the reliance on internet access raises concerns about cyberattacks, which can jeopardize passengers’ lives, disrupt traffic, or reveal personal information. Although cybersecurity methods have improved to reduce overhead and delays, they require further development to defend against multiple attackers. The public also appears increasingly ready for AVs: governments have started to impose regulations, and robo-taxi services have become the norm in some places. References 1. Association for Safe International Road Travel, Road safety facts. 2. A. Salonen, Passenger’s subjective traffic safety, in-vehicle security and emergency management in the driverless shuttle bus in Finland. Transport Policy 61, 106-110 (2018). doi: 10.1016/j.tranpol.2017.10.011 3. A. Taeihagh, Governing autonomous vehicles: emerging responses for safety, liability, privacy, cybersecurity, and industry risks. Transport Reviews 39, 103-128 (2018). doi: 10.1080/01441647.2018.1494640 4. S. Bhoi and P. Khilar, Vehicular communication: a survey. IET Networks 3, 204-217 (2014). doi: 10.1049/iet-net.2013.0065 5. I. Sumra, et al., Classes of attacks in VANET. 2011 Saudi International Electronics, Communications and Photonics Conference (SIECPC), 1-5 (2011). doi: 10.1109/SIECPC.2011.5876939 6. C. Guleria and H. Verma, Improved detection and mitigation of DDoS attack in vehicular ad hoc network. 2018 4th International Conference on Computing Communication and Automation (ICCCA), Greater Noida, India, 1-4 (2018). doi: 10.1109/CCAA.2018.8777539 7. R. Sugumar, et al., Trust based authentication technique for cluster based vehicular ad hoc networks (VANET). Wireless Networks 24, 373-382 (2018). doi: 10.1007/s11276-016-1336-6 8. S. Ingle and M. Phute, Tesla autopilot: semi autonomous driving, an uptick for future autonomy. International Research Journal of Engineering and Technology 3, 369-372 (2016). 9. B. Mikhail, et al., Orientation of the driverless car at loss of a satellite signal.
2019 42nd International Conference on Telecommunications and Signal Processing (TSP), 548-551 (2019). doi: 10.1109/TSP.2019.8769105 10. M. Ryan, The future of transportation: ethical, legal, social and economic impacts of self-driving vehicles in the year 2025. Science and Engineering Ethics, (2019). doi: 10.1007/s11948-019-00130-2 11. A. Hawkins, Waymo’s driverless car: ghost-riding in the seat of a robot taxi. The Verge, (2019). Images retrieved from: 1. https://upload.wikimedia.org/wikipedia/commons/8/83/Tesla_Model_3_ parked%2C_front_driver_side.jpg 2. https://upload.wikimedia.org/wikipedia/commons/7/72/Waymo-Pacifica.jpg 3. https://upload.wikimedia.org/wikipedia/commons/3/33/EZ10_Kista_160425_1.jpg 4. https://www.pexels.com/photo/asphalt-buildings-city-city-lights-379419/
DR. J. PETER GERGEN: THE MAN BEHIND UNDERGRADUATE BIOLOGY BY NATALIE LO ’21

Dr. John Peter Gergen is a distinguished professor at Stony Brook University. He completed his undergraduate education at the Massachusetts Institute of Technology and his graduate education at Brandeis University and went on to work at several distinguished research labs, including the MD Anderson Cancer Center. He has been the Director of Undergraduate Biology for the past 10 years and is also an advisor for many clubs on campus. As a mentor to countless students, Dr. Gergen has spearheaded many efforts to introduce underprivileged undergraduate students to research. Every semester, he hosts an “Entering Research Workshop” to introduce undergraduate students to research, training them on how to contact professors regarding research opportunities. Gergen was recognized for his work in May 2017 when he was named by the SUNY Board of Trustees as a Distinguished Service Professor, one of SUNY’s highest honors.

How did you develop an interest in science and why did you decide to choose this career path? Dr. Gergen’s interest in the sciences started with his own family: his grandfather was a math professor, and his father was an MD on the research track. He states, “The science was always there. I always try to figure out how things work, trying to take them apart and put them back together.” While he never thought much about his career path, knowing only that he could “do whatever [he] wanted with [his] life after going to medical school,” he had several science teachers in high school who had a big impact on his career, including his physics teacher. Growing up in the Research Triangle in North Carolina, Gergen was exposed to computers early on. His physics teacher had a teletype set up for the students to program, instilling an “if it’s interesting, run with it, play with it” attitude in the classroom. They would spend their days programming a computer by feeding it stacks of cards with commands. As an undergraduate, Dr. Gergen majored in chemistry and was on the pre-medicine track. He didn’t discover his true interest until he enrolled in a course at MIT on enzyme mechanisms and the structural and functional chemistry of proteins: “I found it fascinating that nature has evolved to have proteins that are able to accelerate reactions by eight orders of magnitude. That just got me thinking about what I wanted to do, and that’s when I decided I really did like taking things apart and figuring out how they work.” After completing his undergraduate degree, he decided that he didn’t have a passion for medicine and chose instead to attend Brandeis University to study enzyme mechanisms.
What made you shift your interest from chemistry to genetics? Dr. Gergen pursued a PhD in biochemistry at Brandeis, so his switch from chemistry wasn’t a big change. It was in fact organic chemistry that hooked him on the field: it allowed him to “appreciate that nature has played with evolution over thousands of millions of years to create proteins that can accelerate and regulate chemical reactions.” His big switch to the field of molecular biology occurred while he was rotating through different labs at Brandeis. His first lab involved the study of suicide-site inactivators, organic compounds that inactivate an enzyme via the mechanism that pulls the enzyme into the transition state. It was the elegance that attracted him, “like a really cool puzzle game -- playing puzzle with God and trying to figure out how these things worked.” However, Dr. Gergen finally found his niche in a recombinant DNA technology lab run by a professor who was the first person to clone a segment of a eukaryotic chromosome into bacteria. The lab made Dr. Gergen “appreciate how central genetics was to the logic and the rules of life. Genetic engineering is trying to take things apart and figuring out how they work.” His only prior exposure to molecular biology was a genetics course he took as an undergraduate, but this lab experience exposed him to the “rules of life [and] the template for evolution.” He explains, “It was the right time and the right place -- when [the field of] molecular biology was opening up -- and that was [my] shift from chemistry to biochemistry to molecular biology and genetics.” Gergen attributes his varied explorations to his commitment to doing what he’s interested in: “The underlying principle is that if you do what you’re interested in, then you’ll be interested in what you’re doing.”
IF YOU DO WHAT YOU’RE INTERESTED IN, THEN YOU’LL BE INTERESTED IN WHAT YOU’RE DOING. Since you’re such an avid supporter of research, could you tell me a little about your own research with Drosophila?
Back when molecular biology was being invented, scientists only knew about basic gene regulation in bacteria, such as the lac operon; however, it was clear that “all cells had the same genetic information, but different cells do different things … The information is in the genome, there’s a reading out of that information, some decoding that allows cells to do different things, and it happens in a very orchestrated, beautiful way.” From the very beginning, Dr. Gergen was interested in the process of development. The development of gene-cloning methodology to study gene expression led him to decide, in 1979, what he wanted to work on for the rest of his life. By the time Dr. Gergen finished graduate school, developmental pathways in the fruit fly system were starting to be uncovered, and researchers like Eric Wieschaus would go on to win Nobel Prizes for their groundbreaking studies on genetic development in flies: “I worked as a postdoc in his lab and was there right as the gold mine was being opened up,” he says. He started working on runt, a gene that encodes a DNA-binding transcription factor and regulates transcription in different pathways. Although he works in the fruit fly system, there is a homologous gene in humans, and mutations or disruptions in this particular gene can result in many different diseases. The fly system has “powerful genetics [that] allows you to explore the underlying genetics [and] the logic and rules of how things work,” and at the basic level his research is about regulating genes -- turning them on and off. Dr. Gergen’s lab at Stony Brook studies genes with dynamic tissue expression patterns and the factors that influence a gene’s expression in one cell versus a neighboring cell. One of these factors is the Runt protein from his postdoc years, which can be used to manipulate all of the cells in an embryo to turn a gene either on or off, determining the difference between cell fates and giving rise to different cell types. Biochemical assays allow researchers to trace the mechanisms backwards and understand at which step of the transcription cycle the gene is impacted. They discovered that Runt regulates a promoter at which RNA polymerase binds, initiates transcription, and then pauses 40 nucleotides downstream of the start site. As Dr. Gergen explains it, “If Runt and Ftz are there, you don’t get released and there’s a road block. If Runt is there and Ftz is not there, then Runt says go.” Through genetics, he is able to come back to molecular biology and chemistry to understand how these processes are regulated. Even though he’s been researching this gene for years, Dr. Gergen says “there’s still a lot of things to figure out.”

What made you decide to pursue teaching and what do you think is the most rewarding part of being a professor? Teaching was not on Dr. Gergen’s mind when he first started working towards his PhD; he did not have much teaching experience as a graduate student or as a postdoc. His teaching responsibilities began during his time on the faculty at MD Anderson Cancer Center, where he taught detailed topics such as graduate-level developmental genetics to dedicated graduate students who were deeply interested and invested in the material. To him, “that’s very rewarding. You’re talking about the stuff you love to people who are interested in hearing about the stuff you love.” In his lab, he also began teaching undergraduate students, which had an enormous impact on his teaching career. He states, “Interacting with young people who are curious, motivated, and energetic is very rewarding.” He enjoyed watching undergraduates develop from students who only knew what was in textbooks into students capable of designing and carrying out their own experiments and incorporating what they learned into future ones. After working at MD Anderson for a few years, he was recruited by Stony Brook, which gave him the opportunity to be involved in undergraduate education at an institution that values research: “That was very exciting to me,” he explains. The first course he taught at Stony Brook was BIO 220 (since renamed BIO 320), a class on general genetics, in front of 300 students in Javits 100. He states, “The idea wasn’t to convert all 300 students to become geneticists but for everybody to appreciate what genetics is, how important it is, and how it can be really useful and dangerous.” He hoped that his students would learn enough to become part of the conversation and talk intelligently about genetic engineering or stem cell research. “That was something that was going to be a challenge, but I was excited to do that. [In the end,] the teaching part just sort of happened along the way,” he reflects.
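The regulatory logic in Dr. Gergen’s quote can be written as a small truth table. The sketch below is only a paraphrase of the quote, not a model from his lab.

# Only a paraphrase of Dr. Gergen's quote about Runt and Ftz at a paused
# promoter -- not a model from his lab.
def polymerase_released(runt_present, ftz_present):
    if runt_present and ftz_present:
        return False   # "road block": the paused polymerase is not released
    if runt_present and not ftz_present:
        return True    # "Runt says go"
    return None        # outcome not specified by the quote

print(polymerase_released(True, True), polymerase_released(True, False))
# False True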
How would you describe your experience as the Director of Undergraduate Biology? Three words: “annoying, daunting and rewarding.” Dr. Gergen has run his own lab since 1989 and wasn’t looking to take on a position in charge of hundreds of undergraduate students. When the Dean of the College of Arts and Sciences spoke to him about a change of direction and leadership for Undergraduate Biology, his response was that he’d “do it for one year, and after 6 months, [decide] whether or not [he’d] stay past one year.” After six months, while he hadn’t solved the problems within the program, he had a better understanding of what the issues were. He has now held the position for 10 years. He jokingly
claims, “I didn’t escape after my first year, but I will escape before I die -- that’s the goal here.” It’s annoying: he works around the clock and constantly receives emails from alumni, parents, and students. It’s daunting: it’s a huge job, and completing all of the administrative tasks on top of his own research would be impossible without his postdoc, who is constantly training students; he wouldn’t have enough time in his day if she weren’t taking that load off. It’s rewarding: the first change he implemented as director was an exit survey asking undergraduates how the program is doing and what it could do better. One question on the survey was, “Did you do research? If you did, what did you think?” He found that respondents who had done undergraduate research generally had more memorable experiences. One memorable quote was, “Even though I learned that I never want to go to another lab again for the rest of my life, getting to know that faculty member and his team and being part of that team for a year was just so rewarding and positive.” These responses eventually prompted his initiative to promote more undergraduate participation in research through the “7 different ways to get involved in research” link on the Undergraduate Biology website. Dr. Gergen notes that the SBU campus offers a resource not available at many institutions: “We have hundreds of faculty across the street at the medical school who are willing to take on undergraduates, and we just need a roster of these faculty in order to facilitate their communication with interested students.” A major part of his initiative includes reaching out to more alumni for support for undergraduate research, such as URECA fellowships. He states, “We’re getting a lot of students into the labs for mentored research experience, and those have been valuable experiences.”
What inspired you to start the Entering Research Workshop? As previously discussed, Dr. Gergen never had formal training as a teacher. Around the time he became the Undergraduate Director, a national movement called scientific teaching, which started in 2004, was bringing evidence-based practices into the classroom. Dr. Gergen began implementing these practices, such as the use of clickers for participation, in 2010. He states, “When you engage students, they have to think about it and process it, so the learning is deeper.” Out of this movement, a summer workshop called the Summer Institute on Scientific Teaching was born, which instructors nationwide can attend to learn about practices to implement in their lecture halls. After several faculty members attended the workshop and raved about how valuable it was, Dr. Gergen attended with Professor Deborah Spikes in 2011 at Yale. The experience “transformed [his] own thoughts about teaching,” and he went to the Life Sciences chairs saying that they should send their faculty there too. Dr. Gergen was hooked and even arranged for Stony Brook to host the workshop one year. The inspiration for the Entering Research Workshop came out of his summer stay at Yale. As one of the “homework” assignments, attendees had to create a plan meant to
improve student life and education. Going into the workshop, Dr. Gergen knew from the exit surveys that he wanted to expand research, and he saw that there wasn’t a constructive way to let students know how to find labs. So, he and Dr. Spikes created this workshop as a way to “let [students] know about the research landscape, how to find a lab, and by the end of the workshop, [they] expect students to write a letter of inquiry to a lab PI.” The Entering Research Workshop started in Spring 2012, and one workshop has taken place each semester since. Even though not every student attends, the workshop is a great way to raise awareness about research, as “students that take it are now sources of information for their suitemates and friends.”
As a mentor to many students, what would you say is your favorite part of mentoring? After mentoring hundreds of research students from the undergraduate to the graduate level, his favorite part is definitely not writing letters of recommendation, because they’re “sort of the end, the goodbye.” Dr. Gergen enjoys watching students develop into more independent critical thinkers who will move on and have a positive impact wherever they go. He states, “I can’t say there’s one moment during the relationship that’s the most special; it’s the continuing development, planting seeds.”
Do you have any advice for current freshmen and incoming Stony Brook students? As an administrative director, Dr. Gergen is always thinking about what advice faculty should make sure students receive. He advises students to “take advantage of opportunities [and to] be strategic and smart.” While there are many great things to do at Stony Brook, he cautions students to manage their time productively and balance schoolwork with extracurriculars. One important piece of advice: go on the Classie website, look at the amount of time other students put into specific classes, and plan your schedule accordingly.
What are some long-term goals that you hope to achieve for Undergraduate Biology and for Stony Brook? Dr. Gergen hopes that the work he’s done with alumni continues as a legacy after he’s gone; he hopes the tradition of engaging alumni with the institution -- whether it’s donating to undergraduate research or giving career advice -- will continue. Furthermore, he hopes to “provide more research opportunities and [have] an evidence-based curriculum that’s cutting edge. The cutting edge is going to move, and the curriculum, too, has to move over time.” Lastly, he jokes that he doesn’t want to have this job forever: he wants to work for “a couple more years and then pass the mantle on to someone else who can take it a level further.” As he says, “The rewards of this job are annoying and daunting, but the last eight to nine years [have been] fun -- with the iGEM teams and a lot of other things -- and that’s rewarding.”
1. Image retrieved from: Dr. John Peter Gergen
DIVERSE APPLICATIONS OF VIRTUAL REALITY IN EDUCATION AND HEALTHCARE BY KAILYN FAN ’23
Figure 1 An example of two participants interacting in a Body Swap activity at the BeAnotherLab.
Introduction Virtual reality (VR) emerged in the 21st century as an innovative form of technology based on simulated experience. VR’s popularity has gained much of its traction in the entertainment business, with the opening of VR centers that offer games and movies for users to immerse themselves in. However, recent scientific research has explored alternative, and perhaps more useful, purposes for VR, especially in the academic and medical fields. In education, VR has been used to enrich learning in the classroom, teach empathy, and even provide military training for soldiers. VR has also expanded into healthcare by serving as a tool in surgical training, providing effective treatments for psychiatric disorders, and reducing pain for chronic pain patients. Despite all of VR’s diverse applications in the real world, the technology has some disadvantages, such as cybersickness. Moreover, its development and increasingly widespread use have raised ethical questions about VR’s ability to alter human consciousness so profoundly that the mind can mistake an artificial creation for something real. Background The origins of VR can be traced back to the mid-1960s, when computer scientist Ivan Sutherland invented a stereo head-mounted display (HMD) system whose computer graphics could completely immerse the individual in an artificial sensory experience of sight, sound, smell, and touch (1, 2). This was the first model of VR technology, a design very similar to the kinds of VR used today, but the HMD was so heavy and sinister-looking that it was nicknamed the Sword of Damocles. Nevertheless, Sutherland’s HMD inspired other scientists decades down the road to continue modifying and improving VR,
which allowed people to expand their imagination beyond the possibilities of human interaction in the real world. VR Applications In Education Many of VR’s affordances fall under the category of education, beginning with its application in the classroom. The decreasing cost of VR technology has made it more affordable, and therefore more accessible, in schools from grades K-12. One particular VR educational game, Cellverse, has enhanced traditional teaching methods by giving young students the opportunity to immerse themselves in cellular biology in new, collaborative ways. Previous research found that giving biology students tangible models to work with helped them better retain knowledge, and the creation of Cellverse as an educational tool was encouraged by these studies linking visual models to higher student test scores. Cellverse functions as an immersive experience in which students explore the inner workings of a cell as if they themselves were at the microscopic level. Instead of learning about protein synthesis and other cellular processes simply through vocabulary memorization, students can now participate in hands-on learning that involves communicating with their classmates to solve a hypothetical dilemma. The ability to visualize cellular biology topics not only makes the learning process more collaborative and memorable, but also more interesting to students by keeping them actively engaged (3). Improvements are still being made to Cellverse’s VR technology, however. Part of the appeal of this educational program is the flexibility to add new features periodically to improve the original design. Researchers have modified Cellverse to include even
more collaborative tools than it already possessed, such as assigning students to roles like navigator or explorer to foster a greater sense of teamwork. In addition, the developers are inserting “light beacons” into the virtual environment to allow for non-verbal communication, as well as including practice with SLAM microscopy, a scientific method of marking the organelles of a cell. Although further funding will be necessary to provide VR headsets to every student in a class and to distribute educational programs like Cellverse to schools nationwide, the overall increase in collaboration that VR fosters has clearly enabled students to learn complex material like cellular biology in a more efficient, interactive, and enjoyable manner. Another application of VR in education, one that extends beyond just students, is empathy training. Empathy, the ability to picture ourselves in someone else’s shoes, is an inherently good human quality, yet empathy training is rarely implemented in school curricula. Without advanced technology, it is difficult to teach a life skill like empathy, which extends beyond a mere capacity for knowledge, unless the teaching method involves some sort of hands-on experience. VR presents a solution to this problem because it has the unique capacity to let users effectively experience someone else’s perspective. VR accomplishes this by taking advantage of perceptual illusions that humans are naturally prone to. Scientists refer to one of these illusions as presence, or place illusion (PI): a strong feeling of “being there” in the virtual environment despite knowing that the experience isn’t real. Another concept, the plausibility illusion (Psi), builds on the illusion of presence by adding credibility to the VR scenario. Although individuals under the illusion of Psi are aware that what they’re experiencing isn’t real, the feeling of presence is strong enough that they are influenced to act accordingly within the environment established by VR (4). Perhaps the strongest illusion, however, is Body Ownership, a sense of embodiment in which the VR user feels as though they have swapped bodies with another being and taken on another identity. The Body Ownership illusion is further enhanced by additional features of the VR experience, including visuomotor synchronicity, which cues the user to act the role of the VR avatar, and visuotactile synchronicity, which matches the timing of stimuli the user experiences in real life to those experienced virtually as the avatar (4). The combination of perceptual illusions like presence, plausibility, and Body Ownership effectively creates the seemingly real scenario necessary for empathy-related training. The establishment of an avatar that the user can temporarily but vividly embody allows the VR user to better understand the real-life perspectives of other people. For example, both the United Nations and The New York Times have taken on projects that incorporate VR into their media content to further engage users as they learn about the lives of people they otherwise would not come into direct contact with (4). VR’s ability to promote empathy towards marginalized groups, such as displaced refugees or ethnic minorities who are often misunderstood despite media attempts to portray them in a sympathetic light, is also backed by sci-
entific evidence. In a 2013 VR experiment, having participants with light skin envision themselves as a dark-skinned avatar reduced negative implicit associations towards black individuals. A similar experiment confirmed these findings and also indicated that the decrease in implicit racial biases lasts even a week after the VR experience. Whether empathy promoted by VR needs to be continually maintained remains unclear; future research can explore whether these prosocial effects are more long-term (4). There is a lack of quantitative data supporting that VR evokes an empathetic response, largely because such observations depend on a subjective experience. Nevertheless, in an additional study carried out in 2010, participants reported feeling emotional distress as they embodied the avatar of a Guantanamo Bay prisoner and experienced life virtually from his perspective. Studies like these support the potential of VR to generate greater understanding for neglected groups whose struggles are often left untouched or inadequately covered by the news industry, as already seen with the groundbreaking work of the United Nations and The New York Times (4). A final real-life application of VR in empathy instruction can be seen in The Machine to Be Another (TMTBA), a VR system created by BeAnotherLab that utilizes the Body Ownership illusion to create empathy-related exercises for participants. In a “Body Swap” activity, the VR user trades perspectives with a performer, who presents an embodied narrative: a true story of an individual who has undergone many challenges, such as an asylum seeker, a victim of police brutality, or a war veteran. Since its development in 2012, TMTBA has been used as an empathy-related educational tool in over 25 countries around the world. With further advancements, researchers see the future potential of VR technology not only to provide effective empathy instruction in academic or journalistic contexts, but also to dismantle individual prejudices and tackle the greater societal stigma towards specific groups of people (4). VR in the education realm also applies, perhaps unexpectedly, to military training.
Figure 2 A glimpse into how students can study cellular biology through immersive visuals created by VR projects like Cellverse.
In recent years, the U.S. Department of Defense has incorporated VR into the training of its soldiers, and, in contrast to the cost concerns of classroom VR, implementing VR in this setting actually saves money. VR aids the military in simulating realistically large-scale wartime scenarios that otherwise could not be replicated without greater funding and longer timelines. Specifically, the simulation of virtual battlefields, in which soldiers can train in groups and together confront hypothetical war scenarios, is not limited by any physical constraints on size or number of military units. With VR, the possibilities regarding military training are endless. The military’s current VR technology can simulate single-soldier combat training, practice driving war vehicles, and even aviation instruction, all of which prepare soldiers for when these skills are needed on the real battlefield. Last but not least, VR technology in the military has led to sweeping changes in how military hardware is manufactured. VR now enables developers to repeatedly put their prototypes to the test in a virtual battlefield environment without wasting actual materials, improving the overall quality and performance of the technology before it is even manufactured. For example, the aircraft carrier CVN21 was the first of its kind to be designed completely using VR, and its assembly cost was significantly less than previous designs. Not only does the use of VR in the military enhance the training of soldiers, but it also reaps financial benefits. In the future, perhaps as soon as 2025, researchers may be able to produce holographic command centers for soldiers-in-training to improve their battlefield analysis and decision-making skills (5). VR Applications in Healthcare As we transition from the benefits of VR in the academic field to the medical field, we will first discuss the recent implementation of VR in surgical training, which technically encompasses both categories but affects the world’s future healthcare system directly. Despite the limited scope of VR in surgical training so far due to its cost, there are immediate advantages to using VR to simulate surgical procedures as op-
Figure 3 Soldiers in the U.S. military training for an upcoming mission using the latest VR technology.
posed to training with real-life patients. The major advantage is the greater preservation of human life: surgeries performed by less experienced surgeons could result in irreversible harm to the patient, place the patient at greater risk of a failed surgery, or at the very least cause greater discomfort, because surgeons-in-training often require more time to operate. In addition, VR simulations of realistic surgical scenarios allow the instructor to control all aspects of the case and tackle specific medical skills one at a time. If the trainee does make a mistake during a simulated procedure, VR allows for leniency, enabling the trainee to carry on while making corrections in a low-risk environment (6). Though the use of VR to enhance surgical training has not been widely implemented among hospitals worldwide, a recent scientific development confirms the advantages of simulated surgeries as a precursor to real-life experience in the operating room. Surgical trainings involving VR software for surgical simulations (VRSS) allow the trainee, using a VR headset and headphones, to immerse themselves in an auditory and visual environment closely matching that of a real surgical procedure. Using the VR controller, they can interact with this simulated environment to carry out the steps of a surgery at their own pace in a low-pressure environment. Afterward, the trainee receives feedback as well as a numerical score rating their performance. These assessments effectively measure the trainee’s surgical competency and indicate when the trainee is ready to participate in real surgeries. While the surgical procedures simulated by VR technology so far have been mainly oncology related, future developments may provide training for other impactful types of surgery. Notably, resident surgery trainees who rehearsed laparoscopic procedures through VR before transitioning to real-world training had higher accuracy rates and completed their training faster compared to trainees who only went through traditional training methods (7). Another way VR has led to advancements in healthcare involves treatments for psychiatric disorders. The freedom in designing a VR experience enables the therapist to control all aspects of sensory stimulation, and for that reason VR has proven an effective method for helping people cope with psychiatric disorders including anxiety, schizophrenia, phobias, PTSD, and OCD. One advantage of VR treatment over traditional options is its ability to tailor the user’s experience to their exact needs without putting them at risk of any physical harm. For example, some specific phobias, such as the fear of flying, can be dangerous and costly to replicate in real life. If in-vivo exposure to the feared stimulus, in this case an actual flight, is not feasible, therapists must resort to having the patient merely imagine a scenario, and visualization can prove a challenge for patients, especially without VR to provide visual aids. In addition, VR as an alternative to traditional exposure therapy can create individualized stages of systematic desensitization that in-vivo exposures can’t always accomplish. For instance, in the case of flight phobia, if the patient is not ready for turbulence, the therapist can control for that variable with VR.
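A hypothetical sketch of what such an individualized exposure hierarchy might look like in code; the stage names, the 0-10 anxiety scale, and the advancement rule are all invented for illustration.

# Hypothetical illustration of graded, therapist-paced VR exposure; stages and
# scale are invented, not drawn from the cited clinical literature.
FLIGHT_PHOBIA_STAGES = [
    "boarding the plane",
    "taxi and takeoff",
    "cruising, calm air",
    "cruising with mild turbulence",  # the variable the therapist can withhold
]

def next_stage(stage_index, reported_anxiety, advance_below=4):
    """Advance on a 0-10 anxiety scale only when the patient is ready."""
    if reported_anxiety < advance_below and stage_index + 1 < len(FLIGHT_PHOBIA_STAGES):
        return stage_index + 1
    return stage_index  # otherwise repeat the current exposure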
Besides the customized pacing VR can facilitate between therapist and patient, VR as a means of exposure therapy can also guarantee patient confidentiality regardless of what type of exposure is necessary, which can ease feelings of anxiety during treatment. Although
there are currently limitations to the use of VR in this area of healthcare, such as the prevalence of glitches and its complex assembly, in the long run there is potential for this new technology to be incorporated into clinical practice (8). A final application of VR in the healthcare field worth recognizing is its utility in treating patients with chronic pain. Especially in the case of severely painful injuries that standard therapies cannot fully relieve, such as burns, VR works as an active, engaging distraction for patients who endure excruciating pain every waking hour. In a 2014 experiment with 30 adolescents undergoing burn wound care, researchers compared the ability of standard care, passive distraction (e.g., watching TV), and VR distraction (an interactive environment called SnowWorld) to ease the patients’ perception of pain. The results indicated that VR lowered the adolescents’ pain perception more than the passive distraction methods did. Additionally, VR has been shown to treat chronic pain effectively by providing a healthier alternative to prescribing opioids for pain relief, which can leave patients vulnerable to misuse or even addiction (9). Ramifications of VR To Consider Although VR currently has a plethora of applications that have led to significant advancements in healthcare and education, there are also concerns about its continued use as it becomes more prevalent in our daily lives. One concern is a side effect known as cybersickness: VR-induced discomfort and motion sickness. Not only does cybersickness result in occasional unpleasant experiences for the VR user, but its symptoms may also complicate the process of gathering data during VR experiments, since participants who experience cybersickness sometimes have to quit their VR session early and therefore cannot be included in the experimental results (10). Beyond these mild, temporary side effects, there is a larger ethical question to consider regarding the spread of VR into many aspects of our lives. This philosophical concern involves the potential danger of social hallucinations as VR inevitably becomes more immersive, engaging, and lifelike. “Social hallucinations” is the term used to describe future VR experiences in which designers create ever more sophisticated illusions, such as avatars with more realistic emotional expressions and facial features. Especially if VR technology is placed in the wrong hands, dangerous manipulation of VR experiences could threaten an individual’s right to privacy or free will. Hypothetically, if VR ever converged with real-life social networks, it could become difficult to distinguish technology-induced visions from real-world events. In that case, increased exposure to severely manipulated VR could, in the long run, cause users to place less value on reality and develop a kind of apathy towards life itself (11). Conclusion Current research on the diverse applications of VR in healthcare and education supports the conclusion that the creative benefits VR provides for these two fields far outweigh the potential negative side effects a select few users
Figure 4 A surgeon trainee, equipped with a “scalpel,” practices a surgical procedure using VRSS.
may experience, and that those benefits diminish the gravity of ethical concerns over future VR practices. As scientific investigation of how VR can be further implemented to enhance or even save human lives continues, researchers predict the outcome will be overwhelmingly advantageous to society. The rising affordability of VR could also accelerate such breakthroughs and broaden the technology’s scope, a truly exciting process to witness in the near future.
References 1. P. Cipresso, et al., The past, present, and future of virtual and augmented reality research: a network and cluster analysis of the literature. Frontiers in Psychology 9, (2018). doi: 10.3389/fpsyg.2018.02086 2. M. Slater, M.V. Sanchez-Vives, Enhancing our lives with immersive virtual reality. Frontiers in Robotics and AI 3, (2016). doi: 10.3389/frobt.2016.00074 3. M. M. Thompson, et al., Authenticity, interactivity, and collaboration in VR learning games. Frontiers in Robotics and AI 5, (2018). doi: 10.3389/frobt.2018.00133 4. P. Bertrand, et al., Learning empathy through virtual reality: multiple strategies for training empathy-related abilities using body ownership illusions in embodied virtual reality. Frontiers in Robotics and AI 5, (2018). doi: 10.3389/frobt.2018.00026 5. X. Liu, et al., Virtual reality and its application in military. IOP Conference Series: Earth and Environmental Science 170, (2018). doi: 10.1088/1755-1315/170/3/032155 6. H. Visser, et al., Progress in virtual reality simulators for surgical training and certification. Medical Journal of Australia 194, 38-40 (2011). doi: 10.5694/j.1326-5377.2011.tb02942.x 7. G. Parham, et al., Creating a low-cost virtual reality surgical simulation to increase surgical oncology capacity and capability. eCancer Medical Science 13, (2019). doi: 10.3332/ecancer.2019.910 8. J.L. Maples-Keller, et al., The use of virtual reality technology in the treatment of anxiety and other psychiatric disorders. Harvard Review of Psychiatry, (2017). doi: 10.1097/HRP.0000000000000138 9. A. Gupta, K. Scott, M. Dukewich, Innovative technology using virtual reality in the treatment of pain: does it reduce pain via distraction, or is there more to it? Pain Medicine 19, 151-159 (2017). doi: 10.1093/pm/pnx109 10. S. Weech, S. Kenny, M. Barnett-Cowan, Presence and cybersickness in virtual reality are negatively related: a review. Frontiers in Psychology 10, (2019). doi: 10.3389/fpsyg.2019.00158 11. T. Metzinger, Why is virtual reality interesting for philosophers? Frontiers in Robotics and AI 5, (2018). doi: 10.3389/frobt.2018.00101 Images retrieved from: 1. http://beanotherlab.org/home/work/tmtba/body-swap/ 2. https://education.mit.edu/project/clevr/ 3. https://community.mis.temple.edu/mis4596sec002fall2017/2017/09/21/utilizing-virtual-reality-technology-in-the-military/ 4. https://www.theverge.com/2018/8/14/17670304/virtual-reality-surgery-training-haptic-feedback-fundamentalvr
RESOLVING THE PROTON-RADIUS PUZZLE: THE RESEARCH OF DR. JAN BERNAUER BY JOHN MCMENIMON ’20
Dr. Jan C. Bernauer is an experimental nuclear physicist and assistant professor at Stony Brook University. He specializes in quantum chromodynamics and has had his research -- particularly the MUSE project, which is attempting to reconcile the conflicting proton radius measurements at the heart of the proton-radius puzzle -- recognized in numerous journals and publications. He conducts experiments throughout the world at MAMI, PSI, DESY, and Jefferson National Laboratory. We sat down and discussed his life and research. Dr. Bernauer grew up in Mainz, Germany. While completing his Abitur (the German analogue of high school), he specialized in physics, math, and Latin, and debated whether to pursue physics or computer science at university. He made his decision by putting the two into perspective: “At some point I said ’Okay, let’s do physics’ essentially because computer science I could do at home as a hobby, while with physics, some of the experiments are somewhat expensive and can’t be built in your basement. And I went for that, and I was happy with that quite quickly.” From there, he was encouraged to pursue nuclear physics by one of his professors, whose research was on proton form factors. Protons are not solid spheres, as they are often depicted; they are a collection of electric charge whose distribution dictates the radius of the proton, and form factors describe that distribution. “It was a very interesting model, but it wasn’t clear if the structure was actually there or it was just an artifact of different data sets that you have to combine together,” Bernauer recalled. He heeded the advice of his professor and delved into testing the model, which soon brought him to the forefront of a much larger problem: “The original aim of the experiment was to measure just that [form factor] range again -- one measurement with high precision to see if that was true or not. And quite early on in the [experimental] planning I was contacted by people in the atomic community who heard that I wanted to do this and said, ’Okay, if you do this, also try to measure the proton radius.’” This proton radius measurement would evolve into his PhD thesis: it was 2010 when he released his measured value
of 0.879 femtometers (in meters, that’s a decimal point followed by fifteen zeroes and then 879). Most students are introduced to the structure of the atom through the Bohr planetary model, in which electrons orbit the atomic nucleus much as planets revolve around a star. It is a conceptually simple and relatable model, but it is ultimately wrong. In reality, the electrons exist as a probability cloud around the nucleus: the electron could be at a particular point, but it could also not be there, and it is impossible to know for sure. This is not some mathematical sadism, but a consequence of the wave-particle duality in quantum mechanics that has great bearing on the proton radius measurement. Since the electron exists as a probability function, there is actually a non-zero chance that an s-state electron can be found within the proton itself. When the electron is inside the proton, it “sees” less electric charge, as there is “less” proton to interact with, and thus the binding energy of the electron is slightly smaller. This contributes to the Lamb shift. As Dr. Bernauer put it, “At some point, the proton radius shows up as a nuisance term.” Yet despite its nuisance status, Dr. Bernauer was approached by the scientific community to measure the proton radius specifically because of its fundamental importance “for [precision] QED tests, where you want to see if everything else is right -- you’ve got to put that in. You’ve got to put the proton radius in, calculate the Lamb shift, [and] get the correction out. You can test three more digits or so, depending on how well you know the proton radius. So they wanted a better value to get their tests better. You can turn this around and say, if QED is correct, how big can the proton be in order to get the calculations and measurements to agree?” The continued debate about the size of the proton may come as a surprise to many, but given the proton’s status as the cornerstone of atomic and nuclear theory, it would be presumptuous to assume that its radius has been conclusively settled by the physics community. Yet in 2010, Dr. Bernauer helped set the debate in motion by publishing data from his own work at the same time as data collected from the Kamer collaboration.
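For readers curious how the radius enters the calculation, the standard textbook leading-order result (not specific to Dr. Bernauer’s analysis) for the energy shift of an s-state due to the proton’s finite size is, in natural units,

\[
\Delta E_{\text{size}} \;=\; \frac{2\pi Z\alpha}{3}\,\lvert \psi_n(0)\rvert^2\,\langle r_p^2\rangle
\;=\; \frac{2\,(Z\alpha)^4\, m_r^3}{3\,n^3}\,\langle r_p^2\rangle ,
\]

where \(m_r\) is the reduced mass of the orbiting particle and \(n\) the principal quantum number. Only s-states have \(\psi_n(0) \neq 0\), which is why only they are shifted, and the \(m_r^3\) factor is what makes a heavier orbiting particle so much more sensitive to the proton radius.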
Figure 1 Collective proton radius measurements and uncertainties in femtometers conducted by the various experiments since 2008. The listed MUSE measurements are the expected range and will be released in the future.
Prior to the data release, the consensus value of the proton radius rested at 0.88 fm; however, a round of experiments conducted by the Kamer collaboration took a different measuring approach and used muonic hydrogen, a special breed of the atom in which the electron is replaced with a muon. Muons are two hundred times more massive than electrons; they therefore orbit two hundred times closer to the proton and are eight million times more likely to be found inside it. This gives a much bigger correction from the Lamb shift, which is much easier to measure. These tests returned a value of 0.84 fm: a four percent smaller radius, making the proton twelve percent denser than, and seven sigma off, the orthodox value. This was significant, sending shockwaves through the physics community and exciting physicists with ideas of new physics or reformulations of old theories. Dr. Bernauer’s work catalyzed the still-continuing efforts of numerous teams and institutes vying to measure this new proton radius.
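Those factors of two hundred and eight million follow from a single scaling law, which a quick back-of-the-envelope check confirms (the round 200:1 mass ratio is the article’s own figure):

# For a hydrogen-like atom, the orbit (Bohr) radius scales as 1/mass, and the
# s-state probability density at the nucleus scales as 1/radius^3, i.e. as mass^3.
mass_ratio = 200
print(mass_ratio)        # ~200x closer orbit
print(mass_ratio ** 3)   # 8000000 -- "eight million times more probable"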
The proton-radius puzzle has required those pursuing it to develop heavy-duty computer programs and machine-learning simulations to model the multitude of events generated during measurement. Physics and computer science have become more intertwined in recent years, as the sheer amount of data produced by cutting-edge experiments requires complex processing and analysis. Bernauer sees it as a way to integrate his hobbies into his work. His advice? “Something [that] I think is really important for students [to understand is that] if you have to fight with your computer every day, you’re a lot less productive than if you really know what’s happening and write code and all of this. That’s enhancement learning, which will pay back a thousand-fold over your life.” Most of his programming is done in C++, but he does enjoy Python and especially Assembly because “it gives you a much better understanding of what’s actually happening in the computer… [and] I would miss it if I wasn’t able to do that.” His favorite text editor is Emacs.

Figure 2 Schematic of the MUSE experiment. Particles will enter via the beam-line and pass through a series of scintillators and GEM detectors before hitting the liquid hydrogen target chamber. Scattered particles will be collected in the two scintillator walls angled on the outside.

Many of the simulations used nowadays in the physics community, including ones that Dr. Bernauer uses, involve machine learning. The current state of machine learning has not made it the best at obtaining numerical values, but it has been good at tracking. Bernauer described machine learning as “essentially just fitting. Something that we did for years, but coming from a completely different direction -- we fit some model that we have where we actually know how the model looks like, and where machine learning comes in you can actually write down a very general model that has millions of parameters but it’s still that, it’s still essentially a version where you throw in some input and you get an output and you fit it so that the output is what it should be.” On how deep an impact machine learning will make on the physics community, he said, “I think that it will continue and it will continue just because the computing requirements will grow -- we have more data at a faster rate [and] we do more sophisticated analysis on it. That’s one side of it. The other side of it is that there’s a lot of technology out there and a lot of firm companies that are trying to do something with machine learning and from that there’s a lot of investment in hardware optimized to do machine learning. So, if you want to use that hardware, you have to map out our problems to be a machine learning problem. Just from that, I think that there will be a lot of advances where you can solve it maybe elegantly in a more direct way and then run it on a standard computer, or run it with this big hammer of machine learning but it’s still faster because its hardware is so much faster than any other hardware and it’s so cheap because everyone’s buying it so it’ll still be worth it doing it this way.”

Nowadays, Bernauer is a member of the MUSE collaboration, which was formed in response to the proton-radius puzzle and aims to measure the radius using electron-muon scattering. MUSE fires a stream of electrons and muons at a vat of liquid hydrogen to scatter them. Detectors placed to the sides catch the scattered electrons and muons and, by measuring the angle at which they scattered, solve for the cross-section and thereby the radius of the proton. Electron-proton scattering has long been a form of experimentation in nuclear physics, but MUSE is novel in that it is the first program to use muon scattering; the electron scattering component acts as a form of insurance -- a measurement by which to determine the accuracy of the muon scattering. The experiment is currently in its simulation phase, which utilizes sophisticated computer programs and machine-learning software to simulate the multitude of events generated during the scattering.

The proton-radius problem is a major modern unsolved problem in physics. Its consequences have bearing on nuclear and atomic physics and even cosmology. It utilizes state-of-the-art computing and is an exemplary case of the digitization of physics. It is a conundrum that has captured the attention of many, and Dr. Bernauer, who helped set it off, hopes to solve it.

All images retrieved from: Dr. Jan Bernauer.
ETHICS OF CANCER CARE: BEYOND JUST BIOLOGY
BY SHRILA SHAH ’23

Difficult Decisions in Cancer Care
A 31-year-old woman was found to have a right breast mass, and her gynecologist suggested a mammogram, an ultrasound, and later a biopsy (medical tests used to detect and characterize masses). These procedures confirmed the lesion to be a high-grade ductal carcinoma, a common type of breast cancer. After discussing standard therapeutic options with her oncology team, the patient agreed to neoadjuvant chemotherapy (treatment given before surgery to help shrink the mass) and possible subsequent breast-conserving surgery (lumpectomy). The patient tolerated the neoadjuvant chemotherapy well, and her mass began to shrink. After therapy was completed, magnetic resonance imaging (MRI) showed no evidence of the mass, indicating treatment success. Given that average five-year survival rates for this form of breast cancer are over 90% (1), this outcome was expected for the patient. However, she was referred back to surgery, as the potential for residual, life-threatening tumor remained. For reference, at her initial consultation, the patient was informed that neoadjuvant therapy alone has been proven insufficient to guarantee complete tumor regression. After several missed appointments, the patient explained that she had found alternative treatment approaches through an herbalist; such holistic, natural approaches, including acupuncture, diet change, and cannabis, are often used alongside standard therapies to relieve the negative side effects of treatment. Despite further discussions with her medical oncologist, however, she refused surgery outright. Her physicians were concerned that she was placing her life at unnecessary risk and were unsure of how to approach her, given the hospital's protocol for patient autonomy (2). The physicians counseled the patient several times to the best of their ability, but at the end of the day, the final decision falls in the hands of the patient.

Decision making regarding cancer, especially treatment options, may seem simple: aim to prolong life, maximize its quality, and minimize adverse side effects. However, the context for making this informed decision is muddled by the vast number of treatment options and complex medical terminology (3). For example, in the situation outlined above, the patient might not have any medical background knowledge, preventing her from truly understanding her
treatment options and making an informed decision. In addition, the increased commercialization and advertisement of "natural" therapies could have swayed her decision making. Maintaining patient autonomy is an important aspect of healthcare, especially for decisions regarding treatment and care. The trend toward greater patient involvement in medical decision making is built on increasing understanding of, and respect for, patients' values and preferences. However, as this situation outlines, respecting patient autonomy presents challenges of its own and opens an entirely new topic: the importance of doctor-patient communication and its central role in choosing and following through with a cancer treatment plan (4).

Determining a Treatment Plan
Cancer biology is inherently complex, shaped by the interplay between cancer cells' intrinsic alterations and the host's ability to control the progression of cancerous lesions. The ever-evolving character of cancer cells also marks cancer as a chronic condition. Even "stable," or controlled, cancers cannot be considered cured, as the chance of recurrence always remains. In addition, just as every patient is different, so is their cancer; even cells within an individual tumor can vary enormously. For this reason, specialists must extensively understand and visualize the patient's cancer and devise a plan of action best suited to the patient. Put in perspective, however, this is quite difficult. Options within cancer therapy include surgery, radiation, chemotherapy, hormone therapy, targeted therapy, immunotherapy, active surveillance, palliative care, and hundreds of clinical trials. A physician must not only pinpoint a course of action among these options, but also predict which will carry the lowest chance of recurrence -- this is what makes cancer therapy so daunting. As with any modern medical issue, however, a physician must also take into consideration the commercial aspect of these therapies. People with cancer have nearly four times the mean expenditure per person ($16,346) of those without cancer ($4,484). The negative effects on patients include medical debt, bankruptcy, forgoing or delaying necessary medical care, and avoiding filling prescriptions (5). The
consequential poor adherence to cancer treatment can have drastic impacts on cancer patients, such as less effective treatment, shorter survival, and poorer prognosis. Thus, it is undeniable that the commercialization of the growing array of cancer therapies plays a huge role in a patient's choice of treatment. Although a doctor may strongly recommend a course of action, it is not always the most feasible one for the patient.

Complex Cancer Continuum
As a multi-step disease combining cellular proliferation and immortality, cancer makes the cancer-care continuum (prevention, diagnosis, treatment) complex on various levels. The determinants of cancer are not limited to genetic predisposition or environmental influence. In fact, one of the more common cancers, lung cancer, clearly correlates with adult socioeconomic position -- measures of disadvantage over the course of one's life, such as inequalities in smoking and occupational exposures. Low-income neighborhoods are associated with risky health behaviors (smoking, obesity, and physical inactivity), as well as expensive and low-quality healthcare. The CDC in fact found that people in the most socioeconomically deprived groups (<$12,500 family income) have higher lung cancer risk than those in more affluent groups. The trend continues for people with less than a high school education, who are also at elevated risk for lung cancer. In addition, people living in rural, deprived areas have 18-20% higher lung cancer incidence than people living in urban areas (6). In the prevention phase, people face decisions with implications for cancer. People living in poverty smoke cigarettes for nearly twice as many years as people with a family income of three times the poverty rate, just as blue-collar workers are more likely than white-collar workers to be exposed to secondhand smoke at work. The trend continues: adults who live below the poverty level have less
success in quitting (34.5%) than those who live at or above the poverty level (57.5%). Essentially, adopting a healthy lifestyle, which is shown to prevent over half of cancers (7), requires committed decisions and favorable social-environmental conditions such as high socioeconomic status.

Like prevention, screening and diagnosis would ideally occur before symptoms appear. However, the treatment process does not usually unfold in this sequence. The process can be further complicated by the patient's uncertainty about its benefits and lack of initiative. For this reason, even for preventable cancers, outcomes vary across socioeconomic populations. For example, even before diagnosis, low-income patients face challenges in accessing cancer screening and prevention services. Women receive significantly fewer preventive care services when seeing their primary care physician, including important cancer screening tests like the Pap test and mammograms, and both men and women undergo less colorectal screening. This lack of access to screening complicates cancer treatment, as patients more frequently present with later stages of cancer. This delay in diagnosis is critical, as later-stage cancers are more difficult to treat. In addition to the differences in incidence and early diagnosis of specific cancer types associated with socioeconomic status, a large body of research shows a correlation between socioeconomic differences and treatment accessibility. In one study, breast cancer patients were analyzed on their likelihood of receiving radiation and chemotherapy after diagnosis (8). Patients residing in high-income zip codes were more likely to receive treatment (69%) than patients residing in low-income zip codes (43%). Reasons for this range from lack of access itself to uncertainty about what to do after diagnosis and fear of potential financial burden. Another aspect of cancer treatment that is often overlooked is recurrence. Like the situation outlined at the
Figure 1 Even with proper initial treatment, a cancerous lesion can return with greater resistance.
beginning, even though a patient may be treated and cured, it is equally essential to prevent a relapse, in which the previous chemotherapy may no longer be effective. Here again, a three-fold difference was found between high-income and low-income patients in completing the entire treatment plan (9). This could help explain why worldwide cancer survival rates after recurrence are so low: with such large discrepancies in patients following through with their specialized treatment plans, it is difficult to know whether the treatments themselves are working.

Cancer and Patient-Physician Communication
Given the situation outlined previously and the multitude of challenges, discrepancies, and inequalities associated with cancer treatment, effective doctor-patient communication is key to achieving the best treatment outcome. In the past few decades, patients have become increasingly involved in directing their own medical care. Patients diagnosed with cancer are expected not only to cope with the emotional trauma of a cancer diagnosis, but also to digest complicated and often threatening information about treatment procedures. Patients seek out information and approach their healthcare providers with varying degrees of medical knowledge, differing opinions, and diverse socioeconomic backgrounds. Two reviews have underscored the importance of patient-physician communication, identifying a positive association between good patient-physician communication and improved patient health outcomes (e.g., emotional status, physical health, blood pressure) (10, 8). In addition, patients reporting good communication with their doctor are more likely to be satisfied with their care, and especially to share pertinent information for accurate diagnosis, follow advice, and adhere to prescribed treatment. Studies have even correlated effective communication with pain tolerance, recovery from illness, and, most importantly, decreased tumor growth. Despite the obvious benefits of effective communication, it has been observed that communication skills actually decline as medical students progress through their education. In fact, it has been shown that the emotional and physical brutality of medical training (e.g., internships and residency) suppresses empathy. Doctors have been observed avoiding discussion of the emotional and social impact of patients' problems because it distresses them; this negatively impacts doctors and increases patients' distress (11). Such avoidance is particularly detrimental and may leave patients unwilling to disclose problems, which could delay and adversely impact their recovery.

With the advent of new technologies and treatments and a growing body of public health knowledge, the prevention, screening, diagnosis, and treatment of cancer have become increasingly complex (12). In addition to the complexity of the disease itself, there is an added burden for low- and middle-income populations, for whom proper treatment is often beyond financial capacity. As a result, patients have begun to weigh not only the conventionally best treatment, but also one that is financially sustainable and realistically available to them. Thus, it is clear that the determinants of cancer care are affected by many complex factors: geographical location, lifestyle choices,
Figure 2 Cancer affects all population groups in the US, but certain groups may bear a disproportionate risk for cancer.
stigma, economic standpoint, etc. Against this complex socioeconomic backdrop, establishing a definition of ethical, equitable, and acceptable cancer treatment for all is a growing challenge, and one that should be clearly outlined. For the growing body of knowledge, technologies, and treatment options in cancer care to make a difference for all cancer patients, it is essential that doctors be able to communicate it effectively to patients.
References
1. C. Anderson, et al., Delivering high-quality care to prostate cancer survivors. Cancer 119(19), 3426–3428 (2013). doi:10.1002/cncr.28236.
2. S. Ghose, et al., Ethics of cancer care: beyond biology and medicine. Ecancermedicalscience 13 (2019). doi:10.3332/ecancer.2019.911.
3. J. Peppercorn, Ethics of ongoing cancer care for patients making risky decisions. Journal of Oncology Practice 8 (2012). doi:10.1200/jop.2012.000622.
4. K. Martinez, et al., How can we best respect patient autonomy in breast cancer treatment decisions? Breast Cancer Management 4, 53–64 (2015). doi:10.2217/bmt.14.47.
5. J. Park, et al., Health care expenditure burden of cancer care in the United States. INQUIRY: The Journal of Health Care Organization, Provision, and Financing 56 (2019). doi:10.1177/0046958019880696.
6. Cigarette smoking and tobacco use among people of low socioeconomic status. Centers for Disease Control and Prevention (2019). www.cdc.gov/tobacco/disparities/low-ses/index.htm.
7. Steps to adopting a healthier lifestyle. Susan G. Komen San Diego. www.komensandiego.org/steps-to-adopting-a-healthier-lifestyle/.
8. K. Sethuraman, Doctor-patient communication: an overview. Communication Skills in Clinical Practice (Doctor-Patient Communication) 1 (2001). doi:10.5005/jp/books/11607_2.
9. P. Ross, How American cancer care leaves the poor behind. Medium, Healthcare in America (2018). healthcareinamerica.us/how-american-cancer-care-leaves-the-poor-behind-76dd4d47b603.
10. R. Ruiz-Moral, The role of physician-patient communication in promoting patient-participatory decision making. Health Expectations 13, 33–44 (2010). doi:10.1111/j.1369-7625.2009.00578.x.
11. P. Maguire, Key communication skills and how to acquire them. BMJ 325, 697–700 (2002). doi:10.1136/bmj.325.7366.697.
12. R. Enrique, et al., Circunstancias socioeconómicas y mortalidad prematura por enfermedades crónicas [Socioeconomic circumstances and premature mortality from chronic diseases]. Medicina Clínica 120, 201–206 (2003). doi:10.1016/s0025-7753(03)73652-5.

Images retrieved from:
1. https://www.mdlinx.com/oncology/article/293
2. https://www.cancer.gov/about-cancer/understanding/disparities
ANALYZING HYBRID CLOSED-LOOP INSULIN SYSTEMS
ROBERT LUM ’21

Introduction
In the past decade, determining the optimal method for diabetes care has been a consistent issue within the United States healthcare system. As of 2019, it has been reported that 18-20% of all hospitalized patients have some form of diabetes (1). To combat this growing problem, medical device companies such as Medtronic and DexCom have continuously updated their continuous glucose monitoring (CGM) technology, allowing closed-loop insulin systems to develop into effective options for diabetes therapy. The first hybrid closed-loop insulin pump, the MiniMed 670G, was approved by the FDA in 2016. This type of technology was nicknamed the "artificial pancreas" due to its ability to automatically adjust basal insulin rates and thus improve glycemic control in diabetic patients (2). Studies in the past five years have shown that closed-loop technology for insulin delivery successfully reduces glycemic variability in the inpatient setting. Although this technology has revolutionized diabetes management, it also poses a number of safety issues when diabetics are left to manage their care individually outside of the hospital.

Closed-Loop Insulin Delivery
Closed-loop insulin delivery systems emulate the negative feedback loops of insulin release in normally functioning β-cells of the pancreas by relying on two components: CGM technology and an insulin pump. CGM devices allow subcutaneous measurement of interstitial glucose levels via a sensor. These levels are continually measured and communicated to a device with a control algorithm, either the insulin pump itself or a separate controller. The algorithm then initiates or suspends fast-acting subcutaneous insulin delivery depending on the measured interstitial glucose level (3); a simplified sketch of this logic follows below. The autonomous nature of this process has given rise to the nickname "artificial pancreas." Usage of this technology continues to grow due to its ability to reduce glycemic variability and increase "time in range," the time spent within an individual's optimal glucose range. Past studies have revealed the efficacy of these systems in a variety of settings, as well as the potentially harmful ramifications of relying on this technology for diabetes care.
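To make the feedback loop concrete, here is a minimal sketch of the suspend-or-adjust logic described above. All thresholds, rates, and rules here are hypothetical illustrations, not any manufacturer's actual control algorithm or clinical guidance:

```python
# A minimal sketch of closed-loop basal adjustment: read a CGM value,
# then suspend, raise, or hold basal insulin. Every number below is a
# hypothetical placeholder, not clinical advice.

LOW_MGDL = 70         # suspend delivery below this reading (hypothetical)
HIGH_MGDL = 180       # step basal up above this reading (hypothetical)
BASE_RATE_U_HR = 1.0  # baseline basal rate in units/hour (hypothetical)

def basal_rate(glucose_mgdl: float) -> float:
    """Pick a basal insulin rate for one control cycle from a CGM reading."""
    if glucose_mgdl < LOW_MGDL:
        return 0.0                    # trending low: suspend insulin
    if glucose_mgdl > HIGH_MGDL:
        return BASE_RATE_U_HR * 1.5   # trending high: increase basal
    return BASE_RATE_U_HR             # in range: hold the baseline

# One simulated stretch of 5-minute CGM readings driving the loop.
for reading in [65, 90, 150, 210, 180, 72]:
    print(f"CGM {reading:>3} mg/dL -> basal {basal_rate(reading):.1f} U/h")
```

Real systems add prediction, insulin-on-board tracking, and the safety checks discussed later in this article, but the core loop is this measure-decide-deliver cycle.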
Closed-Loop Insulin Therapy For Inpatient Diabetics
While it is important for diabetic patients to stay in glycemic control at all times to avoid the complications of uncontrolled diabetes, such as vision and kidney problems, it is especially significant during pregnancy. Hyperglycemic trends during pregnancy can increase rates of stillbirth, preterm delivery, and even neonatal death (2). Minimizing glycemic variability is difficult for pregnant diabetics, as their bodily insulin demands increase dramatically during this time. A recent systematic review by Boughton and Hovorka described a study of a Cambridge hybrid closed-loop insulin system in pregnant type 1 diabetics. Patients were placed on this therapy for four weeks at home while their glycemic trends were monitored. They showed increased time spent within the optimal glucose range for pregnant diabetics, specifically 63-140 mg/dL, relative to standard methods of diabetes care involving non-closed-loop sensors and insulin pump treatment. Overall, sensor-measured glucose levels were in range 68.7% of the time, with a decreased frequency of hypoglycemic episodes (2). Evidently, the risks of fluctuating glucose levels during pregnancy can be significantly reduced by placing type 1 diabetics on this type of care.
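The "time in range" metric cited throughout these studies is simple to compute from CGM data. A minimal sketch, using made-up readings and the 63-140 mg/dL pregnancy target quoted above:

```python
# "Time in range": the share of CGM readings inside a target window.
# Readings here are invented for illustration.

def time_in_range(readings, low=63, high=140):
    """Percentage of readings falling inside [low, high] mg/dL."""
    in_range = sum(low <= g <= high for g in readings)
    return 100 * in_range / len(readings)

readings = [88, 135, 150, 120, 61, 110, 95, 142]
print(f"time in range: {time_in_range(readings):.1f}%")  # 62.5%
```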
Figure 1 The negative feedback loop of a hybrid closed-loop insulin delivery system.
In addition to pregnancy, other illnesses that diabetics encounter during hospital admission can harm their glycemic control, and these effects too can be mitigated by closed-loop insulin delivery systems. Recent studies have shown that integrating automated insulin therapy via a hybrid closed-loop system in the inpatient setting is effective. A literature review on this topic cited a study conducted in general wards comparing diabetic patients managed on a closed-loop insulin delivery system with those on standard insulin care. The study ran for 15 days, and each patient's time in
range was measured and compared. On a closed-loop system, diabetics spent 65.8% of the time in range, relative to 41.5% on standard insulin care, and also experienced a lower frequency of hypoglycemic episodes (2). This therapy also proved effective for hemodialysis patients, a unique subgroup of diabetic patients for whom managing diabetes can be especially challenging amid their other medical problems. A study on this subject found that hemodialysis patients on closed-loop insulin delivery systems spent 69% of the time in range, higher than those without closed-loop insulin therapy. Integrating a closed-loop insulin treatment plan allows for better glycemic control, which is often hindered by the multitude of health problems that hemodialysis patients suffer from (2). Nutritional support, common for admitted patients, can also be an obstacle to stable glycemic patterns that closed-loop insulin delivery technology can help overcome. A study of forty-three patients in the general ward who were on nutritional support and suffering from hyperglycemia found that those managed on a closed-loop insulin system achieved eight more hours in their optimal glucose ranges than those without this type of treatment (2).

Hybrid Closed-Loop Insulin System Safety Concerns
Although the hybrid closed-loop system has proven its potential to improve glycemic outcomes in diabetic patients in the inpatient setting, the growing usage of this technology raises a multitude of concerns for future diabetes care. It is important to note that in the inpatient setting, a team of healthcare professionals takes responsibility for managing the closed-loop system. Outside of the hospital, this responsibility can be daunting for diabetics who have to manage hybrid closed-loop insulin pumps independently. This is especially problematic considering the flaws in the safety features integrated into hybrid closed-loop insulin pumps.

The MiniMed 670G system, a hybrid closed-loop insulin pump developed by Medtronic, is designed with a multitude of safety features that are only conditionally effective. The pump relies on notifying the user of an urgent situation requiring medical attention via vibrating and auditory warnings. For example, if the sensor detects a blood sugar level that is too high or too low, it will instruct the user to manually check their blood sugar to confirm the sensor's measurement (4). This ensures that the patient takes action to correct a hypoglycemic or hyperglycemic episode and that the technology is working accurately. While this may seem effective, the notification can also be a false positive if the sensor is implanted improperly, which can occur if the injection site lies in an area with scar-tissue build-up. The pump also has maximum insulin-delivery values programmed into the device to prevent accidental delivery of incorrect dosages; these values can be adjusted within the pump's settings. The most significant safety feature of the insulin pump is its ability to detect proper insulin flow. Depending on the angle of injection at the infusion site, the cannula for insulin delivery can become kinked, hindering insulin flow. In this scenario, the pump detects whether insulin is flowing through the tubing and the cannula into the body. If it does not detect flow, the user
is immediately warned. This is exceptionally important in preventing situations in which diabetics think they are receiving insulin when in reality they are not.

Although these safety features exist, the maintenance patients must perform to correct technological problems can be burdensome. This was a common problem in 2019, when over one-third of type 1 diabetics came off the Medtronic MiniMed 670G. They faced difficulties staying within "auto-mode," which allows the pump to act as an artificial pancreas. It was reported that technical difficulties such as sensor placement, sensor alerts, calibration frequency, and supply issues made it difficult for patients to stay within this mode (6). This constant need to feed the insulin pump information to remain in auto-mode can be a hindrance for diabetics and dissuade them from using this type of technology. Additionally, Dr. Carolyn Johnston of the University of Melbourne noted that the opportunity for care via an artificial pancreas has allowed diabetics to adopt a "do it yourself" mentality (5). This can be detrimental to their overall health if they come to feel that regular follow-up with an endocrinologist is unnecessary because their care is controlled by the insulin pump itself. Another present issue is that hybrid closed-loop systems require a specific CGM that works hand in hand with the pump. This limits diabetics' freedom and can restrict them to an insulin pump and sensor combination that may not be ideal for them (2). As diabetes care and technology move forward, these issues need to be resolved to enhance the overall efficacy of hybrid closed-loop systems for all patients who may want to use them.

Conclusion
According to past studies, hybrid closed-loop insulin delivery systems have proven effective in the inpatient setting due to their ability to improve glycemic outcomes. This is a primary treatment option for diabetics upon hospital admission, especially under conditions where they may experience extreme fluctuations in glucose levels due to the type of care they are managed on. However, the ramifications of using a closed-loop insulin system outside of the hospital can potentially be more harmful than beneficial. When individuals have complete autonomy over the management of their closed-loop system, the responsibility can be overwhelming and can erode trust between the patient and the endocrinologist they regularly see. Further studies on this topic could focus on the psychosocial impact of this system on diabetics, including how the usage of an artificial pancreas may influence trust in the doctor-patient relationship.
References
1. C. Boughton, L. Bally, & R. Hovorka, Closed-loop management of inpatient hyperglycemia. Aging 11, 5292-5293 (2019). doi:10.18632/aging.102144.
2. C. Boughton & R. Hovorka, Automated insulin delivery in adults. Endocrinology and Metabolism Clinics 49, 167–178 (2019). doi:10.1016/j.ecl.2019.10.007.
3. FDA approves first automated insulin delivery device for type 1 diabetics. FDA News Release (2016).
4. Important safety information. Medtronic.
5. C. Johnston, DIY diabetes management: an ethical dilemma. Pursuit (2020).
6. K. Monaco, Over one-third T1D patients bail on closed-loop system. MedPage Today (2019).

Graphic created by: Komal Grewal ’23
BUILD-A-BABY
BY TERRENCE JIANG ’23

What is Genetic Editing?
The idea of constructing the perfect baby through genome editing may have passed through couples' minds at some point, especially now. With the advances in CRISPR techniques of the last decade, genetic editing may soon become available to the public. However, recent limited live births resulting from CRISPR editing of human embryos have raised global ethical concerns and drawn comparisons to eugenics, a term often associated with racism, genocide, and forced sterilization campaigns. To understand the ethics and practicality of the genetic modification of embryos, we must first understand its technical process.

To splice a specific area of a genome, scientists use a type of enzyme called an "engineered nuclease," the most common being CRISPR-Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats). For the engineered endonuclease Cas9 to be effective, it must be composed of two distinct segments: a part that cuts the DNA and a part that guides the nuclease to a specific DNA sequence. To insert a specific genetic sequence into the genome, scientists take advantage of the cell's natural repair system, homology-directed repair. During mitosis, all of the cell's DNA is copied so that both daughter cells receive the same copy of the parent genome. If the cell recognizes a break in its DNA, it repairs that break using the other copy. This repair process can be manipulated: after Cas9 makes its cut, the genomic double-stranded DNA homologously recombines with a modified piece of DNA similar in sequence to the target gene. The cell then uses the modified piece of DNA as a template to repair the break, effectively filling the break with the desired genetic sequence.

Currently, the preferred approach to enhancing embryos is preimplantation genetic screening of in vitro fertilization (IVF) embryos, which involves taking a cell from the embryo for genetic testing while the embryo is growing in the IVF laboratory. This method of screening has been used for years to avoid implanting disease-carrying embryos, but recent progress in genetics has now made genetic editing feasible. Recent studies have shown that even after embryos were genetically altered, the phenotypic effect of the desired trait in the offspring may be minimal and unpredictable (1). With such limitations, it is still too early for scientists to take advantage of genetic editing, and we must explore the ethics of applying such an invasive technique.
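Before turning to the ethics, the cut-and-patch process described above can be caricatured as string editing. This is only a toy sketch with invented sequences; real genome editing involves PAM sites, competing repair pathways, and far more biology than string replacement:

```python
# Toy caricature of Cas9 plus homology-directed repair: find the
# guide-matched site in a DNA string, "cut" there, and patch the break
# with a donor template. All sequences are made up for illustration.

def edit(genome: str, guide: str, donor: str) -> str:
    """Replace the first guide-matched site with the donor sequence."""
    cut = genome.find(guide)       # Cas9 guided to the matching sequence
    if cut == -1:
        return genome              # no target site found: nothing to edit
    return genome[:cut] + donor + genome[cut + len(guide):]

genome = "ATGCCGTACGGATTACA"       # hypothetical sequence
print(edit(genome, guide="TACG", donor="TAAG"))
# -> ATGCCGTAAGGATTACA
```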
History of Eugenics
What genetic traits would constitute an ideal baby or a perfect person? Such questions were tossed around from post-Civil War America in the late 19th century through World War II, and genetic enhancement cannot be separated from that history of eugenics. Nazi Germany held up tall, blond Germans as the ideal human, deemed superior to all others. In the United States, meanwhile, there were campaigns for the involuntary sterilization of poor and Black members of society, later deemed legal in the Supreme Court decision Buck v. Bell. Such ties to eugenics often stem from racism that has inevitably led to many conflicts in history. If we are to continue the practice of genetically modifying embryos, will we have the emotional capacity to confront the ethical dilemmas that inevitably come with such a genetic tool? Its use should be reconsidered until its moral implications are better understood.

Practical Application of Genetic Editing
Today, the manipulation of genetic information serves a multitude of practical applications. Primarily, genome editing is a valuable tool for scientists to change the genetic makeup of cells or organisms to further understand biology and how organisms function. Advances in cell genetics enable progress in science and medicine, giving patients suffering from diseases such as leukemia or AIDS more options, including blood cell transfusion. Economically, genetic editing allows farmers to modify their crops to improve yields and resistance to pests and diseases. There is no question that genetic engineering serves an important role in modern society. But that sentiment does not extend as far as modifying the DNA of human embryos. The mystery surrounding the effects of genetic editing brings fear to those unfamiliar with it, and as of June 4, 2019, a United States House committee voted to continue banning the practice of genetically modifying babies. One representative lamented, "This is a prohibition that is accepted by nearly every nation in the world due to its unknown risks" (4). This statement may undermine the practice of genetic engineers, but it holds significant weight regarding the unknown risks.
Figure 1 U.S. adults' perceptions of genetic editing.
Risks Regarding Genetic Editing
There are four main risks when it comes to genetic editing. First, there is the potential for off-target effects on the modified DNA. When cutting a specific section of the genome and editing its components, there is a possibility that DNA sequences other than the intended target could be changed (a toy illustration of this follows at the end of this section). Such unintended changes have the potential to be fatal, as a direct result of point mutations and genetic shifts in the chromosomes (11). Second, there is a possibility that gene editing increases the risk of cancer development. A study conducted by scientists at Cambridge University and the Karolinska Institute in Sweden found that the CRISPR-Cas9 gene editing technique can actually trigger the cell's response to DNA damage. This response involves activation of the p53 gene, which either repairs the break in the DNA or sends the cell into apoptosis. The problem with this mechanism is that CRISPR-Cas9 works efficiently only when p53 is dysfunctional, and dysfunctional p53 genes are much more likely to cause cancer (7). Third, genetic editing technology could be used as a weapon of mass destruction. Many countries are concerned that terrorist groups will use genetic editing techniques to modify bacteria or viruses as another form of warfare (12), for example by unleashing genetically modified infectious diseases on their enemies. With no country of allegiance, such groups are unconstrained by agreements like the Chemical Weapons Convention. Such uses could profoundly affect how we perceive the biological ethics of genome editing. Lastly, there is the concern of gene drive, in which manipulated genes become integrated into gamete DNA and are passed on to future generations. This raises the ethical question of what it means to be human: if humans were able to actively change every little mistake in their genetic makeup, would the future of the species be considered natural? People must weigh the true purpose of genetic editing and whether the benefits outweigh the risks before applying such scientific advancements.
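The off-target risk is easy to see in miniature: a short guide sequence can approximately match sites it was never meant to cut. This toy scan flags every window of an invented sequence within one mismatch of a hypothetical guide:

```python
# Toy illustration of off-target effects: flag sites that nearly match
# the guide sequence. Sequences are invented for illustration only.

def mismatches(a: str, b: str) -> int:
    """Count positions where two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def candidate_sites(genome: str, guide: str, tolerance: int = 1):
    """Yield (position, site) pairs the guide could plausibly bind."""
    k = len(guide)
    for i in range(len(genome) - k + 1):
        window = genome[i:i + k]
        if mismatches(window, guide) <= tolerance:
            yield i, window

genome = "ATGTACGGTACCGAAT"        # hypothetical sequence
for pos, site in candidate_sites(genome, "TACG"):
    print(f"possible cut at {pos}: {site}")
# prints the intended site TACG at 3 and the near-match TACC at 8
```

Real guides are around 20 bases long and tolerate mismatches unevenly along their length, but the principle is the same: near-matches elsewhere in the genome are where off-target edits arise.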
First Reported Instance of Genetic Editing
In November 2018, a Chinese scientist from the Southern University of Science and Technology, He Jiankui, announced his key role in implanting modified human embryos into a patient. The announcement brought an immediate outcry from the scientific community, which condemned his actions as unethical and irresponsible. He had intended to modify a key gene so as to prevent the babies from contracting HIV later in life (6). Although his intentions may have been noble, the effects of modifying the DNA of those embryos, and the impact on the babies, were not adequately considered. Furthermore, there is a chance that the manipulated genes will fail to prevent HIV infection. A study conducted by Karavani and his team predicted that the average gains in height and intelligence for genetically modified embryos would be minimal: roughly 2.5 cm in height and 2.5 IQ points (1). The small increase in desired traits in altered embryos is evidence that we have only scratched the surface of genetic modification of embryos. More caution is needed when applying relatively new techniques to actual human embryos.

Moving Forward
Although genetic editing may seem impractical and unethical in the current climate, its benefits to humanity should not be overlooked. The main concern over its use today stems from the lack of research and the unreliable results that come with genetic modification. If humans come to possess the capability of altering genes in the near future, its practical use in preventing diseases and disabilities should be considered first. However, without the necessary literature, scientists must proceed with extreme caution, because playing god with the blueprints of humanity may cause disastrous and unexpected consequences that alter our definition of what it means to be human.
References
1. E. Karavani, et al., Screening human embryos for polygenic traits has limited utility. Cell (2019). doi:10.1016/j.cell.2019.10.033.
2. H. Fridman, et al., Preconception carrier screening yield: effect of variants of unknown significance in partners of carriers with clinically significant variants. Genetics in Medicine (2019). doi:10.1038/s41436-019-0676-x.
3. P. Liang, et al., CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein & Cell 6(5), 363–372 (2015). doi:10.1007/s13238-015-0153-5.
4. R. Stein, House committee votes to continue ban on genetically modified babies. NPR All Things Considered (2019).
5. D. Baltimore, et al., A prudent path forward for genomic engineering and germline gene modification. Science 348(6230), 36–38 (2015). doi:10.1126/science.aab1028.
6. K. Bosley, et al., CRISPR germline engineering -- the community speaks. Nature Biotechnology 33(5), 478–486 (2015). doi:10.1038/nbt.3227.
7. J. Taipale, et al., CRISPR-Cas9 genome editing induces a p53-mediated DNA damage response (2018). doi:10.1038/s41591-018-0049-z.
8. K. Suzuki, et al., Correction of a pathogenic gene mutation in human embryos. Nature 548, 413–419 (2017).
9. T. Friedmann, Genetic therapies, human genetic enhancement, and … eugenics? Nature (2019).
10. D. Simmons, Genetic inequality: human genetic engineering. Nature Education (2008).
11. J. Wilson, et al., Manipulating the mammalian genome by homologous recombination (2001). doi:10.1073/pnas.111009698.
12. K. Curran, How on earth are we currently regulating human genetic modification? Rising Tide Biology (2020).

Images retrieved from:
1. https://www.emaxhealth.com/4214/brain-wave-reading-detects-autism-risk-babies
2. https://www.pewresearch.org/science/2018/07/26/public-views-of-gene-editing-for-babies-depend-on-how-it-would-be-used/
Stony Brook Young Investigators Review would like to give a special thank you to all of our benefactors. Without your support, this issue would not have been possible.
Cover image created by Priya Aggarwal ’21