STAFF

Editor-in-Chief: Jonathan Kuo
Managing Editor: Rosa Lee
Outreach and Education Chairs: Melanie Russo, Saahil Chadha
Features Editors: Shivali Baveja, Nick Nolan
Interviews Editors: Ananya Krishnapura, Elettra Preosti
Research & Blog Editors: Andreana Chou, Isabelle Chiu
Layout Editors: Stephanie Jue, Michael Xiong
Copy Editors: Noah Bussell, Emily Pearlman
Features Writers: Anna Castello, Nachiket Girish, Jessica Jen, Marley Ottoman, Natalie Slosar, Lilian Eloyan, Anisha Iyer, Anderson Lee, Emily Pearlman
Interviews Team: Liane Albarghouthi, Lexie Ewer, Timothy Jang, Emily Matcham, Kaitlyn Wang, Sabrina Wu, Hosea Chen, Kenneth Hsu, Esther Lim, Rebecca Park, Allisun Wiltshire, Yi Zhu
Research & Blog Team: Noah Bussell, Sara Amare, Tiffany Liang, Afroze Khan, Aarthi Muthukumar, Nanda Nayak, Veronica Paul, Edith Noyes, Shreya Ramesh, Leighton Pu, Chris Zhan
Layout Interns: Lexie Ewer, Nanda Nayak, Shreya Ramesh

EDITOR’S NOTE
Scientists have always been concerned about origins. Questions about the origins of life, for example, have guided many areas of biological research for centuries. In physics, inquiries into origins have traditionally tended toward the cosmological; that is, toward investigating the genesis of the universe. Indeed, the scientific method—often described as the harbinger of the modern age of science—is the result of an almost monolithic focus on origins; it is a way of thinking that seeks to produce knowledge by distinguishing the original from the new.
Thinking through the lens of emergence, however, is an invitation to focus not only on origins, but to trace the threads that link origins to destinations. And as science journalists, our creed can almost be summarized in this curious process. As journalists, we seek to bring to light topics that often resist being made visible, concepts that have a muddled (or even absent) appearance in the public consciousness. Through writing, we capture snapshots of the present and glide along its connections to the past and the future, in hopes of understanding where we’ve come from, and to where we might go.
In this issue, our writers travel through origins, destinations, and everything in between. We continue conversations with scientific experts about the COVID-19 pandemic and hear their thoughts on long COVID, vaccine education, and emerging viral variants. In “Bringing Philosophy into Scientific Research,” Marley Ottoman revisits the divide between the humanities and the sciences in order to envision how ideas like emergence theory may motivate interdisciplinary work for scientists and philosophers alike. Each of our departments covers the latest technologies in a variety of contexts—from feature pieces on ‘microscopic astronauts’ and assembloids, to interviews discussing AlphaFold and CRISPR, to original research designing autonomous underwater vehicles—to envision how emerging technologies might shape our future.
As a journal, we too have been considering how to respond to new channels for information emerging in our modern media ecosystem. This semester, we launched our monthly newsletter, which serves to introduce subscribers to our writing teams and provide updates on our pieces throughout the year. We updated our practices for processing blog pitches and research submissions to streamline work with our writers. And in a conversation with writer Elena Conis, our students received tips on how to speak with the press to ensure their own research is represented accurately. Emergence, for us, has given us a chance to imagine what our future might look like, at a time when the future seems so uncertain. Reader, as you flip through the pages of the Spring 2021 issue of our journal, we invite you to do the same—to reflect on the present moment and imagine what tomorrow will bring.

Jonathan Kuo, Editor-in-Chief
Rosa Lee, Managing Editor
TABLE OF CONTENTS

Features

10. The Evolution of Beauty, by Lilian Eloyan
13. Seeking Serendipity, by Emily Pearlman
25. Avians to Airplanes: Biomimicry in Flight and Wing Design, by Natalie Slosar
32. To Live or Not to Live: Defining Life Outside Biological Systems, by Nachiket Girish
35. Microscopic Astronauts: Engineering Bacteria to Aid Human Space Travel, by Anderson Lee
43. Crossing the Blood Brain Barrier to Treat Glioblastoma Multiforme, by Anisha Iyer
57. An Anthology of Violet in Science, by Jessica Jen
61. Bringing Philosophy into Scientific Research, by Marley Ottoman
64. Assembloids: The Model of the Future, by Anna Castello

Interviews

4. COVID-19 Year in Review: Insights from UC Berkeley’s Infectious Disease Experts, by Liane Albarghouthi, Lexie Ewer, Timothy Jang, Esther Lim, Emily Matcham, Rebecca Park, Kaitlyn Wang, Allisun Wiltshire, and Sabrina Wu
16. Testing the Theory of General Relativity at the Galactic Center (Dr. Reinhard Genzel), by Sabrina Wu, Yi Zhu, and Elettra Preosti
22. The Function and Future of CRISPR Genome Editing (Dr. Jennifer Doudna), by Liane Albarghouthi, Emily Matcham, Kaitlyn Wang, Ananya Krishnapura, and Elettra Preosti
28. Environmental Design: Solar Envelopes and Workplace Evaluation (Dr. Giovanni Betti), by Hosea Chen, Rebecca Park, Sabrina Wu, and Elettra Preosti
38. A Deep Dive into Modeling and Mechanisms (Dr. John Moult), by Bryan Hsu, Timothy Jang, and Ananya Krishnapura
47. Developing a New Choice: The Unraveling of Reproductive Mysteries (Dr. Polina Lishko), by Lexie Ewer, Esther Lim, Allisun Wiltshire, and Ananya Krishnapura
53. Data Analysis for Formula 1 (Guillaume Dezoteux and Franz Tost), by Elettra Preosti
* Any personal opinions and perspectives of the professors interviewed for this issue do not necessarily reflect those held by the Berkeley Scientific Journal. The Journal does not endorse any viewpoints that interview subjects may hold that are not explicitly expressed in each interview piece.
Research

68. Potential Relationships Between NAFLD Fibrosis Score and Graft Status in Liver Transplant Patients, by Haaris Kadri, Raja R. Narayan, Sunnie Y. Wong, and Marc L. Melcher
74. Reimagining Autonomous Underwater Vehicle Charging Stations with Wave Energy, by X Sun, Bruce Deng, Jerry Zhang, Michael Kelly, Reza Alam, and Simo Makiharju
COVID-19 Year in Review: Insights from UC Berkeley’s Infectious Disease Experts
In this piece, we discuss the evolution of the COVID-19 pandemic and how it has both driven and shifted the scientific community. We met with eight infectious disease and genetics experts in late March to discuss their perspectives on global response, vaccination, and lessons learned.

BY LIANE ALBARGHOUTHI, LEXIE EWER, TIMOTHY JANG, ESTHER LIM, EMILY MATCHAM, REBECCA PARK, KAITLYN WANG, ALLISUN WILTSHIRE, AND SABRINA WU
Stefano Bertozzi, MD, PhD, is the former Dean and a professor of Health Policy and Management at the UC Berkeley School of Public Health. He serves as Editor-in-Chief for Rapid Reviews: Covid-19, an open access journal accelerating peer review of COVID-19 research.
Sandra McCoy, PhD, MPH, is an associate professor in the Division of Epidemiology and Biostatistics at the UC Berkeley School of Public Health. She is also on the editorial board of the journal mHealth.
Bruce Conklin, MD, is a professor in the Departments of Medicine, Cellular and Molecular Pharmacology and of Ophthalmology at UC San Francisco and Deputy Director of the Innovative Genomics Institute.
Arthur L. Reingold, MD, is the Division Head of Epidemiology and Biostatistics at the UC Berkeley School of Public Health. He has studied the spread and prevention of infectious diseases for over 40 years.
Julia Schaletzky, PhD, is the Executive Director of the Henry Wheeler Center for Emerging and Neglected Diseases (CEND). She is also the co-founder of the COVID Catalyst Fund, which aims to provide rapid funding to accelerate COVID-19 research.

John Swartzberg, MD, FACP, is an infectious disease specialist and Clinical Professor Emeritus at the UC Berkeley School of Public Health. Before becoming a full-time faculty member, Dr. Swartzberg had 30 years of clinical experience working in infectious diseases and internal medicine.

Randy Schekman, PhD, is a professor of Cell and Developmental Biology at UC Berkeley and an investigator at the Howard Hughes Medical Institute. He launched the open access journal eLife in 2012 and won a Nobel Prize in Physiology or Medicine in 2013.

Stacia Wyman, PhD, MS, is a Senior Genomics Scientist at the Innovative Genomics Institute. Dr. Wyman works on computational research involving CRISPR and has recently transitioned to researching COVID-19 genomics.

BSJ: Reflecting upon the past year, what was predictable about the course of the pandemic, and what were you surprised by?
Schaletzky: There has been a real renaissance in biology in the last year, and people who did not previously specialize in viruses are now entering the field and bringing all kinds of new technologies into it. That is wonderful. Additionally, I was positively surprised by the speed of the science being developed. However, I was negatively surprised by the lack of deregulation. The United States administration has been ossified; we have not really been able to deal with this pandemic quickly enough. It gives me a bad feeling for future disasters.
Schekman: I did not think that the pandemic would be as overwhelming as it has been, and I am shocked at the level of death and severe illness. On the other hand, it amazes me how quickly vaccines have been developed. Even so, wealthy countries have taken advantage of this speed by hoarding vaccines at the expense of developing countries, which is discouraging because we cannot return to normal until a majority of people globally are vaccinated. Fortunately, there are non-governmental agencies and projects, such as a vaccine initiative supported by the Gates Foundation, that aim to distribute vaccines in developing nations.
BSJ: Since the last time we talked, which new findings on SARS-CoV-2’s mechanism of infection surprised you the most? Are there any clinical mysteries that you are still curious about?
Swartzberg: What surprised me the most with COVID-19 was that I anticipated it to behave very much like influenza. However, it can be much more aggressive than most cases of influenza and carries a significantly higher morbidity and mortality rate. There have also been some disconcerting developments, specifically regarding how SARS-CoV-2 attacks many other parts of the body aside from the airways. The myocarditis issue [inflammation of the heart] and neurological complications following infection, for example, were not anticipated and were difficult to predict. Another concern we have regards what has been described as the post-COVID syndrome or long COVID. This is a condition that is probably multifactorial in cause and of which we have a very poor understanding, even though it appears to be extremely common. It defies a really good pathophysiologic mechanism because it seems to occur in everyone, from people who were very ill with COVID-19 to those who were asymptomatic. Post-COVID syndrome could be a very significant long-term tragedy for millions of people and could also be a tremendous burden on our healthcare system going forward. Another issue that is fortunately not as common as we were initially afraid of is the multisystem inflammatory syndrome in children. It appears to be an inflammatory process triggered by the virus, but we neither know what causes it nor why certain children are more predisposed to it. We see it more commonly, for example, in African-American children than in other racial and ethnic groups, but we do not have an explanation as to why. We also do not know about the long-term sequelae from that. These are just a few things that immediately come to my mind when I think of how confusing COVID-19 really is.
BSJ: Within this past year, how has the COVID-19 pandemic affected your and your colleagues’ research focuses?
Conklin: My lab has become pretty good at pivoting and trying new things. For example, we were able to start two completely new projects. One of these, which was inspired by virologist Dr. Melanie Ott, explores the reaction of cardiac cells to SARS-CoV-2. Many researchers in the pathology community thought that COVID-19 was only a lung disease; however, we found that the tissue in cardiac cells not only gets infected, but actually produces the live virus! There have been signs of cardiac effects—specifically, increased troponin levels—among COVID-19 patients. Troponin is a cardiac-specific protein; its normal levels should be very close to zero, so chest pain and a small bump in troponin is a sign of a heart attack. COVID-19 patients have increased troponin levels, but it has been very hard to show that the virus was in the heart. Overall, the assessment is that the SARS-CoV-2 virus can circulate around your body and rarely breaks into cardiac cells. But when it does infect the heart, troponin levels go up and mortality rates increase across the board, even in relatively young people. What is really unknown is how severe the effects on the heart are and whether or not they will be long-term.

Wyman: It has had a huge impact. On March 13 of last year, Dr. Jennifer Doudna, the head of the Innovative Genomics Institute (IGI), got a slew of IGI and UC Berkeley scientists to come together for our last in-person meeting to discuss how we can pull together our resources and expertise to address the pandemic. That meeting sparked many different COVID-related projects since researchers could only return to the lab for COVID-19 work. For example, Dr. Patrick Hsu and Dr. David Savage have developed a rapid COVID-19 test that can be read with an iPhone. It has been amazing to see individuals from different areas of expertise come together to work on this one thing. This kind of scientific pivot has had a huge impact on getting everything ramped up fairly quickly, such as the vaccines, new technologies for testing, and genomics efforts.
BSJ: What are some widespread misconceptions about how mRNA vaccines work?
Schaletzky: A misconception is that the mRNA vaccine causes some kind of DNA modification of the host genome, but that is not true. We have echo chambers on social media now, where conspiracy theories can grow and be distributed so easily. I would say to people that they should follow the central tenet of the Enlightenment and just think for themselves and look at the data that has been published. The mRNA vaccines work very well, better than other modalities. In Israel, we already have a whole country more or less vaccinated with really good results.
BSJ: How do the Moderna and Pfizer COVID-19 vaccines compare to vaccines we have seen in the past? What would you say to someone who is on the fence about receiving a vaccine? How do social systems impact a person’s perspective on vaccination?

Schaletzky: The only change is that the mRNA basically allows our own body to make the antigen that is needed for generating immunity. In the past, we purified the antigens outside of the body using recombinant and biotechnology methods and injected the protein that is already formed into the body. In this case, the mRNA acts as a transcript or blueprint for this protein, and your own body makes the protein for the immune reaction—and this works better than the old way of making a vaccine, it turns out. I would trust this vaccine because it is more efficacious than the subunit vaccines that are generated via the usual recombinant methods, and it works in more than 95% of the cases. We currently also have no evidence to say that it would not work against the other emerging variant strains.

McCoy: Vaccine hesitancy is a really important issue, and I think we know from behavioral science and psychology that trying to convince someone that they are wrong often just makes them more staunch in their beliefs, and it can actually backfire. It is really important to understand where their concerns come from. Concerns about side effects or contraindications for pre-existing conditions are valid concerns that we as public health professionals can ease by providing patients with information so that they can make informed decisions. The more finger-wagging that we do as scientists, the less effective we will be in achieving our public health goals, so we have to be coming from a place of understanding and wanting to find a middle ground. For one, in the U.S. there has been a history of horrifying ethical violations tied to racism against communities of color, so it is no surprise that even several generations later many communities are distrustful of the biomedical community and their products or services, such as vaccines. I think that is a really important context to consider when working with communities of color as well as any disadvantaged or vulnerable communities, as there are existing legacies that predate all of us but still influence people’s willingness to engage in health services and use health products.
Certainly, we also have to acknowledge that there is a political element that is shaping people’s mistrust or disbelief of science. However, it is really important to not immediately discount people who are vaccine-hesitant as being uninformed or any host of other adjectives because it is really difficult to navigate the swarm of information and misinformation that is readily available at our fingertips.

Reingold: I have a lot of experience talking to people about vaccination, including people who are not interested in vaccination. In general, people love their families and their children, and they want to do what is best for them or for themselves. It can be difficult to make a wise decision when there is an enormous amount of misinformation, rumors, and outright nonsense. I understand why it is hard for people. I understand why people are disinclined to listen to an elderly Berkeley professor. Having said all of that, it is very much my judgment, and the judgment of people who are a lot smarter than me, that the benefits of vaccination against COVID-19 far outweigh the risks. There are unfortunately some young, healthy people who get seriously ill and even die from COVID-19. No vaccine is perfectly risk-free. No medication you take is risk-free. In fact, nothing you do in life is risk-free, but these vaccines are very safe and effective. People of color might be more resistant or hesitant about vaccination because of a long history of medical mistreatment. Concerns about vaccination have always been there in every group, whatever your race, whatever your socioeconomic status, and whatever your political views. The enormous difference in concern about this pandemic and interest in this vaccine by political parties may be new, and those things influence the extent to which people worry about COVID-19 and the extent to which they are willing to change their lives. It has become highly politicized, and that is very unfortunate.
BSJ: As new SARS-CoV-2 variants are spreading throughout the world, do you notice any trends in individual responses to these variants? In what ways do the variants affect the vaccine response and the goal of achieving herd immunity?

Wyman: A virus is always accumulating mutations, and that is completely normal. Sometimes these mutations are deleterious to us but beneficial for the virus. When we had a huge surge of the virus in December and January, it gave the virus the opportunity to accumulate these deleterious mutations, and then the resulting variants were able to take over some populations. For example, Alpha (B.1.1.7) has been in the U.S. for a while now, but in Northern California, it is rare compared to Southern California. Instead, we have a California variant, which now makes up more than 50% of the cases in Northern California and constitutes a large number of cases in Oregon and is now found in most states. For these variants, there are certain mutations that have been shown to have a negative functional impact, and those mutations have been arising independently in different locations. The E484K mutation spontaneously arose with Alpha, and now it has spontaneously arisen in some other variants as well. It is really important to be aware of what is out there and monitor the spread of new variants but not be panicked before having any functional evidence. I personally feel quite optimistic about where we are heading with vaccinations and with surveillance of the variants.
Right now, we have information about the variants that have been resistant to the vaccines, as well as how they react with convalescent plasma (to simulate if someone was reinfected). We need to continue to do extensive sequencing and sampling around the world, and in particular, continue monitoring and sequencing the cases of vaccine breakthrough. This will help us know what is out there and what kind of impact it is having so that vaccine manufacturers can adapt to the current situation and give us boosters or modify the vaccine.

Reingold: Our primary concern as human beings in a race with these viruses is figuring out which variants can, on average, make you sicker if you are getting infected. The second question is whether they are more readily transmitted from one person to another. The third, and in some ways the most important, is whether they pose a problem in terms of vaccine-induced immunity or even naturally acquired immunity to a different variant. That raises the question of whether people have to generate other variations of a vaccine and whether this will increase the need for booster doses. The program I run, the California Emerging Infections Program, is currently conducting surveillance across multiple Bay Area counties. We work with the state and county health departments and have received a large amount of additional funding to help answer these questions. Nationally, we are part of a network of about 10 or 15 sites, and we have all been given more money by the CDC in Atlanta to allow us to contribute to answering these questions, particularly in regards to mapping the variants and studying vaccine effectiveness against the various variants.
BSJ: In what ways has this pandemic called attention to the need for open access journals?
Bertozzi: The pandemic has done more to support preprints than it has open access journals. Overwhelmingly, new scientific advances with respect to COVID-19 have been posted on one of the preprint servers, and then moved from there into traditional publishing. However, even if they are moved into a journal, the preprints may be published by a journal that is not open access. So, the fact that the preprint is publicly available means that people have access to that information—even though it may not be the final version. Nevertheless, it is usually enough to create the kind of access that open access strives to achieve. Interestingly, this devalues what people are willing to pay for the published version in a subscription-based journal. So, I think it helps to drive the whole world in the direction of open access.
Schekman: This pandemic has really changed the public perception of science because even commercial journals, which are closed access and are available only by license agreements with institutions, have realized that COVID-19 research needed to be put out there for all to see. However, these journals only made available the research that was relevant to the pandemic; they did not do so for all of the other work that was published. As soon as this pandemic is over, the door is going to close again. A trend that happened even before the pandemic—which is a good thing—is that there was a lot of pressure for people to put their publications in a public archive once they were ready to be reviewed. Overall, preprint servers have allowed more eyes on every paper. The usual closed-review approach is as follows: the paper is submitted to a journal, it is assigned to several experts, and these experts read and offer comments—usually on specific mistakes or limitations. However, if this process is only done by three or four people, there could be things that are missed, even by experts. Now, when researchers post something on the preprint servers, they can have thousands of people reading it. As a result, flaws can be pointed out more effectively.
BSJ: What are your thoughts regarding the likely return to in-person instruction by fall? What kinds of precautions do you think need to be in place for in-person instruction to begin?

Swartzberg: First and most importantly, it is going to require a cooperative student body, along with our staff and faculty, following public health rules. Second, it is going to require careful monitoring: we are going to need to have robust testing of staff, faculty, and students, as well as the alacrity to act on that testing quickly with contact tracing. Third, we are going to need to have vaccine availability. My aspiration is that all of our students, faculty, and staff will be vaccinated. I really want to stress individual behavior. The vast, vast majority of the students on our campus behave really responsibly and do the right things. But when you have over 30,000 students, you are going to have a small percentage who do not. In sum, we have the tools to make it safe. This virus does not cause as much disease in younger people (although it certainly can). It causes most deaths in older people. However, the major driver of transmission is primarily young people. This creates an interesting conflict—we are asking people to tell themselves that they need to protect others from getting infected because they are driving the pandemic. There is not as much in it for them to do those things, and that can be problematic.

Reingold: A key issue is going to be vaccination. There is discussion about whether vaccination will be mandatory for students to come on campus. This is a system-wide discussion across ten campuses. That could only happen if the various vaccines get full approval from the FDA. At the moment, they only have emergency authorization. If they, as planned, get full authorization in June along with increased availability of vaccines, it is quite likely that students will not be allowed to come to campus if they have not been vaccinated. There is nothing unusual about doing that. I am sure there will be resistance. I am sure there will be people who object. I am sure there will be lawsuits, but mandatory vaccination will enhance safety and the ability to be together in a classroom.
BSJ: What lessons has the pandemic taught you, both in your personal and professional life? What are some important takeaways from the global response to COVID-19?

Bertozzi: There is so much we did wrong that it is difficult to point to the positive lessons learned. One specific lesson is that for each type of pathogen, the international response needs to be differentiated by the kind of transmission with which it is associated—for instance, respiratory, blood-borne, or sexually transmitted. Another major takeaway is that PPE can and needs to be stockpiled for the next pandemic. Since respiratory pathogens spread the most quickly, we need to have the appropriate PPE in massive quantities so that they can be deployed immediately. We need to invest in advancing production capacity for the tools that we know we will need the next time we have a problem, whether it is a new variant of this virus or a new pandemic. That means we must improve our ability to quickly diagnose disease and develop monoclonal antibodies that can fight or prevent infection. Ideally, we should also invest in monoclonal antibody technology, which will enable treatments to be just a squirt in the nose rather than an infusion, simplifying the treatment modality by a lot. Thirdly, not only do we need to invest in vaccine development, but we should also develop a consolidated, coordinated, preexisting understanding of how we will rapidly scale up manufacturing as soon as a vaccine is available. Further, we should have predetermined financing available. This time, we had to start looking for financing when the pandemic hit, even though we should have had an immediate infusion of money to prepare manufacturing capacity for vaccines, drugs, and diagnostics. Finally, we need to think about our public health infrastructure and how that can be rapidly activated.

Conklin: I think a pandemic is one of those events that really makes you most aware that we are one people living on one planet and that what happens in some other part of the world is going to affect you, even if it is far away. The countries that were most prepared were the ones that actually had SARS-CoV-1 outbreaks and knew how to handle this exact kind of virus with targeted public health measures. Our own public health system, in particular the CDC, was remarkable in how it dropped the ball. By suppressing testing and targetedly dismantling the public health system, their errors really had a huge effect on our country and our economy. However, I am hopeful that we can come out of this with a greater sense of urgency and responsibility towards global public health measures, especially for infectious disease. This is also the first time that mRNA has been used really as a therapeutic in any kind of large scale, and it is a huge addition to the biomedical armamentarium. People are now thinking of other applications for mRNA technology, like cancer immunotherapies and CRISPR systems. The power of this technology is something that can really accelerate our approach, biomedically speaking and beyond.
McCoy: We are learning just at the university level that we can pivot really fast, and that students and faculty are very resilient. The fact that we all went remote within weeks’ notice is incredible. I am the program lead for the online epidemiology and biostatistics program, and one thing that has been a silver lining for us is just that people are becoming increasingly interested and respectful of remote education, which can bring education to many more vulnerable populations who might be left out of the formal university setting but can attend from other places.

This interview, which consists of multiple conversations with each expert, has been edited for brevity and clarity.
IMAGE REFERENCES
1. Bertozzi headshot: [Photograph of Stefano Bertozzi]. School of Public Health, University of California, Berkeley. https://publichealth.berkeley.edu/people/stefano-bertozzi/. Reprinted with permission.
2. Conklin headshot: [Photograph of Bruce Conklin]. Gladstone Institutes. https://gladstone.org/people/bruce-conklin. Reprinted with permission.
3. McCoy headshot: [Photograph of Sandra McCoy]. School of Public Health, University of California, Berkeley. https://publichealth.berkeley.edu/people/sandra-mccoy/. Reprinted with permission.
4. Reingold headshot: [Photograph of Arthur Reingold]. School of Public Health, University of California, Berkeley. https://publichealth.berkeley.edu/people/arthur-reingold/. Reprinted with permission.
5. Schaletzky headshot: [Photograph of Julia Schaletzky]. Henry Wheeler Center for Emerging and Neglected Diseases, University of California, Berkeley. http://cend.globalhealth.berkeley.edu/julia-schaletzky-phd/. Reprinted with permission.
6. Schekman headshot: [Photograph of Randy Schekman]. Department of Molecular and Cell Biology, University of California, Berkeley. https://mcb.berkeley.edu/faculty/all/schekmanr. Reprinted with permission.
7. Swartzberg headshot: [Photograph of John Swartzberg]. School of Public Health, University of California, Berkeley. https://publichealth.berkeley.edu/people/john-swartzberg/. Reprinted with permission.
8. Wyman headshot: [Photograph of Stacia Wyman]. Stacia Wyman. Reprinted with permission.
The Evolution of Beauty
BY LILIAN ELOYAN
The thought of natural selection, a struggle for existence whereby organisms better adapted to their environment survive to pass on their traits to offspring and less fit organisms die out, may seem grim. However, as Charles Darwin states in the last line of his book The Origin of Species, “There is grandeur in this view of life.”1 It is this fierce fight for survival that has driven organisms to incredibly complex and oftentimes beautiful evolutionary adaptation. The evolution of beauty has especially blossomed in flowering plants, or angiosperms, since the first flower recognized by paleobotanists bloomed 125 million years ago.2 From that moment on, the coevolution of plants and their pollinating counterparts has become one of the most intimate symbiotic relationships.
FLOWERING PLANTS

Today there are roughly 300,000 species of flowering plants, yet they are a relatively new addition to life on Earth.2 Plants took over land 450 million years ago, but for 300 million years flowers did not have a place in the sun. Just before the emergence of flowers, during the Jurassic period, dinosaurs lived alongside a world dominated by gymnosperms—non-flowering plants that bear cones, such as conifers—which used the natural movement of the wind to spread their pollen across the landscape.2 However, this method is costly to the plant and does not guarantee pollination. Angiosperms, on the other hand, rely on biotic pollination, or pollination by animals, which oftentimes ensures that the male pollen, located on a flower’s stamen, will reach the female pistil of another flower.3 As a pollinator searches for a strategically located reward, pollen gets attached to its body and is conveniently transported to the next flower as the animal continues to search for food.4 Biotic pollination was a game changer, as the simultaneous evolution of floral beauty and pollinators led to flowering plants now making up 80% of the plant kingdom.5

Pollinators know a reward awaits them because of the appealing features of flowering plants that humans and animals alike admire, called attractants. Just as birds of paradise flaunt their decorative feathers in hopes of attracting a mate, flowers that stand out against the green backdrops of their environments and release enticing fragrances are not only easier to find and to pollinate, but also signal to animals that they have more to offer than beauty. These rewards come most commonly in the form of either nutritious pollen or sweet nectar.6 Among the earliest insects to reap these rewards were carnivorous wasps, which swapped out the meat in their diets for nutrient-rich pollen from magnolia trees.7 Over time these wasps found that pollen foraging was more energy efficient than hunting for a meal and eventually evolved into what we now call bees.7 This connection is now called a mutualistic relationship, meaning both organisms benefit from the interaction; today, bees pollinate two-thirds of the crops grown in the U.S.8 But bees aren’t attracted to just any flower.
WHAT MAKES A FLOWER BEAUTIFUL

Insects pollinate about 65% of flowering plant species, so attractants play to their pollinators’ strongest senses. But beauty is in the eye of the beholder, as each organism prefers a different kind of flower.8 The visual spectrum of bees, for example, is shifted toward shorter wavelengths of light relative to human sight, allowing them to see ultraviolet light.9
For this reason, bees are unable to see the color red, which appears black to them.9 Consequently, bees are attracted to flowers that are often blue, yellow, or ultraviolet, even though humans might not be able to appreciate their beauty.9 Moths also pollinate several flower species but are most active at dusk. Therefore, plants that have co-evolved with moths tend to have lighter-colored, or white, flowers that are visible under low-light conditions and produce strong, easily detectable scents.9 Unlike bee-pollinated flowers, moth flowers generally do not have a landing platform, since moths remain in flight as they feed on nectar, and their nectar tubes are generally narrow, in keeping with the narrow proboscis of moths.9 Butterflies, on the other hand, prefer brighter colors and sweeter scents, as they have exceptional color vision and powerful olfactory capabilities, making them the most successful of the biotic pollinators.9 Birds also have outstanding visual abilities and can better sense warmer shades of the color spectrum, but they have a poor olfactory sense, so bird-pollinated flowers are typically red and odorless.9 In addition, birds have high metabolic demands compared to insect pollinators, so bird flowers produce relatively large amounts of nectar as a reward and must have thick, rigid parts to withstand probing by strong beaks.9 These mutualisms are sometimes so specialized that they function as a lock and key, with only one pollinator able to access a flower’s unique reward.
COEVOLUTION

Darwin used symbiotic relationships between organisms as evidence for evolution, as they exemplified a key component of evolutionary theory: predictability. Darwin and his colleagues were perplexed by a Madagascan species of star orchid that has a narrow 16-inch-long nectar tube.4
Darwin predicted that the star orchid must have a complementary counterpart that would be able to reach its nectar.4 Sure enough, 20 years later, the hawk moth, with a 16-inch-long proboscis, was spotted on the island and named Xanthopan morganii praedicta, which translates from Latin to “predicted moth.”4

Other flowering plants have evolved more deceptive modes of pollination. Their relationships with their pollinators are sometimes commensal, meaning only one member of the partnership benefits, while the other is largely unaffected. Ophrys apifera, also known as the bee orchid, has striking yellow and black colored petals that are shaped like a female bee.10 As male bees attempt copulation, they inadvertently gather pollen on their backs and carry it to the next flower in a futile mating attempt.10 Other examples of deceit pollination, however, aren’t as pretty. Rafflesia, which produces the largest flower in the world, mimics the look and scent of rotting flesh to encourage flies to lay their eggs on the plant and unwittingly carry pollen between flowers; the larvae, upon hatching, starve, as they need animal flesh to survive.11

Beyond these floral oddities, the necessity of floral beauty to attract pollinators is a happy side effect of evolution when viewed through a human lens. Although the world around us can be a hostile and competitive place, we’re lucky that beauty has evolved to become a dominant element in the natural world and that we humans are not immune to these charming adaptations. With all the selective pressures that exist to define beauty in the natural world, it is important to realize that as humans, our definition of beauty remains similarly in a perpetual state of evolution; while one person smells the roses, another may see nothing but thorns. The coevolution between flowering plants and their pollinators has designed a world of incredible diversity that makes life on Earth so beautiful today.

Acknowledgements: I would like to acknowledge my wonderful editors, Shivali Baveja and Nicholas Nolan, of the Berkeley Scientific Journal features department, for their indispensable contribution to the writing process.
Figure 1: The bee orchid (Ophrys apifera) uses sexual deception to get pollinated by mimicking the look of a female bee.
REFERENCES
1. Darwin, C. (1859). The origin of species. Vintage.
2. Pennisi, E. (2009). On the origin of flowering plants. Science, 324(5923), 28–31. https://doi.org/10.1126/science.324.5923.28
3. Walker, T. (2020). Animals. In Pollination: The Enduring Relationship between Plant and Pollinator (pp. 66–95). Princeton University Press. https://doi.org/10.2307/j.ctv10tq6j3.6
4. Zimmer, C. (2006). Evolution: The triumph of an idea. Harper Perennial.
5. Barras, C. (2014, October 10). The abominable mystery: How flowers conquered the world. BBC Earth. http://www.bbc.com/earth/story/20141017-how-flowers-conquered-the-world
6. Walker, T. (2020). Rewards. In Pollination: The Enduring Relationship between Plant and Pollinator (pp. 126–153). Princeton University Press. https://doi.org/10.2307/j.ctv10tq6j3.8
7. Goulson, D. (2014, April 25). The beguiling history of bees [Excerpt]. Scientific American. https://www.scientificamerican.com/article/the-beguiling-history-of-bees-excerpt/
8. Kearns, C., & Inouye, D. (1997). Pollinators, flowering plants, and conservation biology. BioScience, 47(5), 297–307. https://doi.org/10.2307/1313191
9. Walker, T. (2020). Attraction. In Pollination: The Enduring Relationship between Plant and Pollinator (pp. 96–125). Princeton University Press. https://doi.org/10.2307/j.ctv10tq6j3.7
10. Fenster, C. B., & Martén-Rodríguez, S. (2007). Reproductive assurance and the evolution of pollination specialization. International Journal of Plant Sciences, 168(2), 215–228. https://doi.org/10.1086/509647
11. Beaman, R., Decker, P., & Beaman, J. (1988). Pollination of Rafflesia (Rafflesiaceae). American Journal of Botany, 75(8), 1148–1162. https://doi.org/10.2307/2444098
IMAGE REFERENCES
1. Banner: Image made by BSJ.
2. Figure 1: Dupont, B. (2014). Bee Orchid (Ophrys apifera) (14374841786) - cropped [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Bee_Orchid_(Ophrys_apifera)_(14374841786)_-_cropped.jpg
Seeking Serendipity
BY EMILY PEARLMAN
When Dr. Peter Walter was approached by one of his graduate students with an unusual finding, a shorter-than-expected mRNA molecule, he brushed the student aside, telling him to “go back and repeat the experiment, and try not to degrade your mRNA this time.” When it became clear that this peculiar result was not erroneous, however, he paused. “Let’s try to make some sense of this,” he said.

It all started with a simple question: How do different parts of the cell communicate with each other? When unfolded proteins build up in the endoplasmic reticulum (ER)—the cell’s protein folding factory—the ER sends out an SOS signal to the nucleus. Like any good mission control center, the nucleus responds by expressing rescue genes that help the ER mitigate its stress. Walter and his graduate students wanted to elucidate the molecular details of this signaling pathway, known as the unfolded protein response (UPR).1 What they found was entirely novel and unexpected. The buildup of unfolded proteins in the ER activates a pair of molecular scissors, which snips a small segment of genetic material out of the middle of a particular piece of mRNA. With this segment missing, the mRNA can be translated into a protein that binds to DNA and activates expression of ER stress-response genes.1 This was the first example of an mRNA splicing reaction serving as a molecular switch; the process, it turns out, is conserved across eukaryotes from single-celled yeast to humans. It breaks all of the conventions of mRNA splicing, which normally takes place in the nucleus and requires dozens of different enzymes. “Nothing made sense,” Walter tells me. “It was a beautiful moment.”

Figure 1: An mRNA splicing reaction activates the unfolded protein response. When IRE1 senses a buildup of unfolded proteins in the endoplasmic reticulum, it cuts XBP1 mRNA, removing an intron. The resulting spliced mRNA is translated into the XBP1 transcription factor, which activates expression of stress-response genes. Image licensed under CC BY-NC-ND 4.0.
Figure 2: Serendipity lies at the intersection of luck, aim, and preparedness, while conventional scientific approaches often dismiss the role of luck in discovery. Image adapted from source.

The history of science is filled with these beautiful moments. In 1928, Alexander Fleming discovered penicillin, the first antibiotic, when he noticed patches devoid of bacteria on a petri dish contaminated with mold.2 In 1943, Albert Hofmann inadvertently discovered the psychedelic properties of LSD when he absorbed some of the compound through his ungloved fingertip; three days later, he intentionally ingested LSD and rode his bicycle home from the lab while experiencing wild hallucinations.3 In 1963, Arno Penzias and Robert Wilson were measuring radio signals from nearby galaxies when they noticed a strange background noise in their data. The noise wasn’t due to pigeon droppings, as they initially expected, but was actually the cosmic microwave background radiation, a key piece of evidence for the Big Bang Theory.4 The list goes on and on. In fact, when you look closely, nearly all discoveries involve an element of serendipity. This notion of unexpected and beneficial discoveries has long fascinated scientists and philosophers alike.5 Serendipity is more than simple luck or accident; it requires the ability to recognize when unexpected results are significant and the flexible mindset to pursue these unexpected results further.6 Louis Pasteur put it perfectly in an 1854 lecture: “In the fields of observation, chance favours only the prepared mind.”2 Scientific discovery thrives here, at the intersection between chance and wisdom.

Walter’s investigation of the UPR doesn’t end here. After deciphering how the ER communicates with the nucleus, he began searching for molecules that stopped that chatter.
Aberrant activation of the UPR throws cells into a state of chaos, so it’s associated with many diseases; inhibitors of the UPR are promising candidates for therapeutics. Starting with a pool of hundreds of thousands of molecules, he subjected them to a series of tests, weeding out unlikely candidates at every step. At the end of Walter’s screen, only one molecule remained: ISRIB, or integrated stress response inhibitor. Intimately related to the UPR, the integrated stress response alters protein expression in response to intra- or extracellular stress.
This screening approach, which Walter describes as “disease agnostic,” is unconventional in that he began without the intent of curing a specific disease. The current paradigm of grant allocation can make it difficult to receive funding for this type of untargeted biological research. More often than not, grants drive researchers to address particular problems, a process that discourages flexibility and makes it hard to change directions in light of unexpected developments, stifling serendipity.7 Furthermore, funding agencies are more likely to support projects where successful outcomes can be predicted with reasonable certainty, in order to maximize their returns.8 This is not to say that all research should be curiosity-driven—unexpected findings can arise from targeted studies as well—but curiosity-driven research should be given a chance.4

After discovering ISRIB, Walter’s next move was to call up his colleague, Nahum Sonenberg, to see if he had any materials that would be useful for studying ISRIB’s effects on protein translation. Though the contents of his lab freezer didn’t wind up being useful, during their conversation, Sonenberg happened to mention an interesting project related to protein translation in the brain that his postdoc, Mauro Costa-Mattioli, was working on. Coincidentally, the very step of translation that ISRIB acts on is involved in long-term memory formation. This connection set off a cascade of scientific discovery; subsequent work by Walter’s research team and others revealed ISRIB’s remarkable capacity for cognitive enhancement. ISRIB’s mechanism of action is not yet entirely clear, but it likely reverses the global decrease in protein translation caused by aberrant activation of the ISR (which is implicated in some neurological diseases), restoring long-term memory formation and normal cognitive function.9
Figure 3: Molecular structure of the integrated stress response inhibitor (ISRIB). Dr. Walter came across ISRIB in his screen for small molecule inhibitors of the unfolded protein response (UPR). ISRIB interferes with the integrated stress response, restoring normal protein production to the cell, and shows potential to treat a broad spectrum of diseases.

If not for this phone call, ISRIB’s cognitive enhancement properties may have gone undiscovered, or at least unrecognized for many years. “It changed the spectrum of what we could do with ISRIB entirely,” says Walter. Serendipity is thus critical not only at the individual level, but also at the community level. External features of a researcher’s environment, including their network of colleagues, influence their aptitude for recognizing and following up on serendipitous findings. Serendipity is enhanced when researchers with different expertises and perspectives are brought together in a collaborative setting; colleagues from different disciplines may have previously unrecognized insight or alternative interpretations of unexpected results.2 Serendipity is also enhanced in settings where knowledge is regularly shared and results are published early.6

ISRIB human trials haven’t begun yet, but the results of mouse studies are nothing short of amazing: old mice, when treated with ISRIB, are able to solve difficult mazes significantly faster than their placebo-receiving counterparts.8 In addition to treating age-related cognitive decline, ISRIB has shown a good deal of promise for treating traumatic brain injuries.10 None of this was obvious when Walter started out. He was just trying to answer the simple question, “How does one part of the cell communicate with another?” His story certainly doesn’t end here. It’s just a matter of seeing where the meandering path of discovery leads him next. He reminds us that scientists are explorers, forging ahead into the universe of the unknown, following the guiding star of serendipity.

Acknowledgements: I would like to acknowledge postdoctoral fellow Dr. Rosalie Lawrence (UCSF) for her detailed and thoughtful feedback.

REFERENCES
1. Walter, P. (2010). Walking along the serendipitous path of discovery. Molecular Biology of the Cell, 21(1), 15–17. https://doi.org/10.1091/mbc.e09-08-0662
2. Rosenman, M. F. (1988). Serendipity and scientific discovery. The Journal of Creative Behavior, 22, 132–138. https://doi.org/10.1002/j.2162-6057.1988.tb00674.x
3. Hofmann, A. (1979). How LSD originated. Journal of Psychedelic Drugs, 11(1–2), 53–60. https://doi.org/10.1080/02791072.1979.10472092
4. Oliver, K. (2019). “The lucky start toward today’s cosmology”? Serendipity, the “Big Bang” theory, and the science of radio noise in Cold War America. Historical Studies in the Natural Sciences, 49(2), 151–193. https://doi.org/10.1525/hsns.2019.49.2.151
5. Yaqub, O. (2018). Serendipity: Towards a taxonomy and a theory. Research Policy, 47(1), 169–179. https://doi.org/10.1016/j.respol.2017.10.007
6. Copeland, S. (2019). On serendipity in science: Discovery at the intersection of chance and wisdom. Synthese, 196(6), 2385–2406. https://doi.org/10.1007/s11229-017-1544-3
7. Gillies, D. (2015). Serendipity and chance in scientific discovery: Policy implications for global society. In D. Archibugi & A. Filippetti (Eds.), The Handbook of Global Science, Technology, and Innovation (pp. 525–539). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118739044.ch25
8. Fabian, A. C. (2009). Serendipity in astronomy. ArXiv. http://arxiv.org/abs/0908.2784
9. Krukowski, K., Nolan, A., Frias, E. S., Boone, M., Ureta, G., Grue, K., Paladini, M.-S., Elizarraras, E., Delgado, L., Bernales, S., Walter, P., & Rosi, S. (2020). Small molecule cognitive enhancer reverses age-related memory decline in mice. eLife, 9, Article e62048. https://doi.org/10.7554/eLife.62048
10. Chou, A., Krukowski, K., Jopson, T., Zhu, P. J., Costa-Mattioli, M., Walter, P., & Rosi, S. (2017). Inhibition of the integrated stress response reverses cognitive deficits after traumatic brain injury. Proceedings of the National Academy of Sciences, 114(31), E6420–E6426. https://doi.org/10.1073/pnas.1707661114

IMAGE REFERENCES
1. Banner: Yami89. GreenCircleFractal [Digital image]. Wikimedia Commons. https://fr.wikipedia.org/wiki/Fichier:GreenCircleFractal.png. Licensed under CC BY-SA 3.0.
2. Figure 1: Peschek, J., Acosta-Alvear, D., Mendez, A., & Walter, P. (2015). A conformational RNA zipper promotes intron ejection during non-conventional XBP1 mRNA splicing. EMBO Reports, 16, 1688–1698. https://doi.org/10.15252/embr.201540955
3. Figure 2: See reference 8.
4. Figure 3: Image made by author.
TESTING THE THEORY OF GENERAL RELATIVITY AT THE GALACTIC CENTER
INTERVIEW WITH DR. REINHARD GENZEL
BY SABRINA WU, YI ZHU, AND ELETTRA PREOSTI

Reinhard Genzel, PhD, is a German astrophysicist, codirector of the Max Planck Institute for Extraterrestrial Physics, an honorary professor at the Ludwig Maximilian University of Munich, and an emeritus professor at the University of California, Berkeley. He also sits on the selection committee for the Shaw Prize in Astronomy. Dr. Genzel has received numerous honors and awards, including the Herschel Award and the Harvey Prize. Most notably, in 2020, he was awarded the Nobel Prize for the discovery of a supermassive compact object at the center of our galaxy. In this interview, we discuss Dr. Genzel’s decades-long work developing ground and space instruments to study physical processes at the center of galaxies that led to his groundbreaking discovery.
BSJ: What were some of your early, memorable experiences that led you to work in “extraterrestrial physics”?

RG: My father was a physicist. In particular, he was an experimentalist working in solid-state physics, so I essentially grew up in physics. Why extraterrestrial physics? After high school, I began to explore a number of fields when I discovered that there was a new Max Planck Institute in Bonn conducting research in radio astronomy. At the time, they had the largest single dish radio telescope, almost one hundred meters in diameter, in the world. That seemed like a terrific opportunity!
BSJ: What is the theory of general relativity?

RG: In his theoretical work on special relativity, Einstein started to take into consideration a very important experiment, called the Michelson-Morley experiment, which demonstrated that there is a speed limit to communication. We can only communicate at the speed of light or less. Using this limit, Einstein developed a theory that explained how to communicate between reference frames moving at high speeds relative to each other. This is known as the theory of special relativity. Einstein’s next step was to take into account gravity, as he knew that gravitational fields affect the motions of photons (i.e., particles of light). It was very clear to Einstein that light is bent by masses in its vicinity and that light loses energy when escaping from regions of mass. Einstein then put these discoveries into a proper theory known as the theory of general relativity. Overall, general relativity takes into account both the consequences of special relativity as well as the effects of mass on spacetime.
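(A numerical aside from us, not from Dr. Genzel: the energy loss he describes is the gravitational redshift. In the weak-field limit, light escaping from the surface of a body of mass \(M\) and radius \(R\) is redshifted by
\[ z \approx \frac{GM}{Rc^{2}}, \]
and plugging in the Sun's mass and radius gives \(z \approx 2\times10^{-6}\): light leaving the Sun arrives stretched by about two parts per million.)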
Figure 1: Turning the Earth into a black hole.
BSJ
: What is a black hole?
RG
: We can determine from Newtonian mechanics that the escape speed of a rocket launched from the Earth’s surface is approximately eleven kilometers per second. Next, if we take the Earth and shrink it to about one centimeter in diameter without changing its mass, the escape speed becomes the speed of light. If we shrink the Earth even further, the escape speed would exceed the speed of light. However, no object can travel faster than the speed of light, so this means that neither the rocket, nor anything else—not even light—can escape from this shrunken Earth. Thus, we have transformed Earth into a superdense object called a black hole.
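For readers who want to check the arithmetic, this thought experiment follows from the Newtonian escape-speed formula, and setting the escape speed equal to the speed of light gives the Schwarzschild radius. A minimal Python sketch, using standard values of the constants:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.972e24  # mass of the Earth, kg
R_earth = 6.371e6   # radius of the Earth, m

# Escape speed from the Earth's surface: about 11 km/s.
v_esc = math.sqrt(2 * G * M_earth / R_earth)

# Radius at which the escape speed equals c (the Schwarzschild radius):
# about 9 mm, i.e., roughly the centimeter scale quoted above.
r_s = 2 * G * M_earth / c**2

print(f"escape speed: {v_esc / 1e3:.1f} km/s")
print(f"Schwarzschild radius of Earth: {r_s * 1e3:.1f} mm")
```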
BSJ
: How are black holes created?
RG
: In order to understand how black holes are created, we must first examine stellar black holes. At some point, stars much more massive than the Sun run out of fuel, allowing gravity to pull the stars’ material inward. Since there is no longer any radiation pressure from the star to resist the pull of gravity, the star collapses inwards on itself, resulting in a supernova—a powerful and luminous stellar explosion. What remains becomes a stellar black hole.
BSJ
: How can we prove the existence of a black hole?
RG
: In our solar system, the Sun, which is hundreds of times more massive than all of the planets, lies at the center. Then, by Kepler’s laws, we know that all of the planets in the solar system orbit around the Sun in ellipses. Let us now do a gedanken
experiment—a thought experiment—and switch off the light of the Sun. What happens to the planets? They will still orbit around the Sun because their motion has nothing to do with the radiation from the Sun, only the force of gravity. That is how we can make visible something which is not: we look at the effect it has on its environment.
BSJ
: Why did researchers theorize the existence of a supermassive black hole at the center of our galaxy?
RG
: Another species of black hole which we now know to exist is the supermassive black hole. They were first discovered in 1963, when Maarten Schmidt, a professor at Caltech, used the two-hundred-inch telescope at Palomar Observatory to observe a “star” and found that its optical spectrum consisted of emission lines (from hydrogen, helium, neon, etc.) that were all redshifted by sixteen percent. He then realized that if the redshift of these emission lines was caused by the expansion of the universe, this little “star” must actually be over two billion light years away. And if he could see this “star” when it is so far away from him, then what looked like a faint star must actually be very luminous. In fact, it must be about two thousand times more luminous than the entire Milky Way Galaxy. We call this “star” a quasar, an object that emits large amounts of energy. The question then becomes: how can we perform the same thought experiment we used to prove the existence of black holes on quasars? We cannot; quasars are too far away! More recently, inspired by the discovery of quasars, theoretical physicists from the UK began to propose that every galaxy contains a black hole. That is in fact what we now believe: every galaxy has a massive black hole, and it is just during certain phases of activity that these black holes show up in such spectacular ways as quasars. That would also mean that there is a black hole at the center of our very own Milky Way.
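As a rough check on these numbers: treating the sixteen-percent redshift as a Hubble-flow recession speed (ignoring relativistic and cosmological corrections, and assuming a Hubble constant of about 70 km/s/Mpc) does put the quasar a couple of billion light years away:

```python
c = 2.998e5   # speed of light, km/s
H0 = 70.0     # assumed Hubble constant, km/s/Mpc
z = 0.16      # redshift quoted above

v = z * c                       # naive recession speed, km/s
d_mpc = v / H0                  # Hubble-law distance, Mpc
d_gly = d_mpc * 3.262e6 / 1e9   # Mpc -> light years -> billions of ly
print(f"{d_gly:.1f} billion light years")  # about 2.2
```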
BSJ
: How, then, did you try to show the existence of the black hole Sagittarius A* at the center of the Milky Way?
RG
: Initially, we thought that we would be able to detect some sort of motion at the Galactic Center, or the rotational center of the Milky Way Galaxy, to indicate the presence of a supermassive black hole just through observation. But we faced a major obstacle: it is not possible to optically observe the Galactic Center. This is because there exists an immense amount of dust in the interstellar space between the Earth and the Galactic Center, twenty-seven thousand light years away, which completely extinguishes the visible radiation emitted by the Galactic Center. To overcome this challenge, we realized that while we may not be able to observe any motion using visible light, the infrared and radio waves emitted near the Galactic Center can penetrate the dust and thus be detected. That is exactly what Charles Townes did. In collaboration with his research group, which included myself as a postdoc, we started building infrared spectrometers to measure the motions of gas clouds near where we thought the Galactic Center was located. Sure enough, the gas clouds we observed moved at enormous speeds, about one hundred times faster than the speed of the Earth. Now, since the Galactic Center is much closer to the Earth than the quasar discovered by Schmidt, we can use Kepler’s laws to find that there must be a concentration of a few million solar masses at the Galactic Center. This could be a supermassive black hole!
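The enclosed-mass estimate works like a Kepler problem: for a cloud on a roughly circular orbit, the mass inside its orbit is M ≈ v²r/G. The speed and radius below are illustrative stand-ins, not the measured values, but they show how “gas moving fast near the center” translates into millions of solar masses:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30  # solar mass, kg
pc = 3.086e16     # meters per parsec

v = 200e3         # assumed cloud speed, m/s (illustrative)
r = 0.5 * pc      # assumed orbital radius (illustrative)

M = v**2 * r / G  # enclosed mass for a circular Keplerian orbit
print(f"enclosed mass: {M / M_sun:.1e} solar masses")  # a few million
```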
BSJ
: What kind of new technologies have improved our observations of the Galactic Center where the supermassive black hole is?
RG
: Gas is not a perfect tracer of gravity, as it can be influenced by many other factors such as magnetic fields or stellar winds, so we instead have to observe the behavior of stars. In fact, by measuring the positions of nearby stars in the sky and tracking how they change over time, we can actually derive more information about a possible supermassive black hole at the Galactic Center. So, how then are we able to obtain these measurements? To accomplish this, we need very sharp images, as we are looking at motions on the scale of milliarcseconds (2.78 × 10⁻⁷ degrees) per year. That is not something a normal telescope can help us observe. In fact, because Earth’s atmosphere distorts starlight, not even a really big telescope would be able to help us in observing these tiny motions. So, to remedy this, we initially took very fast snapshots of the stars to freeze the distortion caused by the turbulent atmosphere. Afterwards, using a computer, we added these images together to make them as sharp as possible. We then used what is called “adaptive optics,” a method used to measure and correct for distortions in the atmosphere by utilizing a deformable mirror before even taking an image. Adaptive optics is able to improve the resolution of images by factors ranging from ten to twenty.
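The “fast snapshots added together” step is the shift-and-add idea from speckle imaging. A toy NumPy sketch of it, not the actual pipeline: re-center each short exposure on its brightest pixel, then average, so the atmospheric jitter cancels instead of smearing the star.

```python
import numpy as np

def shift_and_add(frames: np.ndarray) -> np.ndarray:
    """Re-center each frame on its brightest pixel, then average."""
    h, w = frames.shape[1:]
    stacked = np.zeros((h, w))
    for frame in frames:
        peak_r, peak_c = np.unravel_index(np.argmax(frame), frame.shape)
        # Roll the peak to the image center before accumulating.
        stacked += np.roll(np.roll(frame, h // 2 - peak_r, axis=0),
                           w // 2 - peak_c, axis=1)
    return stacked / len(frames)

# Simulated short exposures: one star, jittered by the "atmosphere".
rng = np.random.default_rng(1)
frames = np.zeros((100, 64, 64))
for frame in frames:
    r, c = 32 + rng.integers(-5, 6), 32 + rng.integers(-5, 6)
    frame[r, c] = 1.0

sharp = shift_and_add(frames)
print(sharp.max())  # close to 1.0: the jitter has been removed
```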
BSJ
: With a more precise understanding of the stars’ motion at the Galactic Center, your group was able to use a technique called orbit fitting to constrain the mass of Sagittarius A*. Can you describe the process of orbit fitting?
Figure 2: A laser guide star (1) reveals distortions in the atmosphere. The distorted wavefronts are corrected by a deformable mirror (2) so that the image of the star (3) is clear. Image copyright © The Royal Swedish Academy of Sciences. Reprinted under fair use.2
RG
: We know that, by Kepler’s laws, if we have a dominant mass, then the stable orbits of nearby bound test particles, such as stars or gas particles, will be elliptical. So, first we collected data on the velocities and separations of objects close to the Galactic Center in order to determine the curvature in their motions. Next, we used statistical techniques to fit Kepler ellipses to the curvatures that we had calculated. We were then able to directly use these orbit fits in order to determine the mass of the nearby black hole.
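A stripped-down version of the “fit an ellipse to measured positions” step can be done with ordinary linear least squares on the general conic equation ax² + bxy + cy² + dx + ey = 1. The data and code below are illustrative only, not the group’s pipeline; the real fits work with Keplerian orbital elements and proper error models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "astrometric" positions tracing part of an ellipse, with a little
# noise (illustrative units, not real S2 data).
t = np.linspace(0.0, 4.0, 40)
x = 2.0 * np.cos(t) + 0.5 + rng.normal(0.0, 0.01, t.size)
y = 1.0 * np.sin(t) - 0.3 + rng.normal(0.0, 0.01, t.size)

# Fit the general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
# by ordinary linear least squares.
A = np.column_stack([x**2, x * y, y**2, x, y])
a, b, c, d, e = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]

# For an ellipse, the conic discriminant b^2 - 4ac must be negative.
print("discriminant:", b**2 - 4 * a * c)
```

From a fitted orbit’s semi-major axis a and period P, Kepler’s third law, M = 4π²a³/(GP²), then gives the central mass directly.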
BSJ
: Why is the star S2 such a good candidate for orbit fitting?
RG
: When observing the movement of stars around black holes, we wanted to track a bright star that travels as close to the black hole as possible. The general wisdom at the time, however, was that we should not expect to have bright stars very close to a black hole. This is because near a black hole, interstellar gas clouds are pulled by the black hole’s tidal forces. As a result, they are not able to collapse under their own gravity to form bright stars. But from pictures taken in the 1990s, we already had evidence that there are bright stars at angular distances of approximately half an arcsecond away from the Galactic Center. Moreover, when we actually took measurements using the spectral technique, we were able to find a number of stars near the Galactic Center that are actually quite bright. There seemed to be a trick that nature was playing to bring bright stars next to black holes. In fact, we now know that binary stars formed way outside of the Galactic Center can, by chance, move inwards towards the Galactic Center where the supermassive black hole Sagittarius A*
lies. Once close enough, the binding energy between the two stars becomes smaller than the kinetic energy from their motion relative to the massive black hole, meaning that the stars no longer want to stay bound together. When this happens, only one of the stars remains near the black hole while the other one gets flung out of the Milky Way. We believe that, using this mechanism, S2 has been able to remain near the black hole while its companion was lost. At perigee (closest approach), S2 comes within about 14 milliarcseconds of Sagittarius A*.
BSJ
: You use both a Newtonian orbit fit and a relativistic fit in your calculations. Can you explain the difference between the two?
RG
: There are various ways of approaching relativistic orbit fitting, one of which is using parameterized post-Newtonian approximations. Essentially, the post-Newtonian approximation method performs orbit fits by adding more and more terms, on the order of (v/c)², to the model of the orbit shape without the need to solve the Einstein field equations. The first term that is added represents the Roemer effect, which accounts for the finite speed of light. The next term accounts for the gravitational redshift, the reddening of light due to the fact that clocks tick slower near big masses. The third term represents the Schwarzschild precession, which describes the slow rotation, within the orbital plane, of the ellipses traced by orbiting stars. The final term accounts for the precession of stars about the spin of a black hole. That is, if a black hole has a spin and the orbit of the stars around the black hole makes an angle relative to the spin, then over time, the orbit of the star starts to wobble—to precess about the spin of the black hole. On the other hand, the relativistic fit method directly solves the field equations numerically using a computer. In the relativistic method, we formulate the fitting problem as a Newtonian fitting problem and add a fudge factor multiplied by the aforementioned terms (Roemer effect, gravitational redshift, etc.). We then perform the fit using the computer and solve for the fudge factor. If Einstein’s theory of general relativity is correct, the fudge factor will be one. Otherwise, the factor will deviate from one.
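To make the Schwarzschild precession term concrete: the leading-order advance of the point of closest approach per orbit is given below, and plugging in round-number values for S2 (approximate, for illustration only) yields roughly twelve arcminutes per sixteen-year orbit. The GRAVITY measurement of this effect found the corresponding scale factor consistent with one [1].

```latex
\Delta\phi \;=\; \frac{6\pi G M}{c^{2}\, a\,(1-e^{2})}
\;\approx\; \frac{6\pi\, G\,(4\times10^{6}\,M_{\odot})}{c^{2}\,(1000\,\mathrm{AU})\,(1-0.88^{2})}
\;\approx\; 3.3\times10^{-3}\ \mathrm{rad}
\;\approx\; 11'\ \text{per orbit}
```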
BSJ
: The orbit of S2 has revealed the existence of a supermassive black hole at the galactic center. How else has S2 improved our understanding of general relativity?
RG
Figure 3: Observation of the orbits of 17 stars surrounding Sagittarius A* from data collected over a 25-year period. Image copyright © The American Astronomical Society. Reprinted under fair use.3
: At perigee, when S2 is closest to Sagittarius A*, the effects of general relativity are most visible. We know that S2 has an orbital period of sixteen years, so when we observed S2 at perigee in 2002, we began preparing for its return in 2018. Our goal was to do something which had never been done before: to use the Galactic Center as a laboratory to test general relativity. Indeed, we developed a completely new instrument called GRAVITY, which combines four eight-meter (in diameter) telescopes separated by as much as one hundred and thirty meters. GRAVITY has milliarcsecond imaging resolution, and we can measure positions to a precision of about ten to twenty microarcseconds. This advancement in precision enabled us to observe several effects of general relativity when S2 came back around in 2018. One particularly exciting result
is that we detected the Schwarzschild precession of the orbit of S2 [1].
Figure 4: Artist’s rendition of the Schwarzschild precession of S2’s orbit. Image licensed under CC BY 4.0.
BSJ
: What is the advantage of GRAVITY over adaptive optics?
RG
: Using adaptive optics, we are typically able to measure the position of a star at a precision of approximately 0.4 milliarcseconds. For example, let us say we have fifty statistically independent measurements. Then the accuracy to which we can measure apogee (the point in the orbit farthest from Sagittarius A*) should be 0.4 milliarcseconds divided by the square root of fifty, which is about 0.06 milliarcseconds. We also know that every time S2 returns to apogee, the apogee is moved by about 0.6 milliarcseconds. Since 0.06 is much less than 0.6, by using adaptive optics, we should have already been able to measure the Schwarzschild precession of S2 ten years ago. Why were we not able to? What had we done wrong? Well, the problem with adaptive optics is that we are not able to image Sagittarius A* directly—it is not visible in the optical band—so we can only see S1, S2, and the surrounding stars. Because the systematics of the telescope can change day-to-day due to slight variations in temperature and other environmental factors, it is difficult to obtain a stable reference for the location of Sagittarius A*. If we do not know the location of the Galactic Center, we have to acquire its value as a fit parameter, and that then destroys the available information to a considerable extent. GRAVITY, however, uses interferometric astrometry to pinpoint the locations of Sagittarius A* and S2 at the same time. With both objects
visible, finding their distance becomes simple triangulation and we can make full use of our data.
BSJ
: How does GRAVITY use interferometry to improve resolution?
RG
: The resolution of a telescope at a given wavelength is specified by the ratio of the wavelength of the light we are observing to the size of the telescope. In terms of interferometry, if we have two telescopes, instead of thinking of them as two individual telescopes, we can think of each telescope as the edge of an even bigger telescope. So, now, instead of four eight-meter telescopes, we have one large telescope that is one hundred and thirty meters across. This alone improves our resolution by a factor of about fifteen to twenty. Of course, if you have a complex image, such as a galaxy or a planet, it is not yet possible to image it well without dozens of telescopes. However, you can do very good astrometry on point sources such as S2.
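In numbers, and assuming observations near the 2.2 μm infrared K band (an illustrative choice on our part), the diffraction limit θ ≈ λ/D gives

```latex
\theta_{8\,\mathrm{m}} \approx \frac{2.2\ \mu\mathrm{m}}{8\ \mathrm{m}}
\approx 2.8\times10^{-7}\ \mathrm{rad} \approx 57\ \mathrm{mas},
\qquad
\theta_{130\,\mathrm{m}} \approx \frac{2.2\ \mu\mathrm{m}}{130\ \mathrm{m}}
\approx 1.7\times10^{-8}\ \mathrm{rad} \approx 3.5\ \mathrm{mas}
```

a gain of roughly a factor of sixteen, consistent with the fifteen-to-twenty range quoted above.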
BSJ
: What are the future plans for the GRAVITY instrument?
RG
: The GRAVITY instrument has been a major breakthrough, not only for understanding what is happening at the Galactic Center, but for astrophysics research in general. So far, GRAVITY has allowed us to resolve the broad line region around our
active galactic nucleus. We now hope to improve the sensitivity of GRAVITY by another factor of, hopefully, one hundred. This upgrade would make GRAVITY approximately one hundred thousand times more sensitive than interferometric telescopes designed even just a few years ago. Improving sensitivity by factors this large would be an absolute game changer. In fact, with GRAVITY, we have been able to obtain the best exoplanet spectra to date. In addition to improving our detectors, we are also looking to equip all four of our eight-meter telescopes with laser guide stars so that we can use these stars as reference points (Figure 2). This will allow us to look at faint, high-redshift quasars and take broad-line measurements for cosmology experiments for the first time. But these experiments take a long time. In my case, forty years have passed since I worked on the early stages of this experiment with Charles Townes, and the work is still ongoing. While it is fantastic to see all of this, it requires a lot of patience.
“I would say that the most rewarding part of my decades of work is how much we have learned. This route that my fellow researchers and I have taken, which has largely been made possible due to the generosity of the Max Planck Society, has been very long, risky, and ambitious.”
BSJ
: In your Nobel Lecture, you describe the work that you and your team did to prove the existence of a black hole at the Galactic Center as a forty-year journey. What has been the most rewarding part of this decades-long endeavor?
RG
: I would say that the most rewarding part of my decades of work is how much we have learned. This route that my fellow researchers and I have taken, which has largely been made possible due to the generosity of the Max Planck Society, has been very long, risky, and ambitious. Over time, we have been able to improve our work and enhance it, which is not always possible in science. I hope that this work will continue even after I retire. I now have the chance to continue my research for some time, not only exploring the Galactic Center, but also looking at time travel. In fact, most of my research is actually in time travel. That sounds crazy, but astronomers are actually time travelers in the sense that when we look through a telescope at far away objects in space, due to the finite speed of light, we are actually seeing the galaxy as it was billions of years ago.
BSJ
: What advice do you have for undergraduates still early in their career?
RG
: There is a German phrase that translates to ‘nothing comes from nothing.’ What that means is, if you do not work hard, it will be very unlikely that you will be successful. To be successful in research, you need to work hard, try hard, and be excited about your work. So, go ahead, pursue your interests, and have fun!
REFERENCES
1. Abuter, R., Amorim, A., Bauböck, M., Berger, J. P., Bonnet, H., Brandner, W., Cardoso, V., Clénet, Y., De Zeeuw, P. T., Dexter, J., Eckart, A., Eisenhauer, F., Förster Schreiber, N. M., Garcia, P., Gao, F., Gendron, E., Genzel, R., Gillessen, S., Habibi, M., … Zins, G. (2020). Detection of the Schwarzschild precession in the orbit of the star S2 near the Galactic centre massive black hole. Astronomy & Astrophysics, 636, Article L5. https://doi.org/10.1051/0004-6361/202037813
2. The Nobel Committee for Physics. (2020). Scientific background on the Nobel Prize in Physics 2020: Theoretical foundation for black holes and the supermassive compact object at the Galactic centre. The Royal Swedish Academy of Sciences. https://www.nobelprize.org/uploads/2020/10/advanced-physicsprize2020.pdf
3. Gillessen, S., Plewa, P. M., Eisenhauer, F., Sari, R., Waisberg, I., Habibi, M., Pfuhl, O., George, E., Dexter, J., Von Fellenberg, S., Ott, T., & Genzel, R. (2017). An update on monitoring stellar orbits in the Galactic center. The Astrophysical Journal, 837(1), 30. https://doi.org/10.3847/1538-4357/aa5c41
IMAGE REFERENCES
1. Banner: Pokornyi, T. (2019, November 30). Landscape Photography of Tree’s Reflection on Body of Water Under a Starry Night Sky [Photograph]. Pexels. https://www.pexels.com/photo/landscape-photography-of-tree-s-reflection-on-body-of-water-under-a-starry-night-sky-3307218/
2. Headshot: Max-Planck-Gesellschaft – Max-Planck-Institut für extraterrestrische Physik. Portrait of 2020 Physics Laureate Reinhard Genzel. The Nobel Foundation. https://www.nobelprize.org/prizes/physics/2020/genzel/photo-gallery/. Reprinted with permission.
3. Figure 1: Image produced by BSJ.
4. Figure 2: See reference 2.
5. Figure 3: See reference 3.
6. Figure 4: ESO/L. Calçada. (2020). Artist’s impression of Schwarzschild precession [Illustration]. European Southern Observatory. https://www.eso.org/public/images/eso2006a/
This interview, which consists of one conversation, has been edited for brevity and clarity.
The Function and Future of CRISPR Gene Editing
Interview with Dr. Jennifer Doudna
BY LIANE ALBARGHOUTHI, EMILY MATCHAM, KAITLYN WANG, ANANYA KRISHNAPURA, AND ELETTRA PREOSTI
Dr. Jennifer Doudna is a professor in the Departments of Chemistry and Molecular and Cell Biology at the University of California, Berkeley. She is also the Li Ka Shing Chancellor’s Chair in Biomedical and Health Sciences and a Howard Hughes Medical Institute Investigator. Dr. Doudna is currently the president of the Innovative Genomics Institute (IGI), an organization focused on applying genome engineering to global problems. In October 2020, Dr. Doudna was awarded the Nobel Prize in Chemistry alongside her colleague, Dr. Emmanuelle Charpentier, for their development of CRISPR-Cas9, a powerful gene editing tool. In this interview, we discuss the implications of CRISPR technologies for society, as well as how to ensure equitable access to gene editing therapies in the future.
BSJ
: You have recently collaborated with researchers at UC Berkeley and the Gladstone Institutes to develop a CRISPR-based COVID-19 diagnostic test that uses mobile phones to detect SARS-CoV-2 within half an hour. Could such examples of the versatility of CRISPR technology change the way society perceives scientific discoveries?
JD
: I certainly hope so. This past year has demonstrated in real time the value of science and technology in the context of understanding how to detect and fight back against viruses. When a technology like CRISPR is used to address ongoing real-world issues, like detecting the COVID-19 virus, it elicits greater public appreciation for the value of the science that led to the technology.
BSJ
: Since 2012, research has uncovered the potential of several other Cas proteins aside from Cas9. How are these proteins functionally different from one another, and what is their significance in the context of CRISPR?
JD
: What is really interesting about CRISPR is that it is highly variable in nature. Naturally, CRISPR is a part of the immune system in bacteria, and there are many different versions of it. This is likely because viruses are evolving all of the time, so for the bacterial immune system to be effective against viruses, it also has to evolve. CRISPR works as an immune system by cutting up foreign DNA and RNA. Each CRISPR system has its own molecular scissors in the form of a Cas gene. What makes these Cas genes so interesting biologically (and technologically) is that when we look into the details of how they work, they are each a little bit different. For example, the Cas9 protein, the first type of CRISPR-Cas we studied with our collaborator, Emmanuelle Charpentier, turned out to be a very robust tool for changing DNA in cells. Another type of Cas protein called Cas12 can also act as a programmable system in bacteria and as a technology for genome editing. However, Cas12 has an additional biochemical activity that allows it to work as a diagnostic. When Cas12 detects the presence of its target DNA, it can then trigger a fluorescent marker, which is something that Cas9 does not do. It is really interesting to see how that difference in behavior at a molecular level dictates how these proteins can be used for different technologies. Cas9 is a great genome editor, but it is not a great diagnostic, while Cas12 is an okay genome editor, but it is a great diagnostic.
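As a toy illustration of the “programmable” part: Cas9 targeting requires a 20-nucleotide protospacer followed by an NGG PAM on the DNA (see Figure 1). The Python sketch below scans a sequence for such sites; it is a teaching example only, not a guide-design tool, and real tools also score off-targets, both strands, and chromatin context.

```python
import re

def find_cas9_targets(dna: str):
    """Return (protospacer, pam, position) for each 20-nt site followed by an NGG PAM."""
    dna = dna.upper()
    hits = []
    # A lookahead is used so that overlapping sites are all reported.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
        hits.append((m.group(1), m.group(2), m.start()))
    return hits

example = "TTGACCTAGCTAGCTAGGATCGATCGTACGATCGAATGGCGT"  # made-up sequence
for protospacer, pam, pos in find_cas9_targets(example):
    print(pos, protospacer, pam)
```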
BSJ
: You have previously said that there is a growing disparity in biomedical research between diagnostics and therapeutics. Can you briefly describe what you mean by this disparity? What needs to be done to propel further research or studies centering on therapeutic applications of genome editing?
JD
: One thing that comes to mind is thinking about whether people are able to access these technologies. As exciting as CRISPR is, as a technology, it is only going to be impactful if people can access it, afford it, and benefit from it. That has really been the focus and goal of my work over the last few years at the Innovative Genomics Institute. In the case of diagnostics, it would be great if CRISPR could be used as either a point-of-care test or an at-home test for virus detection. With enough research, I think that this could be possible. However, it will be harder to achieve affordability and accessibility for therapeutic applications of genome editing since there are several steps that need to happen in order to make sure the technology is safe and functional. Understandably, all of those steps would add up to a significant cost of treatment. However, by paying closer attention to the steps in that process, we can start to reduce the financial burden.
BSJ
: It is essential for bioethicists, scientists, clinicians, and regulators to work together to ensure safe, effective, and affordable outcomes. Given that many of these stakeholders have disagreements about genome editing, what possible additional steps need to be taken to ensure efficient collaboration moving forward?
“As exciting as CRISPR is, as a technology, it is only going to be impactful if people can access it, afford it, and benefit from it.”
“How do we make sure that technologies move forward in a responsible way such that other stakeholders benefit from, and are not harmed by, the technology?“
JD
: Collaboration is critical in science and is responsible for much of the progress that is made. As smart as any one scientist might be, nobody has all of the ideas. From my experience, it has been great—and certainly more fun—to work collaboratively with other people on projects. That being said, how do we make sure that technologies move forward in a responsible way such that other stakeholders benefit from, and are not harmed by, the technology? How do we make sure that they are engaged and informed and that their points of view are taken into account? These are hard questions to answer since there are a lot of potential stakeholders. We need to ensure that we are reaching out to people and working in an open, transparent environment. The way that I have been approaching this is to start by engaging with people who are interested in CRISPR— some of whom may be stakeholders who agree with us and some of whom may be looking at this issue with a different point of view. I still remember a conference we had sometime in the last five years that focused on agricultural uses of CRISPR and genome editing. In addition to scientists and bioethicists, we also had people attend who were very anti-GMO and believed that one should not manipulate the genome of any animals or plants. It was a fascinating meeting. The good thing about it was that although people did not share the same viewpoints, they were willing to listen and discuss. I think that is
Figure 1: Illustration of Cas9. After the single-stranded guide RNA in the CRISPR-Cas9 complex recognizes the target DNA sequence downstream of a short protospacer adjacent motif (PAM), the Cas9 nuclease proceeds to cleave the target DNA.
where progress starts. Even if individuals have different perspectives on something, we can make progress if they are willing to discuss their differences. The goal is to create that open community and environment where people feel comfortable discussing their ideas, even if they are not in agreement.
Figure 2: Image of COVID-19 Testing Lab Robot at the Innovative Genomics Institute (IGI).
BSJ
: What technical or general advice do you have for undergraduate researchers to be more innovative and imaginative with science?
JD
: In my experience, a lot of the most creative ideas actually come from people like yourselves who are new to an area of science. They come to the field unbiased by other ways of thinking and ask key questions. I have had college students come to the lab and ask the most probing questions that make us step back and consider, “Why am I doing this?” They make us stop and think. For all of you who are going into science, be willing to ask these questions—you might actually be cutting right to the heart of something that is really, really important to discuss. Secondly, follow your passions. I think if you are really curious about something, that curiosity will often drive innovation and creativity; that certainly was true for me. When I began my work on CRISPR, it was a field that had just started off with a handful of scientists, and they were not working in fancy labs and publishing papers in fancy journals. They were just microbiologists who noticed an interesting
phenomenon in bacteria fighting back against viruses and wondered how it worked.
“I think if you are really curious about something, that curiosity will often drive innovation and creativity.”
This interview, which consists of one conversation, has been edited for brevity and clarity.
IMAGE REFERENCES
1. Headshot: Houser, K. [Photograph of Jennifer Doudna]. Innovative Genomics Institute, UC Berkeley. https://innovativegenomics.org/resources/media-resources/
2. Figure 1: Cas9 illustration [Illustration]. Innovative Genomics Institute, UC Berkeley. https://innovativegenomics.org/resources/media-resources/
3. Figure 2: COVID-19 Testing Lab Robot [Photograph]. Innovative Genomics Institute, UC Berkeley. https://innovativegenomics.org/resources/media-resources/
AVIANS TO AIRPLANES: BIOMIMICRY IN FLIGHT AND WING DESIGN
BY NATALIE SLOSAR
In 1903, Orville and Wilbur Wright invented the first heavier-than-air machine capable of powered, sustained, and controlled flight. Though rudimentary, the brothers’ apparatus laid the groundwork for far more complex aircraft. Further technological breakthroughs in the 20th century advanced their fabric-covered, wood-carved planes to sophisticated vehicles capable of traveling at high altitudes and great speeds for long periods of time.1 The invention of human flight has reshaped our world––now, international travel can be accomplished in a single day, rather than several months. Of course, humans did not invent flight. Thirteen thousand species of warm-blooded vertebrates, as well as almost a million species of insects, have possessed the ability to fly for millions of years.2 These flying groups have evolved sophisticated structures made up of wings to generate and maintain lift; muscles to flap, twist, and plunge; and sensing systems for guidance and maneuverability. Wings can adapt to sudden changes in the environment, such as wind gusts, giving them incredible flight stability and control.4 Acknowledging this, researchers have studied the morphological characteristics and functions of biological flight in an attempt to improve airplane design. And now, designers are beginning to
model wing and aircraft designs off of pre-existing ones––the ones found on birds, bats, and insects. Applying lessons from animal flight to its mechanical counterparts would make aircraft faster, as well as more efficient and eco-friendly.5 But how does flight work, anyway? Before delving into recent studies of flier morphology and subsequent advances in airplane design, we must first understand the fundamentals of animal and aircraft flight. An object in flight is constantly engaged in a tug of war between lift, the upward force, and weight, the downward force provided by gravity. Two other lateral forces, thrust and drag, also compete. Flying animals’ muscles provide thrust, while engines generate thrust for aircraft. The opposing force, drag, is a result of air resistance, but is counteracted by thrust and can be reduced by making the flying body more aerodynamic. The wing, or airfoil, provides the lift required to fly.6 To take off and maintain flight, lift must overcome weight, and thrust must overcome drag. The most puzzling of these forces is perhaps lift. Lift can be understood through the lens of Bernoulli’s Principle, which states that as any fluid, including air, moves faster, it produces less pressure.7 As an airplane or bird moves, air is split above and below its wing.
The air that moves over the curved, longer upper surface of the wing speeds up, and by Bernoulli’s Principle, this faster-moving air produces less pressure than the air moving underneath. Since the pressure below the wing is greater than the pressure above it, the net result is an upward force: lift. Whether an aircraft or flying animal flies fast, is more agile and maneuverable, or is more energy efficient depends on thrust as well as the flier’s ability to generate lift.5 This is affected by a wide range of factors––the camber (or curve), size, and thickness of each wing, as well as its positioning on the fuselage and its angle relative to the plane body. As is the case with flying animals, different wing configurations perform best for different types of flight.
Figure 1: Animal and aircraft wings utilize similar mechanisms to fly. Both have a curvature that creates reduced air pressure across the top of the wing, which allows the animal or aircraft to achieve maximum lift. In the public domain.
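This qualitative picture is usually quantified with the standard lift equation, where ρ is air density, v airspeed, S wing area, and C_L a lift coefficient that bundles up camber and angle-of-attack effects. With illustrative small-plane numbers of our choosing (ρ ≈ 1.225 kg/m³, v = 60 m/s, S = 16 m², C_L = 0.6):

```latex
L \;=\; \tfrac{1}{2}\,\rho\,v^{2}\,S\,C_{L}
\;\approx\; \tfrac{1}{2}(1.225)(60)^{2}(16)(0.6)
\;\approx\; 2.1\times10^{4}\ \mathrm{N}
```

enough to hold up an aircraft of roughly two metric tons.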
Perhaps the most important aspect of these configurations is the set of flaps and ailerons. Flaps increase lift or drag depending on their configuration, while ailerons change roll, the rotation around the axis running from nose to tail. The horizontal and vertical stabilizers on the tail of the aircraft have components called elevators and rudders, which control pitch, nose up or down, and yaw, left or right, respectively.1 20th-century engineers achieved this extensive knowledge of aircraft design and flight performance through theoretical studies, use of wind tunnels, computer modeling, and actual flight tests.1 But now, engineers are beginning to model wing and aircraft designs on those of flying animals. Natural selection has fine-tuned these animals’ flight systems, allowing them to truly optimize the “tug of war” between the four forces behind flight. Unlike the defined components on a manmade wing, birds’ wings are one fluid entity, connected seamlessly to their muscles. To pitch up or down, birds simply angle their wings up or down; to control yaw, they twist their wing tips left or right. This is accomplished through wing morphing, the ability to modify wing shape to accommodate different aerodynamic requirements. An aircraft with morphing wings would have greater agility and aerodynamic efficiency than one that requires separate, hinged panels to control motion.8 Such an aircraft wing has been designed by NASA and MIT researchers, as shown in Figure 3. The team’s design, which uses tiny lightweight pieces called “digital materials,” ensures that the whole wing shape can be altered with the activation of two motors, which apply a pressure to each wingtip that twists the wing uniformly along its length.9 Neil Gershenfeld, director of MIT’s Center for Bits and Atoms (CBA), explains that previous attempts at morphing wings failed because they used mechanical control structures to deform the wing. These structures were so heavy that any efficiency advantages provided by the smoother wing surface were lost. But according to Gershenfeld, this new design makes “the whole wing the mechanism. It’s not something we put into the wing.”
“An aircraft with morphing wings would have greater agility and aerodynamic efficiency than one that requires separate, hinged panels to control motion.”
Figure 2: Aircraft can move in multiple ways across the lateral, longitudinal, and vertical axes. Pitch is the nose-up or nose-down movement, while rolling is the rotation around the nose-to-tail axis. Yawing is a left-to-right motion. Image licensed under CC BY-NC 4.0.
Millions of years of bird evolution prove the same thing––the most efficient fliers use their wings as undivided bodies, not structures consisting of individually controlled flaps or ailerons. Other scientists have recognized that a bird wing is composed not only of an integrated musculoskeletal system, but is also covered in overlapping feathers. These feathers employ a folding mechanism that serves as a sort of extension of the morphing wing, amplifying the effects of the changing wing shape.8 These biomimetic designs have developed past research and into industry. At the forefront of biomimicry and airplane flight is the aerospace company Airbus. Airbus recently introduced the “Bird of Prey,” the eagle-inspired conceptual airliner seen in the banner image.10 The aircraft’s wing and tail structure mimics the eagle’s long, broad wings. The Bird of Prey has individually controlled feathers––a step away from aerodynamically inefficient panels. Though this visionary design does not represent an actual aircraft, it embodies industry’s extensive research on bio-inspired aircraft designs. This appreciation of the results of millions of years of evolution will undoubtedly provide further inspiration for more maneuverable, aerodynamic, and efficient aircraft.
Figure 3: Researchers’ morphing wing can twist and change shape seamlessly when pressure is applied to each wingtip. In the public domain.
REFERENCES
1. Rumerman, J. The evolution of technology. US Centennial of Flight Commission. www.centennialofflight.net/essay/Evolution_of_Technology/TechOV2.htm
2. Shyy, W., Kang, C., Chirarattananon, P., Ravi, S., & Liu, H. (2016). Aerodynamics, sensing and control of insect-scale flapping-wing flight. Proceedings of the Royal Society A, 472(2186), 1–37. https://doi.org/10.1098/rspa.2015.0712
3. Hutchinson, J. R. (1996). Vertebrate Flight: The origins of flight. University of California Museum of Paleontology. https://ucmp.berkeley.edu/vertebrates/flight/origins.html
4. Thielicke, W., & Stamhuis, E. J. (2018). The effects of wing twist in slow-speed flapping flight of birds: Trading brute force against efficiency. Bioinspiration & Biomimetics, 13(5), Article 056015. https://doi.org/10.1088/1748-3190/aad5a3
5. Airman Testing Standards Branch. (2016). Aerodynamics of flight. In Airman Testing Standards Branch, Pilot’s Handbook of Aeronautical Knowledge (pp. 5-1–5-51). Federal Aviation Administration, United States Department of Transportation. https://www.faa.gov/regulations_policies/handbooks_manuals/aviation/phak/media/07_phak_ch5.pdf
6. Pūtaiao, P. A. (2020, February 5). Principles of flight. Science Learning Hub. www.sciencelearn.org.nz/resources/299-principles-of-flight
7. NASA. (2017). Bernoulli’s Principle (Principles of Flight Series) [Educational lesson]. www.nasa.gov/sites/default/files/atoms/files/bernoulli_principle_5_8.pdf
8. Di Luca, M., Mintchev, S., Heitz, G., Noca, F., & Floreano, D. (2017). Bioinspired morphing wings for extended flight envelope and roll control of small drones. Interface Focus, 7(1), Article 20160092. https://doi.org/10.1098/rsfs.2016.0092
9. Chandler, D. L. (2016, November 3). A new twist on airplane wing design. MIT News. https://news.mit.edu/2016/morphing-airplane-wing-design-1103
10. Airbus. (2020). Biomimicry: A fresh approach to aircraft innovation. www.airbus.com/newsroom/stories/biomimicry-a-fresh-approach-to-aircraft-innovation.html
IMAGE REFERENCES
1. Banner: Sambo, R. (2019). [Photograph of an airplane wing in flight]. Pixabay. https://pixabay.com/photos/airplane-sky-plane-aircraft-flight-4469400/
2. Figure 1: The Knowledge Network for Innovations in Learning and Teaching. Lesson three: How does Bernoulli’s principle work on a plane? https://knilt.arcc.albany.edu/Lesson_Three:_How_does_Bernoulli%E2%80%99s_principle_work_on_a_plane%3F
3. Figure 2: See reference 5.
4. Figure 3: Cheung, K. Composite cellular material morphing wing (t=4x) [Digital animation]. NASA. https://news.mit.edu/2016/morphing-airplane-wing-design-1103
“Millions of years of bird evolution prove the same thing—the most efficient fliers use their wings as undivided bodies, not structures consisting of individually controlled flaps or ailerons.”
ENVIRONMENTAL DESIGN: SOLAR ENVELOPES AND WORKPLACE EVALUATION
Interview with Giovanni Betti BY HOSEA CHEN, REBECCA PARK, SABRINA WU, AND ELETTRA PREOSTI
Giovanni Betti (MA, MSc) is a licensed architect in Italy and the UK with over ten years of professional experience in a wide variety of innovative international projects. His work focuses on the link between environmental performance and architectural design. As an Associate Partner of the Specialist Modelling Group at Foster + Partners, he has contributed to the development of architectural spaces such as the new Apple Campus (Cupertino, USA). In 2015, he moved to Berlin to found and lead the Performance Based Design Team at HENN. Currently, he is an assistant professor in Architectural Design for Building Performance at UC Berkeley. In this interview, we discuss Betti’s research on solar envelopes and workplace evaluation.
BSJ
: Growing up in Italy, what experiences fueled your passion for urban infrastructure and workplace design, and what led you to continue your research in the U.S.?
GB
: I know that everyone has their own fond memories of childhood, but I think that growing up in Italy, and in particular Rome, has allowed me to enjoy a very storied urban environment. The school of architecture is in the heart of the historic city centre. Even if I was living in the periphery, I remember cycling to class so that I could explore the little streets and alleyways of Rome. In fact, my Master of Architecture thesis back in 2005 was about the feasibility of a distributed bicycle network, which is a common enterprise now. For me, the concept of using an emerging technology to piggyback on the historic fabric of a city that was obviously not designed for cars was really intriguing. The work from this thesis really started my fascination with urban fabric and architecture. After I finished my Master of Architecture in Italy, I moved to the US for a year to get a Master of Science in Built Ecologies at the Rensselaer Polytechnic Institute (RPI). Back then, the field of sustainability and environmental design had just started emerging. What brought me to this field was a sense of duty; I felt I could not be a good architect if what I did was not good for anybody other than my clients. After RPI, I moved to London and afterwards to Germany for both work and familial reasons. My research at the time was essentially a journey of trying to find some sort of clarity for myself on a number of questions. I eventually realized I could not pursue those questions while working for other businesses that had different interests; I wanted to have more agency in directing my own research questions. That was when the opportunity to join Berkeley came up. I applied out of the blue, and here we are.
BSJ
: What is a solar envelope? Why is it important for buildings to get sunlight, and why specifically two hours as some regulations suggest?
GB
: The solar envelope is a virtual solid figure that architects can use to make building designs that adhere to a certain set of conditions related to the sun. This idea stems from the nexus between building design and health. Access to daily sunlight heavily affects the lifestyle and health of residents, especially because direct sunlight produces an almost germicidal effect that promotes hygiene of the living space. If we can ensure that buildings, especially living quarters, are getting at least a certain amount of direct sunlight every day, this access improves both the residents’ overall health and their environment. The regulation for two hours of sunlight referenced in the study is specific to China, where these rules are quite strictly enforced and often drive the urban form. For example, a lot of designs for Chinese residential blocks consist of repetitive units with certain heights and distances between them to ensure that each resident receives at least two hours of daily sunlight. These regulations also vary by locality. The goal in general, though, is to set certain regulations that will ensure that, even in the depths of winter, most residents will still have some direct sunlight in their living quarters. In this case, the solar envelope would be the area under which all of the adjacent buildings can receive those two hours of daily sunlight.
Figure 1: An illustration of solar envelopes on a portion of Los Angeles’ Spanish street grid system (right) and buildings within those solar envelopes (left). Reprinted with permission.
BSJ
: What factors are taken into account when calculating solar envelopes?
GB
: We actually won a competition in China that called for the design of a cluster of towers just south of an existing residential development, and we wanted to find a way to sculpt the solar envelope more effectively. The methods for developing a solar envelope at the time were primarily focused on geometric spacing, which usually results in quite a conservative solar envelope that does not give you the full possible building height. This is because architects at the time treated each building as a homogenous, binary structure whose surfaces collectively receive the same hours of daily sunlight. We resolved to subdivide the elevations of the surrounding buildings into a number of sensor points where we could calculate sunlight availability. With this information, we then used a recursive approach to grow virtual towers by adding voxels, or units of volume, to our stack while iteratively checking whether we still complied with regulations. By adjusting this model with compliance information, we were able to develop models with a more nuanced approach. In this way, we were able to maximize the height of the towers we wanted to build while preserving the solar access of the residential structures. In the end, we actually managed to have a solar envelope that was larger than the one that we would have gotten with the simpler, binary geometric spacing method.
BSJ
: How does your new differential growth paradigm work?
GB
: In the model, we have vertical towers that we grow by adding units. After deciding the size of these units, we can check whether the addition of the units would still allow for us to remain compliant with the solar access regulations. Basically, rather than a top-down approach where we would cut down the homogenous model to fit the hypothetical sunlight vectors, this is a more bottom-up approach, where we literally let the solar envelope grow to its maximum size. We can then influence growth ourselves by making the solar envelope grow faster in areas where we want to have more height or more mass build and adjust designs from there.
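A minimal sketch of this bottom-up idea follows; the grid size, growth cap, and the compliance test are placeholders of ours, not the actual HENN implementation. The loop keeps adding voxels wherever the sun-hours check still passes and rolls back any voxel that would violate it.

```python
import numpy as np

GRID = (10, 10)   # plan cells of the new development (illustrative)
MAX_FLOORS = 30   # hard cap on tower height (illustrative)

def still_compliant(heights: np.ndarray) -> bool:
    """Stand-in for the real check that every facade sensor point on the
    neighbouring buildings still receives its mandated hours of direct sun."""
    return heights.sum() < 900  # placeholder rule for the sketch

heights = np.zeros(GRID, dtype=int)
grew = True
while grew:
    grew = False
    # Try to add one voxel (floor) per cell per pass; a weighted visiting
    # order could steer mass toward where the designer wants more height.
    for idx in np.ndindex(*GRID):
        if heights[idx] >= MAX_FLOORS:
            continue
        heights[idx] += 1
        if still_compliant(heights):
            grew = True
        else:
            heights[idx] -= 1  # roll back the non-compliant voxel

print(heights)  # the grown solar envelope, in floors per cell
```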
BSJ
: One of the tools you used to design the differential growth model was the Ladybug add-on for the Grasshopper plugin for Rhinoceros. Can you explain what this tool is?
GB
: Nowadays, when we design, we live in this paradigm of “computer-aided design” (CAD). One of the most widely used CAD programs for architecture is Rhinoceros 3D, which is very good for early-stage conceptual studies and all non-standard applications, such as solar envelopes, as it does not make too many assumptions about what a building should look like. Then, inside of Rhinoceros 3D, we have a plugin called “Grasshopper”—which is, interestingly, a visual programming language that uses directed acyclic graphs. These graphs are essentially a series of interconnected nodes which manipulate the way in which data flows. Because data is manipulated by each node, we also have the ability to write proper code in C#, Python, etc. So, it is really great in that sense, because it is a relatively easy way of doing non-standard things. Ladybug and Honeybee are another couple of plugins inside of this ecosystem that relate specifically to solar geometry and environmental analysis.
BSJ
: How does architectural design impact an employee’s experience in the workplace in terms of interpersonal communication and productivity?
GB
: We currently live in a world of digital communication. Even though remote work has proven that we can work without physically being in the same place, we can all acknowledge that there is a lot missing, especially if the work we need to do consists of innovative content. Specifically, when people are isolated, they find it more difficult to produce innovative content. In fact, isolation is probably the cause of most of the struggle accompanying remote work. Additionally, we lack the unplanned casual encounters with our peers and colleagues that spark innovation. As architects, when we design a workplace, we build a story that tries to understand and enhance who our client is. We ask, “What is the company, and what are its core values?” We hope that certain building configurations can lead to
a higher volume of innovative content being produced. But, one thing architects struggle with is how to get actual evidence that their structures actually have a certain effect. For example, with company feedback, we know a space that is seemingly “inefficient” due to a lack of work areas might actually be loved by everyone and become a social hub where the identity of the company is formed. Even though employees are not “productive” or do not talk about work in this space, it is still valuable to the enterprise because if there is social cohesion, there is a sense of common purpose and camaraderie. Both of these spur lateral thinking, innovation, and a greater sense of purpose and belonging. However, while we know this information anecdotally, there are not enough studies on how we can scientifically determine what will work and why. To solve this problem, I actually borrowed a lot of techniques from urban analysis, which tries to understand how people perceive urban space.
“We currently live in a world of digital communication. Even though remote work has proven that we can work without physically being in the same place, we can all acknowledge that there is a lot missing, especially if the work we need to do consists of innovative content."
BSJ
: What are the indoor connectivity, indoor visibility, and indoor environmental scores used to analyze different workspaces? Why did you choose to analyze these factors?
GB
: When we run an analysis or create a model, we simplify reality one slice at a time—often, the smaller the slice, the better. But we need to acknowledge that humans do not perceive reality through such a narrow lens; rather, they are influenced by a number of factors. In this study, we quantified these factors into three main scores: the indoor visibility score, the indoor connectivity score, and the environmental score. For the indoor visibility score, I first considered that when we navigate through space, the vast majority of information entering our brain is visual. That is why, as architects, we primarily focus on the visual experiences of people. Thus, I used the visibility score to encompass visible lines of sight, especially in workplaces where people often frequent. I associated the indoor connectivity score with mental maps. It has been proven that there is a very sharp decay of face-to-face communication in the workplace with increased distance between individuals. So, the probability of me talking with someone who is just four meters away is much higher than the probability of me talking with someone that is forty meters away. The environmental score is based mainly on daily access to sunlight and is a proxy for the overall environmental quality of a space and its effect on human behavior. For instance, we know that a more pleasant environment is a healthier environment: it is nicer to talk to somebody outdoors where it is sunny rather than indoors where it is darker. In our study, we plan to look more closely as to whether conversations happen more often or if they tend to last longer in environments that have daylight compared to those that only have access to artificial lights.
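The distance decay Betti mentions is often drawn as an “Allen curve.” A toy model of it, with a functional form and scale that are made up purely for illustration, captures the four-meters-versus-forty-meters contrast:

```python
# Illustrative distance-decay ("Allen curve"-style) model of face-to-face
# contact; the functional form and scale are placeholders, not fitted values.
def contact_probability(distance_m: float, scale_m: float = 8.0) -> float:
    return 1.0 / (1.0 + (distance_m / scale_m) ** 2)

print(contact_probability(4.0))   # ~0.80: colleagues a few meters apart
print(contact_probability(40.0))  # ~0.04: across the office, contact is rare
```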
BSJ
: What is isovist sampling, and how did you use it to analyze the visibility score?
GB
: Isovists are used to trace the area of visible spaces in order to determine various properties. Conducting analyses on the complexity of isovists allows us to use them as tools for examining the quality of spaces. For example, imagine that you are inside of a perfectly square room. If you are standing at the center of the room, the isovist is going to be a square. On the other hand, imagine that you are instead inside of a forest. Now, since there are a lot of trees, the isovist is going to have a star-like appearance, similar to a sunburst. We can then relate these descriptions to, for instance, the presence or absence of visual interest.
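One hedged sketch of isovist sampling in 2D: cast rays from the viewpoint and keep the first obstacle hit in each direction; the resulting polygon approximates the isovist. This is a naive stepping ray-caster for illustration only; production tools compute exact visibility polygons.

```python
import numpy as np

def isovist(viewpoint, boxes, n_rays=360, max_range=50.0, step=0.1):
    """Toy 2D isovist: obstacles are axis-aligned boxes (x0, y0, x1, y1)."""
    vx, vy = viewpoint
    points = []
    for theta in np.linspace(0.0, 2 * np.pi, n_rays, endpoint=False):
        dx, dy = np.cos(theta), np.sin(theta)
        r = 0.0
        while r < max_range:
            x, y = vx + r * dx, vy + r * dy
            # Stop the ray at the first obstacle it enters.
            if any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in boxes):
                break
            r += step
        points.append((vx + r * dx, vy + r * dy))
    return points  # polygon approximating the visible area

# A single wall to the "north": rays toward it stop early, the rest run free,
# so the isovist is clipped on one side, spiky scenes give sunburst shapes.
outline = isovist((0.0, 0.0), boxes=[(-0.5, 9.5, 10.5, 10.5)])
```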
BSJ
: After generating the multivariable map model, you applied it to three buildings. What parameters did you consider in choosing these three buildings, and what was the purpose of applying the model to these buildings?
GB
: The primary reason we analyzed these three buildings is that they are three milestone workplaces. The first building, built in 1965 for Osram, is a milestone in architectural design because it is the first completely open plan, free-seating arrangement office. The idea is that, in addition to having no individual offices, the furniture is clustered on the large open plan depending solely
on the current workplace activities in order to provide flexibility for reconfiguration. This project was seminal and inspired so many open plan offices. The second building we studied is the BMW Innovation Center, completed in 2004. It is a much larger building than the OSRAM and focuses on the idea of travel distances, which are calculated using different walking speeds and delays from one location of a workspace to another. One key feature of this building is its central hub where everyone can come together and exchange information. The third building, the Merck Innovation Center in Darmstadt, is about the same size as the OSRAM. Like the OSRAM, the Merck Innovation Center also has an extremely free arrangement of furniture. While in the Osram building you can see a clear separation between circulation (i.e., stairs, lift lobbies, etc.) and office spaces, they are completely enmeshed in the Merck Innovation Centre. The idea is that, moving through the building, going to your desk is not a lesser activity than concentrating on your work: in order to be productive, creative, and connected with your colleagues, the office environment needs to provide you with a rich social environment that allows for both concentration and interaction. Overall, it was really interesting to see how these buildings would reflect different ideas of what a workplace should be like and how we could capture these abstract concepts with tangible metrics. I was specifically interested in investigating how we can move towards this understanding of connecting daily productivity with the overall mood, atmosphere, and feeling of a space. One disclaimer: I was not involved in the design of any of these three buildings.
Figure 2: This diagram depicts the interconnectedness between connectivity, daylight, and visibility—the factors being measured to calculate the connectivity, environmental, and visibility scores. Reprinted with permission.
Figure 3: Extracted analysis meshes of the three buildings: the “OSRAM,” the BMW Innovation Center, and the Merck Innovation Center. Reprinted with permission.
“When we run an analysis or create a model, we simplify reality one slice at a time—often, the smaller the slice, the better. But we need to acknowledge that humans do not perceive reality through such a narrow lens; rather, they are influenced by a number of factors.”
BSJ
: What changes do you hope to see in future calculations of solar envelopes and in the design of future workplace environments?
GB
: I hope that solar envelopes become more frequently used as a tool for urban design, as that will make designing healthy urban landscapes easier. Moreover, with a more fine-tuned understanding of solar envelopes, architects no longer have to plan homogeneously distributed urban volumes. In this way, we can have more healthy and varied buildings, not just cookie-cutter architecture. In terms of workplace design, we are living in a moment of big upheaval. I think that we are going to see a lot of changes. Specifically, we will see a persistence of mixed modes of work, where part of a person’s work can actually be done more effectively at home. Consequently, the social nature of the workplace is going to grow in importance, and we will have to recalibrate between focused and open-ended work.
This interview, which consists of one conversation, has been edited for brevity and clarity.
IMAGE REFERENCES
1. Headshot: [Photograph of Giovanni Betti]. Berkeley Research, University of California, Berkeley. https://vcresearch.berkeley.edu/faculty/giovanni-betti. Reprinted with permission.
2. Figure 1: Betti, G. (Author), Arrighi, S. (Author), & Knowles, R. L. (Illustrator). (2017). A differential growth approach to solar envelope generation in complex urban environments [Conference paper]. Henn Architects, Berlin, Germany.
3. Figure 2: Betti, G., Aziz, S., & Ron, G. (2020). HENN Workplace Analytics. In C. Gengnagel, O. Baverel, J. Burry, M. Ramsgaard Thomsen, & S. Weinzierl (Eds.), Impact: Design With All Senses (pp. 636–647). Springer International Publishing. https://doi.org/10.1007/978-3-030-29829-6_4
4. Figure 3: See image reference 3.
To Live Or Not To Live: Defining Life Outside Biological Systems
BY NACHIKET GIRISH
Imagine a world restricted to a two-dimensional grid, on which black and white grid-boxes are the only distinct features. A world bereft of complexity and structure, of any physics, chemistry, or biology. In fact, this entire world runs on two simple rules: a black square will turn white if it has anything other than exactly two or three black neighbors, and a white square with exactly three black neighbors will turn black. The creator of this grid universe, one of the earliest examples of what theoretical computer scientists now call cellular automata, was John Conway, the legendary British mathematician who tragically passed away last year due to complications arising from COVID-19.1 Although he made great contributions to several fields of mathematics, he is most well known for his grid world, which he dubbed the “Game of Life.”2 That might seem surprising, since the description above seems nothing but a mathematical plaything. But, for philosophers, this quaint game raises questions about the very definition of what it means to be alive.
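The two rules are simple enough to state in a few lines of code. The following is an illustrative sketch in Python (the NumPy implementation, the encoding of black squares as 1s, and the wrap-around edges are our choices for the example, not part of Conway’s formulation):

```python
import numpy as np

def step(grid):
    """One generation of Conway's Game of Life.

    `grid` is a 2-D array of 1s (black/alive) and 0s (white/dead);
    edges wrap around for simplicity.
    """
    # Count each cell's black neighbors by summing the eight shifted grids.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Rule 1: a black square stays black only with exactly 2 or 3 black neighbors.
    # Rule 2: a white square with exactly 3 black neighbors turns black.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    born = (grid == 0) & (neighbors == 3)
    return (survive | born).astype(grid.dtype)

# A "glider," one of the simplest patterns that travels across the grid.
world = np.zeros((8, 8), dtype=int)
world[1, 2] = world[2, 3] = world[3, 1] = world[3, 2] = world[3, 3] = 1
world = step(world)  # advance the universe by one generation
```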
“The key to formulating a general theory of living systems is to explore alternative possibilities for life.”

HOW DO WE DEFINE LIFE?

Typically, an article questioning our ideas of what life is would start with what conventional wisdom tells us it is. However, “conventional wisdom” has not settled on a definition of life. One can find significantly different definitions of life from different sources. Merriam-Webster, for example, defines it as “an organismic state characterized by capacity for metabolism, growth, reaction to stimuli, and reproduction.”3 While it is a handy definition, it is very easy to find examples of things universally agreed to be living which do not satisfy one, or several, of these requirements. A tree cannot visibly respond to most kinds of stimuli, a unicellular organism can hardly grow bigger than one cell, and inanimate objects like crystals can actually create copies of themselves under the right circumstances.4 A more profound problem with this definition is that it is based on humanity’s experience with but a single paradigm of life. Limiting the scope of our definition in this way ignores the vast possibilities of life arising in unthinkably different ways throughout the universe. As Dr. Carol Cleland, philosophy professor at the University of Colorado Boulder and member of NASA’s Astrobiology Institute, puts it in an interview with Astrobiology magazine:
...in order to formulate a general theory of living systems, one needs more than a single example of life. As revealed by its remarkable biochemical and microbiological similarities, life on Earth has a common origin... The key to formulating a general theory of living systems is to explore alternative possibilities for life.5
Figure 1: A real implementation of Von Neumann’s universal constructor. The original “organism” here has spawned two generations, with the current generation in the process of creating another copy. The purple lines are the tapes containing the genetic instructions, the “blueprint.” In the public domain.
A better approach, then, might be to approach the question of life from a more general perspective, taking into account the myriad possibilities of life throughout the universe. In fact, in 1994, a NASA committee was given exactly this task. Their proposed definition, sponsored by astronomer and host of the popular show Cosmos, Carl Sagan, was:

Life is a self-sustaining chemical system capable of Darwinian evolution.6

This definition is interesting in many ways. Most notably, it eschews all references to biological processes, focusing instead on the crucial intergenerational information flow central to Darwinian evolution. What does a definition of life focused on evolutionary characteristics say about how we think about life? If the defining feature of life is something as simple, as mathematical, as the flow of genetic information, how complex does a living thing need to be to be considered alive? Indeed, this question is so rich with possibilities that it has spawned an entire field of study, that of artificial life, whose foundations were laid by the seminal work of John von Neumann.

ARTIFICIAL LIFE

Once we identify Darwinian evolution as a fundamental requirement for life, the question becomes one of identifying the simplest system that could satisfy it. What would the simplest life look like? The famous mathematician John von Neumann took the first steps in the 1940s towards answering this question through his now-famous universal constructor.
Von Neumann’s professed goal was to develop a model “whose complexity could grow automatically akin to biological organisms under natural selection.”7 His focus on the growth of complexity associated with evolution reflected the recognition of information flow as a central characteristic of life. His model consisted of three parts: first, a universal constructor (UC) mechanism which, given appropriate instructions, could construct anything it was instructed to; second, a blueprint, which would serve as an instruction set to construct the UC itself; and last, a universal copy machine, which, given a blueprint, could construct (almost!) exact copies of that same blueprint. Once a “new machine,” or a single organism, was constructed using the universal constructor mechanism and a given blueprint, that blueprint could be handed off to the copy mechanism. The copy mechanism would then copy that blueprint and give it back to the organism, which could use its UC mechanism to develop a new “offspring.” Tiny errors in the copying mechanism would produce slight variations in the blueprints, which would build up to result in what we see as evolution, giving this model the necessary features of life.8 To anyone with any basic biology experience, all of these processes will seem very familiar. Indeed, one could probably raise the same argument again: does this mechanism not seem to just borrow from how we know actual life works? But that’s precisely why von Neumann’s machine is so fascinating: he developed this model before the discovery of the role of DNA in reproduction.9
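The logic of the model can be sketched in a few lines of code. The toy below is our own illustration of the constructor-copier loop; the symbol alphabet, error rate, and function names are invented, and von Neumann’s actual 29-state cellular automaton is vastly more intricate:

```python
import random

# Toy sketch of the three-part model: a constructor that builds an
# "organism" from a blueprint, and a copier that duplicates the blueprint
# with rare errors, the seed of variation needed for evolution.

SYMBOLS = "ABCD"

def construct(blueprint):
    """Universal constructor: build an 'organism' carrying its blueprint."""
    return {"body": "".join(blueprint), "blueprint": list(blueprint)}

def copy_blueprint(blueprint, error_rate=0.01):
    """Universal copier: duplicate the blueprint, with occasional mistakes."""
    return [random.choice(SYMBOLS) if random.random() < error_rate else s
            for s in blueprint]

parent = construct(list("ABBACDDA"))
# Reproduction: copy the parent's blueprint, hand the copy to the
# constructor, and out comes a (possibly slightly mutated) offspring.
offspring = construct(copy_blueprint(parent["blueprint"]))
print(parent["body"], "->", offspring["body"])
```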
"Life is a self-sustaining chemical system capable of Darwinian evolution." FEATURES
Figure 2: A racetrack created in Conway’s Game of Life. Don’t let the colors fool you: they have just been added to differentiate between different kinds of objects. All unlit cells are still “dead” and all lit cells of any color are “alive” in the exact same state. This system contains many kinds of objects: spaceships and gliders, which travel across the grid; guns, which shoot smaller objects across the grid; and reflectors, which send a traveling object back along its original path. If such amazing complexity, and such a diversity of objects, can be created from just two basic rules, how complex would a system need to be to give birth to life? In the public domain.
IS AI ALIVE?

The computational/mathematical definition of life opens up very new ideas about what we consider to be life, far removed from the carbon-based, organic structures that we are used to associating with life. It is here that we return to John Conway’s monochrome game. It provides a simple environment with just two rules, but from these two basic rules, wonderful complexity arises. We can build patterns of dark cells which instantly die off or attain stable equilibrium states, patterns which undergo thousands of cycles of evolution before entering a stable state, or patterns which oscillate between several different states. In 2013, one pattern was discovered which actually creates a copy of itself before being destroyed in the process—the first example of a reproducing system in the Game of Life.10 Today, full-scale logic gates and even Neumann-esque universal constructors have been built in the Game of Life. With sufficient complexity, a replicator in the Game of Life may even be able to reproduce with variations, almost resembling Darwinian evolution. In a world where all of physics is just two rules of state, could a Game of Life UC or replicator be considered “living”? A biologist would unequivocally say no, but if we extend this analogy to non-carbon-based life, it becomes a trickier question. If complex self-replication and the ability to store and transmit information down generations are taken as the essential characteristics of life, at what point will robots, or even artificial intelligence software, take the next step into being considered alive?

REFERENCES
1. Zandonella, C. (2020, April 14). Mathematician John Horton Conway, a ‘magical genius’ known for inventing the ‘Game of Life,’ dies at age 82. Princeton University. https://www.princeton.edu/news/2020/04/14/mathematician-john-horton-conway-magical-genius-known-inventing-game-life-dies-age
2. Gardner, M. (1970). Mathematical games. Scientific American, 223(4), 120–123. https://doi.org/10.1038/scientificamerican1070-120
3. Merriam-Webster. Life. In Merriam-Webster.com dictionary. Retrieved April 19, 2021, from https://www.merriam-webster.com/dictionary/life
4. Chaitin, G. (1979). Toward a mathematical definition of “life.” In G. Chaitin, Information, randomness & incompleteness: Papers on algorithmic information theory (pp. 86–104). World Scientific Publishing Co. Pte. Ltd. https://doi.org/10.1142/9789814434058_0011
5. Astrobiology Magazine Staff. (2003, April 14). Life’s working definition: Does it work? Astrobiology Magazine. https://www.astrobio.net/origin-and-evolution-of-life/lifes-working-definition/
6. Benner, S. A. (2010). Defining life. Astrobiology, 10(10), 1021–1030. https://doi.org/10.1089/ast.2010.0524
7. Neumann, J. (1966). Fourth and fifth lecture. In J. Neumann, Theory of self-reproducing automata (A. W. Burks, Ed., pp. 64–87). University of Illinois Press.
8. Hartman, H. (2018, December 29). Von Neumann and the origin of life [Video]. YouTube. https://youtu.be/HLsC-mnRUXI
9. Walker, S. I., & Davies, P. C. W. (2013). The algorithmic origins of life. Journal of The Royal Society Interface, 10(79), Article 20120869. https://doi.org/10.1098/rsif.2012.0869
10. Aron, J. (2010, June 16). First replicating creature spawned in life simulator. New Scientist. https://www.newscientist.com/article/mg20627653-800-first-replicating-creature-spawned-in-life-simulator/

IMAGE REFERENCES
1. Banner: Martínez, S. (2007). Cellular automata (719999898) [Digital image]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Cellular_automata_(719999898).jpg. Licensed under CC BY-SA 2.0.
2. Figure 1: Ferkel. (2008). Nobili Pesavento 2reps [Digital image]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Nobili_Pesavento_2reps.png
3. Figure 2: Simpsons contributor. (2009). Color coded racetrack large channel [Frame from animation]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Color_coded_racetrack_large_channel.gif
MICROSCOPIC ASTRONAUTS: ENGINEERING BACTERIA TO AID HUMAN SPACE TRAVEL
BY ANDERSON LEE
On February 18th, 2021, NASA’s rover, “Perseverance,” and its mini-helicopter, “Ingenuity,” landed in the Jezero Crater on the surface of Mars. Percy and Ginny, as they are colloquially called, are the most advanced robots ever sent to Mars. Soon, however, we will be sending something less durable, more variable, and much more valuable: humans. Supporting human life inherently makes this mission more difficult: people need food, water, and oxygen. The waste produced—carbon dioxide and physical excrements—must then be recycled or discarded into the environment. The long-term viability of humans in space also raises concerns. Exposure to radiation and zero-gravity environments will alter DNA and break down tissue, posing a myriad of health problems during a multi-year mission.
Bringing medicine along is problematic because many drugs will expire before the end of the mission. In order to address these biological concerns, NASA has started to employ biological technologies. When mixing the realms of space travel and biology, The Martian and Matt Damon’s potatoes tend to be at the forefront of people’s minds. Plants, however, require room to grow, soil, and water, and most of the biomass produced is either nutrient-sparse or inedible. Instead, scientists are now focusing on edible bacteria. Many species of bacteria (and other microorganisms) produce biomass full of proteins and vitamins.1 These bacteria don’t take up much space and are relatively simple to genetically engineer. Scientists are engineering bacteria in new ways to address the unique challenges of space travel.

Scientists are employing a technique called metabolic engineering in order to optimize bacteria to meet the demands of space travel. At a broad level, metabolism in plants and bacteria is similar to that in humans: they break down food and modify it to make energy. You can think of metabolism as a collection of many overlapping pathways, each composed of sequential steps in which an enzyme—a protein machine—converts one molecule into another. Each of these steps produces byproducts in addition to energy, and it is these byproducts that are potentially useful. Different enzymes can bind to the same molecule and produce different products. Researchers can genetically engineer bacteria so that specific enzymes are present or absent, changing the final product of metabolism. This is when scientists can start using their creativity: from producing essential vitamins to making plants taste better, the possibilities of metabolic engineering are only limited by our knowledge of nature and our ability to manipulate it.
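That pathway picture can be made concrete with a toy model: each step maps a molecule, via a specific enzyme, to the next molecule, and “knocking out” an enzyme stalls the chain and changes what accumulates. Every molecule and enzyme name below is invented purely for illustration:

```python
# Toy model of a metabolic pathway: a lookup table of enzyme steps.
# Removing an enzyme stalls the chain and changes the final product.

PATHWAY = {
    "glucose":        ("enzyme_A", "intermediate_1"),
    "intermediate_1": ("enzyme_B", "intermediate_2"),
    "intermediate_2": ("enzyme_C", "vitamin_X"),
}

def final_product(start, knocked_out=frozenset()):
    """Follow enzyme steps until none can act on the current molecule."""
    molecule = start
    while molecule in PATHWAY:
        enzyme, product = PATHWAY[molecule]
        if enzyme in knocked_out:  # engineered absence of this enzyme
            break                  # the pathway stalls; `molecule` accumulates
        molecule = product
    return molecule

print(final_product("glucose"))                            # -> vitamin_X
print(final_product("glucose", knocked_out={"enzyme_B"}))  # -> intermediate_1
```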
“From producing essential vitamins to making plants taste better, the possibilities of metabolic engineering are only limited by our knowledge of nature and our ability to manipulate it.”

Figure 1: Overview of the metabolic process. DNA encodes enzymes, which turn food into specific molecules by the metabolic process.

The Center for Utilization of Biological Engineering in Space (CUBES) showcased their creative ideas in a review focusing on the biological production of useful compounds. Acetaminophen, the active ingredient in Tylenol, is one such example. Producing common medicines like acetaminophen on demand is critical for long-duration missions because their shelf life is shorter than the length of the trip.2 The production of therapeutics by organisms, known as molecular pharming, could also include cytokines, molecules that the immune system uses to communicate. Astronauts could eat the bacteria containing these molecules to counter the short-term effects of radiation sickness and the long-term effects of exposure to radiation, such as cancer.3 Bacterial production in space extends beyond pharmaceuticals—photosynthesizing bacteria could take up the carbon dioxide produced by humans and recycle it into breathable oxygen. They could also be metabolically engineered to grow off of the compounds present in the Martian soil or human waste, creating a closed-loop life support system.1

The disparity between theoretical plausibility and dependable technology, however, spans many years into the future. So far, the vast majority of metabolic engineering has been focused on creating biofuels to address the need for alternative energy.4 A select few model organisms have been extensively characterized in the interest of biofuel production, but scientists can apply this already-acquired knowledge to the context of human space travel. This application makes photosynthetic and edible organisms valuable because they can be grown easily and their products don’t have to be extracted. In 2012, molecular pharming became a reality when a company received FDA authorization to sell a therapeutic protein that was made in plants.5 This marked the first plant-produced medicine to receive FDA approval, demonstrating molecular pharming’s plausibility both here on Earth and out in space.
Figure 2: Fred Haise, Apollo 13 astronaut, suiting up for his mission. The urinary and kidney infections he struggled with during the trip could be prevented by growable antibiotics in the future. In the public domain.
“One day, we all might be growing Tylenol next to our tomato plants.”

The challenges facing molecular pharming and metabolic engineering in general are undoubtedly more daunting in space travel than on Earth. When an organism is genetically edited to make a product, its number one priority is survival, not producing whatever compound scientists are interested in. In space, these organisms will likely be under stress due to the different environmental conditions, like the lack of gravity. Consequently, they might produce less of their desired product, wasting precious energy in the process. Additionally, the power of replication is coupled to the problem of mutation. If a mutation causes a critical enzyme to stop working, that cell will no longer be able to produce the desired product. On top of that, this cell will not experience the metabolic burden of producing its biologic, so it could out-compete the other, non-mutated cells and take over the culture. Other challenges include simply keeping the bacteria alive during the arduous trip—something that is currently being investigated on the International Space Station.6

If all of these problems are addressed, what would this system look like on a Mars trip? Ideally, cells would be genetically engineered, frozen, and stored in small, labeled tubes. When needed, a tube could be removed, thawed, and grown up into a large culture that produces significant quantities of its biologic. Though it will be a while until this technology is completely developed, preliminary missions could harbor these engineered cells as a worst-case scenario, à la Matt-Damon-stranded-on-Mars.

This technology will likely not be exclusive to spaceships and research labs. Scientific research meant for space travel has already infiltrated everyday life, including innovations like Tempur-Pedic mattresses, cell phone cameras, and LASIK, along with many others.7 One day, we all might be growing Tylenol next to our tomato plants. The future of human space travel lies within the powers of biology; the natural problems that organisms pose have natural solutions. The terraformers of Mars will be a blend of astronauts—human-sized and microscopic—working together to start exploring the universe.
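The mutant-takeover dynamic described above can be seen in a back-of-the-envelope growth model. In this hypothetical sketch, producer cells grow slightly slower because of their metabolic burden, while rare loss-of-function mutants grow at full speed and compound their advantage; every rate here is invented for illustration:

```python
# Back-of-the-envelope sketch of the "mutant takeover" problem: producers
# pay a metabolic burden, so non-producing mutants out-grow them.

def mutant_fraction(generations=40, burden=0.15, mutation_rate=1e-3):
    producers, mutants = 1_000.0, 0.0
    for _ in range(generations):
        defectors = producers * mutation_rate                 # producers losing the enzyme
        producers = (producers - defectors) * (2.0 - burden)  # burdened growth
        mutants = (mutants + defectors) * 2.0                 # full-speed growth
    return mutants / (producers + mutants)

# After 40 generations, a 0.1% mutation rate and a 15% growth burden
# already let non-producers claim a large share of the culture.
print(f"{mutant_fraction():.1%}")
```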
Acknowledgements: I would like to acknowledge postdoctoral fellow Dr. Jacob Hilzinger (UC Berkeley, CUBES) for his helpful feedback and general expertise about genetically engineered bacteria during the writing process.

REFERENCES
1. Linder, T. (2019). Making the case for edible microorganisms as an integral part of a more sustainable and resilient food production system. Food Security, 11(2), 265–278. https://doi.org/10.1007/s12571-019-00912-3
2. Berliner, A., Hilzinger, J. M., Abel, A. J., McNulty, M., Makrygiorgos, G., Averesch, N. J., Sen Gupta, S., Benvenuti, A., Caddell, D., Cestellos-Blanco, S., Doloman, A., Friedline, S., Ho, D., Gu, W., Hill, A., Kusuma, P., Lipsky, I., Mirkovic, M., Meraz, J., … & Arkin, A. P. (2020). Towards a biomanufactory on Mars. Preprints. https://doi.org/10.20944/preprints202012.0714.v1
3. McNulty, M. J., Xiong, Y., Yates, K., Karuppanan, K., Hilzinger, J. M., Berliner, A. J., Delzio, J., Arkin, A. P., Lane, N. E., Nandi, S., & McDonald, K. A. (2021). Molecular pharming to support human life on the moon, Mars, and beyond. Critical Reviews in Biotechnology. https://doi.org/10.1080/07388551.2021.1888070
4. Brennan, L., & Owende, P. (2010). Biofuels from microalgae—A review of technologies for production, processing, and extractions of biofuels and co-products. Renewable and Sustainable Energy Reviews, 14(2), 557–577. https://doi.org/10.1016/j.rser.2009.10.009
5. Mor, T. S. (2015). Molecular pharming’s foot in the FDA’s door: Protalix’s trailblazing story. Biotechnology Letters, 37(11), 2147–2150. https://doi.org/10.1007/s10529-015-1908-z
6. Everroad, C. (2018). Experimental evolution of Bacillus subtilis populations in space: Mutation, selection and population dynamics [Experiment report]. Space Station Research Explorer, NASA. https://www.nasa.gov/mission_pages/station/research/experiments/explorer/Investigation.html?#id=7660
7. NASA Spinoff. NASA. https://spinoff.nasa.gov/
8. Nangle, S. N., Wolfson, M. Y., Hartsough, L., Ma, N. J., Mason, C. E., Merighi, M., Nathan, V., Silver, P. A., Simon, M., Swett, J., Thompson, D. B., & Ziesack, M. (2020). The case for biotech on Mars. Nature Biotechnology, 38(4), 401–407. https://doi.org/10.1038/s41587-020-0485-4

IMAGE REFERENCES
1. Banner: NASA/JPL-Caltech. (2013). Valles Marineris: The Grand Canyon of Mars [Photographic mosaic]. NASA Solar System Exploration. https://solarsystem.nasa.gov/resources/683/valles-marineris-the-grand-canyon-of-mars/
2. Figure 1: Figure made by author.
3. Figure 2: Hengeveld, E. (1970). Fred Haise during suit-up [Photograph]. Apollo 13 Image Library, NASA. www.hq.nasa.gov/alsj/a13/images13.html
A Deep Dive Into Modeling and Mechanisms
Interview with Dr. John Moult
BY BRYAN HSU, TIMOTHY JANG, AND ANANYA KRISHNAPURA
John Moult, PhD, is a professor in the Department of Cell Biology and Molecular Genetics at the University of Maryland. As the principal investigator of the Moult Group, based at the Institute for Bioscience and Biotechnology Research, Dr. Moult focuses on the computational modeling of biological systems. Dr. Moult is also the co-founder and president of the Critical Assessment of Methods of Protein Structure Prediction (CASP) challenge, a long-running protein modeling competition that aims to advance methods of predicting protein structure from sequence. In this interview, we discuss the growth and development of the field of protein modeling and prediction, as well as Dr. Moult’s work towards creating a framework for modeling biological mechanisms.
BSJ: What initially fueled your interests in protein modeling and the field of computational biology as a whole?
JM: In some sense, it was somewhat of an accident. The initial experimental work on protein folding was done by a man named Christian Anfinsen in the early 1960s. I became a graduate student in 1965. At one point, my supervisor asked me, “Why don’t you go and solve this protein folding problem in your spare time?” I was not able to do anything about it at that time, but I did get intrigued by the problem, and gradually this interest had more and more of an impact on the research I did later.

BSJ: What are some of the challenges associated with determining the 3-D structure of proteins?
JM: From a computational point of view, determining the 3-D structures of proteins comes with three main difficulties. First is the size of the search space, which describes the set of possible orientations of a protein. An unfolded polypeptide is very complex, and you can think about the number of ways it can be arranged in 3-D space in many different ways. One way I like to think about it is how DeepMind [subsidiary of Alphabet, Inc., whose research aims to construct AI systems] puts it. They compare the complexity of an unfolded polypeptide chain with that of the game of Go, where you have 19 by 19 positions. There are three states for each position: blank, black, or white. Therefore, there are 3^361 total possibilities. You can think of a polypeptide chain with the same number of amino acids (361) as having roughly the same number of possible conformations if you approximate that there are only three conformations per amino acid; however, even this is an underestimate. The second difficulty comes from the folding of the protein. As a protein begins to fold and become compact, an increasing number of limitations on its movement arise since polypeptide chain bits cannot move through each other. The third difficulty comes from the fact that the free energy difference between the unfolded and folded states is small compared to the individual interactions between atomic groups within a protein. These three difficulties combine to make it computationally challenging to determine protein 3-D structure.
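For a sense of scale, the arithmetic behind that comparison is easy to reproduce (the ~10^80 figure for atoms in the observable universe is the standard rough estimate, added here for context):

```python
# Reproducing the Go comparison: 361 positions with three states each give
# 3**361 possibilities, the same count as a 361-amino-acid chain under the
# crude three-conformations-per-residue approximation.
n = 3 ** 361
print(len(str(n)))   # 173 -> roughly 1.7 * 10**172 possibilities
print(n > 10 ** 80)  # True: far more than the ~10**80 atoms estimated
                     # to be in the observable universe
```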
BSJ: You co-founded the Critical Assessment of Methods of Protein Structure Prediction (CASP) challenge in 1994. At the time, what was your main intention in creating this challenge?
JM: Our purpose was to try to accelerate progress in solving this one problem: computing 3-D structure from sequence. The issue can be summarized as follows: when doing work in a virtual world, one can create a digital twin of a protein, but in doing so, this virtual world gives the individual too much leeway as to how they can arrange the protein. You end up losing the normal rigor of experimental science. The idea of CASP was to come up with a simple way of putting the rigor back into the process. There has been tremendous progress toward this goal since we started the challenge. We can now employ several techniques in protein modeling. For example, modeling using homology to other proteins has had a lot of steady progress and has become a very useful technique. The more fundamental problem of, “Can you calculate the structure without using much homology information?” has proven much tougher. We have seen several incremental improvements over the years, but it is in the past six years that certain methods have really taken off in order to address this problem.

Figure 1: The left panel illustrates the crystal structure of the 354-residue domain ESKIMO 1 (6CCI) compared to its most accurate CASP13 prediction model on the right. Colors in the image represent the accuracy of the model, showing high accuracy in the core with green and blue and lower accuracy in the edge loop regions with orange and red. Image copyright © 2019 by Wiley Periodicals, Inc.; reprinted under fair use.
BSJ: What is deep learning, and how has it altered the way we predict biological structures?

JM: Deep learning has emerged from the application of neural networks or other machine learning methods to predict experimental outcomes from data. While practical applications of modeling protein structure using machine learning began in the 1990s, these methods were highly restricted because of technical difficulties with the algorithms. Around 2010, there were a series of breakthroughs in addressing this problem. One allowed for the ability to construct much bigger networks. The word “deep” in deep learning actually refers to the number of layers you have in a network. Whereas having more than three layers was previously considered “deep,” there are now networks with hundreds of layers. With deep learning, rather than having to pre-process the data, you can give the network raw data and it will sort out what is important.
BSJ: Recently, there has been a lot of excitement about one CASP14 project in particular that utilizes deep learning: AlphaFold 2. How does it work differently from prior approaches?
JM: There are some significant differences from other approaches. Earlier, I mentioned that there have been several major improvements in the field of protein modeling over the past six years. The first thing that happened was that traditional statistical methods became successful at predicting which amino acids are in contact in the protein’s three-dimensional structure. These predictions now provide some restraints on the possible 3-D structures you can predict for the protein. Then, about four or five years ago, people recognized that you could represent the set of contacts between the amino acids in a folded protein as an image. To do this, you make an amino-acid-sequence-by-amino-acid-sequence array with n² pixels, where n is the number of amino acids in the sequence. You fill in a pixel if there is contact between the amino acids. Otherwise, pixels remain blank. The result is a two-dimensional image. This is where deep learning comes in, and it has been very successful with image recognition. In a previous CASP round about two years ago (CASP13), a number of groups began applying these ideas—treating the contact maps as images and training convolutional neural networks to successfully predict the folds of most proteins. This was a very exciting advancement since the general topology of most proteins could be correctly modeled. However, atomic details were still elusive. In that CASP, DeepMind was the most successful, which was very impressive since they came in from outside the field. They built on the ideas that others had already developed within the community. In the following two years, they realized that they were stuck; with the way they were approaching the problem, they were not going to get to atomic-level accuracy in their models. So, they abandoned most of the previous technology and, as they say, explored at least a dozen other methods through the prototype stage. As a result, a few critical algorithmic changes were introduced. The first was getting rid of the convolutional framework, which is not ideally suited to these types of images. Instead, they decided to use the currently emerging technique of attention learning. The second change they made was to the final stage of the network. Rather than outputting a set of predicted contacts, the network’s final stage now produces three-dimensional coordinates through the use of newly emerging technology. They also built some protein properties directly into the network structure. These changes resulted in advancing us from getting the fold right two years ago to now achieving atomic accuracy.
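The contact-map “image” described above is straightforward to construct once per-residue coordinates are available. Below is a minimal illustrative sketch; the use of one alpha-carbon position per residue and an 8-angstrom cutoff are common conventions we have assumed, not details specified in the interview:

```python
import numpy as np

def contact_map(coords, cutoff=8.0):
    """Binary n-by-n 'image' of residue contacts.

    coords: (n, 3) array with one 3-D position per amino acid
    (e.g., its alpha-carbon). Pixel (i, j) is filled in (1) when
    residues i and j lie within `cutoff` angstroms of each other.
    """
    diff = coords[:, None, :] - coords[None, :, :]  # (n, n, 3) pairwise offsets
    dist = np.sqrt((diff ** 2).sum(axis=-1))        # (n, n) pairwise distances
    return (dist < cutoff).astype(np.uint8)         # 1 = contact, 0 = blank

# Example: a random 361-residue "protein" gives a 361 x 361 pixel map.
coords = np.random.rand(361, 3) * 50.0
print(contact_map(coords).shape)  # (361, 361)
```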
“These changes resulted in advancing us from getting the fold right two years ago to now achieving atomic accuracy.”

BSJ: What do you aim to address as the future targets of CASP_Commons and CASP15?

JM: CASP_Commons has been an attempt to, rather than test how well methods work against an experiment, actually use the CASP community to address issues of significance. It has been quite successful recently in producing models of some of the most difficult-to-get SARS-CoV-2 structures. Now that we have experimental data, we can confirm that some of these predictions have turned out to be very good. Of course, we are going to try to keep pursuing this line of research, but the main goal of CASP is to advance methods for protein modeling. Our real next challenge is predicting the structures of protein complexes. The folding problem was defined a long time ago when we thought about single proteins, but we now know that biology is really all about protein-protein interactions, and we want to be able to predict these. There are methods which already display some progress on this problem, but they do not quite nail down a solution. The expectation going forward is that the same sort of deep learning I mentioned previously, along with some tweaks, will be successful at solving this problem. Seeing whether this approach will be successful or not is the real source of excitement for the next CASP.
BSJ: Much of your research is currently focused on the application of computational methods to model not just proteins, but biological systems. What drew you to computational biology?
JM: I got a taste for doing things computationally with my work on the protein structure problem, and from there, I began to see that the broader field of computational biology would be central to the future of biology. I think it is going to be like theory in physics in that it is going to drive the experimental process. In terms of my specific pathway into the field, my lab and I were initially interested in the question of how genetic variants affect disease.
BSJ: You have helped create MecCog, a framework for representing biological mechanisms, especially those connecting genetic variants with disease outcomes. What prompted MecCog’s creation?
JM: The reasons are similar to those that led to CASP’s creation. The institutions and the procedures individuals have created in order to do science are really amazing, but they are not perfect; they have a sort of inertia. As the science we do changes, the way that we go about it does not change fast enough. One field of study this particularly applies to is the study of complicated biological systems. Individuals are not able to measure exactly what they want to measure and do not have a way to efficiently organize existing data. For example, my lab and I are interested in Alzheimer’s disease, where the focus is the human brain, something as inaccessible as you can get in terms of measurement. Of course, you can do autopsies, scans, and so on, but you cannot really make molecular measurements in living human brains. Instead, you have to make measurements in mice or in cells and extrapolate from those measurements the mechanism behind the disease. In principle, that is not difficult, but in practice, it gets very, very messy.
Figure 2: Schematic of a previous generation of DeepMind’s protein structure predictive system, AlphaFold. The structure-prediction neural network is shown in green. Specifics for the most recent version of the network have not yet been released publicly. Image copyright © 2020 by the authors; reprinted under fair use.
Figure 3: General layout of the MecCog disease mechanism schema. The schema shows decreasing levels of certainty in evidence with the colors green, yellow, and red, respectively. Black circles represent unknown mechanisms, and blue octagons indicate possible sites for therapeutic intervention. Mechanism modules are abbreviated as MM and substate perturbations as SSP. Image licensed under CC BY 4.0.

Now, in Alzheimer’s, there is one gene variant called APOE4 that predisposes you to developing the disease. If you have two copies of this variant, you are around 30 times as likely to develop Alzheimer’s compared to if you had the normal variant. Obviously, there has been a lot of interest in this one variant, and there are around 10,000 published papers on the protein APOE. However, if you wanted to treat patients with this disease variant, you would not know whether it would benefit the patient to have more or less activity of this protein. We have 10,000 papers on the protein, and yet we still cannot answer the most basic qualitative question. Why is that? It is because of the remoteness of the experiments and the inability to organize the information you do have. MecCog is an attempt to come up with a framework that does exactly that; it solves this problem of how we can systematically think about mechanisms in a way that helps us sort out what we know, what we do not know, and what experiments to do.
BSJ: In the language of MecCog, what are “entities” and “activities,” and what role do they play within a mechanism?
“The institutions and the procedures individuals have created in order to do science are really amazing, but they are not perfect; they have a sort of inertia.”
JM: We can think of mechanisms, particularly disease mechanisms related to genetic variation, as a series of steps. You start with the DNA, which is an “entity,” and you perturb it; this could be a base change in DNA. In the language of MecCog, this change is called a “substate perturbation,” where the state is the state of the DNA. Then, a mechanism module or “activity” links this change at the DNA level to the protein level. For this example, transcription and translation would link this base change in DNA to changes in the amino acids of a protein. Essentially, an entity that is perturbed, in this case the DNA, is linked by some activity to a change in another entity, such as a protein. We can thus think of a mechanism as a string of perturbed entities linked by a string of perturbed or normal activities. Of course, causal networks are fairly well established, but the difference with MecCog is that you can label the edges in a causal network with these mechanism modules or activities.
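In data-structure terms, the scheme Dr. Moult describes is a labeled causal graph: nodes are substate perturbations of entities, and edges carry the mechanism modules (activities) that link them. The sketch below is our own minimal illustration; the class and field names are invented and are not MecCog’s actual schema:

```python
from dataclasses import dataclass

# Minimal sketch of the entity/activity scheme: perturbed "entities"
# chained together by "activities" (mechanism modules).

@dataclass
class SubstatePerturbation:
    entity: str   # the thing that is perturbed, e.g., "DNA" or "APOE protein"
    change: str   # how its state is altered, e.g., "base change"

@dataclass
class MechanismModule:
    activity: str                 # the linking process, e.g., "translation"
    source: SubstatePerturbation  # perturbed entity feeding this step
    target: SubstatePerturbation  # perturbed entity this step produces

# One step of a disease mechanism: a DNA base change, linked by
# transcription and translation to an altered protein.
dna = SubstatePerturbation("DNA", "single-base change")
protein = SubstatePerturbation("protein", "altered amino acid sequence")
step = MechanismModule("transcription and translation", dna, protein)
print(f"{step.source.entity} --[{step.activity}]--> {step.target.entity}")
```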
BSJ: Could you give us an example of how one might analyze the interaction between genetic variation and disease phenotype through MecCog’s framework?
JM: My previous example on Alzheimer’s and the APOE4 variant really illustrates the benefits of using MecCog. Normally, if you think about how a base change affects disease phenotype, there might be one or at most five different mechanisms linking the two. However, by my count, 22 different mechanisms have been proposed and supported by data for how APOE’s base change affects the risk of Alzheimer’s. Let us take one of these mechanisms as an example. Within this mechanism, there are two different inputs: the APOE4 base change and environmental perturbation. This environmental perturbation refers to stress on neurons, which happens under injury conditions or with age. Stressed neurons produce more of this APOE4 protein, which is less thermodynamically stable than the normal protein. The less-stable protein, which is an entity, is more susceptible to proteolytic cleavage into two pieces. In turn, that allows the mechanism module of cleavage to go faster than it does with the normal protein. Thus, the next altered entity or substate perturbation in this mechanism is this state of having more cleaved protein than you would get with a normal protein. The cleaved protein goes on to bind to a protein called tau, one of the major players in Alzheimer’s. Tau is normally associated with microtubules, but its interaction with the cleaved APOE protein appears to detach it from microtubules. Ultimately, the increased aggregation of tau is one of the drivers of neural deterioration.
BSJ: As of now, data for mechanism schemas must be manually inputted by researchers. How do you see this process changing over time?
JM: When you do something like read a paper and say, “Okay, the relevant event for this mechanism step is the cleavage of this protein,” your mind has extracted information from the paper and formalized it into a MecCog-type arrangement. Currently, there are no methods for automatically replicating that. Of course, there is a huge amount of AI work going on in interpreting language. However, if you actually look at what has been achieved in the biological sciences, it is pretty disappointing. Current methods cannot even succeed in reliably identifying which proteins are referred to in a given text. So we are a long way from automating this process and directly mining information from papers, but it is really intriguing to think about how you might do this. What is happening now is this application of things called transformers to language, where you essentially transform the relationship between the words in order to make them nearer to the concepts. I am not sure, but maybe something like that is going to have an impact here. One thing this all emphasizes is the strange way in which we use language. How we use language in science is a very flexible and powerful thing, but its very flexibility makes it hard to deal with computationally.
“How we use language in science is a very flexible and powerful thing, but its very flexibility makes it hard to deal with computationally.”

BSJ: Finally, what kinds of developments do you foresee MecCog leading to within the biological sciences, such as the medical field?
JM: With 10,000 papers, I believe there is no way individuals or research groups can make sense of all the information out there or accurately model 22 schemas for each proposed mechanism. MecCog is designed as a crowdsourcing tool. For key diseases like Alzheimer’s, we are hoping to build a repertoire of all of these mechanisms to see what we know and what we do not know. Once we have these things laid out, we can then think of potential therapeutic strategies. What serendipitously came out of this was that if you have multiple inputs into a disease, you can draw an intersecting graph of all the different schemas. This results in a sparse neural network connecting genetic inputs to a disease output. If you have enough data, you can then train this model to predict disease state from an input. Additionally, if you have the topology of the model right, you will generate the right functions at each node in that sparse neural network. For example, we constructed a network to model part of Crohn’s disease. Six DNA variants in the input for this network contribute to an unfolded protein response, represented by one of the internal nodes. The network correctly learns the complex relationship between these inputs and elicits a response depicting a sigmoidal set-in. We did not tell the network that this is the sort of set-in it should give, but it learnt that this is what happens at that node. And now, with this network, you can ask, “If I were to drug the patient at one of these nodes, how would the network respond? Would administering the drug decrease the probability of disease?” Because we have so much data, in the future, I envision that we could have something loosely analogous to deep neural networks representing biological systems. To me, that is the really exciting way forward.
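One way to picture the idea is as an ordinary neural network whose weight matrix is masked by the schema graph, so that only biologically proposed connections can carry signal. The sketch below is a toy of that masking trick; the variant names, node names, and numbers are invented, and it is not the actual Crohn’s disease model described above:

```python
import numpy as np

# Toy sketch: mechanism schemas define which connections may exist, and a
# mask zeroes out every other weight, giving a sparse network from genetic
# inputs to internal mechanism nodes.

rng = np.random.default_rng(0)
variants = ["V1", "V2", "V3", "V4", "V5", "V6"]         # genetic inputs
nodes = ["unfolded_protein_response", "other_pathway"]  # internal nodes

# Schema-derived wiring: which variant feeds which mechanism node.
edges = {("V1", "unfolded_protein_response"), ("V2", "unfolded_protein_response"),
         ("V3", "unfolded_protein_response"), ("V6", "unfolded_protein_response"),
         ("V4", "other_pathway"), ("V5", "other_pathway")}

mask = np.array([[(v, n) in edges for n in nodes] for v in variants], float)
weights = rng.normal(size=mask.shape) * mask  # absent edges stay exactly zero

def disease_score(genotype):
    """0/1 vector over variants -> sigmoid node activations -> crude readout."""
    hidden = 1.0 / (1.0 + np.exp(-(genotype @ weights)))
    return hidden.mean()  # stand-in for "probability of disease"

print(disease_score(np.array([1, 1, 0, 0, 1, 0], dtype=float)))
```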
This interview, which consists of one conversation, has been edited for brevity and clarity.

IMAGE REFERENCES
1. Headshot: [Photograph of John Moult]. Institute for Bioscience & Biotechnology Research. https://www.ibbr.umd.edu/taxonomy/term/445
2. Figure 1: Kryshtafovych, A., Schwede, T., Topf, M., Fidelis, K., & Moult, J. (2019). Critical assessment of methods of protein structure prediction (CASP)—Round XIII. Proteins, 87(12), 1011–1020. https://doi.org/10.1002/prot.25823
3. Figure 2: Senior, A. W., Evans, R., Jumper, J., Kirkpatrick, J., Sifre, L., Green, T., Qin, C., Žídek, A., Nelson, A., Bridgland, A., Penedones, H., Petersen, S., Simonyan, K., Crossan, S., Kohli, P., Jones, D. T., Silver, D., Kavukcuoglu, K., & Hassabis, D. (2020). Improved protein structure prediction using potentials from deep learning. Nature, 577(7792), 706–710. https://doi.org/10.1038/s41586-019-1923-7
4. Figure 3: Darden, L., Kundu, K., Pal, L. R., & Moult, J. (2018). Harnessing formal concepts of biological mechanism to analyze human disease. PLOS Computational Biology, 14(12), Article e1006540. https://doi.org/10.1371/journal.pcbi.1006540
CROSSING THE BLOOD BRAIN BARRIER TO TREAT GLIOBLASTOMA MULTIFORME
BY ANISHA IYER
Blood is the ceaseless tide that washes over the body’s diverse cellular civilizations, perpetually revitalizing and maintaining their capabilities. Intricate and extensive, the human body’s vasculature provides each organ with the oxygen and nutrients necessary to perform its distinct and necessary function. Chemical therapies and other pharmaceuticals almost always need to travel through the bloodstream to reach their target tissues. When the target tissue is the brain, however, not every drug or compound should be allowed to enter. As one of the most important organs, the brain possesses its own vasculature that is shielded by the brain’s robust protective system: the blood brain barrier (BBB). This barrier is a heavily restrictive border composed of specialized endothelial cells, which are held together by tiny connections called tight junctions (Figure 1).1 The BBB contains unusually small tight junctions, which heavily restrict the movement of molecules into the brain and provide selective permeability of the vasculature of the central nervous system (CNS). Despite its necessity, this protective barrier presents a serious obstacle for drug delivery, particularly for drugs with peptides and proteins. Many newly synthesized pharmaceuticals are unable to cross the BBB, limiting their ability to treat terminal brain diseases, including malignant brain cancers such as glioblastoma multiforme (GBM).2

Figure 1: TEM of BBB neurovascular unit. Neurovascular units are the building blocks of the BBB, with a restrictive diameter of approximately 9 micrometers. This false-colored transmission electron micrograph (TEM) depicts a neurovascular unit in a mouse brain capillary. A red blood cell is visible in a solid, red-orange color at the center of the image—inside the lumen—which is lined with a layer of endothelial cells in grey and surrounded by various other cell types in the other colors in the image. Image licensed under CC BY 4.0.

“Many newly synthesized pharmaceuticals are unable to cross the BBB, limiting their ability to treat terminal brain diseases including malignant brain cancers such as glioblastoma multiforme.”

GLIOBLASTOMA MULTIFORME

Due to its aggressive, treatment-resistant nature, GBM is the most severe and debilitating type of brain cancer and one of the most lethal cancers worldwide, with a median survival time of only 15 months.3,4,5 As both the most common and most deadly primary brain tumor in adults, GBM has been the target of extensive research, which is limited by the disease’s localization in the CNS.4 Beyond its treatment resistance and high incidence, GBM is especially difficult to treat because of the BBB.4 Unlike regular blood vessels, blood vessels of the CNS are specialized to better maintain homeostasis and prevent injury and disease. Endothelial cells, which make up the walls of blood vessels, allow only small, hydrophobic molecules to pass. Tight junctions, meanwhile, closely regulate the movement of molecules, ions, and cells between the blood and the CNS and prevent large compounds like proteins and peptides from passing.1 Although it shields the brain from toxins and harmful compounds, the BBB presents an obstacle for treatments that target the brain because it rejects drugs and chemical therapies that normally help radiation and surgeries treat or excise damaged tissue.3 In GBM specifically, brain tumors contain various cellular populations that respond differently to chemical treatments (Figure 3). Much like with bacterial resistance, the tumors’ heterogeneity can often lead to treatment resistance. As a result, GBM requires multimodal treatment regimens that can reach beyond the BBB.6 To overcome the BBB’s hindrances, scientists are developing alternative drug delivery systems.

Figure 2: PLGA nanoparticles for drug delivery. This false-colored image depicts nanoparticles ranging in size from 100 nm to 500 nm. The nanoparticles are made of poly(lactic-co-glycolic acid) (PLGA) and are used as a drug delivery carrier, stabilizing the drugs and allowing them to be released in a sustained manner once inside the body. Image licensed under CC BY-NC 4.0.

Figure 3: MRI scan; brain cancer (glioblastoma). This magnetic resonance image (MRI) scan visualizes the neuroanatomy of a 33-year-old patient with glioblastoma. Visible in the right hemisphere, the tumor is a large mass, colored black by MRI, which causes focal neurologic signs or seizures. Image licensed under CC BY 4.0.

ALTERNATIVE DRUG DELIVERY SYSTEMS

Alternative drug delivery systems help scientists carry drugs beyond the BBB by preventing these drugs from degrading before they reach their target site. Chemical delivery systems, for example, modify the chemical structure of a drug in order to carry it across the BBB. One such system is the lipid-mediated transport system, which increases a drug’s hydrophobicity to trigger the BBB’s natural permeability to hydrophobic molecules.2 Biological delivery systems, on the other hand, re-engineer pharmaceuticals to increase their ability to cross through endogenous transporters within a layer of the brain’s capillaries. Other delivery systems modify tight junctions to disrupt the blood brain barrier, use antibodies to transport large molecules across the barrier, or take advantage of already present receptor-mediated transport systems to move molecules into the brain.2
"Although it shields the brain from toxins and harmful compounds, the BBB presents an obstacle for treatments that target the brain because it rejects drugs and chemical therapies that normally help radiation and surgeries treat or excise damaged tissue." While effective, these systems sometimes present health risks or are not efficient. Chemically modifying pharmaceuticals can decrease their strength, making them less effective. Re-engineering pharmaceuticals, as in biological delivery systems, requires scientists to devise specific re-engineering strategies for each compound, which does not provide a unilateral solution. Modifying tight junctions in the BBB can lead to uncontrolled permeability, which prevents the barrier from protecting the brain from toxins and pathogens and causes additional complications in patients.⁷ NANOPARTICLES AND FOCUSED ULTRASOUND TO CROSS THE BBB Many new drug delivery systems involve nanoparticles, which act as physical carriers to transport drugs across the BBB safely and
FEATURES
efficiently (Figure 2).⁷ Nanoparticle-based systems include three components: the drug, the stabilizer, and the nanoparticle itself. Because of the system’s multiple components, scientists can find an optimal combination of the three to maintain stability during circulation, carry the drug across the BBB, and prevent the body’s systems from dissolving nanoparticles in enzymes.⁴ Polylactic-co-glycolic acid (PLGA) nanoparticles, for instance, were designed to optimize nanoparticle composition, thereby improving one component of this system. PGLA and other types of polymeric nanoparticles encapsulate drugs, allowing them to cross the BBB through endocytosis (Figure 4). PLGA was a component of a successful system in which scientists effectively crossed the BBB with a chemotherapy drug called paclitaxel. Scientists loaded paxilatel onto PLGA nanoparticles and used an additional technique, focused ultrasound (FUS), to make the BBB more permeable to the nanoparticles. FUS is a particularly important technique because it can produce localized and selective BBB permeability while simultaneously being non-invasive. While still effective, FUS is applied from the outside of the skull, or transcranially, and is therefore non-invasive to the patient. Using ultrasound waves, FUS makes microbubbles inside neurovascular units oscillate. This oscillation produces mechanical forces against the tight junctions of endothelial cells that line the BBB. When the microbubbles collapse, they create channels between endothelial cells that act as microjets to draw cargo through in a fast and favorable way. Together, the two techniques disrupt the tight junctions of CNS endothelial cells and provided greater antitumor efficacy of the paclitaxel.⁸ Moreover, the forces produced by FUS against tight junctions Figure 4: Nano-needles shuttling the blood brain barrier, TEM. This false-colored transmission electron micrograph shows nano-needles (yellow tubular structures) crossing the blood-brain barrier (orange cell layer) to deliver the therapeutic cargo they contain from the blood (left; dark red space) to the brain (right; black). The nano-needles depicted here are formed from carbon nanotubes (CNTs), which are tubular nanostructures made of rolled-up layers of graphene. Much like nanoparticles, these CNTs are a current research topic due to their ability to act as nanocarriers to deliver drugs or genes to brain tumors blocked by the BBB. Image licensed under CC BY 4.0.
"While still effective, FUS is applied from the outside of the skull, or transcranially, and is therefore noninvasive to the patient." are reversible, with BBB permeation only lasting 4–6 hours. Attempts to permanently disturb the BBB are particularly dangerous because they can cause ion dysregulation, cellular communication problems, failure to maintain homeostasis, and the acceptance of immune cells and other molecules into the central nervous system—all of which can lead to neuronal dysfunction and degeneration. For patients with neurological diseases, traumatic brain injuries, or neurodegenerative disorders, interfering with the BBB’s properties can often escalate the pathology and progression of the disease.⁹ Through its reversibility, FUS avoids the serious dangers associated with permanent BBB disturbance. As a result, drug delivery using nanoparticles and FUS is especially promising because nanoparticle-based systems can be optimized and FUS is non-invasive and impermanent. CONCLUSION Cancerous or benign, brain tumors present the distinct difficulty of requiring intervention beyond the brain’s protective barrier. With GBM in particular, brain tumors require multimodal treatment regimens due to their cellular heterogeneity and resulting treatmentresistance. Thus, scientists have innovated several alternative drug delivery systems to enable drugs to cross the BBB. While many systems compromise treatment efficacy or affect the integrity of the BBB as a protective barrier, nanoparticles provide an effective alternative that preserves treatment efficacy, especially when combined with FUS, which only disrupts the BBB temporarily to prevent permanent damage to this crucial protective structure. As scientists innovate novel techniques beyond enhanced drug delivery to conquer GBM, immunotherapies like vaccines to overcome GBM’s heterogeneity and immune checkpoint inhibitors to amplify immune response foreshadow an optimistic future for the field of neuro-oncology. At the same time, alternative drug delivery systems, especially those with nanoparticle-delivery systems that work with FUS, provide a promising means to cross the BBB and work in tandem with these immunotherapies to fight GBM. Acknowledgements: I would like to acknowledge final year medical student Anam Sayed (Indian Institute of Medical Science and Research, India), First Faculty of Medicine Helen June Whitley (Charles University in Prague), and final year medical student Sowmyashree Nagendra (Bangalore Medical College and Research Institute, India) for their remarks about the pathology of GBM, the properties of components of the blood brain barrier, and current methods of GBM treatment, respectively.
REFERENCES
1. Daneman, R., & Prat, A. (2015). The blood–brain barrier. Cold Spring Harbor Perspectives in Biology, 7(1), Article a020412. https://doi.org/10.1101/cshperspect.a020412
2. Patel, M. M., Goyal, B. R., Bhadada, S. V., Bhatt, J. S., & Amin, A. F. (2009). Getting into the brain. CNS Drugs, 23(1), 35–58. https://doi.org/10.2165/0023210-200923010-00003
3. Sarkiss, C. A., Rasouli, J. J., & Hadjipanayis, C. G. (2016). Intraoperative imaging of glioblastoma. In S. Brem & K. G. Abdullah (Eds.), Glioblastoma (1st ed., pp. 187–195). Elsevier, Inc. https://doi.org/10.1016/B978-0-323-47660-7.00014-8
4. Harder, B. G., Blomquist, M. R., Wang, J., Kim, A. J., Woodworth, G. F., Winkles, J. A., Loftus, J. C., & Tran, N. L. (2018). Developments in blood-brain barrier penetrance and drug repurposing for improved treatment of glioblastoma. Frontiers in Oncology, 8, Article 462. https://doi.org/10.3389/fonc.2018.00462
5. Chen, Y., Jin, Y., & Wu, N. (2021). Role of tumor-derived extracellular vesicles in glioblastoma. Cells, 10(3), Article 512. https://doi.org/10.3390/cells10030512
6. Mende, A. L., Schulte, J. D., Okada, H., & Clarke, J. L. (2021). Current advances in immunotherapy for glioblastoma. Current Oncology Reports, 23(2), Article 21. https://doi.org/10.1007/s11912-020-01007-5
7. Kim, H. R., Gil, S., Andrieux, K., Nicolas, V., Appel, M., Chacun, H., Desmaële, D., Taran, F., Georgin, D., & Couvreur, P. (2007). Low-density lipoprotein receptor-mediated endocytosis of PEGylated nanoparticles in rat brain endothelial cells. Cellular and Molecular Life Sciences, 64(3), 356–364. https://doi.org/10.1007/s00018-007-6390-x
8. Li, Y., Wu, M., Zhang, N., Tang, C., Jiang, P., Liu, X., Yan, F., & Zheng, H. (2018). Mechanisms of enhanced antiglioma efficacy of polysorbate 80-modified paclitaxel-loaded PLGA nanoparticles by focused ultrasound. Journal of Cellular and Molecular Medicine, 22(9), 4171–4182. https://doi.org/10.1111/jcmm.13695
9. Zlokovic, B. V. (2008). The blood-brain barrier in health and chronic neurodegenerative disorders. Neuron, 57(2), 178–201. https://doi.org/10.1016/j.neuron.2008.01.003
IMAGE REFERENCES
1. Banner: Jensflorian. Glioblastoma mitotic activity [Microscope image]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Glioblastoma_mitotic_activity.jpg
2. Figure 1: Al-Jamal, K. T. Neurovascular unit, blood brain barrier, TEM [Electron micrograph]. Wellcome Collection. https://wellcomecollection.org/works/ya7xpe5z
3. Figure 2: Cavanagh, A. Nanoparticles for drug delivery [Electron micrograph]. Wellcome Collection. https://wellcomecollection.org/works/yghmsfke
4. Figure 3: MRI scan; brain cancer (glioblastoma) [MRI scan]. Wellcome Collection. https://wellcomecollection.org/works/m7rz7hqb
5. Figure 4: Al-Jamal, K. T. Nano-needles shuttling the blood brain barrier, TEM [Electron micrograph]. Wellcome Collection. https://wellcomecollection.org/works/nt9fw6r4
Developing A New Choice: The Unraveling of Reproductive Mysteries Interview with Dr. Polina Lishko
BY LEXIE EWER, ESTHER LIM, ALLISUN WILTSHIRE, AND ANANYA KRISHNAPURA
Polina Lishko, PhD, is an Associate Professor of Cell and Developmental Biology in the Department of Molecular & Cell Biology at the University of California, Berkeley. Dr. Lishko is the principal investigator of the Lishko Lab, which studies the steroid regulation of various tissues, from the brain to the reproductive organs, to improve reproductive health and contribute to treatments targeting neurodegenerative disease. She was recently awarded the 2020 MacArthur “Genius Grant” for her work on mammalian fertilization, contraception, and infertility treatment. Dr. Lishko is also an adjunct associate professor at the Buck Institute’s Center for Reproductive Longevity and Equality, where she performs research on ovarian aging with the goal of delaying reproductive aging and its associated age-related dysfunctions in women of child-bearing age. In this interview, we discuss Dr. Lishko’s research on factors that can influence sperm fertility as well as her recent work on the effect of steroid hormones on neurodegenerative disease.
BSJ: What sparked your interest in women’s reproductive health and contraception?
PL: My colleagues and I were mainly focused on male fertility in the first four years of our research, but male fertility and female reproductive health are two sides of the same coin. Our initial focus was to fundamentally understand how sperm cells are able to find an egg. Basically, we wanted to know what makes sperm cells fertile. We found several key molecules that sperm cells express that are required for their motility and ability to find an egg. Naturally, we began to wonder, “Can we utilize those targets to make sperm cells less fertile?” This led us to the realization that our findings could help in developing a potential contraceptive tool. As my team and I began reading about contraception, it was shocking for us to realize how unsatisfactory female contraception currently is. Essentially, only a few options are available, and what we have on the market is mostly hormonal contraception. There are a few exceptions, like copper intrauterine devices (IUDs), but most contraception is hormonal and was introduced in the 1950s. They, of course, do what they are supposed to do, but they come with many side effects. We noticed that some of the contraceptive options used
in the past were based on traditional folk medicine, so we began looking into what components of these herbs and natural products could affect sperm cells. We identified certain compounds and had a scientific eureka moment when we realized that particular sperm cell components could be targeted by key molecules that we were working with. And that is roughly a short story on how we started. We did not initially have the goal of developing a contraceptive. It all stemmed from scientific research and basic curiosity, which is actually a common trend seen in many research directions.
BSJ: What are the disadvantages or risks associated with hormonal contraception, and how does your research on non-hormonal birth control aim to address these issues?
PL: If the pills from the 1950s and 60s had to go through FDA approval now, I speculate that they would have a very hard time getting approved, since hormonal contraceptives come with a plethora of side effects. When we talk about side effects, we are talking about not only acne, but also blood clots, depression, suicidal thoughts, and weight gain. Some contraceptives come with a black box warning [the most stringent FDA warning indicating serious
safety risks], particularly because women who start taking these contraceptives early have an increased rate of osteoporosis. Several common contraceptives can also increase the risk of glaucoma [a condition that damages the optic nerve]. Additionally, women who smoke are highly advised against taking certain hormonal contraceptives because the combination of smoke, tobacco, and hormonal contraceptives increases the risk of cardiovascular issues, including blood clot formation. And this is just the beginning of the list. Of course, some researchers claim that the invention of the pill is one of the 50 greatest inventions of our civilization since it was a big game-changer. However, while it improved the economy and helped women lead better lives, it also came with some drawbacks. I think it is time to introduce better alternatives.
BSJ: In your opinion, how does society’s current attitude towards male contraception differ from what you have seen in the past?
PL: It is getting better. I would say that many men are now very much in support of male contraceptives. They understand how important male contraception is, and they want to help their partners and take their share of the contraceptive burden, so to speak. There are now also several nonprofit organizations, such as the Male Contraceptive Initiative (MCI, https://www.malecontraceptive.org), that spearhead this development. They have various educational tools and receive support, though still not as much support as one would like them to have. In the past, contraception was perceived as mainly a female problem to solve because men do not get pregnant. However, times have changed, and a lot of men are now aware that they have to take their share of the responsibility.
BSJ: In one of your papers, you study TRPV4, an ion channel capable of triggering sperm hyperactivation. What is sperm hyperactivated motility, and how is it triggered?
PL: Sperm cells are like amazing nano-machines that have different motility patterns and are equipped with multiple tools in order to reach their goal. They have a simplified motility form, which looks like a snake wiggling around, that helps them get from point A to point B. They can also rotate along their axis in order to penetrate through a viscous environment. However, when they encounter particularly hard objects like the zona pellucida [the thick protective shield of the egg], they need a high-power motility mode called hyperactivation. For many years, scientists have known that the key to hyperactivated motility is calcium. Calcium ions enter the sperm tail, and this influx of ions triggers an internal mechanism that allows sperm cells to change their motility pattern to a more asymmetric bending pattern of the tail. In a viscous environment, this translates to a powerful forward push.
“It all stemmed from scientific research and basic curiosity.”
Figure 1: Model of sperm hyperactivation pathway. At 37 °C, TRPV4 embedded in the sperm cell membrane is activated, allowing for an influx of sodium (Na+) ions that induces membrane depolarization. This depolarization activates CatSper and Hv1, the latter of which expels protons out of the sperm. The resulting intracellular alkalization further activates CatSper, leading to a calcium ion (Ca2+) influx that triggers sperm hyperactivation. Image licensed under CC BY 4.0.
BSJ: What is the role of the calcium ion (Ca2+) channel CatSper in this pathway?
PL: Ion channels are molecules embedded in cells’ plasma membranes that essentially serve as pores through which ions can enter or leave the cell. Some ion channels in sperm cells are unique to sperm cells alone, while some are also expressed in other cell types. CatSper, the calcium channel of sperm, allows calcium to
enter the sperm cell and trigger hyperactivation. It is a channel that is only expressed in sperm cells and nowhere else in the body. This makes the channel very attractive as a potential contraceptive target, because a drug acting against this target would not affect anything else in the body. This makes the potential drug not only effective, but also safe.
BSJ: What are the main mechanisms that interact to affect CatSper function?
PL: In human sperm, the CatSper channel requires a few things in order to become activated. Firstly, it requires progesterone, a steroid which is released after ovulation that eggs use to tell the sperm, “Hey, I’m here.” It also requires that a certain pH be achieved within the sperm cell; in other words, there must be a particular concentration of protons inside the sperm tail, and
a lower concentration is preferred. CatSper activation also relies on temperature and voltage, which is a measure of how positively the inside of the sperm cell is charged. For human CatSper to get fully activated, the voltage inside the sperm cell needs to be about +30 millivolts. Progesterone will be delivered to the sperm by the egg, and the pH inside the sperm cell is regulated by transporters and other ion channels. But what regulates the voltage? What regulates the depolarization—the positive charge—on the sperm plasma membrane? This is where the temperature-sensitive channel, TRPV4, comes forward. You need the temperature of the sperm cell to be between 34–37 °C in order for this channel to be active in conducting sodium and calcium. We believe this temperature is the initial trigger activated when the sperm cell encounters a warmer area. TRPV4 gets activated and brings positive ions inside the tail to create an initial depolarization, and this creates a condition under which CatSper can get fully activated, assuming progesterone is also nearby. In essence, the synchronous action of several key elements is required to engage hyperactivation.
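Dr. Lishko’s description amounts to a logical AND gate: each cue is necessary, and only their combination is sufficient. The toy Python sketch below makes that gating explicit. It is purely illustrative, not a biophysical model: the voltage and temperature figures are the ones quoted above, while the pH threshold is an invented placeholder.

```python
# Toy "AND gate" model of CatSper activation, per the interview above.
# Not a biophysical model: PH_THRESHOLD is an invented placeholder; the
# +30 mV and 34-37 C figures are the illustrative values quoted by Dr. Lishko.

PH_THRESHOLD = 7.0  # assumption: "a lower concentration [of protons] is preferred"

def catsper_fully_activated(progesterone_present: bool,
                            intracellular_ph: float,
                            membrane_potential_mv: float,
                            temperature_c: float) -> bool:
    """True only when every cue for full CatSper activation co-occurs."""
    alkaline_enough = intracellular_ph >= PH_THRESHOLD
    depolarized = membrane_potential_mv >= 30.0      # ~ +30 mV quoted for human CatSper
    in_trpv4_window = 34.0 <= temperature_c <= 37.0  # TRPV4's active temperature range
    return progesterone_present and alkaline_enough and depolarized and in_trpv4_window

print(catsper_fully_activated(True, 7.4, 32.0, 36.5))  # True: all cues present
print(catsper_fully_activated(True, 7.4, 32.0, 30.0))  # False: too cold for TRPV4
```

Removing any single input flips the output to False, which is the “synchronous action of several key elements” in miniature.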
BSJ: What is the significance of your discovery of TRPV4 as a temperature-sensitive depolarizing channel in sperm?
PL: TRPV4 is important from a basic scientific perspective because if you want to know how a sperm cell works, you need to know all of its critical elements. From a contraceptive standpoint, TRPV4 might not be an ideal target because it is not only present in sperm, but also present in other cells such as choroid plexus cells in the brain, which produce cerebrospinal fluid. TRPV4 is highly expressed in these cells and regulates normal brain function. For example, mutations in TRPV4 are associated with hydrocephaly. TRPV4 is also found in osteoclasts, which maintain bone density, so it is not an ideal target unless a potential contraceptive targeting TRPV4 could be applied locally in the female reproductive tract without any way of getting inside the rest of the body. In that case, you would not need to worry about side effects because if it stays in the female reproductive tract, it would only affect sperm cells. However, if the drug has to go through the bloodstream, I would not target TRPV4.
BSJ: Aside from your research regarding the sperm hyperactivation pathway, you have also conducted research into the effects of external factors like endocrine-disrupting chemicals (EDCs) on sperm fertility. What are some examples of EDCs, and how can we be exposed to them in our daily lives?
PL: The fact that bisphenol A (BPA), phthalates, and other plasticizers affect our physiology has been known for a long time, but surprisingly, these compounds are still around us.
Figure 2: Examples of routes of exposure to endocrine-disrupting chemicals (EDCs). Image licensed under CC BY 4.0.
They are included in our plastic products because they are a very cheap and efficient way for manufacturers to make plastic more bendable. Unfortunately, these compounds are not well attached to the products, so they leach into whatever environment they encounter, including water, food, and cosmetic products. These plasticizers are also components of paper receipts and could eventually end up in your digestive system if you do not wash your hands before eating. Why are these compounds called endocrine disruptors? BPA, for example, has the ability to mimic the effects that estrogen has on the body—even though BPA is quite different from estrogen structurally—which can lead to negative side effects, including tumor formation. In the United States, only one manufacturer provides the tubes for dialysis; those tubes are made of Tygon and plastics that contain plasticizers. Women who are undergoing blood dialysis treatment are advised against getting pregnant, and if they do get pregnant during treatment, they are advised to consider an abortion because there could be extreme teratogenic effects on the developing fetus that lead to severe birth defects. Biological males can develop feminizing features due to plasticizers mimicking female sex hormones. Additionally, infertility is a possible side effect of dialysis, but not many comprehensive studies have been done, so it is unclear whether this infertility is because of plasticizers or because of other chemicals that patients are exposed to during the course of treatment.
Figure 3: Images of eggs nine hours after in vitro fertilization (IVF) by control versus DEHP-treated sperm. Figure 3A depicts a successfully fertilized egg with two pronuclei (PN) and one polar body (Pb). Figure 3B shows an unfertilized egg after IVF with DEHP-treated sperm. Note the significantly lower formation of pronuclei in DEHP-treated sperm in Figure 3C. Image licensed under CC BY 4.0.
BSJ: Why did you choose to focus on the effect of acute exposure to phthalates in your study on EDCs?
PL: We knew that long-term exposure to phthalates is negatively associated with male fertility. Phthalates are hydrophobic and tend to accumulate in our fat deposits, and the research has shown that the fat deposits around reproductive organs can accumulate and release toxic molecules as fat levels fluctuate in the body. However, the effect of acute exposure to phthalates on fertility is less clear, and we were interested in understanding what would happen if sperm cells were briefly exposed to these chemicals. In our research, we took mouse sperm cells, exposed them to phthalates, and then introduced them to fresh, unexposed eggs. We found that DEHP is one of the most toxic and widespread phthalates that really messes with sperm fertility. The sperm move more or less the same, but for whatever reason, they cannot fertilize the egg. Further investigation revealed that phthalates do not mess with ion channels as much as one would expect. What they do instead is damage sperm membrane integrity by increasing production of reactive oxygen species (ROS) in sperm cells. The following is
all speculation, but given that prostate cancer is on the rise and certain types of cancer cells are also known to be affected by ROS production, this is something people might want to look at—that endocrine-disrupting chemicals could cause additional harmful effects, which we have not yet researched in depth.
BSJ: In 2018, you co-founded YourChoice Therapeutics with the goal of developing safe and effective non-hormonal birth control. Although you have since left the company, could you tell us a bit about your time there and the experience of developing a marketable contraceptive?
PL: It was a very interesting experience because I was not trained to be an entrepreneur—I was trained to be an academic scientist. If you do not have prior experience as an entrepreneur, you face a steep learning curve when going through the entire process of understanding how startups function and how drug development operates. However, it is entirely possible to create a new company. You do not have to be a field “expert” to become an entrepreneur. Anyone can do it, whether you are an undergraduate student, graduate student, faculty, or individual who is not associated with academia. Of course, you do need to have intellectual property (IP) and a marketable product, something that people would like to buy and use. In our case, novel contraceptives were our product. The research for these contraceptives initially started in our lab. This project was actually spearheaded by two graduate students, one of whom graduated just last year and started her own company. They came upon this interesting compound, the mitochondrial uncoupler niclosamide, which had been approved in 1982 to treat tapeworm infections. The compound has poor bioavailability (so it does not get absorbed into the bloodstream by our gut), but it effectively messes with tapeworms’ mitochondria, the energy-producing organelle. People infected with tapeworms can take up to 1 gram of this drug and nothing will happen to the individuals, but the drug will drain the worms of their energy, leading them to detach from the gut and be expelled.
“The fact that BPA, phthalates, and other plasticizers affect our physiology has been known for a long time, but surprisingly, these compounds are still around us.” INTERVIEWS
We began exploring this mitochondrial uncoupling mechanism under the question, “If we uncouple these sperm mitochondria and drain them of energy, can they still fertilize an egg?” The idea was that the compound niclosamide could be contained in a capsule or something that could be applied locally, and then it could drain the energy of sperm cells similar to how it drains that of tapeworms. Furthermore, it would not be absorbed by the female reproductive tract because of its low bioavailability. Overall, the experience was fun and somewhat rewarding because I was not only developing knowledge or participating in teaching a new generation, but we were actually developing something people could use—a real product that can improve their lives.
BSJ: What do you think needs to be done to make birth control more affordable and accessible?
PL: Contraception or any other medicine must be affordable. It is ridiculous how expensive certain drugs and lifesaving medicines are in our country. The whole system needs to be modified to make sure that these products are available to anyone who wants to use them. The government must step in and make certain firm policies to ensure that people can afford products. The pharmaceutical industry is unlikely to initiate these changes due to the way those drugs are developed. The drug development process, and everything in it, requires a lot of money. Large corporations can afford to fund this process, but startups may have a hard time financially. Even large companies require a lot of resources to put drugs on the market, and that money must come from somewhere. If the government has certain policies regulating pharmaceutical sales, it can lower the prices of products. The pharmaceutical companies also have to be a little bit less greedy in certain areas. We have affordable aspirin right now. The chemical synthesis for compounds that could serve as contraceptives should not be more difficult or expensive, on average, than the synthesis of aspirin. It is all possible.
BSJ: Finally, we understand that you are doing research into the link between sex steroid hormones and neurodegenerative diseases like Alzheimer’s disease. Could you tell us a bit about this project?
PL: Patients with Alzheimer’s disease have an impaired ability to clear cerebrospinal fluid (CSF), which is why the CSF ends up accumulating a lot of “junk”: amyloid proteins, excessive neurotransmitters, misfolded proteins, and so on. These contribute to the development of the disease in the patient. Women who have a late onset of menarche [the first occurrence of menstruation] and/or early onset of menopause are exposed to steroid sex hormones for a shorter-than-average period of time. These women, consequently, tend to develop Alzheimer’s disease at a higher rate, which is significant considering that two-thirds of Alzheimer’s patients are women. There is a direct connection between these hormones and the systems in our brain that participate in CSF clearing, and we want to understand what is going on. What is interesting is that the choroid plexus, the part of the brain that produces CSF, does not express any classic progesterone receptors. Instead, it expresses TRPV4, the receptor that we found in sperm cells. We are currently conducting research into the molecular mechanism behind how steroid hormones participate in CSF clearing and, in turn, how this could lead to the development of age-linked neurodegenerative diseases like Alzheimer’s disease. Since my actual background is in neuroscience, and not in reproductive biology, this new research direction feels somewhat like a homecoming for me.
“The chemical synthesis for compounds that could serve as contraceptives should not be more difficult or expensive, on average, than the synthesis of aspirin.”
In late May, Professor Lishko quoted Golda Meir, the fourth prime minister of Israel, on Twitter when expressing her stance on recent events in the Israeli-Palestinian conflict. Her comment was widely criticized, particularly by members of the UC Berkeley community, with several individuals claiming that her statements conveyed racist and discriminatory implications. Additionally, during this time, several older tweets from Professor Lishko were also critiqued as being transphobic. In acknowledgement of the current situation and out of respect for all parties involved, the Berkeley Scientific Journal requested that Professor Lishko address her critics and clarify her intentions. Here are her responses.
BSJ: You have previously written on Twitter, “And yes, biological sex is binary for most of the species, including humans….All the rest is unscientific manipulation of terminology and demagogy.” Some trans scientists critiqued this comment, and others that you have made, stating that they seemed to question and undermine the identities of trans and non-binary people. If you believe your comments were misinterpreted, how did you mean for your comments to be understood by the public?
PL: This comment and the discussion pertinent to it referred specifically to the biology of sex, the binary form of gametes, and definitely not to gender identities, of which there are many. The biology of sex refers to the specifics of sex determination as it is known in reproductive biology on the genetic, cellular, and organismal levels. What this sentence actually meant in the context of the discussion is that for most species that reproduce sexually, nature evolved a binary form of sex determination and sexual reproduction: the female and the male genotype/phenotype. One form produces female gametes (eggs), and another form produces male gametes (sperm cells). It is possible that during billions of years of evolution, many other forms emerged, and yet the most successful form of sexual reproduction for most species is through the binary forms
of the phenotypically and physiologically different gametes. In my opinion, conflating the basics of reproductive biology with gender identities is a manipulation.
BSJ: You have also quoted on Twitter, “Golda Meir: ‘Peace will come when the Arabs love their children more than they hate us.’” Some individuals have critiqued this comment, claiming that this statement felt like an inappropriate generalization to an entire community. If misinterpreted, how did you intend for the above quote to be interpreted?
PL: By no means was this quote directed at all people of Arab heritage. From my understanding, the historic meaning of this quote referred to terrorist organizations and definitely not to all people of Arab heritage. One has to know the context, the history of the old quotes, before jumping to conclusions or accusations. The simple message that I was trying to send via this Tweet is that more love is needed to end the hate. Unfortunately, this message was misinterpreted.
BSJ: In your opinion, what is the intersection between a scientist’s social views and their research endeavors? Do a scientist’s personal views influence the quality or outcome of their research?
PL: It is a recipe for disaster when ideology or close-mindedness clouds the mind of a scientist. Here is a simple example. True scientists observe nature and its laws with an impartial mind and build hypotheses and conclusions based on their unbiased observations. Many scientists are also quite gifted people and not only excel in their particular discipline, but also show talents in art, music, crafts, etc. However, when it comes to scientific observations and conclusions, it is imperative that their scientific side—the side with an impartial and calm mind—always prevails. Let us consider what happens if there is an intrapersonal conflict within an individual between their scientific and artistic sides. Imagine two sets of scientific data obtained from the same experiment—one is artistically perfect and appealing, but scientifically incorrect, and another is not so artistically pretty but scientifically correct. If in this intrapersonal conflict one’s artistic side is victorious, one could end up with data manipulation and scientific misconduct, which might even put the lives of others in danger.
BSJ: As a researcher working toward developing new drugs in the field of reproductive biology, it is important to include people of different identities in your research. How do you ensure that your research, where possible, is inclusive of individuals of all identities?
PL: Our research activities require that we enroll participants to take part in our IRB-approved studies to provide sperm cells for our research. The nature of this research, the strict confidentiality of our participants, and the IRB-specific requirements mean that we cannot collect any personal data or identifiers and we cannot ask our participants any questions related to their health, identities, or other personal information. Any individual who can produce sperm cells is welcome to participate in our studies during the period of active enrollment.
BSJ: How are you actively working to promote a culture of equity and inclusion within your lab, regardless of an individual’s racial background or gender identity?
PL: From the very beginning, our lab has welcomed anyone who is enthusiastic about the research we are doing and is eager to become a responsible member of the team, regardless of their age, beliefs, religions, political affiliations, racial backgrounds, gender identities, etc. Our only limitations are the physical space to accommodate our team and the funding to support the work we are excited about. True diversity makes the team stronger, and this is what we strongly believe in and always have.
This interview, which consists of one conversation and one email communication, has been edited for clarity and brevity.
REFERENCES
1. Headshot: [Photograph of Polina Lishko]. Lishko Lab. https://lishkolab.org/about-pi/. Image reprinted with permission.
2. Figure 1: Mundt, N., Spehr, M., & Lishko, P. V. (2018). TRPV4 is the temperature-sensitive ion channel of human sperm. eLife, 7, Article e35853. https://doi.org/10.7554/eLife.35853
3. Figure 2: Sack, J., Hecher, S., & Appenzeller, L. (2019). [Infographic of various pathways of exposure to pollutants]. Heinrich Böll Foundation and Break Free From Plastic. https://www.boell.de/en/2019/11/05/plasticatlas
4. Figure 3: Khasin, L. G., Della Rosa, J., Petersen, N., Moeller, J., Kriegsfeld, L. J., & Lishko, P. V. (2020). The impact of di-2-ethylhexyl phthalate on sperm fertility. Frontiers in Cell and Developmental Biology, 8, Article 426. https://doi.org/10.3389/fcell.2020.00426
Data Analysis for Formula 1
BY ELETTRA PREOSTI
Guillaume Dezoteux is Head of Vehicle Performance of the Scuderia Alpha Tauri Formula One team. He received a bachelor’s degree in mechanical and automotive engineering from ESTACA École d’Ingénieurs in 1993 before beginning his career in Formula 3.
Franz Tost is a former racing driver and current Team Principal of the Scuderia Alpha Tauri Formula One team. After competing in Formula Ford and Formula Three, he studied Sports Science and Management while working as team manager at the Walter Lechner Racing School.
In this interview, we discuss how data analysis is used to design and develop Formula One cars as well as optimize car performance during race weekends.
BSJ: What early experiences influenced your interest in racing, and how did you begin your career in Formula 1 (F1)?
FT: Back in the 1970s, F1 races were telecast by the Austrian broadcaster ORF, so as a young Austrian kid, I was able to watch all of the races. In fact, at the time, I was a huge fan of Austrian racing driver Jochen Rindt. Watching him was my first introduction to F1. From then on, I made it my goal to work in F1. Fortunately, I was able to accomplish this goal, as in 1997, while working as Ralf Schumacher’s manager, I joined the F1 team Jordan Grand Prix. From then onwards, I have always worked in F1.
GD: I am from the French Basque Country in the area near Biarritz; it is an area famous for surfing and rugby—not really for motorsports. But when I was younger, I was still a big motorsports fan, and I would go go-karting with one of my friends. I was also really interested in science and, specifically, mechanical engineering. That was where my drive came from. However, at the time, I did not know anyone in the motorsports industry or even anyone that was familiar with motorsports. So, I went to Paris to study mechanical engineering. Then, in my spare time, I went to driving schools and worked on vintage racing cars. After I received my engineering degree, I went on to work in Formula 3, where I had my first experience working on single-seater cars, and eventually became a race engineer. This was a really useful experience for me as I was able to learn more about car set-up and aerodynamics. Then, in 2006, I had the opportunity to move to Scuderia Toro Rosso. This was my beginning in F1.
BSJ: Can you briefly describe your respective roles as Team Principal and Head of Vehicle Performance of a Formula 1 team and your responsibilities both during and outside of race weekends?
FT: As team principal, during race weekends, my main job is to oversee team operations. However, I must also do a lot of press work, which includes attending marketing events and meeting with partners and sponsors. Finally, before every race, I must attend meetings hosted by the Fédération Internationale de l’Automobile (FIA), the governing body of motorsport, with all of the other team principals. Outside of race weekends, my main job is to organize meetings where we discuss prior race weekends and plan for the future. During these meetings, I seek to identify the team’s deficiencies and how to improve upon them so that the team can become more successful. Then, of course, it is always important for the team principal to find sponsors, as sponsorships reflect our perceived performance. And, throughout the entire year (both during and outside of race weekends), my job is to keep the team motivated.
GD: I manage a group of engineers whose main responsibility is to optimize the performance of the car. For example, we have a group of race and performance engineers who actively participate both in equipment testing and during races. We also have a group of tire engineers that analyze tire performance. Although all F1 teams use tires manufactured by a third-party company, Pirelli, we must still figure out how to optimize tire usage for our own cars as this is a key performance differentiator during races. Then, we have a group of strategists who have two main responsibilities: race strategy, which involves using simulations to make live decisions during a race; and competitor analysis, which involves comparing our strengths and weaknesses to those of our competitors. Another group is the vehicle dynamics group, which contains a number of experts who specialize in maintaining different parts of the car such as suspension, power unit, and brake system. Finally, we have a simulation and software tool group, which develops tools and methodologies in order to run simulations for the rest of the group. While we mostly work on a short time scale, we also participate in the long-term development of the car by analyzing its development through the season in order to determine what areas we need to improve and to define key parameters for the next year.
BSJ: What are the key ingredients in designing a Formula 1 car, and what is the overall design process like?
FT: The entire process begins with setting regulations for the upcoming season. Once these regulations are fixed, our team’s Computational Fluid Dynamics (CFD) department runs initial tests to better understand the newer regulations and determine how to proceed with car design. After this, we begin tests in our facilities’ wind tunnel, which we use to replicate the interaction between air and the car moving on the ground. At this point, about sixty percent of the car has already been designed. If there is a good correlation between the test results done by the CFD department and the results from the wind tunnel, we can finally design the remainder of the car parts and produce and assemble the car. Then, we go racing! We also must continue this process throughout the season so that we can bring updates to our car from race to race.
Figure 1: Alpha Tauri Car #10 at the 2021 Monaco Grand Prix.
BSJ: In general, during a race, what parameters do you seek to optimize to maximize race performance?
GD: Throughout any race weekend, we seek to strike a balance between a car configuration that works best for qualifying versus one that works best during the race, although we do generally focus on optimizing the car for the actual race. One major component of this involves using simulation tools, such as Monte Carlo simulation software, to assess the importance of starting position on the final race result. For example, if we start a race a few positions behind our competitor but with a slightly faster car, will we be able to finish ahead? Another two factors we must consider are our car’s overtaking ability as well as our car’s ability to defend against someone that is trying to overtake our car. For instance, on a track where it is very difficult to overtake, we cannot necessarily compromise our qualifying pace (which determines our starting position) for a better race set-up. So, we try to build some statistical analyses to develop our understanding of these kinds of trade-offs, but they are generally quite complicated and generate very heated discussions.
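The “few positions behind but slightly faster” question lends itself to a compact Monte Carlo illustration. The Python sketch below is not the team’s software; every number in it (lap count, pace advantage, per-second overtake odds) is an invented placeholder, and real tools would also model traffic, tires, and pit stops.

```python
import random

def finishes_ahead(positions_behind: int = 3, pace_advantage_s: float = 0.3,
                   laps: int = 55, pass_odds_per_s: float = 0.15) -> bool:
    """One randomized race: each lap, a pace surplus gives a chance to gain a place."""
    for _ in range(laps):
        if positions_behind == 0:
            return True
        if random.random() < pass_odds_per_s * pace_advantage_s:
            positions_behind -= 1
    return positions_behind == 0

trials = 10_000
p = sum(finishes_ahead() for _ in range(trials)) / trials
print(f"P(faster car starting 3 back finishes ahead) ~ {p:.2f}")
```

Run enough trials and the answer arrives as a probability rather than a yes or no, which is the form the “heated discussions” take.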
BSJ: What track conditions affect the performance of the car, and how do these track conditions affect the car?
GD: To begin, F1 cars are very sensitive to wind because they are heavily driven by aerodynamics, so windy conditions can make determining the best overall set-up around the track very difficult. For example, during pre-season testing, it was very difficult for us to find the right balance of the car around all of the corners in the track: some corners had strong headwinds whereas others had strong tailwinds. The differences in the direction and strength of these winds can massively change the aerodynamics of the car and, as a result, how much downforce the cars are able to produce. Another key factor is track temperature, which can influence how the tires perform in terms of thermal state and thermal behaviour. As opposed to road tires, F1 tires are extremely sensitive to temperature, and you have a very narrow window to extract their best performance.
BSJ: What data do you collect during free practice sessions to determine what car configurations to use both during qualifying [sessions] and during the race?
GD: We use a very complex data framework which processes a live telemetry stream from the cars and organizes the inputted data by lap. In this way, we can monitor trends over a series of laps. We work on different levels. First, we look in detail at what happens to the car during corners. There are two main components to this. The first is corner balance, which describes the shifting of the weight carried by each wheel on the car. The second is how the car balance evolves from braking to the apex to the exit of the corner. Then, we look at the car behavior over a series of laps. For instance, we look at how tire temperatures and tire pressures evolve as well as how wind affects the downforce and balance of the car. We also try to look at the average operating condition of the car and how it compares to the overall season and our simulations. Obviously, the cars behave differently from track to track, but there are still similarities between tracks (e.g., similar corners or track conditions). Thus, we analyze whether the car is performing more or less where we expect it to or if our performance is an outlier. Finally, while it is difficult to model race scenarios during free practice, we do try to simulate certain situations. For example, we will try to run our car as close as possible to the one in front, which generally has a negative effect on the car, in order to assess the effects of another car’s tow on brake system and power unit temperature.
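The “organize the stream by lap” idea can be sketched in a few lines. The channel name, sampling, and values below are hypothetical; a real framework ingests hundreds of channels at high rates with far richer metadata.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical telemetry stream: (lap, channel, value) tuples.
stream = [
    (1, "tyre_temp_FL", 92.0), (1, "tyre_temp_FL", 95.5),
    (2, "tyre_temp_FL", 97.1), (2, "tyre_temp_FL", 98.4),
    (3, "tyre_temp_FL", 99.0), (3, "tyre_temp_FL", 99.6),
]

# Organize samples by lap, then by channel, so lap-over-lap trends fall out.
by_lap = defaultdict(lambda: defaultdict(list))
for lap, channel, value in stream:
    by_lap[lap][channel].append(value)

for lap in sorted(by_lap):
    print(f"lap {lap}: mean FL tyre temp {mean(by_lap[lap]['tyre_temp_FL']):.1f} C")
```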
BSJ: How do you adapt to changes in track conditions during a race?
GD: One important aspect is to be able to prepare for changes in track condition from session to session. For example, the third free practice session generally occurs in the morning, when track conditions are usually colder, whereas qualifying occurs in the afternoon, when track conditions are usually warmer. It is not enough to react to this temperature change during qualifying [sessions], so we use our telemetry data to predict how to change the configuration of the car so that the setup is actually a good one. We also try to look at historical data to see if it can help us understand track conditions that we did not expect. Have we experienced these conditions before? Have we seen this type of corner before? If so, how were we able to correct car balance? What kind of adjustments to set-up did we make? We can also use simulation tools to analyze the effects of different adjustments. For example, we can use our simulation tools to predict variations in tire thermal state or the effect of wind on tire temperature. We sometimes encounter scenarios which are not aligned with our previous simulations or historical data. For example, there might be something wrong with the car in terms of bodywork, adjustment, or aerodynamics, or maybe the tire is wearing at a higher rate than expected. We then immediately try to understand which first-order parameter is having such a large influence on the behavior of the car and hiding all of the sensitivities we had predicted. As you can imagine, this is very difficult to do since we do not have historical references or simulations to assess this kind of behavior.
“The most important factor is that they must have a passion for F1. You must really live 365 days a year for F1 because it is a very hard and timeconsuming job.”
BSJ: How do you evaluate tire performance throughout the race?
GD: Throughout the race, we have a lot of sensors to monitor the tires as well as models that predict tire performance given different scenarios. Generally, we try to monitor the thermal state of the tires as well as the tire’s wear rate. Since we do not actually have a direct measurement of tire wear, we use models that try to predict what the tire wear is. Those models are accurate in certain conditions and on certain types of tarmac, but they are not accurate in other conditions. For example, when we have wear mechanisms like graining, in which you have a lot of damage on the tire surface, or blistering, which occurs when the rubber forming the tire is too hot, our predictions do not work as well. However, when we have a more linear standard wear mechanism, we have good tools to predict when we will run out of tire life. We also rely on simulations in which we try to estimate how much energy is lost by the tire as opposed to how much lap time is lost if we are driving slower in order to prevent tire degradation.
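For the “more linear standard wear mechanism,” the prediction reduces to fitting a trend and extrapolating to the end of tire life. A minimal sketch follows; the wear percentages are invented, and, as noted above, real teams infer wear from models rather than measure it directly.

```python
# Extrapolate a linear wear trend to estimate laps of tyre life remaining.
# wear_history holds model-estimated wear (%) after each completed lap.

def laps_remaining(wear_history: list[float], wear_limit: float = 100.0) -> float:
    n = len(wear_history)
    lap_mean = (n - 1) / 2
    wear_mean = sum(wear_history) / n
    # Least-squares slope of wear vs. lap index (wear gained per lap).
    slope = (sum((i - lap_mean) * (w - wear_mean) for i, w in enumerate(wear_history))
             / sum((i - lap_mean) ** 2 for i in range(n)))
    return (wear_limit - wear_history[-1]) / slope

print(round(laps_remaining([4.0, 8.1, 12.2, 15.9, 20.1]), 1), "laps left on this set")
# -> roughly 20 laps at the current, linear wear rate
```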
BSJ: How do you use data collected by other teams to make decisions during a race, and even to develop your own car?
GD: Looking at all of the teams’ data is a key aspect of what we do in vehicle performance, especially in the strategy group. Obviously, data from the other teams is very limited; we have some GPS data, and we can watch the other cars’ onboard cameras as well as listen to communications between race engineers and drivers. Based on all of this input, we try to build an understanding of what the strengths and weaknesses of our package are, and what the other teams’ strengths and weaknesses are. Analyzing teams’ data is also a key aspect of the medium-term development in F1. We do not need anyone to tell us that with more power, more grip, and more downforce, we will go faster. This is pretty clear to everyone. What this data helps us determine, for example, is in which types of corners extra downforce is beneficial. Then, when we develop the car, we are able to filter out options that we think are suboptimal. In this way, we can focus on really small differences in car performance, as we are competing in such a tight midfield (within 0.2–0.3% of lap time, we can have five to seven cars).
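For a sense of scale, take a hypothetical 90-second lap: 0.3% of 90 s is 90 × 0.003 = 0.27 s, so a midfield covered by 0.2–0.3% of lap time means five to seven cars separated by barely a quarter of a second of pace.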
BSJ: You have mentioned that you often use Monte Carlo simulations. Can you explain what these simulations entail?
GD: Monte Carlo simulations are generally used for race strategy. Obviously, it is not possible to simulate all strategies. Some Monte Carlo simulations build statistics on a set of scenarios by running a distribution of those races. Through this process, we try to build an understanding of what scenario will statistically bring you a better result. The Monte Carlo simulations are continuously updated with live telemetry data from the car during races.
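As a rough illustration of “building statistics on a set of scenarios”: simulate many randomized races under each candidate strategy and compare the resulting averages. Every constant below (lap time, pit loss, degradation, safety-car odds) is a made-up placeholder, not team or Pirelli data.

```python
import random

LAPS, BASE_S, PIT_LOSS_S, DEG_S = 55, 90.0, 21.0, 0.08  # DEG_S: time lost per lap of tyre age
SC_PROB = 0.02                                          # chance any lap runs behind a safety car

def race_time(pit_laps: set[int]) -> float:
    total, tyre_age = 0.0, 0
    for lap in range(1, LAPS + 1):
        if lap in pit_laps:
            total += PIT_LOSS_S
            tyre_age = 0
        if random.random() < SC_PROB:
            total += BASE_S * 1.4          # crude slow lap behind the safety car
        else:
            total += BASE_S + DEG_S * tyre_age
        tyre_age += 1
    return total

def mean_time(pit_laps: set[int], trials: int = 5_000) -> float:
    return sum(race_time(pit_laps) for _ in range(trials)) / trials

print("one stop  :", round(mean_time({28}), 1), "s")
print("two stops :", round(mean_time({18, 37}), 1), "s")
```

The live updating Dezoteux describes would amount to re-running such simulations with the constants continually re-estimated from telemetry.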
BSJ: What other types of simulations do you run?
GD: We run many other types of simulations: some help us understand the best trade-off between downforce and top speed, while others help assess the energy management strategy for the power unit. The latter, however, might not be useful for understanding car balance and tire behavior. Thus, we use other types of simulations to add more sophisticated tire models. We also have transient simulations that help us represent the dynamic maneuvers drivers may perform, to name a few.
BSJ: Do you have an example of a time when you made a mistake analyzing data, and how were you able to fix it?
GD: One of my worst mistakes occurred in Austin in 2019. All of our simulation and prediction tools were telling us not to pit again, so I was determined to make a one-stop strategy work. At the end of the race, one of our two drivers could not make the tires last until the end of the race, and our second driver finished at the back. It was not until after the race that I realized that we had made a mistake in our live predictions. We were not updating our Monte Carlo simulations with the right numbers. Moreover, during the race, we were only paying attention to teams that were successful in using a one-stop strategy, convinced that we could make it as well, rather than looking at how many teams had decided to change strategy. I think what is important in this kind of situation is to be able to admit that you have made a mistake and bounce back with solutions to prevent such a mistake from happening again.
BSJ: How do you see F1 developing in the future?
FT: I see a great future for F1. To begin, we are moving towards using synthetic fuel, which is carbon net neutral. In addition, our engines utilize two energy recovery systems: the Motor Generator Unit-Heat, which recovers energy from exhaust heat, and the Motor Generator Unit-Kinetic, which recovers kinetic energy under braking. This is very important for sustainability and for the environment overall.
BSJ: What advice would you give to someone who is pursuing a career in F1?
FT: The most important factor is that they must have a passion for F1. You must really live 365 days a year for F1 because it is a very hard and time-consuming job.
GD: Just try. Try to get involved in some motorsport activities. Try to get into this environment, work hard, and learn. There is no magic.
This interview, which consists of one conversation, has been edited for brevity and clarity.
IMAGE REFERENCES
1. Banner: Welch, J. (2018, November 21). Birds Eye View of Roadway Surrounded By Trees [Photograph]. Pexels. https://www.pexels.com/photo/bird-s-eye-view-of-roadway-surrounded-by-trees-1624600/
2. Figure 1: Thompson, M. (2021, May 21). Yuki Tsunoda of Japan driving the (22) Scuderia AlphaTauri AT02 Honda during the F1 Grand Prix of Monaco at Circuit de Monaco on May 23, 2021 in Monte-Carlo, Monaco [Photograph]. Getty Images / Red Bull Content Pool. https://www.redbullcontentpool.com/scuderiaalphatauri/CP-P-675292
An Anthology of Violet in Science
BY JESSICA JEN
A 19th-century mistake in synthesizing an antimalarial drug led to the first mass-produced violet dye and set off the commercial synthetic dye industry. Long after its last use nearly 3000 years ago, a purple pigment resurfaced when scientists discovered its peculiar magnetic and quantum behaviors. One type of purple bacteria thrives on toxic hydrogen sulfide and is used to treat contaminated wastewater. These are only a handful of the many faces that the color violet has worn across science history. From the dye industry to modern biochemical research and bioremediation, violet has been on an incredible adventure through scientific developments.
EARLY PIGMENTS
One of the earliest and most famous pigments is Tyrian purple, a biological pigment derived from sea snail secretions, which was discovered around 1000 BCE. The process of extracting the mucous was very laborious; since the snails that made it are only about three inches across, thousands of snails would produce only a small amount of dye. Later, people found other sources to create purple colorants. Han purple was developed after Tyrian purple and was also used primarily in the BCE years. As an early synthetic purple pigment, Han purple came not from living organisms, but was manufactured by reacting barium, copper, and silicon minerals with each other at very high temperatures. Both Tyrian and Han purples are early ancestors of the biological and synthetic colorings that were to come. They also characterize the serendipity of science well. Both pigments have unique, unexpected properties beyond their well-known associations with color. Tyrian purple is also a good semiconductor and can be used in circuits.1 Han purple “loses a dimension” when placed in cold and magnetic conditions.2 Researchers analogized this concept to a Rubik’s cube: it is as if each layer of the cube acts independently, and the cube is reduced from a 3-D structure into separate 2-D layers. Han purple’s unusual magnetic behavior provides the first example of reduced dimensionality, which could illuminate the elusive behaviors of electrical resistance and magnetic fields.
Figure 1: Sea snail. The sea snail Haustellum brandaris, one species from which people harvested Tyrian purple. Image licensed under CC BY-SA 3.0.
Figure 2: Mauveine sample. Samples of mauveine alongside its chemical structure and descriptions at the Historical Dye Collection of the Technical University of Dresden. Image licensed under CC0 1.0.
“Han purple’s unusual magnetic behavior provides the first example of reduced dimensionality, which could illuminate the elusive behaviors of electrical resistance and magnetic fields.”
BEGINNINGS OF SYNTHETIC DYES: THE ANILINES
One violet colorant arose from an accident. The 19th-century chemist William Perkin wanted to synthesize the antimalarial quinine, but failed.3,4,5 Perkin’s contemporaries valued quinine as a treatment for malaria and consumed it as a clear, bitter beverage called tonic water. Perkin was interested in finding a synthetic route to replace the process of extracting quinine from cinchona tree bark. He chose to work with anilines (a structural family of compounds with characteristic phenylamino pairings) because he thought they had similar chemical formulae to quinine. Perkin’s experiment produced a vibrant purple solution that looked nothing like quinine.3,5 Undeterred, Perkin turned his supposed failure into a happy happenstance that launched the mass-produced synthetic dye industry. Perkin’s faulty antimalarial adhered well to cloth, looked appealing, and could be made with the then-common coal tar. Practical, attractive, and scalable, Perkin’s new dye had all the ingredients of a prosperous product. At this time, Tyrian purple was still the dominant violet colorant, and people were dissatisfied with the hassle it took to produce a pinch of purple.3,4,5
Figure 3: Crystal violet in action. A sample of saliva stained with crystal violet and other reagents. Cells stained with crystal violet appear purple. Cells that did not retain crystal violet were colored with a pink counterstain. Image licensed under CC BY-SA 4.0.
Perkin took advantage of this purple deficit and manufactured his new mauve dye at a commercial scale. Mauveine (also known as aniline purple) became enormously popular in the years after Perkin’s discovery, and although it would fade from the public frenzy within a decade, the broader synthetic dye industry was climbing fast. Perkin tweaked his original recipe and produced safranin, pseudo-mauveine, and more derivatives. Fellow dye-makers propagated and expanded upon his methods, synthesizing a rainbow of new colors. Aniline dyes eventually expanded from staining fabric to staining cells. The most famous aniline in molecular biology is crystal violet. It stains DNA such that ultraviolet light is not required for visualization.6 It is also used to differentiate the gram-positive and gram-negative types of bacteria, which is critical for determining which antibiotics are useful against an infection. Safranin and fuchsine are two other aniline dyes that are used as a counterstain to crystal violet during Gram staining. Cells are first exposed to crystal violet, which is then washed out and replaced with either a safranin or fuchsine counterstain. Bacteria that have retained the purple crystal violet are deemed gram-positive, and the bacteria dyed pink by the counterstain are considered gram-negative. This procedure is fast and effective for classifying a large number of bacterial species. Crystal violet has also been used to combat bacterial, fungal, and parasitic infections.7 It is a popular alternative to penicillin and is still used to treat fungal infections, but like other members of its aniline family, it has enough carcinogenic properties that California’s Proposition 65 has added crystal violet (under the name “gentian violet”) to its list of potentially harmful substances.8 The balance between crystal violet’s abundant short-term benefits as a colorant and antiseptic and its serious long-term health risks still requires more research to establish.
BEYOND SYNTHETIC DYES, INTO THE ENVIRONMENT
Violets are abundant as dyes and stains, but they are also relevant in environmental
remediation, particularly through a group of purple bacteria. Like many microbes, purple bacteria use photosynthesis to generate energy, just with a slight deviation. Unlike most photosynthetic organisms that use water and carbon dioxide to form sugar and oxygen, purple bacteria use sulfur, iron, or hydrogen in place of water due to the presence of bacteriochlorophylls (which also gives them their reddish-purple coloring).9 Thus, purple bacteria find their niches in the salty, acidic, and warm habitats that other photosynthetic organisms would not survive in.
Figure 4: Wastewater lagoon. Wastewater lagoons often contain bacteria that metabolize manure under anaerobic conditions. Image licensed under CC BY-ND 2.0.
Purple sulfur bacteria, a category of purple bacteria, are especially valuable for their bioremediation capacities. When organic matter decomposes in an environment without oxygen, hydrogen sulfide usually forms. Hydrogen sulfide is an odorous, corrosive, flammable, and toxic gas often found in wastewater treatment lagoons. While most microbes shy away from sulfur-rich environments, it’s the perfect setting for purple sulfur bacteria that abhor oxygen and require sulfur to photosynthesize.9,10 They can take in hydrogen sulfide and methane, in turn releasing non-toxic substances. This is especially significant when animal manure is present in wastewater, because processing manure releases a buffet of harmful substances, including hydrogen sulfide and methane. Purple sulfur bacteria
provide an optimistic solution to this type of environmental contamination.
CONCLUSION
From the textile industry to environmental remediation, violet has starred in a series of anecdotes that cast the color in chemical, biological, and physical filters. It has represented wealth and power, ignited an entire industry, revolutionized bacterial classification, and detoxified contaminated wastewater. While each violet anecdote remains in progress, together they have established an exciting foundation for new discoveries to reshape the scientific fields we are familiar with, and for the next stories about violet to emerge.
“The balance between crystal violet’s abundant short-term benefits as a colorant and antiseptic and its serious longterm health risks still requires more research to establish.”
Acknowledgements: I would like to acknowledge Dr. Martyn Smith (Professor of Toxicology in the School of Public Health) for offering his advice and expertise while reviewing my article.
REFERENCES
1. Glowacki, E. D., Leonat, L., Voss, G., Bodea, M., Bozkurt, Z., Ramil, A. M., Irimia-Vladu, M., Bauer, S., & Sariciftci, N. S. (2011). Ambipolar organic field effect transistors and inverters with the natural material Tyrian Purple. AIP Advances, 1(4), Article 042132. https://doi.org/10.1063/1.3660358
2. Levy, D. (2006, June 2). 3-D insulator called Han Purple loses a dimension to enter magnetic ‘Flatland.’ Stanford News. https://news.stanford.edu/news/2006/june7/flat-060706.html
3. Cova, T. F. G. G., Pais, A. A. C. C., & de Melo, J. S. S. (2017). Reconstructing the historical synthesis of mauveine from Perkin and Caro: Procedure and details. Scientific Reports, 7, Article 6806. https://doi.org/10.1038/s41598-017-07239-z
4. Cartwright, R. A. (1983). Historical and modern epidemiological studies on populations exposed to N-substituted aryl compounds. Environmental Health Perspectives, 49, 13–19. https://doi.org/10.1289/ehp.834913
5. Ball, P. (2006). Perkin, the mauve maker. Nature, 440, 429. https://doi.org/10.1038/440429a
6. Yang, Y., Jung, D., Bai, D., Yoo, G., & Choi, J. (2001). Counterion-dye staining method for DNA in agarose gels using crystal violet and methyl orange. Electrophoresis, 22(5), 855–859. https://doi.org/10.1002/1522-2683()22:5<855::AID-ELPS855>3.0.CO;2-Y
7. Maley, A. M., & Arbiser, J. L. (2013). Gentian Violet: A 19th century drug re-emerges in the 21st century. Experimental Dermatology, 22(12), 775–780. https://doi.org/10.1111/exd.12257
8. Sun, M., Ricker, K., Osborne, G., Marder, M. E., & Schmitz, R. (2019). Evidence on the carcinogenicity of Gentian Violet. Office of Environmental Health Hazard Assessment, California Environmental Protection Agency. https://oehha.ca.gov/media/downloads/crnr/gentianviolethid011719.pdf
9. Vasiliadou, I. A., Berná, A., Manchon, C., Melero, J. A., Martinez, F., Esteve-Nuñez, A., & Puyol, D. (2018). Biological and bioelectrochemical systems for hydrogen production and carbon fixation using purple phototrophic bacteria. Frontiers in Energy Research, 6, Article 107. https://doi.org/10.3389/fenrg.2018.00107
10. Hädicke, O., Grammel, H., & Klamt, S. (2011). Metabolic network modeling of redox balancing and biohydrogen production in purple nonsulfur bacteria. BMC Systems Biology, 5, Article 150. https://doi.org/10.1186/1752-0509-5-150
IMAGE REFERENCES
1. Banner: Image created by author.
2. Figure 1: Violante, M. (2007). Haustellum brandaris 000 [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Haustellum_brandaris_000.jpg
3. Figure 2: JWBE. (2012). Historische Farbstoffsammlung DSC00350 [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Historische_Farbstoffsammlung_DSC00350.JPG
4. Figure 3: Microrao. (2015). Gram stain saliva [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Gram_stain_saliva.jpg
5. Figure 4: Steve. (2007). Vanguard Farms Influent to Lagoon 2 [Photograph]. Flickr. https://www.flickr.com/photos/picstever/855679600
FEATURES
BRINGING PHILOSOPHY INTO SCIENTIFIC RESEARCH
BY MARLEY OTTOMAN
To most people today, the work of a scientist and the work of a philosopher could not be more different. The scientist toils to gather data empirically, analyze it, and decipher the results, with each discovery, or often lack thereof, still adding to the collective wealth of human knowledge. In contrast, the modern philosopher works largely within the realm of the abstract, pondering fundamentally complex questions, and seemingly getting little in return. As a result, the study of philosophy is now seen by many as almost a relic, especially within the scientific community. The physicist Stephen Hawking claimed the discipline was dead in his 2010 book, The Grand Design.1 Seeping further into the public consciousness, prominent science educators Neil deGrasse Tyson and Bill Nye have both called philosophy a waste of time. But these claims ignore the possibility that there could be areas of scientific research where applying some philosophical analysis would be beneficial, and indeed there are a few. They are areas of science that face problems closely resembling those that philosophers have dealt with for centuries. Fields such as cognitive science, artificial intelligence, and stem cell biology routinely confront more abstract and poorly understood problems and could theoretically benefit from philosophical analysis. By examining a handful of philosophically based research projects within these fields, we can catch a glimpse into how such a seemingly unorthodox intellectual partnership could contribute to impactful discoveries. In “Why Science Needs Philosophy,” Laplane et al. present a
compelling, though opinion-driven, argument for the necessity of philosophical thinking in science.2 They begin with a quote from Albert Einstein: “This independence created by philosophical insight is—in my opinion—the mark of distinction between a mere artisan or specialist and a real seeker after truth.”3 Here, Einstein is stressing the importance of young scientists having a solid footing in philosophy, so that they may analyze beyond their biases and have a more comprehensive understanding of their work. This was indeed true for Einstein himself; he famously used simple thought experiments to conceptualize aspects of the theory of relativity, providing himself with a foundation to later solve the mathematics behind it.4 Laplane and company argue that philosophers use the same fundamental methods to approach problems that scientists do, differing from scientific experimentalists in the degrees of thoroughness, freedom, and theoretical abstraction that they can use.2 These differences have strengthened scientific research; in the past 40 years, for instance, cognitive scientists have begun employing a decades-old philosophical theory to provide a better framework for understanding the notoriously enigmatic human mind. Enter emergence theory (ET). At its most basic level, it posits that a system composed of many individual yet interdependent “sub-units” can have properties that a single, lone “sub-unit” could not.5 A fitting biological example is the neuron. A lone neuron is practically useless, but amass billions of them, precisely interconnected, and you have a system that allows for all the
“Philosophers use the same fundamental methods to approach problems that scientists do, differing from scientific experimentalists in the degrees of thoroughness, freedom, and theoretical abstraction that they can use.”
Figure 1. The opening to a Saharan silver ant nest. When conceptualizing emergence theory, think of ants. On their own, they’re tiny insects with often minuscule lifespans. Working together, albeit under a monarchy, they can build massive underground structures. Image licensed under CC BY-SA 4.0.
complexities of the human experience. A 2010 paper by Stanford psychology professor J. L. McClelland attempts to flesh out how the growing adoption of ET (in place of the then-dominant theory that the mind largely utilized symbolic processes) could shed light on many areas of current cognitive science research such as linguistics, decision making, cognitive architecture, and (perhaps most interesting and elusive) consciousness.6 The questions surrounding human consciousness are numerous, profound, and as old as the phenomenon itself, making it the ideal realm in which to tinker with philosophical analysis. Consider all the factors that impact one’s conscious decisions: projected outcomes, past experiences, subconscious influences, and others. These factors all overlap, influencing each other and suppressing less likely choices, and out of this emerges a final decision. Many scientists, including McClelland, contend that human consciousness as a whole can be described in the same vein, as an emergent phenomenon. It must be noted, however, that this is still mostly untestable and merely provides cognitive scientists with a better framework for understanding human intelligence. Nevertheless, some scientists have tried to extend McClelland’s work, attempting to explain, in great detail, all of consciousness. In a 2014 paper on Integrated Information Theory 3.0 (IIT), Oizumi et al. propose that consciousness is not only definable, but also measurable. They even put forward a mathematical model, derived from their philosophical framework of the mind, that outputs a value (ΦMax) for how “conscious” a system is.7 While a description of the mathematical details of the model would be incalculably far outside the scope of this article (though not as incalculable as finding ΦMax for even some simple systems), IIT is relevant here because of the criticisms it has received from many scientists and philosophers in the field. In a 2019 open letter to the Brain Research through Advancing Innovative
Neurotechnologies Initiative, a diverse group of cognitive science researchers expressed their dissatisfaction with the credence IIT had been receiving despite its flaws and fundamental untestability.8 Machine learning researcher Max Tegmark pointed out one of these flaws by demonstrating how the exponential growth in processing power required to simulate even the simplest of conscious systems defined by IIT makes it impossible to run such a simulation on any current computer.9 In fact, some scientists feel IIT is better interpreted as describing a hypothetical proto-consciousness rather than as a theory attempting to describe human consciousness.10 Ultimately, IIT can serve to illustrate the limitations of philosophy-based approaches. Bold philosophical ideas are exciting and interesting, but if they are not fundamentally testable, then their practical utility is extremely limited. Still, if philosophy is going to persist within science, it should be useful. In the aforementioned 2019 paper on why science needs philosophy, Laplane et al. discuss how philosophy has helped further our understanding of cancer stem cells.2 The very same Laplane also explored this topic in depth in a 2016 monograph. By taking the generalized umbrella term used to describe stem cells, “stemness,” and applying some philosophical analysis, Dr. Laplane redefined stemness through four key sub-properties: categorical, dispositional, relational, and systemic.11,12 These helped to better define common stem cell behaviors despite the effects that internal and external factors can have. This helped clear up some semantic and conceptual hurdles in both oncological and stem cell research.13 While Laplane’s model is one of several trying to untangle this difficult term, it is research-ready and testable, bringing it out of abstraction and making it useful for the scientific community. By providing new ways of understanding and interpreting stemness, theories like these can provide oncologists with new ways to approach the treatment of difficult or poorly understood cancers.14,15
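Tegmark’s complexity point can be appreciated with generic combinatorics alone: a network of n binary elements has 2^n possible states, and even the coarsest IIT-style search must compare ways of partitioning the system, whose count also grows exponentially. The short sketch below is a back-of-the-envelope illustration of those counts, not IIT 3.0’s actual Φ algorithm:

```python
# Back-of-the-envelope counts behind the "exponential growth" critique of
# IIT: states and bipartitions of a network of n binary elements.
# This is generic combinatorics, not IIT 3.0's actual Phi algorithm.
for n in (5, 10, 20, 40, 80):
    states = 2 ** n                  # possible global states of the system
    bipartitions = 2 ** (n - 1) - 1  # distinct ways to cut it into two parts
    print(f"n = {n:2d}: {states:.2e} states, {bipartitions:.2e} bipartitions")
```

Even at n = 80, far smaller than any brain region, the counts dwarf what any current computer could enumerate, which is the crux of Tegmark’s objection.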
“Theories like these can provide oncologists with new ways to approach the treatment of difficult or poorly understood cancers.”
The flexibility in thinking that philosophy offers gives it real potential within scientific work. At its simplest, it can be invaluable for getting mechanistic thinkers to embrace the unorthodox and approach problems differently. At more complex levels, philosophical analysis can allow scientists to construct models of intractable problems in pursuit of gaining a deeper understanding. No doubt there is value in being able to pluck a difficult idea out of a world of limiting certainties, and place it into a realm of flexible possibilities. Moreover, given that we currently have no shortage of complex problems to solve, it may be premature to begin removing tools from our collective toolbox.
Acknowledgements: I would like to acknowledge postdoctoral fellow Jason Winning, Ph.D. (University of Toronto) for his detailed help on clarifying difficult philosophical topics.
REFERENCES
1. Hawking, S., & Mlodinow, L. (2010). The grand design. Bantam Books.
2. Laplane, L., Mantovani, P., Adolphs, R., Chang, H., Mantovani, A., McFall-Ngai, M., Rovelli, C., Sober, E., & Pradeu, T. (2019). Opinion: Why science needs philosophy. Proceedings of the National Academy of Sciences, 116(10), 3948–3952. https://doi.org/10.1073/pnas.1900357116
3. Howard, D. A., & Giovanelli, M. (2019). Einstein’s philosophy of science. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2019 ed.). Stanford University. https://plato.stanford.edu/archives/fall2019/entries/einstein-philscience/
4. Britannica. (2010). Gedankenexperiment. In Britannica.com encyclopedia (2010 ed.). https://www.britannica.com/science/Gedankenexperiment
5. Blitz, D. (2011). Emergent evolution: Qualitative novelty and the levels of reality. Springer. https://doi.org/10.1007/978-94-015-8042-7
6. McClelland, J. L. (2010). Emergence in cognitive science. Topics in Cognitive Science, 2(4), 751–770. https://doi.org/10.1111/j.1756-8765.2010.01116.x
7. Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the phenomenology to the mechanisms of consciousness: Integrated Information Theory 3.0. PLoS Computational Biology, 10(5), Article e1003588. https://doi.org/10.1371/journal.pcbi.1003588
8. Lau, H. (2020, May 28). Open letter to NIH on neuroethics roadmap (BRAIN initiative) 2019. In Consciousness We Trust. https://inconsciousnesswetrust.blogspot.com/2020/05/open-letter-to-nih-on-neuroethics.html
9. Tegmark, M. (2016). Improved measures of integrated information. PLoS Computational Biology, 12(11), Article e1005123. https://doi.org/10.1371/journal.pcbi.1005123
10. Cerullo, M. A. (2015). The problem with phi: A critique of integrated information theory. PLoS Computational Biology, 11(9), Article e1004286. https://doi.org/10.1371/journal.pcbi.1004286
11. Melton, D. (2014). ‘Stemness’: Definitions, criteria, and standards. In R. Lanza & A. Atala (Eds.), Essentials of stem cell biology (3rd ed., pp. 7–17). Academic Press. https://doi.org/10.1016/B978-0-12-409503-8.00002-0
12. Laplane, L. (2016). Cancer stem cells: Philosophy and therapies. Harvard University Press. https://doi.org/10.4159/9780674969582
13. Clevers, H. (2016). Cancer therapy: Defining stemness. Nature, 534(7606), 176–177. https://doi.org/10.1038/534176a
14. Bialkowski, L., Jeught, K. V. der, Bevers, S., Joe, P. T., Renmans, D., Heirman, C., Aerts, J. L., & Thielemans, K. (2018). Immune checkpoint blockade combined with IL-6 and TGF-β inhibition improves the therapeutic outcome of mRNA-based immunotherapy. International Journal of Cancer, 143(3), 686–698. https://doi.org/10.1002/ijc.31331
15. Liu, K. E. (2018). Rethinking causation in cancer with evolutionary developmental biology. Biological Theory, 13(4), 228–242. https://doi.org/10.1007/s13752-018-0303-0
Figure 2. Dyed clonal stem cell tracking. A 3D rendering of a live culture of stem cells, dyed to track movement and behavior. Image licensed under CC BY-NC-ND 3.0.
IMAGE REFERENCES
1. Banner, background: Budassi, P. C. (2020). Artist’s conception of the Giant Void [Illustration]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Giant_void.png
2. Banner, foreground: Wmayner. (2016). The Greek letter Phi, denoting integrated information [Graphic]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Phi-integrated-information-symbol.png
3. Figure 1: Tørrissen, B. C. (2011). Saharan silver ant nest [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Erg_Chebbi_Silver_Ant_Nest.jpg
4. Figure 2: Malide, D., Metais, J.-Y., & Dunbar, C. (2011). Stem cell clonal tracking [Microscope image]. Cell Image Library. http://www.cellimagelibrary.org/images/44151
BY ANNA CASTELLO
For city planners, maps and models come in all shapes and sizes. Wastewater maps, for example, project cities onto flat planes, allowing planners to trace how water flows from drain to ocean. 3D models, in contrast, grant architects the power to visualize how sounds and shadows are shaped by elevation. And through the use of software like ArcGIS, city planners can explore how all the moving parts of a city work together cohesively. Models of the human body, however, aren’t as effective. Most current models of human systems don’t capture the full extent of the human body’s interconnectedness. But a new type of cell culture system, called an assembloid, might offer the next step for scientists trying to understand how the human body works.
ORGANOIDS AND ASSEMBLOIDS
Assembloids are largely based on a relatively recent cell culture system called the organoid. Organoids are miniaturized models of organs or organ parts that are used to study biological processes. They are made in the laboratory from human induced pluripotent stem cells (hiPSCs), a special cell type that has been genetically reprogrammed to have the ability to differentiate into any cell type in the adult human body.1 This differentiation is achieved through the forced expression of specific sets of transcription factors.2 With the right environments and growth factors, hiPSCs can self-assemble into an organoid (Fig. 1) consisting of various cell types needed
to mimic the approximate structure and function of the target organ.3,4,5,6 These self-organizing collections of cells have been used to model many diseases such as Alzheimer’s and Zika fever.7,8,9,10 While organoids have contributed to incredible advancements in medicine, they fail to represent the body’s interconnectedness—they only permit the study of a disease or a drug on a single organ type. However, many diseases and drugs affect more than one of the body’s operations. Organoids do not allow researchers to see how multiple organs and tissues work together as they would in vivo. While they aren’t perfect, assembloids offer one solution to this problem. The principle underlying their creation is relatively simple: place two organoids next to one another and they can fuse, integrating multiple cell types in one 3-D model. While these models will likely prove useful in many branches of biology, so far they have mostly been used in a couple of specific contexts: studying the progression of disease, and studying drug interactions that involve multiple organ systems. The cortico-striatal assembloid in particular has rapidly gained traction as a useful model for disease development, at least for neuroscientists.11,12
Figure 1: General brain organoid culture protocol. Human pluripotent stem cells (PSCs) are cultured in appropriate culture media. After addition of certain enzymes that break the physical linkages between the PSCs and the plate they are growing on, the PSCs aggregate into 3-D spheres called embryoid bodies (EBs). The EBs then differentiate into neural tissue after specific molecular factors are added.22
CORTICO-STRIATAL ASSEMBLOID
Cortico-striatal assembloids link the cortex and the striatum, two closely-related regions of the brain. Anatomically, these regions interface most closely in the basal ganglia, a portion of the brain responsible for motion.13 Normally the cortex and the striatum work together to initiate voluntary movement and inhibit opposing movement, resulting in a smooth and controlled motor response.14 The cortex can be thought of as the mayor of a model city, thinking of new plans to coordinate the workforce (in the body’s case, the muscles). These plans are then communicated to the striatum, or the project developer, who clearly relays the instructions to the muscles. But sometimes, this communication can be muddled. In cities, poor coordination causes police officers to knock on the wrong doors and construction crews to demolish the wrong buildings. In the body, poor coordination can have equally serious consequences: Parkinson’s, for example, causes patients to experience tremors
and perform rigid movements as a result of muddled synaptic communication.15 Dysfunctions of the cortico-striatal circuits appear to be linked not only to the inability to form fluid voluntary muscle contractions, but also to the presence of neuropsychiatric diseases such as autism spectrum disorder, schizophrenia, and obsessive-compulsive disorder.16,17,18,19 Assembloids could help us understand these conditions better by providing us a model for communication between the cortex and striatum.12 Researchers from Stanford University have been able to create such a model by connecting two individual region-specific brain organoids that model the human cerebral cortex and striatum to form an interconnected cortico-striatal assembloid.12 The formation of this assembloid was closely monitored through imaging and testing of the synaptic connections, which allowed for greater insight into how this circuit is assembled during development and how and when functional defects arise. Because assembloids involve live cells, they can be imaged and experimented on at any time during their formation and maturation. This provides us with a better understanding of the minuscule changes that occur during development than we can garner from studying human fetuses. Basic imaging, however, isn’t enough to determine whether these assembloids are executing their main function: talking to each other through synaptic connections. To do that, researchers need to use a technique called retrograde viral tracing (Fig. 2). When a cortico-striatal assembloid is properly connected, retrograde viral tracing shows that cortical neurons extend into the striatum and form synaptic connections with medium spiny neurons, major components of the striatum.12,20 This same technique of making assembloids can be used with cells derived from clinical patients. This has been used to study Phelan-McDermid syndrome, for example, which is characterized by a wide variety of defects throughout the body, ranging from the brain to the heart and the gastrointestinal system. Using individualized assembloids created from the stem cells of patients, scientists have been able to observe hyperactivity in medium spiny neurons, a cellular hallmark of Phelan-McDermid syndrome that
“The nascent success of the cortico-striatal assembloid demonstrates that connected brain region-specific organoids can be used to better understand the mechanism of neuropsychological diseases and to accelerate the search for novel treatments.”
Figure 2: Retrograde viral tracing. Retrograde viral tracing is a technique that employs viruses that encode visualization markers, such as fluorescent proteins, to visualize neural connections. Virus is taken up at the axonal terminal and travels to the cell body (retrograde transport). The technique is used here to visualize neuronal projections from cells in the cortical (hCS) organoid into cells in the striatal (hStrS) organoid. hCS cells were infected with the virus AAV-DIO-mCherry, which expresses the red fluorescent protein mCherry only in the presence of the protein Cre. hStrS cells were infected with the virus rabies-∆G-Cre-eGFP, which expresses both the protein Cre and the green fluorescent protein eGFP. When hCS cells project to hStrS cells, the projections take up rabies-∆G-Cre-eGFP from the hStrS cell, introducing Cre into hCS cells and inciting expression of mCherry. Thus, only connections between hCS and hStrS cells will be stained with both eGFP and mCherry.12,21
“This means there is, in theory, no cap on the types of organoids that can be assembled together.”
can be observed in the neurons of mice and humans suffering from the disease.12 The nascent success of the cortico-striatal assembloid demonstrates that connected brain region-specific organoids can be used to better understand the mechanism of neuropsychological diseases and to accelerate the search for novel treatments.
WHAT IS NEXT?
While it seems relatively easy to connect various neurons together, as in the cortico-striatal assembloid, connecting vastly different cell types to make an assembloid that incorporates several body systems is more difficult. But recently, researchers have been able to create a cortico-motor assembloid connecting cortical, spinal, and muscle organoids.11 In this assembloid, the decision-making cortex, akin to a mayor, relays information to the spinal cord—the city’s communication office—which then connects with the workforce to incentivize action, quantified by the presence of a twitch in the muscle. Such a model could provide the means to study potential cures for amyotrophic lateral sclerosis (ALS), among other diseases. In contrast with the cortico-striatal assembloid, which is only composed of neurons, the cortico-motor assembloid proves that different tissue types can be combined to form a model that better encapsulates the body’s complex interactions. This means there is, in theory, no cap on the types of organoids that can be assembled together to then be used to understand many different diseases at a level that organoids individually never could. Importantly, assembloids could also become standards for personalized medicine, a type of medicine that tailors treatments
to a specific individual’s genetic information and environmental conditions. Assembloids created directly from a patient’s own cells could help researchers identify what genetic, cellular, and molecular issues a patient is facing and provide a platform to test how the individual might react to certain treatments—a useful feature, considering that the wrong medication might have deadly side effects. Assembloids provide a means to understand any patient, eliminating some of the guesswork of medicine and helping researchers and doctors give the best possible care to every patient.
Acknowledgements: I would like to acknowledge PhD student Giulia Castello from the Immunology Department at Leiden University Medical Center for her detailed feedback and fruitful conversation about human models for medical research.
REFERENCES
1. Choi, K.-D., Yu, J., Smuga-Otto, K., Salvagiotto, G., Rehrauer, W., Vodyanik, M., Thomson, J., & Slukvin, I. (2009). Hematopoietic and endothelial differentiation of human induced pluripotent stem cells. Stem Cells, 27(3), 559–567. https://doi.org/10.1634/stemcells.2008-0922
2. Takahashi, K., Tanabe, K., Ohnuki, M., Narita, M., Ichisaka, T., Tomoda, K., & Yamanaka, S. (2007). Induction of pluripotent stem cells from adult human fibroblasts by defined factors. Cell, 131(5), 861–872. https://doi.org/10.1016/j.cell.2007.11.019
3. McCracken, K. W., Catá, E. M., Crawford, C. M., Sinagoga, K. L., Schumacher, M., Rockich, B. E., Tsai, Y.-H., Mayhew, C. N., Spence, J. R., Zavros, Y., & Wells, J. M. (2014). Modelling
human development and disease in pluripotent stem-cell-derived gastric organoids. Nature, 516(7531), 400–404. https://doi.org/10.1038/nature13863
4. Trisno, S. L., Philo, K. E. D., McCracken, K. W., Catá, E. M., Ruiz-Torres, S., Rankin, S. A., Han, L., Nasr, T., Chaturvedi, P., Rothenberg, M. E., Mandegar, M. A., Wells, S. I., Zorn, A. M., & Wells, J. M. (2018). Esophageal organoids from human pluripotent stem cells delineate Sox2 functions during esophageal specification. Cell Stem Cell, 23(4), 501-515.e7. https://doi.org/10.1016/j.stem.2018.08.008
5. Dye, B. R., Hill, D. R., Ferguson, M. A., Tsai, Y.-H., Nagy, M. S., Dyal, R., Wells, J. M., Mayhew, C. N., Nattiv, R., Klein, O. D., White, E. S., Deutsch, G. H., & Spence, J. R. (2015). In vitro generation of human pluripotent stem cell derived lung organoids. eLife, 4, Article e05098. https://doi.org/10.7554/eLife.05098
6. Sasai, Y. (2013). Cytosystems dynamics in self-organization of tissue architecture. Nature, 493(7432), 318–326. https://doi.org/10.1038/nature11859
7. Gerakis, Y., & Hetz, C. (2019). Brain organoids: A next step for humanized Alzheimer’s disease models? Molecular Psychiatry, 24(4), 474–478. https://doi.org/10.1038/s41380-018-0343-7
8. Fyfe, I. (2021). Brain organoids shed light on APOE genotype and Alzheimer disease pathology. Nature Reviews Neurology, 17(1), 1. https://doi.org/10.1038/s41582-020-00437-w
9. Dang, J., Tiwari, S. K., Lichinchi, G., Qin, Y., Patil, V. S., Eroshkin, A. M., & Rana, T. M. (2016). Zika virus depletes neural progenitors in human cerebral organoids through activation of the innate immune receptor TLR3. Cell Stem Cell, 19(2), 258–265. https://doi.org/10.1016/j.stem.2016.04.014
10. Garcez, P. P., Loiola, E. C., Madeiro da Costa, R., Higa, L. M., Trindade, P., Delvecchio, R., Nascimento, J. M., Brindeiro, R., Tanuri, A., & Rehen, S. K. (2016). Zika virus impairs growth in human neurospheres and brain organoids. Science, 352(6287), 816–818. https://doi.org/10.1126/science.aaf6116
11. Andersen, J., Revah, O., Miura, Y., Thom, N., Amin, N. D., Kelley, K. W., Singh, M., Chen, X., Thete, M. V., Walczak, E. M., Vogel, H., Fan, H. C., & Paşca, S. P. (2020). Generation of functional human 3D cortico-motor assembloids. Cell, 183(7), 1913-1929.e26. https://doi.org/10.1016/j.cell.2020.11.017
12. Miura, Y., Li, M.-Y., Birey, F., Ikeda, K., Revah, O., Thete, M. V., Park, J.-Y., Puno, A., Lee, S. H., Porteus, M. H., & Pașca, S. P. (2020). Generation of human striatal organoids and cortico-striatal assembloids from human pluripotent stem cells. Nature Biotechnology, 38(12), 1421–1430. https://doi.org/10.1038/s41587-020-00763-w
13. Graybiel, A., Aosaki, T., Flaherty, A., & Kimura, M. (1994). The basal ganglia and adaptive motor control. Science, 265(5180), 1826–1831. https://doi.org/10.1126/science.8091209
14. Purves, D., Augustine, G. J., Fitzpatrick, D., Hall, W. C., LaMantia, A.-S., McNamara, J. O., & White, L. E. (Eds.). (2008). Neuroscience (4th ed.). Sinauer Associates.
15. Blandini, F., Nappi, G., Tassorelli, C., & Martignoni, E. (2000). Functional changes of the basal ganglia circuitry in Parkinson’s disease. Progress in Neurobiology, 62(1), 63–88. https://doi.org/10.1016/S0301-0082(99)00067-2
16. Shepherd, G. M. G. (2013). Corticostriatal connectivity and its role in disease. Nature Reviews Neuroscience, 14(4), 278–291. https://doi.org/10.1038/nrn3469
17. Milad, M. R., & Rauch, S. L. (2012). Obsessive-compulsive disorder: Beyond segregated cortico-striatal pathways. Trends in Cognitive Sciences, 16(1), 43–51. https://doi.org/10.1016/j.tics.2011.11.003
18. Peça, J., Feliciano, C., Ting, J. T., Wang, W., Wells, M. F., Venkatraman, T. N., Lascola, C. D., Fu, Z., & Feng, G. (2011). Shank3 mutant mice display autistic-like behaviours and striatal dysfunction. Nature, 472(7344), 437–442. https://doi.org/10.1038/nature09965
19. Welch, J. M., Lu, J., Rodriguiz, R. M., Trotta, N. C., Peca, J., Ding, J.-D., Feliciano, C., Chen, M., Adams, J. P., Luo, J., Dudek, S. M., Weinberg, R. J., Calakos, N., Wetsel, W. C., & Feng, G. (2007). Cortico-striatal synaptic defects and OCD-like behaviours in Sapap3-mutant mice. Nature, 448(7156), 894–900. https://doi.org/10.1038/nature06104
20. Kita, H., & Kitai, S. T. (1988). Glutamate decarboxylase immunoreactive neurons in rat neostriatum: Their morphological types and populations. Brain Research, 447(2), 346–352. https://doi.org/10.1016/0006-8993(88)91138-9
21. Lafon, M. (2005). Rabies virus receptors. Journal of Neurovirology, 11(1), 82–87. https://doi.org/10.1080/13550280590900427
22. Kuo, J. (2019). Organoids: The future of medicine. Berkeley Scientific Journal, 23(2), 4–7. https://escholarship.org/uc/item/5323s7ms
IMAGE REFERENCES
23. Banner: Muotri, A. (2019). A cross-section of a mini-brain. Muotri Lab, UCSD.
24. Figure 1: see reference #22.
25. Figure 2: see reference #12.
Potential Relationships Between NAFLD Fibrosis Score and Graft Status in Liver Transplant Patients
By Haaris Kadri1, Raja R. Narayan MD MPH2, Sunnie Y. Wong MD PhD2, and Marc L. Melcher MD PhD2
1 Department of Molecular and Cell Biology, University of California, Berkeley
2 Department of Surgery, Stanford University
ABSTRACT
Non-alcoholic fatty liver disease is projected to be the most common cause of liver failure in the coming decade and is a very common reason for liver transplantation. One measure of its severity is the level of hepatic fibrosis, traditionally assessed by a liver biopsy. The non-alcoholic fatty liver disease fibrosis score was developed to non-invasively predict the degree of fibrosis using patient characteristics and laboratory values. We hypothesized that this score could also be used to assess the quality of donated livers, since many donors are obese and thus have a higher risk of fatty liver disease. Using data from the United Network for Organ Sharing over two decades, this study tests whether graft failure is associated with the donor liver’s non-alcoholic fatty liver disease fibrosis score. Statistical analysis indicated that the relationship between the score and time to graft failure is insignificant: a chi-square test of independence between the two gave a p-value of .1311, and a Kaplan-Meier survival analysis yielded a p-value of .2, neither of which was under the significance level of .05. Though the results were not statistically significant, future studies on non-invasive assessments and their use may illuminate possibilities for clinical applications.
INTRODUCTION
Liver transplantation is the only definitive treatment for patients with end-stage liver disease.1 An increase in the incidence of non-alcoholic fatty liver disease (NAFLD)-related liver failure has made it one of the most common reasons for liver transplantation. The severity of the disease itself can range from hepatic steatosis (lipid retention in the liver) to steatohepatitis (lipid retention coupled with inflammation of liver tissue), and it can lead to advanced cirrhosis and hepatocellular carcinoma, among other serious sequelae.1 The decision to use a liver from a deceased donor for transplantation into a recipient depends on histopathological,
physical, and biochemical factors. Therefore, the increase in the prevalence of NAFLD, which damages the liver at the cellular level, reduces the quality of the liver donor pool and can be problematic when procuring organs for transplantation.2 Currently, the gold standard method to assess a donor liver’s quality is the histologic review of a biopsy obtained at procurement time. This review screens for the degree of hepatic steatosis, hepatocellular injury and disease, inflammation, and fibrosis, all of which can preclude transplantation.3 However, there are significant limitations to this method. For example, the biopsy only assesses a small portion of the liver parenchyma, the functional tissue of the organ, and might not represent the global histopathology of
Figure 1: Different levels of fibrosis in liver parenchyma. This image shows the five accepted stages of fibrosis common in pathological analysis of liver fibrosis scoring, F0-F4. F0: no fibrosis, F1: portal fibrosis without septa, F2: few septa, F3: numerous septa without cirrhosis, F4: cirrhosis.
the liver. Histological assessment can also vary from pathologist to pathologist, which is problematic when it comes to clearly and effectively communicating the severity of a biopsy reading and can have severe consequences when the liver is being vetted for transplantation.4,5

Donor (n = 366)
  Age (years): Median Age: 47
  Sex (%): % Male: 40.7104; % Female: 59.2896
  BMI (kg/m2): Median BMI: 28.3008
  COD (%): Anoxia: 38.2514; CVA/Stroke: 36.3388; Head Trauma: 24.3170; CNS Tumor: 0; Other: 1.0929
  CIT (hours): Median CIT: 7.3

Recipient (n = 366)
  Age (years): Median Age: 58
  Sex (%): % Male: 32.5137; % Female: 67.3967
  BMI (kg/m2): Median BMI: 26.7760
  Graft Status (%): Survived: 80.3279; Failed: 19.6721
  Time to Graft Failure (days) (n = 361): Median Time: 1101

Table 1: Demographics Data of all Included Donor and Recipient Pairs. Abbreviations: BMI: Body Mass Index, COD: Cause of Death, CVA: Cardiovascular Accident, CNS: Central Nervous System, CIT: Cold Ischemia Time.

Given the aforementioned limitations of the liver biopsy, there is an interest in developing alternative tools to avoid biopsies when possible.6 One such tool, the NAFLD fibrosis score (NFS), predicts the level of hepatic fibrosis from patient age, BMI, and diabetes status, as well as several laboratory values including aspartate aminotransferase (AST) and alanine aminotransferase (ALT), platelet, and albumin levels. The NFS is used to predict postoperative complications and mortality and was developed specifically for use in NAFLD patients.7,8 It can be calculated using readily available patient data to provide results almost instantly, giving patients a lower-cost alternative to biopsy.9 A high NFS value indicates a high probability for advanced fibrosis and a low NFS value indicates the
absence of advanced fibrosis.7 Notably, the NFS does not predict low fibrosis risk, but merely the presence or absence of high-level fibrosis. In particular, metrics like the NFS are known for their high negative predictive value when it comes to ruling out high-grade and advanced fibrosis.10 Negative predictive value (NPV), in this case, is the probability that a subject does not have high-grade fibrosis given that their NFS does not indicate that they do. Due to its high NPV, the NFS most accurately detects advanced fibrosis in afflicted livers when used alongside other noninvasive indexes, like FIB-4 and BARD scores.11 Graft failure is a postoperative risk for transplantation patients and can lead to death. Within one year of transplantation, 10% of livers fail.12 Also, in the United States, as of 2013, approximately 3000 people on the transplant waitlist either die or become too sick to undergo transplant per year, and in 2017, 742 livers were thrown out post-procurement for a variety of reasons, including histological anomalies,13,14 leading to not only a loss of life, but
Figure 2: Flowchart of Patient Inclusion. This chart shows how many patients started off in the initial database and how the number was whittled down upon enforcement of the inclusion criteria.
NAFLD Fibrosis Score | Correlated Fibrosis Severity
Score < -1.455 (Low) | F0 - F1 (Low severity)
-1.455 < Score < 0.675 (Medium) | F2 (Moderate severity)
Score > 0.675 (High) | F3 - F4 (High severity)

Table 2: NAFLD Fibrosis Scores and Their Assigned Fibrosis Severity. The above NAFLD fibrosis score ranges are best associated with their paired pathologist scores, which can be seen as ranked on a scale from 0 to 4. F0: no fibrosis, F1: mild fibrosis, F2: moderate fibrosis, F3: severe fibrosis, F4: cirrhosis.
also a loss of healthcare dollars. Thus, it is important to develop stronger predictors for liver success and outcomes post-transplant. Even though the NFS was not created to assess liver graft failure, we hypothesize that it would be indicative, to some degree, of postoperative graft failure and time to graft failure.
Calculating NAFLD Fibrosis Scores
To calculate the NFS for each donor liver, the following formula was utilized:7

NFS = -1.675 + 0.037 × age (years) + 0.094 × BMI (kg/m2) + 1.13 × diabetes_score + 0.99 × AST/ALT ratio - 0.013 × platelet count (10^9/L) - 0.66 × albumin (g/dL)

The variable diabetes_score was quantified on a binary scale, with a patient diagnosed with diabetes assigned a score of 1 and a patient not diagnosed with diabetes assigned a score of 0.7
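For illustration, the score and the severity bins of Table 2 can be written out directly; the function and argument names below are our own, and the handling of values falling exactly on a cutoff is an assumption, since the published ranges leave the boundaries open:

```python
# Sketch of the NFS calculation (coefficients from Angulo et al., ref. 7)
# and the study's severity binning from Table 2. Names are our own;
# boundary handling at the exact cutoffs is an assumption.
def nafld_fibrosis_score(age, bmi, diabetes_score, ast, alt, platelets, albumin):
    """age in years, BMI in kg/m2, diabetes_score in {0, 1}, AST/ALT in U/L,
    platelets in 10^9/L, albumin in g/dL."""
    return (-1.675 + 0.037 * age + 0.094 * bmi + 1.13 * diabetes_score
            + 0.99 * (ast / alt) - 0.013 * platelets - 0.66 * albumin)

def nfs_severity(score):
    """Map an NFS value to the low/medium/high strata of Table 2."""
    if score < -1.455:
        return "low"     # F0 - F1
    elif score <= 0.675:
        return "medium"  # F2
    return "high"        # F3 - F4

# Hypothetical donor resembling the median donor in Table 1.
score = nafld_fibrosis_score(age=47, bmi=28.3, diabetes_score=0,
                             ast=40, alt=50, platelets=250, albumin=4.0)
print(round(score, 3), nfs_severity(score))  # -2.374, low
```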
METHODS
Patient Selection
Patients were selected from a prospectively maintained United Network for Organ Sharing (UNOS) database of liver donations and transplants performed at a single center between September 1999 and May 2020. The database culls information from the Organ Procurement and Transplantation Network (OPTN) at the national, regional, and individual levels. Liver transplant donor-recipient pairs with the following available data were included in the study: donor NFS, donor liver biopsy data, and recipient graft survival. Relevant donor liver fibrosis data was collected from either available liver biopsy reports or final surgical pathology reports. From a database of 599 transplantation procedures, 366 had the data necessary for inclusion (Figure 2). Demographics data for both donors and recipients was collected and tabulated as well (Table 1).
Determining Postoperative Outcomes for Donors
Graft survival and time to failure were noted. The distinction between primary and secondary graft failure was also established. As is common convention in transplantation, primary graft failure is defined as complications with the initial engraftment of the liver, and secondary graft failure is defined as complications after engraftment.16 Due to ambiguity or lack of specification regarding the level of graft failure, this study did not differentiate between the two types when doing analysis, though this distinction is standard practice.
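In practice, the inclusion step above amounts to a complete-case filter over the transplant records. A minimal pandas sketch follows; the file name and column names are hypothetical placeholders, not the actual UNOS field names:

```python
# Complete-case inclusion filter for the study cohort. The file and
# column names below are hypothetical, not the actual UNOS field names.
import pandas as pd

df = pd.read_csv("unos_liver_transplants.csv")  # 599 donor-recipient pairs

required = ["donor_nfs", "donor_biopsy_fibrosis", "recipient_graft_survival"]
included = df.dropna(subset=required)  # keep only pairs with complete data

print(f"{len(df)} records -> {len(included)} included")  # 599 -> 366 here
```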
Analysis of NAFLD Fibrosis Score Categorization and Pathologist Scoring
The donor NFS values were split into three groups with associated risks of fibrosis. All low NAFLD scores (F0 – F1) were assigned 0, medium scores (F2) were assigned 1, and high scores (F3 – F4) were assigned 2. Pathologist scoring for fibrosis is on a scale from 0 to 4, so scores in the range [0, 1) were assigned 0, [1, 3) were assigned 1, and [3, 4] were assigned 2. In the analysis, pathologist scoring is provided in ranges because of discrepancies in how it was reported. For example, some livers were given non-integer fibrosis scores, such as 1.5. In other cases, right-lobe and left-lobe fibrosis readings were inconsistent. In these instances, the average of the two reported values was used.
Figure 3: Bubble Plot of NAFLD Score Category and Pathologist Scoring. This graph plots the intersections between NFS categorization and pathologist scoring in three dimensions. Points with no bubbles indicate no agreement between NFS score category and pathology scoring.
Statistical Analysis of the Relationship Between NAFLD Fibrosis Score and Graft Failure
A statistical analysis of the patient’s NFS and graft failure was done by separating the sample of 366 patients into two groups: one group of patients with graft failure and one group without. The groups were then broken into three strata: those with an NFS less than -1.455, those with a score between -1.455 and 0.675, and those with a score higher than 0.675 (Table 2).7 The proposed null hypothesis was that graft survival in recipients was independent of NFS scores; this hypothesis was tested using a chi-square test of independence. A Kaplan-Meier estimation analysis was used to test whether donor NFS scores were associated with time to graft failure. Data was censored when the time to graft failure was not recorded for a patient. The recipients were split into three groups in two different ways, and their survival curves were graphed and tested for any statistical significance. In the first categorization (Figure 4), the patients were split into groups based on the pre-determined NFS categories of low, moderate, and high severity fibrosis risk of their transplanted organs. In the second analysis (Figure 5), the patients were split into terciles (lower third, middle third, and upper third), with 122 patients in each. The null hypothesis for these tests was that the three groups, in each of the two tests, had similar time-to-failure, or survival times.
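The chi-square test itself is standard; the sketch below shows the workflow with SciPy. The per-stratum counts are hypothetical placeholders chosen only to be consistent with the reported totals (366 recipients, 72 graft failures), not the study’s actual stratified data:

```python
# Chi-square test of independence between NFS stratum and graft outcome.
# The counts are hypothetical placeholders consistent with the reported
# totals (366 recipients, 72 failures), not the study's actual data.
from scipy import stats

# Rows: NFS strata (low, medium, high); columns: graft survived, failed.
observed = [
    [110, 18],  # NFS < -1.455
    [130, 24],  # -1.455 <= NFS <= 0.675
    [ 54, 30],  # NFS > 0.675
]

chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.4f}, p = {p:.4f}")
# The null hypothesis (graft survival is independent of NFS stratum)
# would be rejected only if p fell below the .05 significance level.
```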
RESULTS
Figure 4: Kaplan-Meier Survival Curve for Graft Survival by Pre-Determined NAFLD Scores. This set of curves shows the survival of the 366 patients postoperatively as split up into the three NFS categories.
Correlation of the NAFLD Fibrosis Score with Pathologist-Reported Fibrosis Scoring
Of the 366 patients included in this study, 262 had pathology data including assessment of fibrosis. The results of the scoring comparisons are compiled in the bubble plot (Figure 3), where the x-axis is the NAFLD score category, the y-axis is the pathologist scoring as per the donor reports, and the size of the bubble itself indicates how many instances there were of that specific intersection. Instances of agreement, shown on the graph as (0,0), (1,1), and (2,2), only account for 36 of the 262 (13.74%) donor livers for which fibrosis data was available.
Relationship between NFS and Graft Failure in Recipients
Out of the 366 recipients included in the study, 72 (19.67%) of them ended up with graft failure. The graft failure incidence at 90 and 365 days was 4% and 10%, respectively. The graft outcomes were broken down by NFS severity.7 While there appears to be a trend towards greater graft loss with higher severity, there was no significant difference in graft failure at 90 (χ2 (2, N = 366) = 1.9232, p = .3823) and 365 days (χ2 (2, N = 366) = 1.6022, p = .4488) (Table 3).
Figure 5: Kaplan-Meier Survival Curve for Graft Survival by NAFLD Score Terciles. This set of curves shows the survival of the 366 patients postoperatively as split up into three equally sized terciles.
Kaplan-Meier Survival Analysis for the Determination of Correlation Between NFS and Graft Survival Time
The 366 recipients and their graft survival times were analyzed in two ways through a Kaplan-Meier survival analysis, for which they were split into three groups. Although the survival curve displayed a trend toward better graft survival for the low fibrosis risk group in the first analysis and the first tercile group in the second, this association was not statistically significant (p = .2).
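This style of analysis can be reproduced with standard survival-analysis libraries; the sketch below uses lifelines (one common choice), with a miniature, entirely hypothetical dataset in place of the UNOS extract:

```python
# Kaplan-Meier curves by NFS stratum plus a log-rank test, using the
# lifelines library. The miniature dataset is entirely hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.DataFrame({
    "days": [1101, 240, 3650, 90, 2000, 400],  # follow-up time, days
    "failed": [1, 1, 0, 1, 0, 0],              # 0 = censored (no failure)
    "stratum": ["low", "high", "low", "high", "medium", "medium"],
})

kmf = KaplanMeierFitter()
for name, grp in df.groupby("stratum"):
    kmf.fit(grp["days"], grp["failed"], label=name)
    kmf.plot_survival_function()  # one survival curve per NFS stratum

# Log-rank test of the null hypothesis that all strata share the same
# time-to-failure distribution.
result = multivariate_logrank_test(df["days"], df["stratum"], df["failed"])
print(f"log-rank p = {result.p_value:.3f}")
```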
Donor NFS Severity | 90 Day Graft Loss (%) | 365 Day Graft Loss (%)
Low Severity | 3.6 | 5.5
Medium Severity | 2.8 | 9.8
High Severity | 6.0 | 11.3
Total | 4.4 | 9.8

Table 3: Graft Outcome by Donor NFS Severity. This table shows the relative percentages of graft loss observed in patients with low, medium, and high NFS severity. The “total” row on the bottom shows a complete percentage of 90- and 365-day graft loss.
DISCUSSION
Graft failure after liver transplantation can result in death if re-transplantation cannot be performed. Thus, it is important to identify donor livers that are less likely to fail. We hypothesized that a higher donor liver NFS would be associated with a greater risk for graft failure in recipients due to the association between NFS and liver fibrosis, a risk factor in transplantation. Graft failure was identified in 4.37% of recipients 90 days post-transplant, and 9.84% of recipients one year post-transplant. Although graft survival was lower at higher NFS, this was not found to be significant. This could either be due to the limited power of this study or the true lack of an association between the NFS and the degree of fibrosis linked to graft failure. More data and collaboration with other centers are being pursued to conduct a higher-powered study to address the former. Regarding the latter, research is being done to find a tool that can diagnose fibrosis associated with graft failure. Employment of artificial intelligence (AI) on this front has been fruitful. The use of AI in histopathological assessment yields accurate, objective results that can be analyzed quickly to provide information regarding associations between histopathological features of the liver tissue and patient outcomes post-transplant.15
Very little association was found between donor NFS and biopsy liver fibrosis in pathologist assessments (Figure 3). It is not surprising that the pathologist scoring of fibrosis was low in general, as these livers came from a database of livers accepted for donation. Few donor livers selected for transplantation would have high NFS. The NFS may also be inappropriately affected by the donor’s circumstances of death. As an example, if a patient died of an overdose and was anoxic for a period, their liver may have been impacted, leading to increased AST and ALT levels on the lab tests. Elevated AST and ALT increase the NFS, thus overcalling fibrosis risk. Donor NFS may also not predict early graft failure well because it only accounts for the quality of the donor organ and not the health of the recipient. Future studies could investigate whether the combination of donor NFS with recipient-specific factors, such as MELD score, would better predict graft survival post-transplantation. This combination may determine which donor livers are suitable for donation and which potential recipients cannot risk receiving a liver that has histopathological anomalies.
The biopsy remains the gold standard of pathological analysis. It is difficult, however, to compare biopsy results to one another because of the variability in pathologist analysis.4,5 Inconsistencies in the interpretation of reports between pathologists can lead to more issues with decisions, which are important to avoid in high-risk situations like a transplant. Additionally, although all transplants analyzed in this study were conducted at the same medical center, misinterpretation and lack of clarity between different transplant centers can have adverse impacts.16 The area of highest agreement
between the NFS and pathologist assessment was with low-scoring livers (12.60% of available fibrosis data), which means that there are conditions under which the NFS could supplement a liver biopsy to a fair level of accuracy. Further analysis of data and non-invasive tools is important in this regard.
The study was limited by the available data. The initial database accessed had 599 patient files, but 233 (38.90%) of them had to be excluded from the study for being incomplete. Out-of-date files were excluded only if the transplant occurred outside of the study period. Assuming these documents were filled out completely and accurately, the addition of the excluded patients could have led to more insightful statistical results in the end.
Due to the prevalence of liver diseases like NAFLD, liver transplantation is becoming increasingly common. To avoid re-transplantation and recipient mortality, it is important to be certain of the donor liver’s viability. Though it could not be concluded that a donor liver’s NFS was associated with recipient graft failure, the NFS, used alone or in conjunction with other tools, may yet prove to be an adequate indicator of patient outcomes. Such tools could reduce or even eliminate the need for a biopsy in some cases and provide a more holistic analysis of the potential risks involved in a given transplant procedure, including postoperative graft failure in the recipient. While the biopsy is the “status quo” for assessing livers for transplantation, noninvasive tools avoid much of the subjectivity that comes with pathological assessment while also being time-efficient and affordable.
ACKNOWLEDGEMENTS
This work was supported in part by Health Resources and Services Administration contract HHSH250-2019-00001C. The content is the responsibility of the authors alone and does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. Government. This research was conducted with help from the Department of Surgery at Stanford University.
REFERENCES
1. Pais, R., Barritt, A. S., Calmus, Y., Scatton, O., Runge, T., Lebray, P., Poynard, T., Ratziu, V., & Conti, F. (2016). NAFLD and liver transplantation: Current burden and expected challenges. Journal of Hepatology, 65(6), 1245–1257. https://doi.org/10.1016/j.jhep.2016.07.033
2. Patel, Y. A., Berg, C. L., & Moylan, C. A. (2017). Nonalcoholic fatty liver disease: Key considerations before and after liver transplantation. Digestive Diseases and Sciences, 61(5), 1406–
RESEARCH
1416. https://doi.org/10.1007/s10620-016-4035-3
3. Cheah, M. C., McCullough, A. J., & Goh, G. B.-B. (2017). Current modalities of fibrosis assessment in non-alcoholic fatty liver disease. Journal of Clinical and Translational Hepatology, 5(3), 261–271. https://doi.org/10.14218/JCTH.2017.00009
4. Bracamonte, E., Gibson, B. A., Klein, R., Krupinski, E. A., & Weinstein, R. S. (2016). Communicating uncertainty in surgical pathology reports: A survey of staff physicians and residents at an academic medical center. Academic Pathology, 3. https://doi.org/10.1177/2374289516659079
5. Coffin, C. S., Burak, K. W., Hart, J., & Gao, Z. (2006). The impact of pathologist experience on liver transplant biopsy interpretation. Modern Pathology, 19(6), 832–838. https://doi.org/10.1038/modpathol.3800605
6. Dyson, J. K., Anstee, Q. M., & McPherson, S. (2014). Non-alcoholic fatty liver disease: A practical approach to diagnosis and staging. Frontline Gastroenterology, 5(3), 211–218. https://doi.org/10.1136/flgastro-2013-100403
7. Angulo, P., Hui, J. M., Marchesini, G., Bugianesi, E., George, J., Farrell, G. C., Enders, F., Saksena, S., Burt, A. D., Bida, J. P., Lindor, K., Sanderson, S. O., Lenzi, M., Adams, L. A., Kench, J., Therneau, T. M., & Day, C. P. (2007). The NAFLD fibrosis score: A noninvasive system that identifies liver fibrosis in patients with NAFLD. Hepatology, 45(4), 846–854. https://doi.org/10.1002/hep.21496
8. Treeprasertsuk, S., Björnsson, E., Enders, F., Suwanwalaikorn, S., & Lindor, K. D. (2013). NAFLD fibrosis score: A prognostic predictor for mortality and liver complications among NAFLD patients. World Journal of Gastroenterology, 19(8), 1219–1229. https://doi.org/10.3748/wjg.v19.i8.1219
9. McPherson, S., Stewart, S. F., Henderson, E., Burt, A. D., & Day, C. P. (2010). Simple non-invasive fibrosis scoring systems can reliably exclude advanced fibrosis in patients with non-alcoholic fatty liver disease. Gut, 59(9), 1265. https://doi.org/10.1136/gut.2010.216077
10. Robinson, A., & Wong, R. J. (2020). Applications and limitations of noninvasive methods for evaluating hepatic fibrosis in patients with nonalcoholic fatty liver disease. Clinical Liver Disease, 15(4). https://doi.org/10.1002/cld.878
11. Xiao, G., Zhu, S., Xiao, X., Yan, L., Yang, J., & Wu, G. (2017). Comparison of laboratory tests, ultrasound, or magnetic resonance elastography to detect fibrosis in patients with nonalcoholic fatty liver disease: A meta-analysis. Hepatology, 66(5), 1486–1501. https://doi.org/10.1002/hep.29302
12. George, L. A., Lominadze, Z., & Kallwitz, E. R. (2018). Con: Patient and graft survival outcome at 1 and 3 years posttransplant do not reflect the value of successful liver transplantation. Clinical Liver Disease, 11(3), 62–65. https://doi.org/10.1002/cld.670
13. Hart, A., Schladt, D. P., Zeglin, J., Pyke, J., Kim, W. R., Lake, J. R., Roberts, J. P., Hirose, R., Mulligan, D. C., Kasiske, B. L., Snyder, J. J., & Israni, A. K. (2016). Predicting outcomes on the liver transplant waiting list in the United States: Accounting for large regional variation in organ availability and priority allocation points. Transplantation, 100(10), 2153–2159. https://doi.org/10.1097/TP.0000000000001384
14. Chu, M. J. J., Dare, A. J., Phillips, A. R. J., & Bartlett, A. S. J. R. (2015). Donor hepatic steatosis and outcome after liver transplantation: A systematic review. Journal of Gastrointestinal Surgery, 19(9), 1713–1724. https://doi.org/10.1007/s11605-015-2832-1
15. Yang, L., Ghosh, R. P., Franklin, J. M., Chen, S., You, C., Narayan, R. R., Melcher, M. L., & Liphardt, J. T. (2020). NuSeT: A deep learning tool for reliably separating and analyzing crowded cells. PLOS Computational Biology, 16(9), Article e1008193. https://doi.org/10.1371/journal.pcbi.1008193
16. Colling, R., Verrill, C., Fryer, E., Wang, L. M., & Fleming, K. A. (2014). Discrepancy rates in liver biopsy reporting. Journal of Clinical Pathology, 67(9), 825–827. https://doi.org/10.1136/jclinpath-2014-202261
Reimagining Autonomous Underwater Vehicle (AUV) Charging Stations with Wave Energy
By X Sun1, Bruce Deng2, Jerry Zhang2, Michael Kelly1, Reza Alam1, and Simo Makiharju1
1 Department of Mechanical Engineering, University of California, Berkeley
2 Department of Computer Science, University of California, Berkeley
ABSTRACT
The vast capabilities of autonomous underwater vehicles (AUVs)—such as in assisting scientific research, conducting military tasks, and repairing oil pipelines—are limited by high operating costs and the relative inaccessibility of power in the open ocean. Wave-powered AUV charging stations may address these issues. With projected increases in usage of AUVs globally in the next five years, AUV charging stations can enable less expensive and longer AUV missions. This paper summarizes the design process and investigates the feasibility of a wave-powered, mobile AUV charging station, including the choice of a wave energy converter and AUV docking station as well as the ability to integrate the charging station with an autonomous surface vehicle. The charging station proposed in this paper meets many different commercial, scientific, and defense needs, including continuous power availability, data transmission capabilities, and mobility. It will be positioned as a hub for AUV operations, enabling missions to run autonomously with no support ship. The potential market for this design is very promising, with an estimated $1.64 billion market size just for AUV technologies by 2025.
INTRODUCTION
The global autonomous underwater vehicle (AUV) market is expected to grow from $638 million in 2020 to $1.638 billion by 2025 [1]. The military and defense sector currently holds and is expected to continue holding the largest market share of the AUV industry. Within these sectors, AUVs can be used in mine countermeasures, anti-submarine warfare, reconnaissance, and force protection; each of these areas is projected to grow significantly in the next few years.1 In this paper, we propose a model closed-loop, autonomous system that would enable AUVs to operate for long durations without human intervention—including recharging and data collection—over large swaths of oceanic territory. Existing AUVs are limited by power and data capacity and must return to ship or shore for battery recharge and data collection. An emerging technology, underwater docking stations, helps AUVs to recharge and store extra data underwater, but is highly limited by its battery capacity and maintenance requirements.2 However, our proposed solution combines a wave energy converter (WEC) and an associated AUV docking station into an autonomous platform that eliminates the need for manual operation.
DESIGN PROCESS
The following design consists of three primary subsystems: the WEC for power generation, an autonomous surface vehicle (ASV) that acts as both a mobile central node and command module, and a docking station for power and data transmission to an AUV.
1. Wave Energy Converter
The WEC is the primary power generator. We chose to use a
heaving point absorber design, which requires an absorber at the top and an anchor at the bottom. Instead of using a counterweight to anchor the absorber, the absorber in our design uses the relative motion between the ASV and a sea anchor, caused by waves, to pull a spring-loaded mooring line spool that spins a rotary generator. The combination of an ASV and WECs uses few moving parts exposed to seawater, and the point absorber design has a high power absorption to surface profile area ratio while still maintaining a mobile ASV capability. Using the ASV as the absorber also improves the system’s stealthiness, which is vital to defense customers—ASVs are already used in covert operations.3
1.1. Heaving Point Absorber
Among the six types of WECs (point absorbers, terminators, attenuators, oscillating wave surge, submerged pressure differential, and rotating mass devices), we narrowed down our choices to the heaving point absorber and attenuator designs because of their relatively high capture width ratio (CWR) efficiency (capture width divided by a characteristic dimension of each WEC). The CWR is considered to be equivalent to the hydrodynamic efficiency, which best reflects the hydrodynamic performance of WECs. Ultimately, the heaving point absorber offered the greatest benefits for planned uses due to its small size, flexibility, minimal maintenance requirements, and high CWR efficiency.2 With the ASV acting as a buoy for the point absorber, the WEC and ASV can be linked under the same system.
1.2. Sea Anchor
Because the usage of a traditional heavy counterweight or anchor would not be conducive to the deployment of a mobile system, we used a sea anchor, or para-anchor, to provide the reaction force for the WEC. The sea anchor is an underwater, inverted
2. Autonomous Surface Vehicle

The ASV is the central node of our system. It connects the WEC to the docking station and provides a platform for electronics and system control. The ASV houses the WEC rotary generator, the underwater docking station, and battery power storage, along with other electronics for data collection and transmission. Motors and winches inside the ASV lower elements of the WEC and docking station, and two additional motors power the propulsion system. Solar panels on top of the ASV provide an alternative power source in the case of low wave activity.

2.1. Hull Structure

When selecting the hull design of the ASV, our two primary choices were a monohull or a catamaran structure. Initially, the monohull appeared best because of its simplicity and prevalence, but the catamaran hull's greater stability and buoyancy outweighed the drawback of its complexity. These features help the WEC ride higher above the waves and capture energy that would otherwise be lost to unstable motion. The dual hulls also create less drag, which reduces the energy used to maneuver, and placing the two propellers farther apart limits the need for a rudder, which would be another possible point of failure due to frequent fracture. Catamarans also provide more space than monohulls, which is beneficial for storing additional AUV-related equipment [7].
2.2. Mobility

AUVs across many industries prioritize crossing vast stretches of ocean over maintenance and surveillance around a specific area [8]. The charging station must therefore travel with the AUV to increase its range and operational capability. With careful selection of the WEC, the proposed configuration can produce more power than two operating AUVs need. While an additional motor adds power demand and design complexity, these costs do not outweigh the benefit of giving the AUV a nearly unlimited range.

2.3. Solar Power

Solar panels placed across the surface of the ASV act as a secondary power source, primarily for when wave heights are too low for effective power generation through the WEC. Based on a solar panel efficiency of 20 percent and a surface area of 1 m², 200 W of power and 1.6 kWh of energy per day can be produced (assuming 8 hours of sunlight) [9].

2.4. Power Storage

Lithium-ion batteries charged by both the WEC and the solar panels will be the primary power source. Lithium-ion was a natural choice given its prevalence and relatively high energy density (above 200 Wh/kg) compared to lead-acid batteries (under 50 Wh/kg) [10]. The batteries will be placed in two groups aboard the catamaran, one at the bottom of each hull. This positioning provides greater stability and safety because the batteries' weight is concentrated at the lowest and widest possible locations in the craft. Splitting the battery into two sections also provides redundancy in the case of a failure within one battery compartment. Based on current battery standards, about 20 kWh of storage capacity will be carried onboard, enough energy to charge an AUV and move the ASV [11]. A back-of-envelope check of these solar and battery figures follows below.
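The short script below reproduces the quoted solar output and shows the battery mass implied by the cited energy densities. The 1000 W/m² peak irradiance is our assumption (standard test conditions); the paper states only the efficiency, area, and sunlight hours.

SOLAR_EFFICIENCY = 0.20      # stated panel efficiency
PANEL_AREA_M2 = 1.0          # stated panel area
IRRADIANCE_W_M2 = 1000.0     # assumed peak solar irradiance (standard test conditions)
SUN_HOURS = 8.0              # stated daily sunlight

solar_power_w = SOLAR_EFFICIENCY * IRRADIANCE_W_M2 * PANEL_AREA_M2
daily_energy_kwh = solar_power_w * SUN_HOURS / 1000.0
print(solar_power_w, daily_energy_kwh)   # 200.0 W and 1.6 kWh -- matches the text

# Mass implied by the quoted 20 kWh pack at the cited energy densities:
for chemistry, wh_per_kg in [("Li-ion", 200.0), ("lead-acid", 50.0)]:
    print(chemistry, 20_000.0 / wh_per_kg, "kg")   # 100 kg vs. 400 kg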
Figure 2: The C-Cat 3 ASV from L3Harris (left) and at Autonomous Warrior 2018 (right), on which this ASV design is modeled [5,6].
2.5. Data Transmission (AUV-ASV-Satellite/Shore)

Traditional AUVs and remote research sites often use hard drives or other storage devices to hold recorded data, with manual means of retrieval. This model presents three primary issues: the possibility of data loss through storage failure; inaccessibility, where reaching a remote site is difficult and costly; and untimely data availability. To counteract these limitations, data will be transmitted directly and nearly instantaneously through a satellite to the consumer, negating all three issues with storing data aboard the system. Reliable hard drives serve as backups when satellite communication is lost.

3. Docking Station

The docking station provides a secure power and data connection to an AUV. It uses a drop cord: a cord attached to the ASV with a winch that can be raised and lowered to any desired depth. The cord transfers power from the ASV to the AUV and collects data from the AUV. The ability of an AUV to stay beneath the surface saves time and money during operational periods. Another concern is the growth and buildup of marine biomass, including barnacles, algae, and tubeworms, creating an obstruction between the docking station and the AUV's contact points; maintaining an operational depth below 300 m prevents such buildup. Finally, our design uses a universal docking station, so that customers who need a variety of differently sized or shaped AUVs can select their systems based solely on their mission tasks instead of the capabilities of the charging station.

3.1. Drop Cord

There are two main types of AUV docking stations: cone systems and wire latching systems. A cone system is composed of a horizontal cylindrical housing and a funnel cone for the AUV to drive into [12]. After the vehicle homes inside the cylinder, mechanisms such as clamps secure the AUV, and either wet-mateable or inductive charging can be used. Although the cone system provides high protection for the vehicle and high efficiency of charging and data transmission, it has a major flaw: the shape of the cone and the size of the cylinder vary greatly with the type of AUV used in the mission. It also requires extremely precise homing because of the limited size of the inner cylinder. The wire latching system, in contrast, can charge any AUV regardless of its shape or size. The most developed design comes from a collaboration between the MIT Sea Grant Laboratory and the Woods Hole Oceanographic Institution's Autonomous Ocean Sampling Network (AOSN) dock. This docking station is a vertical latching system composed of a V-shaped titanium latch along a spring-loaded capture bar secured on the nose of the vehicle. To latch to the station, the AUV drives into the long vertical docking pole and pushes the capture bar aside with its forward momentum. The latching operation is entirely passive, which allows the AUV to be kept safely at the station even during a power outage. To undock, the vehicle releases the capture bar through a rotary actuator and withdraws from the pole.
Figure 3: The AOSN docking components on deck (left) and an omnidirectional approach to docking (right) [14,15].

The AOSN dock was designed to provide both charging and data transmission wirelessly via conducting cores built into the underside of the AUV's nose and the upper side of the dock's lower carriage; the core on the AUV can be folded away until needed. Docking is accomplished via an ultra-short baseline system on the vehicle and a 2 kHz acoustic beacon on the hub. The two conducting cores are then aligned as the lower carriage of the dock is driven up, so that the AUV is held tightly between the two carriages and against the conducting pads on the carriages, while the conducting cords are also tightly secured [13]. While latching designs like the AOSN dock do not give AUVs the full-body protection that funnel designs do, they keep the AUV very stable and, through their passive connection mechanism, protect it from drifting away. The latching mechanism can be easily installed on any AUV nose and allows the AUV to home from any direction; it serves as a truly universal docking station for any market. In addition, the housed dock electronics include a long baseline transponder for acoustic navigation and a utility acoustic modem for communication with the AUV. A homing beacon, located above the docking pole, signals the AUV's ultra-short baseline homing system. Dock sensors include an acoustic Doppler velocimeter to measure currents, a Seabird SBE 38 temperature sensor, a Paroscientific pressure sensor to monitor total depth, and dock carriage position switches to determine dock status [3]. In our design, the key parts of the AOSN docking station (electronics, movable carriages, conducting cores, and conducting pads) will be implemented. Note that in Figure 4, the positions of the conducting cores and the two carriages are flipped, but their functions remain the same. A schematic sketch of the docking sequence follows below.

Figure 4: Labeled view of the AOSN omnidirectional docking station with AUV latching system [14].
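The docking sequence just described can be reduced to an ordered set of stages, sketched below in Python. This paraphrases the AOSN behavior for illustration only; it is not code from the cited work, and the stage names are ours.

from enum import Enum, auto

class DockStage(Enum):
    HOMING = auto()      # USBL on the AUV homes on the dock's 2 kHz acoustic beacon
    LATCHING = auto()    # AUV drives onto the pole; the capture bar latches passively
    MATING = auto()      # lower carriage drives up, pressing the conducting cores together
    TRANSFER = auto()    # wireless power transfer and data offload through the cores
    RELEASING = auto()   # rotary actuator frees the capture bar; AUV withdraws

DOCKING_SEQUENCE = [DockStage.HOMING, DockStage.LATCHING, DockStage.MATING,
                    DockStage.TRANSFER, DockStage.RELEASING]

def run_docking(stage_complete) -> DockStage:
    """Advance through the stages in order; hold at the first incomplete stage.
    Because the latch is passive, holding at any stage is safe even unpowered."""
    for stage in DOCKING_SEQUENCE:
        if not stage_complete(stage):
            return stage
    return DockStage.RELEASING

# Example: every stage reports success, so the sequence runs to completion.
print(run_docking(lambda stage: True))   # DockStage.RELEASING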
3.2. Power Transmission

The three sections of the system are connected via power transmission. An electrical cable is sufficient to transmit power between the ASV and the docking station. However, power transmission from the docking station to an AUV requires a non-permanent connection, for which there are two solutions: a direct, wet-mateable connection or inductive charging. A wet-mateable connection between the AUV and docking station provides a higher rate of power transfer than inductive charging, but requires the AUV to be positioned extremely precisely relative to the docking station: the AUV connects directly to the docking station through metal contacts, such as a plug, while inductive charging only requires the AUV to be close to the docking station. Inductive charging tolerates a margin of positioning error, minimizes electrical faults, generates little heat, and is more durable. Thus, inductive charging is preferable for power transmission between the AUV and the docking station. While its primary drawback is power loss, a 2019 study showed that the loss was not significant enough to create a problem [10]; a rough sketch of the achievable efficiency follows below. The AC voltage generated by the WEC is rectified to DC because of DC power's superior transmission capabilities in salty ocean water; it does not need an additional cable to accommodate the negative polarity of ocean water, whereas AC power requires three-phase cables [12]. This current runs from the WEC to the battery and then down to the docking station, where the AUVs are charged.
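To give a sense of the scale of inductive power loss, the sketch below evaluates the standard maximum-efficiency expression for a resonant inductive link. The coupling coefficient and coil quality factors are assumed values for loosely coupled underwater pads, not measurements from this design.

import math

def max_link_efficiency(k: float, q1: float, q2: float) -> float:
    """Maximum efficiency of a resonant inductive link:
    eta_max = F / (1 + sqrt(1 + F))^2, with figure of merit F = k^2 * Q1 * Q2."""
    f = k**2 * q1 * q2
    return f / (1.0 + math.sqrt(1.0 + f))**2

# Loosely coupled pads (k = 0.1) with moderate coil quality factors (Q = 100):
print(max_link_efficiency(0.1, 100.0, 100.0))   # ≈ 0.82, i.e., roughly 18% link loss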
3.3. Data Transmission (ASV-AUV)

While the optimal choice for data transmission between the AUV and the docking station would be a direct connection, the stringent positioning requirements of a direct connection would limit the capability of the docking station. The design therefore includes a wireless connection through radio waves, since neither acoustic nor optical methods are suitable for this link (see Table 1). Real-time transmission and satellite communication with land can be achieved with an antenna on the ASV. Radio waves would work properly at 315 MHz or 433 MHz.

Table 1: Characteristics of underwater communication (Alex Immas, personal communication).

Property     Acoustic    Optical           RF
Bandwidth    ~kbps       ~Gbps             ~Mbps
Range        km          ~100 m            ~10 m
Speed        1500 m/s    2.2 × 10^8 m/s    2.2 × 10^8 m/s
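Table 1 can be read quantitatively: the script below estimates the time to offload a mission dataset and the one-way propagation delay for each method, using representative rates within the table's order-of-magnitude classes. The 1 GB payload and the exact rates are illustrative assumptions.

DATA_BITS = 1e9 * 8              # assumed 1 GB mission dataset
RANGE_M = 10.0                   # compare delays over the ~10 m RF-class range

# (representative bandwidth in bit/s, propagation speed in m/s) per Table 1
LINKS = {
    "acoustic": (10e3, 1500.0),  # ~kbps class
    "optical":  (1e9, 2.2e8),    # ~Gbps class
    "RF":       (1e6, 2.2e8),    # ~Mbps class
}

for name, (rate_bps, speed_m_s) in LINKS.items():
    transfer_s = DATA_BITS / rate_bps
    delay_s = RANGE_M / speed_m_s
    print(f"{name}: {transfer_s:,.0f} s transfer, {delay_s:.1e} s one-way delay")
# acoustic: 800,000 s (~9 days); optical: 8 s; RF: 8,000 s (~2.2 h)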
4. Final Conceptual Design

By incorporating three subsystems (a mobile ASV, a heaving point absorber WEC, and a drop cord charger), this design is a new system that uses wave energy to power the growing AUV industry.

Figure 5: Final conceptual design of a mobile AUV charging station powered by wave energy, utilizing an autonomous surface vehicle combined with a heaving point absorber, a stowable para-anchor, and a drop cord inductive charging system compatible with multiple sizes and models of AUVs.

CONCLUSION

With more than eighty percent of the ocean unexplored, more AUVs are likely to be deployed in the future, and they will require an autonomous powering and data transmission system. This design could replace traditional fixed-point WECs for such missions. Instead of building a separate body for the WEC or an expensive system dedicated to one specific AUV, the proposed design has a unified, simplified structure that meets the needs of multiple consumers by using the ASV itself as the heaving body. Further work is needed to build a physical prototype and validate the theoretical analysis presented here.

ACKNOWLEDGMENTS

The team thanks the U.S. Department of Energy for hosting the 2020 Marine Energy Collegiate Competition and providing the funding to conduct this research. We also thank the competition judges and members of the Monterey Bay Aquarium Research Institute for their assistance and helpful feedback.

APPENDIX

The appendix to this article may be found online by navigating to: https://escholarship.org/uc/our_bsj/

REFERENCES
1. MarketsandMarkets Research Private Ltd. (2020). Autonomous underwater vehicle (AUV) market by type (shallow AUVs, medium AUVs, large AUVs), application (military & defense, oil & gas), shape, technology (navigation, imaging), payload type (cameras, sensors), region – global forecast to 2025 (Report Code SE 3671). https://www.marketsandmarkets.com/MarketReports/autonomous-underwater-vehicles-market-141855626.html
2. Babarit, A. (2015). A database of capture width ratio of wave energy converters. Renewable Energy, 80, 610–628. https://doi.org/10.1016/j.renene.2015.02.049
3. Frye, D. E., Kemp, J., Paul, W., & Peters, D. (2001). Mooring developments for autonomous ocean-sampling networks. IEEE Journal of Oceanic Engineering, 26(4), 477–486. https://doi.org/10.1109/48.972081
4. Dunens, E. (2017). Sea anchor at Port Fairy Pelagic, Victoria [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Sea_Anchor_(36776431450).jpg
5. L3Harris Technologies, Inc. ASView™ Control System. https://www.l3harris.com/all-capabilities/asview-control-system
6. Hitz, G., Pomerleau, F., Garneau, M.-E., Pradalier, C., Posch, T., Pernthaler, J., & Siegwart, R. (2012). Autonomous inland water monitoring: Design and application of a surface vessel. IEEE Robotics & Automation Magazine, 19(1), 62–72. https://doi.org/10.1109/mra.2011.2181771
7. U.S. Navy. (2014). The Navy unmanned undersea vehicle (UUV) master plan. CreateSpace Independent Publishing Platform.
8. National Renewable Energy Laboratory and Pacific Northwest National Laboratory. (2019). Powering the blue economy: Exploring opportunities for marine renewable energy in maritime markets. U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy. https://www.energy.gov/sites/prod/files/2019/03/f61/73355.pdf
9. Quinn, J. B., Waldmann, T., Richter, K., Kasper, M., & Wohlfahrt-Mehrens, M. (2018). Energy density of cylindrical Li-ion cells: A comparison of commercial 18650 to the 21700 cells. Journal of The Electrochemical Society. https://doi.org/10.1149/2.0281814jes
10. Miao, Y., Hynan, P., von Jouanne, A., & Yokochi, A. (2019). Current Li-ion battery technologies in electric vehicles and opportunities for advancements. Energies, 12(6), 1074. https://doi.org/10.3390/en12061074
11. Saleem, A., Rashid, F., & Mehmood, K. (2019). The efficiency of solar PV system. In Proceedings of 2nd International Multi-Disciplinary Conference, 19–20 December 2016, Gujrat.
12. Khaligh, A., & Onar, O. C. (2017). Energy harvesting: Solar, wind, and ocean energy conversion systems. CRC Press.
13. Gish, A. L. (2004). Design of an AUV recharging system [Master's thesis, Massachusetts Institute of Technology]. https://apps.dtic.mil/sti/pdfs/ADA425977.pdf
14. Singh, H., Lerner, S., von der Heydt, K., & Moran, B. A. (1998). An intelligent dock for an autonomous ocean sampling network. In OCEANS'98 Conference Proceedings (Cat. No.98CH36259) (Vol. 3, pp. 1459–1462). IEEE. https://doi.org/10.1109/OCEANS.1998.726312
15. Singh, H., Bellingham, J. G., Hover, F., Lerner, S., Moran, B. A., von der Heydt, K., & Yoerger, D. (2001). Docking for an autonomous ocean sampling network. IEEE Journal of Oceanic Engineering, 26(4), 498–514. https://doi.org/10.1109/48.972084
16. Singh, H., Catipovic, J., Eastwood, R., Freitag, L., Henriksen, H., Hover, F., Yoerger, D., Bellingham, J. G., & Moran, B. A. (1996). An integrated approach to multiple AUV communications, navigation and docking. In OCEANS '96 MTS/IEEE Conference Proceedings (Vol. 1, pp. 59–64). IEEE. https://doi.org/10.1109/OCEANS.1996.572458
Cover design credit: William Ren, Jiarachaya Kiriruangchai, Ziyuan Lei, Christy Lai, Megan Yan, and April Huang from Innovative Design.