ORIGINS


25TH ANNIVERSARY EDITION

ORIGINS

VOL 26, ISSUE 1


STAFF

EDITOR'S NOTE

The origins of science journalism can be traced back to the late 19th century, when H.G. Wells became the first ever science journalist, arguing that writers need to translate scientists' jargon. Since then, science journalism has gone through many different eras, from the "Gee Whiz" age to the "Watchdog" era. Now we find ourselves in a unique age in which not only information but also misinformation spreads easily through the internet. For this reason, it has become more important than ever for science journalists to communicate information accurately and clearly to the public.

Editor-in-Chief Elettra Preosti

Managing Editor Melanie Russo

Features Editors Lilian Eloyan Natalie Slosar

Interviews Editors Ananya Krishnapura Esther Lim

Research Blog Editors Nanda Nayak Rebecca Park

Elettra Preosti

Layout Editors

Aarthi Muthukumar Stephanie Jue

Publicity and Finance Chairs Afroze Khan Hosea Chen

Copy Editors Leighton Pu Noah Bussell

Senior Advisor Jonathan Kuo

Features Writers

Anisha Iyer Anna Castello Gunay Kiran Ibrahim Abouelfettouh Jonathan Hale

Interview Team Allisun Wiltshire Andrew Delaney Caroline Kim Elizabeth Chen Grace Guan Jacob Kaita Martin

Letian (Jane) Li Luyang Zhang Marley Ottman Shreya Ramesh Siddhant Vasudevan Kira Sterling Laurentia Tjang Lexie Ewer Li Qiankun Marina Ilyas Michael Xiong

Research and Blog Team Anjuli Niyogi Eunice Tsang Evelyn Kong Julia Wong Katherine De Lange Lauren Meyers Leighton Pu


Mark Ortega Noah Bussell Nethra Koushik Rebecca Hebert Sinead de Cleir Xavier Yin Xiaopei Chen



With this in mind, in the spring of 1996, a group of eager young students sought to create a platform through which undergraduates at the University of California, Berkeley could publish their scientific research. As a result, the Berkeley Scientific Journal was founded. The journal has since expanded to include an Interviews department, which speaks with leading scientists around the world; a Features department, which produces articles about significant scientific discoveries; and a Blogs department, which was the origin of our online presence. Publishing our journals online has made our versatile scientific writing easily accessible to the world. Since its origin, the Berkeley Scientific Journal has also set a standard for undergraduate science journals at other universities as writers strive towards the best kind of science journalism. This semester, as our writers crafted their pieces, they also kept in mind the origins of the science at hand. For example, Features writer Anna Castello discusses current research showing how psychedelics may benefit mental health while exploring the history of psychedelic legalization. In another piece, Ibrahim Abouelfettouh investigates technologies that deepen our understanding of how the universe itself originated. An interview with Dr. Brandon Collins illustrates how the history of fire management in California has led to our current wildfire crisis. These are only a few examples. So, for our 25th anniversary, as we look to the future of the Berkeley Scientific Journal and continue to evolve, we remember the strength of our roots and our part in the origin of accurate, accessible science journalism.

Elettra Preosti, Editor-in-Chief
Melanie Russo, Managing Editor



TABLE OF CONTENTS

Features
10. Forging Stars: The Technology Behind Fusion Power - Marley Ottman
19. Dark Energy, Robots, and Intergalactic Cartography - Ibrahim Abouelfettouh
29. Blazing a New Trail: Wildfire Suppression in California - Luyang Zhang
33. Examining the Role of Availability Heuristic in Climate Crisis Belief - Gunay Kiran
41. Provisional Truths: The History of Physics and the Nature of Science - Jonathan Hale
46. It's Lights Out and Away We Go - Siddhant Vasudevan
54. Let's Take a Trip into Mental Health - Anna Castello
58. The Sunset of Twilight Sleep - Jonathan Kuo
67. Cultured Meat: Growing Meat in the Lab - Jane Li
75. Where is Everyone: The Search for Life in the Vast Unknown - Shreya Ramesh

Interviews
4. The Origins of the Berkeley Scientific Journal - Elizabeth Chen, Ananya Krishnapura, Jonathan Kuo, Esther Lim, Laurentia Tjang, Allisun Wiltshire, Michael Xiong
14. Finding Meaning in Sound: Auditory Perception and Adaptation (Dr. Frédéric Theunissen) - Caroline Kim, Kira Sterling, Ananya Krishnapura
24. Dual Imaging: A New Frontier in MRI (Dr. Ashok Ajoy) - Andrew Delaney, Lexie Ewer, Esther Lim
36. Engineering Longevity and the Reversibility of Aging (Dr. Irina Conboy) - Qiankun Li, Michael Xiong, Esther Lim
50. Rewriting Textbooks with Single-Particle Tracking Microscopy (Dr. Robert Tjian) - Elizabeth Chen, Laurentia Tjang, Ananya Krishnapura
61. Innovating Unprecedented Treatments for Celiac Disease (Dr. Detlef Schuppan) - Marina Ilyas, Jacob Martin, Esther Lim
71. Burning Questions with a Forestry Expert (Dr. Brandon Collins) - Grace Guan, Allisun Wiltshire, Ananya Krishnapura

Research
79. The Effect of Conflict on Healthcare Workers in Syria: Results of a Qualitative Survey - Sarah Abdelrahman and Rohini Haar
85. Wildfire Significance within the San Francisco Bay Area's Air Quality - Scott Hashimoto, Rohith Moolakatt, Amit Sant, Emma Centeno, Ava Currie, Joyce Wang, Grace Huang, Dr. Amm Quamruzzaman




Professor Spotlight

Caroline Kane, PhD
Caroline Kane, PhD, is Professor in Residence Emerita of Biochemistry, Biophysics and Structural Biology at UC Berkeley. Her research is centered on gene expression in eukaryotic cells, specifically the transcription elongation process. She has been the faculty advisor for BSJ since the journal was founded in 1996.

Brent Mishler, PhD
Brent Mishler, PhD, is the Director of the University and Jepson Herbaria at UC Berkeley and a professor in the Department of Integrative Biology. BSJ published research co-authored by Professor Mishler in Spring 1996 in our "Inaugural Issue."

Robert Tjian, PhD
Robert Tjian, PhD, is a professor of biochemistry and molecular biology at the University of California, Berkeley. He was named a Howard Hughes Medical Institute investigator in 1987 and served as president of the institute from 2009 to 2016. In one of our earliest issues, BSJ published an interview with Dr. Tjian in 2000 for our "Special Report on Biotechnology."

Michael Eisen, PhD
Michael Eisen, PhD, is a Professor of Genetics, Genomics, and Development in the Department of Molecular and Cell Biology at the University of California, Berkeley and a Howard Hughes Medical Institute Investigator. He is one of the cofounders of the Public Library of Science, an open access publisher of scientific literature, and also the current Editor-in-Chief of eLife, a peer-reviewed open access scientific journal for the biomedical and life sciences.



INTRODUCTION

Berkeley Scientific Journal, founded in 1996, was created at a time when historic advances were reconfiguring the ways people spoke about, wrote about, and worked in science. It was a special moment for scientific research. Dolly the sheep became the first mammal successfully cloned in a laboratory. Later that year, Andrea Ghez and Reinhard Genzel first found evidence for a black hole residing in the center of our galaxy, a discovery that earned them the Nobel Prize in 2020 (along with Roger Penrose). As one of the first science journals in the nation dedicated to undergraduate research, BSJ reflected and contributed to these reconfigurations. It helped undergraduate scholars formalize their engagement in science and scientific publication, making more visible the substantive roles through which students contributed to the scientific process. It also helped spark similar projects at other universities across the nation, creating nationwide venues where undergraduate participation in science was normalized and celebrated. In this special piece, we commemorate the 25th anniversary of the Berkeley Scientific Journal by reflecting on the journal's progression since its early days, as well as the evolving fields of scientific journalism and publication. In conversations with Caroline Kane, who has served as the journal's faculty advisor since its founding in 1996, and with other faculty members involved in early issues of the journal, we consider how the landscapes of scientific research, academic publication, and communication with the public have changed over the past quarter century.

PROGRESS IN RESEARCH

BSJ: How has the development of new research methods and techniques throughout the years changed the landscape of biological research?

CK: It has sped up discovery, and that is not a surprise. Historically, this has always happened in science. When CRISPR was first introduced, I wished I still had my lab running because I immediately thought of half a dozen experiments that we could do so rapidly that would have taken years before because of the complexity of some of the cloning or knockouts. And, it is not just CRISPR. Fifteen years of research led to the ability to put together these vaccines for coronavirus within six to eight months. That was completely unheard of. In a way, this acceleration increases the pressure on scientists because it is harder to stay up to date on everything that is happening or every technique that you might want to use in your own lab. But, my hunch is that scientists have always felt that kind of pressure. I am really pleased that discoveries are happening even faster because it means that we are still inching closer and closer to the biological truth of the way things work, but these inches get covered faster.



BSJ: How, if at all, have your research projects shifted over the years in response to recent developments?

BM: For several years, I have been a faculty instructor for the course in Moorea, "Biology and Geomorphology of Tropical Islands" (ESPM C107 or IB 158LF), and it has been my privilege to oversee the progression of student projects over the years. The Moorea course (accessible online at moorea-ucb.org/) is an unusual course at Cal and began just before BSJ, in 1991. For most of the students that take it, it is their first real experience with independent research. Most courses or projects we allot student research units to do not consist of independent work; students are carrying out something somebody tells them to do. In the Moorea course, students go from square one and learn how to pick a research topic and design a good set of actions to address it before following through all the way to publication. The Berkeley Scientific Journal has been valuable as one of the ultimate goals for some of our best students to aspire to in that they can not only produce a class paper, but they can publish it. We have had publications in other journals as well, but BSJ has been a very trustworthy goal for the students all along. One idea that the students are really interested in now is the data science revolution. Several students are interested in modeling the ranges of both native and invasive species. My own research has changed a lot into big data approaches with large scale phylogenies, which use genomic data and then geographic data from museum databases. The questions are enduring, but the methods that we are able to apply keep getting better. The new techniques that are coming out in data science, molecular biology, and computational biology are nice additions to a set of more traditional techniques that persist and ultimately are still needed for our field of ecology and evolution.



RT: I no longer do the old kinds of experiments that I did 30 to 40 years ago here. My research methods used to be to tear the cell apart, isolate the protein, and study it in isolation. Now, we produce whatever we want to study through genetic engineering. We would not be able to do this without Jennifer Doudna's discovery of CRISPR systems. For example, if we want a protein to carry a fluorescent tag, we use gene editing to put a fluorescent tag on the molecule so that we can study its movement. The fluorescent light is a way to spotlight the molecules we want to see in the middle of a billion other molecules running around in the cell.


BSJ: Professor Kane, our journal interviewed you in our fall 1997 issue for the article, "Women in Science: An Exploration of Barriers." What are your opinions on the progress of gender equality in STEM fields in 2021?

CK: I think there has been an enormous improvement, but there is also an enormous way to go. Gender equality in all sciences has improved dramatically since the late 90s. Currently, almost 30% of the tenure-track faculty in UC Berkeley's Molecular and Cell Biology department are women. We are one of the most diverse departments in the United States regarding the proportion of women in faculty positions, but there is still a long way to go because certain groups, such as women of color as well as the LGBTQ+ community, remain underrepresented. With regard to the LGBTQ+ community, some of our own faculty who are part of this community were intimidated from even admitting that until 10 years ago; the shift towards greater acceptance is only very recent. Underrepresented groups still face microaggressions and offhand comments, but I am gratified that it is so much better now. I am still working on the issue of increasing diversity in the scientific community as well as including those who have disabilities, whether they are visible or invisible.

OPEN ACCESS PUBLICATION

BSJ: What are your thoughts on the rise of open access publications? In your opinion, how, if at all, has this shift affected research in academia?

BM: I think it is great. I am speaking, though, as somebody who is in a moderately rich institution as compared to many around the world. Everything I have published for years is open access, and I think everything in science publication will ultimately end up moving toward open access. In my opinion, it is fair to expect that the more well-funded researchers and institutions can pay for publication. The one caveat I would have is that there should always be a way to publish, even if researchers cannot afford to do so. Some sort of grant or subvention or even forgiveness of fees should be a part of the system. It is pragmatic for your career to publish in open access publications. If something is buried in a print world that nobody can access electronically or if something is not freely accessible online, chances are that individuals will just ignore it. These days, it almost always has to be open access. I really believe in science for the people, and I believe the way to make this discourse democratic, open, and available to everyone is to make it open access. For example, in the Jepson Herbarium, where I am the Director, our biggest project is called the Jepson Manual. The manual used to be a book we sold, but now we give the information away for free on the website The Jepson eFlora. We have open access to our most central, important set of resources, which I believe we have to do. It is tempting to sell things, but it is better to raise your money in other ways through fundraising and grants.


ME: Previously, the issue with research was physical accessibility; it was neither easy nor free to send hard-copy journals to everybody on the planet. The internet allowed us to gain access to the information we wanted. So, on that note, the scientific community thought of creating a big database of every scientific paper such that we can easily search for information and connect what we read to experiments we are interested in doing. However, these papers were owned by publishers, and we had no right to download them, use them, or distribute them in any way. It seemed so obviously wrong. Research is a public good. It is mostly funded by public money, and it is performed by scientists who are working in the public's interest in order to do good for the world. As a scientist, I would want everybody who is interested in my work to be able to access the information. So, I, along with my advisor at the time, Pat Brown, and Harold Varmus, created the Public Library of Science (PLOS) to publish all open access journals. We wanted to fix this problem by creating a totally different model for science publishing, where the fundamental principle is that whatever you produce is freely available; there are no restrictions on who can access or use the material. Publishers did not want to do this because they make a lot of money from publications, and scientists still want to publish in the most prestigious journals for their career. The scientific community did not completely make the shift to open access for a long time, but it is finally now starting to happen. Of course, there is still a role for journals that organize information, but they should not have such exclusivity tied to them. Regarding the process of peer review, we should not only review works of science at the very beginning and only have them reviewed by one authority. However, there is the danger that if you rely on public commentary, random people can say whatever they want. Overall, the shift to open access requires a lot of care, thought, and oversight to make sure that these endeavors are not destroyed by the internet.

CK: I am a huge proponent of open access. Some open access journals even put the papers up after peer review but before formal publication to let others comment on the reviewers' comments, as well as comment on the paper itself, in case they have additional input. I retired in 2008, and if I had done that in 2008, I would not have been able to publish my research anywhere; that was considered full release of your data. However, now, most of the open access journals will still accept these papers. I think that it can only improve the quality of the paper and, in some ways, make it more credible. Open access also helps to make non-scientists feel more included in the discourse of the scientific community as they can see for themselves what science really is, which is progress through disagreement, uncertainty, discussion, and changing hypotheses based on new data. Taxpayers pay for the research, so they should be able to have access to publications, even the publications they may not understand.

RT: I would say there is a major revolution going on. Many of us here in Berkeley feel that when our work gets done, we want it to be publicly accessible as soon as possible. Open access matters most to underprivileged scientists in developing countries. Open access means we are trying to democratize science. One of the things that has really revolutionized publication, especially during COVID, is bioRxiv, which is basically a preprint server. After scientists do experiments and write a paper, they can send it to bioRxiv instead of major publishing journals. Your paper automatically becomes public. At this point, nobody has reviewed it yet, but you get a lot of comments and you can start adding experiments to make sure the results are interpreted properly. The next step would be sending it to journals where there would be very stringent review. In the meantime, you would still update the bioRxiv version. What I like about this process is that the open access version is in your control, not in the control of the journal. That is going to have massive implications for democratizing information. This type of publishing is really crucial during COVID because of the urgency of the topic, so a lot of papers were put into bioRxiv, including ours.

SCIENCE JOURNALISM

BSJ: With the rise of social media over the past few years, science journalism has declined as a means for providing information for the general public. What do you make of this trend?

BM: It obviously goes without saying that the nature of scientific discourse has really gotten worse. You can see it right now with all the misinformation about COVID-19, and before that you could see it with climate change and evolution. I think that makes the role of real scientific publications evermore essential. When BSJ started, it was not as obvious why we had to have peer reviewed, real scientific papers, but now it is just essential. There have to be places where real scientific studies have undergone peer evaluations so we can get trustworthy information. My recommendation to people is to use social media for social things, like keeping up with your friends and family, but do not use it for anything important. For example, in the Moorea course papers we do not really want people to cite blogs and websites. We say, "Cite real scientific papers." You can have Joe's blog on science and Joe can say anything, but you should not believe it. You have to go look at the real literature. I think BSJ really has to keep going and has to keep that high standard.

ME: The internet has empowered people with the opportunity to communicate science to wide audiences. However, since traditional media has been disrupted by the internet, science journalism as a full-time profession is harder than it used to be. Something that has disintegrated is our ability to have an overall agreed-upon distribution of science information that governs the way we make decisions as a society. It has become easy for the public space to be occupied with either misleading science disinformation or sometimes just chaos, which undermines the whole endeavor. It is not that people have suddenly started to be more ill-informed or biased in their thinking, but rather that because of the internet and increased connectivity, we now are more aware of this issue. The viral character of science disinformation is really problematic. Additionally, some media outlets are out to find the most marketable bits of disinformation and spread them. During the COVID-19 pandemic, a few prominent scientists became the worst actors in this space by broadcasting inaccurate information. So, it is not always the scientific community against the world, but sometimes also the scientific community against itself when we are all figuring out what to do. But there are pluses and minuses here. It is a hard time to be in the business of trying to communicate science to the public because it is not entirely clear who you are talking to and what they expect out of you, but if you decide to become a science communicator, you have so many more options to do that than ever before on so many platforms.

CK: I think that sometimes people want to believe what fits their beliefs about the way the world works. Previously, when scientific journalism contradicted what people believed about how the world worked, as long as the journalism respected the people who were skeptical, these people likely accepted the new facts. However, now it is much harder for the non-scientific public to figure out which sources are credible. As you have seen with vaccinations for COVID-19, there is misinformation that has come, often from people's friends, on social media, which is spreading stereotypes that are just not true. This is one of the reasons why religious leaders, community figures, and people you personally might know have been recruited to spread accurate information. People are more likely to believe information from individuals they trust over "trusted" sources. This makes it more difficult for scientific journalists to try and change anybody's mind. However, that does not mean that scientific journalists should change what they are doing. In fact, more than before, it means they need to still write articles in language that non-scientists can understand and still write articles that use data, logic, and rational and critical thinking. It is unacceptable for them to insult readers who disagree.

BSJ: Do you have any advice for us as undergraduate science journalists attempting to publish accessible and accurate information for our community?

ME: In the movie "Almost Famous," there was a piece of advice to "be honest and unmerciful," and I think that this is one of the things we need to do more of in science journalism. There is often too much of an effort to smooth over the rough edges of science, sanitize the way science is presented, remove ambiguity, and pretend as a writer that you understand everything, but you never really do, right? Even when I am reading about things I know like the back of my hand, I am still always learning something because I am never completely, fully aware of all the nuances of things I write about. We have shaped the idea that the job of a science communicator or journalist is to take something really complicated and smooth its edges for the public so they can consume it in bite-sized, easily digestible chunks, as if we were explaining something to a toddler or someone incapable of understanding it. We do a disservice to the public in thinking that what we need to do is to turn science into a bunch of bullet points that can be easily captured. It makes people think that they are being sold something as opposed to being given insight into something. And I think that is a mistake. I think the public is much more sophisticated in their thinking than people give them credit for. Most people behave like scientists in some way or another. They make empirical observations about the universe and try to figure out how it affects them. As science communicators, rather than thinking of yourself as someone who is just going to compartmentalize and simplify information, think of yourself as an agent of the public, doing what they do not have time for and do not have the relevant expertise to do. We go in and wade into a complicated subject and spend time to learn not just the details, but also the context of it. Then, we come back and distill the information for the public in a way that captures what matters, communicating it to them without destroying the nuances.


BSJ: In your opinion, what has been the most significant impact of BSJ on the Cal scientific community?

CK: I think there are two most significant impacts. One is providing a legitimate, reviewed location for undergraduates to put their science without having to wait for it to be included in one of the larger papers from their research laboratories, where you may or may not have the first authorship. BSJ has provided a venue for undergraduates to publish peer-reviewed science, so when faculty see an issue of BSJ, they know that other faculty and senior scientists have taken a look at the work to ensure the papers are up to a professional standard. The second aspect is that BSJ humanizes scientists and those of us who are in the scientific community. Other than being professionals or scientists in academia or industry for years, we are also sports fans, music fans, and all of these other things. What BSJ has done is humanize scientists for the undergraduates on the Berkeley campus and even for many of our staff members. BSJ has softened the image of scientists while not losing its scientific or journalistic rigor.

REFERENCES

1. [Photograph of Caroline Kane]. The Graduate School of the Stowers Institute. https://www.stowers.org/gradschool/news/mar-5-2020. Image reprinted with permission.
2. [Photograph of Brent Mishler]. University of California Berkeley Research. https://vcresearch.berkeley.edu/faculty/brent-d-mishler. Image reprinted with permission.
3. [Photograph of Robert Tjian]. Howard Hughes Medical Institute. https://www.hhmi.org/scientists/robert-tjian. Image reprinted with permission.
4. [Photograph of Michael Eisen]. Howard Hughes Medical Institute. https://www.hhmi.org/scientists/michael-b-eisen. Image reprinted with permission.

AUTHORS

Elizabeth Chen, Ananya Krishnapura, Jonathan Kuo, Esther Lim, Laurentia Tjang, Allisun Wiltshire, Michael Xiong



FORGING STARS: THE TECHNOLOGY BEHIND FUSION POWER

BY MARLEY OTTMAN

If you were to stop and look directly at the sun on a bright day, you would probably experience two sensations: a painful stinging in your eyes and a feeling of awe at the sun's immense power. So it is natural that the question eventually arose, "Why not try making power the same way the sun does?" For more than 60 years, scientists have been trying to do just that. The process is called nuclear fusion, wherein several small atoms are forcibly fused together, releasing energy and some reactant particles.1 This is in contrast to the nuclear fission process that supplies the energy generated by today's nuclear power plants by splitting heavy atoms. Fusing these atoms is not the hard part; in fact, 1958 marked the first experiment that successfully achieved controlled thermonuclear fusion.2 The difficulty lies in fusing these atoms in a way that releases more energy than is put in, and especially in doing so for long enough to generate real recoverable power. But in the past two decades, interest in sustainable nuclear fusion has revived, and private money has started to flow into dozens of fusion power startups. As this momentum towards a future powered by nuclear fusion builds, it is crucial to understand the scientific concepts and technologies currently being utilized in fusion labs across the world. In order to fully understand fusion power research, we must first understand the issue arguably most central to the whole field: confining the plasma fuel. Plasma wants nothing more than to disperse its heat and destroy its immediate surroundings. This means that keeping it confined at high enough densities for stable reactions is a real challenge. The tremendous gravity of stars keeps the plasma contained within them trapped and pressurized. But on Earth, we must get creative.


Figure 1: Image of a toroidal shape, as referenced in the text.

In the 1950s, Soviet physicists Igor Tamm and Andrei Sakharov drafted a system that exploited plasma's ability to be shaped by magnetic fields. By forming super magnets into rings and arranging those rings into a hollow toroidal shape, the plasma can be confined in a continuous loop; thus the tokamak design concept was born.3 Almost all of the modern approaches to fusion power rely on super magnets for confinement. Their role is so important that a key metric for gauging the cost and efficiency of proposed systems is the beta parameter (β), a ratio of the plasma's pressure to the pressure created by the confining magnetic field. Keeping this ratio high, by keeping the amount of magnetic force necessary low, is key to ideal reactor design.4 A potential first generation of fusion reactors is being pioneered by companies like TAE Technologies. At the heart of their design is the concept of colliding beam fusion (CBF). As the name implies, CBF attempts to produce nuclear fusion by colliding two streams of high energy particles into each other. As these beams unite in a central chamber, they achieve energy levels high enough to form a plasma. Powerful magnets coax this plasma into the shape of a rotating hollow cylinder, similar in shape to a tin can without its lids. This containment formation is called a field-reversed configuration (FRC) and is another keystone in TAE's design. The FRC's potential lies within its magnetic geometry; unlike tokamaks, FRCs do not have a toroidal magnetic field. Instead, FRCs rely solely on poloidal magnetic fields to contain the plasma. This keeps engineering and maintenance simpler, as well as giving the design a very high beta.5,6 To keep the fusion reaction stable and sustainable, particle accelerators are placed around this central chamber and inject high energy fuel particles into the plasma to sustain the reaction.
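As a quick aside, the beta parameter has a compact textbook definition (standard plasma physics notation, not a formula given in the article): it compares the plasma's thermal pressure to the magnetic pressure of the confining field,

\[
\beta \;=\; \frac{p_{\text{plasma}}}{p_{\text{magnetic}}} \;=\; \frac{n k_{B} T}{B^{2} / (2\mu_{0})},
\]

where n is the particle density, T the plasma temperature, k_B Boltzmann's constant, B the magnetic field strength, and μ_0 the permeability of free space. A high β means a design extracts more plasma pressure from each tesla of confining field, which is why FRC-based concepts emphasize it.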



These fuel particles are boron-11 and hydrogen. When they fuse, they produce three alpha particles and, more importantly, little to no neutrons.* This fuel strategy is called aneutronic fusion. Multiple startups are trying it due to its higher theoretical output efficiency. Unlike the deuterium and tritium (D-T) reaction frequently used in fusion experiments, aneutronic reactions do not produce nearly as many neutrons that scatter and take precious reactant energy with them.7 There are a number of aneutronic fuel combinations that look promising, but there is a reason the biggest fusion projects in history have steered clear. This improved recoverable output energy comes at a price: higher input energy.8 The conditions needed to provoke fusion in aneutronic fuel mixtures are much more extreme than those for D-T, requiring more heat and density. This translates to more complex engineering and, ultimately, higher cost. However, that does not stop the risk-embracing startups from trying. Helion Energy is another such company trying to use an aneutronic reaction to produce net-positive fusion. Their approach begins with two preheated pockets of plasma composed of the fuels helium-3 and deuterium. They then collide and compress the plasma with powerful magnetic fields until sufficient fusion occurs. Once all the fuel has been expended, the plasma is allowed to decompress, giving back the energy needed, and the whole process restarts. This method combines the properties of both inertial and magnetic confinement and results in a cyclic process, almost like a fusion engine. Not the sci-fi starship kind, but the thermodynamic kind.



Helion is not the only startup to forgo the conventional super magnets and tokamak approach; the innovative startup General Fusion is attempting to create a sort of cyclic fusion engine. The design uses the tried and tested D-T reaction for fuel, but that is its only similarity to other concepts. At the core of their reactor is a spherical tokamak of fuel plasma, this geometry being essentially a sphere with a small hole running through its poles.8 They then surround this core with a blanket of molten lead by spinning it like a centrifuge. In this way, a cavity for the plasma can form in the center. Placed outside of this liquid metal shell, steam rams cyclically and synchronously 'hammer' inward, tamping the metal inward and compressing the plasma to a point where fusion takes place. The resultant heat is absorbed by the liquid metal and cycled through a heat exchanger that creates steam, which then turns a turbine and generates power. This liquid lead shell is perhaps the most fascinating design quirk, as it serves multiple functions: it simultaneously acts as a means of containment and compression for the plasma while also acting as a radiation shield and the heat transfer fluid. General Fusion has even proposed adding lithium to the molten lead, as the fleeing high energy neutrons from the reaction could be absorbed by the lithium atoms and transmuted into tritium that could be recycled for fuel.9
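For readers who want to see the nuclear bookkeeping behind these fuel choices, the reactions mentioned above can be written out explicitly (standard textbook values, not figures taken from the article):

\[
\begin{aligned}
\text{D-T fuel:}\quad & {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He}\ (3.5\ \mathrm{MeV}) + n\ (14.1\ \mathrm{MeV}) \\
\text{Tritium breeding:}\quad & n + {}^{6}\mathrm{Li} \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + 4.8\ \mathrm{MeV} \\
\text{Aneutronic p--}{}^{11}\mathrm{B}\text{:}\quad & {}^{1}\mathrm{H} + {}^{11}\mathrm{B} \rightarrow 3\,{}^{4}\mathrm{He} + 8.7\ \mathrm{MeV}
\end{aligned}
\]

In D-T fusion, most of the energy leaves with the neutron, which is what makes lithium blankets attractive; in the proton-boron reaction, essentially all of the energy stays with charged alpha particles, which is the appeal of aneutronic schemes.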

For decades, excitement from advances in fusion research has led some to make somewhat idealistic assurances to the public about when exactly nuclear fusion power will reach maturity. There is a joke in the scientific community that fusion power is perpetually "less than 20 years away," but for the first time in decades, the 20-year figure may prove to be correct. It is clear that whenever fusion power becomes a reality, it will not be a cure-all for the world's energy problems. It is also clear that it will be a major milestone in human history. And that refers only to the technology's ability to generate power; it is impossible to tell how many ancillary technological advancements will be made in this race towards the ultimate source of renewable energy.

*Because the nuclear fusion going on inside a reactor is made up of an incredibly high number of individual reactions, it is impossible to say that none of these will result in neutrons. Aneutronic fusion is typically classified as having less than 5% of its resultant energy due to neutrons.10

Figure 2: Example of D-T fusion reaction.

Figure 3: General Fusion prototype.

REFERENCES

1. Shultis, J. K., & Faw, R. E. (2002). Fundamentals of nuclear science and engineering. New York: Marcel Dekker.
2. Philips, J. A. (1983). Magnetic Fusion. Los Alamos Science, 64-67.
3. Arnoux, R. (1970, October 27). Which was the first tokamak-or was it tokomag? Retrieved from https://www.iter.org/newsline/55/1194
4. Harms, A. A., Schoepf, K. F., Miley, G. H., & Kingdon, D. R. (2010). Principles of fusion energy: An introduction to fusion energy for students of science and engineering. Singapore: World Scientific.
5. Steinhauer, L. C., Roche, T., & Steinhauer, J. D. (2020). Anatomy of a field-reversed configuration. Physics of Plasmas, 27(11), 112508. doi:10.1063/5.0022663
6. Ono, Y. et al. (1999). New relaxation of merging spheromaks to a field reversed configuration. Nuclear Fusion, 39(11Y), 2001-2008. doi:10.1088/0029-5515/39/11y/346
7. Harms, A. A., Schoepf, K. F., Miley, G. H., & Kingdon, D. R. (2010). Principles of fusion energy: An introduction to fusion energy for students of science and engineering. Singapore: World Scientific.
8. Laberge, M. (2018). Magnetized Target Fusion with a Spherical Tokamak. Journal of Fusion Energy, 38(1), 199-203. doi:10.1007/s10894-018-0180-3
9. Dai, Z. (2017). Thorium molten salt reactor nuclear energy system (TMSR). Molten Salt Reactors and Thorium Energy, 531-540. doi:10.1016/b978-0-08-101126-3.00017-8
10. Roth, P. J. (1989). Space Applications of Fusion Energy: Part II - Factors Affecting Confinement Concept. Fusion Technology, 15(2P2B), 1107-1107. doi:10.13182/fst89-a39840

IMAGE REFERENCES

11. Figure 1: Pxhere. https://pxhere.com/en/photo/1194583.
12. Figure 2: Pixabay. https://pixabay.com/images/search/hydrogen/.
13. Figure 3: General Fusion Plasma Injector. https://commons.wikimedia.org/wiki/File:General_Fusion_plasma_injector_(32476489803).jpg.



Finding Meaning in Sound: Auditory Perception and Adaptation INTERVIEW WITH PROFESSOR FRÉDÉRIC THEUNISSEN BY CAROLINE KIM, KIRA STERLING, AND ANANYA KRISHNAPURA

PROFESSOR FRÉDÉRIC THEUNISSEN

Dr. Frédéric E. Theunissen is a professor in the Department of Psychology at the University of California, Berkeley and is the principal investigator of the Theunissen Lab. His work strives to understand how the brain recognizes and perceives natural sounds, such as human speech and animal vocalizations. In his research, he uses computational approaches as well as behavioral experiments to explain auditory phenomena and understand the neural representations of natural sounds. In this interview, we discuss his findings on auditory perception and adaptation in zebra finches and humans.


BSJ: What drew you to the fields of neuroscience and psychology, specifically auditory science and perception?

FT: It was not a straight route to where I am now. I received my undergraduate degree in physics, but I was also interested in biology and evolution from the beginning of my studies. For a while, I thought medicine would be a good path for me, but then I realized that I liked research and the academic environment much better. In the 1980s, Donald Glaser, who was a physics professor at UC Berkeley and Nobel Prize winner for the invention of the bubble chamber, was trying to model the brain with neural networks. I went to his lab meetings, and I thought that applying a physics-based approach to tackling such a complex problem as presented by the human brain was a really fascinating project. I think a lot of physicists are a little naive because they think that clean, simple mathematical formulations can be easily applied to most projects, but it becomes more difficult to implement this kind of approach once you break into the complexities of biology. I was interested in the intricacies of his work, and that served as my first attraction to neuroscience and the study of the brain.



With regard to auditory science, I like to learn different languages, I enjoy music, and I am fascinated by animal communication and the evolution of language, so it was a natural transition into studying sounds. I study what the auditory system does, what happens in different stages of auditory processing, and how we can understand that processing—not just in terms of simple synthetic sounds that engineers produce, like pure tones and white noise, but particularly natural sounds. My first contribution to the field of neuroscience was to study how we can understand the different computations that are happening in different stages in the context of natural sounds. Now, we are pushing our questions even further, attempting to additionally understand the context of sounds that we produce for communication.

BSJ: What is a vocal signature, and how do animals use these signatures to distinguish individuals?

FT: Vocal signatures are mostly used by animals to identify one another. For example, emperor penguins live in huge colonies, and, to us, all the penguins might look alike. It turns out that they themselves also have a hard time recognizing one another, and the only way they can find their mate is often through these vocal signals. These very characteristic signals that they produce are their individual vocal signatures. Passive voice cues can also be used to distinguish between individuals. For instance, one individual may have a vocal tract that is a little bigger or has different cavities compared to another's, leading to overall differences in voice. Your manner of speaking is another example of a passive voice cue. Individuals might use certain expressions in a way that is unique to them. Since you can recognize a person's speech based on these kinds of cues, they can also serve as an individual's signature.

BSJ: Between active and passive cues, is one more effective than the other in differentiating individuals?

FT: Some active cues are more salient than passive ones. If your goal is to identify yourself, you can accentuate non-passive features. For example, I can say, "I am Frédéric Theunissen," and that can be my individual signature. That is a stronger cue for you than if you were to just listen to my voice for a while. The flip side is that these cues are less flexible. Whereas the main purpose of active cues is to identify individuals, they do not carry much meaning beyond their initial use. Conversely, passive cues generalize very well. With voice recognition, I do not have to remind you every five minutes that I am Frédéric Theunissen because, at some point, you recognize my voice and know that I am the speaker.

BSJ: What initially drew you to studying zebra finches in particular?

FT: Zebra finches are the laboratory equivalent of mice for songbirds. One reason for this is that they are domesticated, which means they do well in captivity. Another is that they breed all year long, so you can maintain a colony of birds relatively easily. The lab in which I did my postdoctoral research worked with zebra finches to study the mechanisms of song learning and song production. As it turns out, they are a great species in which to study communication because they are social birds and have a rich repertoire of communication calls. In addition to songs, which the males produce to attract females, they have a repertoire of sounds for all kinds of communication: aggression, distress, and even different types of alarms. When I started studying zebra finches, I was really focused on song learning and the ability to learn to vocalize, which is something that is pretty unique. Then, I realized we can study more than just how they learn to produce sounds by imitation. We can further explore how their auditory system as a whole is involved in determining signals, making decisions, producing sounds, and so forth. That opened the door to a lot of different questions in communication.

BSJ: In one of your experiments, you conditioned finches to discern recordings of vocalizations using a food-based reward system. Can you tell us more about the process of designing this experiment?

Figure 1: Zebra finches.2 Zebra finches are commonly found in Central Australia, where they most likely evolved. The gray and bright orange-colored bird is highly social by nature and is considered to be incredibly adept at singing and producing vocalizations.3

FT: It is hard to ask animals what they are thinking. Therefore, we study them in two ways, one of which is just by observation. For example, if we see that a bird only responds when his mate is calling, we can deduce that individual recognition has taken place. Alternatively, we can also set up experiments to test certain ideas. These experiments fall under the category of "Go/No-Go" tasks. If the finch hears one sound, it performs the associated task to receive a reward. We noticed that these birds are both good at pecking and slightly impatient, so we taught them that in order to receive a reward, they must refrain from pecking when played a particular song. When the bird starts to hear a sound and knows it is not tied to a reward, it will peck in order to skip the trial and move on to the next. When it recognizes one of the rewarded sounds, then it knows it needs to wait in order to be rewarded with food. Thus, our experiments took the classic model of operant conditioning experiments and optimized it for the personality of zebra finches.



Figure 2: Flowchart depicting the sequence of the go/no-go task from the aforementioned zebra finch study.4 Finches pecked a keypad to initiate a sequence of vocalizations from one of two different bird vocalizers. One vocalizer is associated with a reward (Re), while the other is not (NoRe). Birds maximized their food reward by pecking to interrupt the NoRe vocalizer and refraining from interrupting the Re vocalizer.

BSJ: What can your research on their auditory perception tell us about the evolution of zebra finches?

FT: It is hard with sounds, and brains in general, to go back into the fossil record and make concrete claims as to how evolution has occurred. However, I can say a few things on how the brain of zebra finches has been specialized to do certain tasks. One striking result was the number of individuals that a finch could remember and how fast they could commit this to memory. It was very clear that this animal has a high capacity for building auditory memories. On top of memorizing the signatures of up to fifty different birds, these finches must also recognize different renditions of an individual's unique call, which can be a very difficult task. We do not know yet where these memories are stored in the birds' auditory systems and what allows them to store and recall memories so quickly. What we do know, though, is that even with complex natural sounds, they have a very efficient neural representation. We have shown this not just in one paper, but several times over the years when comparing natural and synthetic sounds.

BSJ: Can your research on zebra finches offer insight into how humans have created their methods of communication?

FT: I think finches and humans share a lot of similarities in terms of high-level auditory representation—that is, how we represent sounds internally. We can think of a sound not only as a pure tone of a given frequency but also as an auditory object, which is essentially a group of different sounds that can be attributed to a specific source. There exists a whole set of characteristics and qualities, beyond just pitch and amplitude, that allows a sound to be identified as a particular auditory object. For instance, we can say the word "hello" a thousand different ways, but they all represent the same auditory object in our mind. This mechanism of perception is


highly advanced, but it seems so natural and automatic that we do not even think to realize it. We also see this level of perception occurring in other animals— in particular, we found that the response properties that we observed in human auditory neurons were mirrored in those of zebra finches as well. At some point, though, you have to study humans if you want to learn about human language. It is fascinating to see at what point the human brain in particular has become this hyper-specialized, language-processing organ. Going from simply recognizing speech signals to processing language is something that is unique to only humans, and there is still a debate of where that line is.

BSJ: In "Rapid Adaptation to the Timbre of Natural Sounds,"5 you mentioned that there are several characteristics that identify an auditory object, one of which is timbre. What is timbre, and what is its significance in speech*?

FT: What is interesting about timbre is that it is defined with a negative definition—it encompasses all aspects of sound beyond just loudness and pitch. Timbre is what makes a certain voice or instrument unique, and so it is key to identifying auditory objects. Some features of sounds, like frequency or amplitude, can be observed mathematically based on the features of a given sound wave. Timbre lies outside of these characteristics that are easily quantifiable and represents the quality of the sound, or the set of characteristics that allows us to perceive it. There are a number of components that change the quality of the sound, but the reason why timbre is so important is because it serves as the key to identifying auditory objects. If we go back to our discussion about auditory objects, we can think of timbre as the shape and features of the sound, similar to how one would recognize the shape and features of a face. We can pronounce every single vowel of speech with the same pitch, intensity, and amplitude, but we would still be able to tell each vowel apart because each vowel is distinct in its timbre.

BSJ: What are the neural processes that support this auditory adaptation?

FT: There are all kinds of neural mechanisms, both short-term and long-term. The depletion of neurotransmitters at the synapse is an example of a short-term process. You could also have a postsynaptic adaptation mechanism, where, if you always have the same calcium influx when you are transmitting information from one neuron to the other, the effect of that will decrease over time. So there are many levels at which adaptation can occur—it happens in your ear, all the way up to your brain.

BSJ: In one of your initial experiments testing auditory adaptation, you primed participants with one of two sounds, and when asked to classify an intermediate sound between the two, listeners reported it sounding more like the one they were not primed with. Was there anything that surprised you about the results of this study?

FT: Our results were not particularly surprising, since previous work tells us that after sensory adaptation, we should expect to see an aftereffect. For example, if you stare at a waterfall flowing downwards and suddenly look elsewhere, everything seems to move upwards. This is the same phenomenon that is observed here, but in terms of sound rather than visual perception. Using a visual analogy, it would be as if after looking at pictures of planes and birds, you were shown a bird that was somewhat plane-like; you would perceive it as more bird-like. The part of it that was more unexpected was that adaptation happens for these pretty high-level features, like timbre, which is made of a lot of different components.

BSJ: What do you believe is the most rewarding aspect of your work, and what do you think is in store for future research on auditory perception?

FT: I think the most rewarding aspect of my work is that as I am discovering things, the kinds of questions I get to ask become richer. Every day, I get more amazed by the growing complexity of my work and the processes by which humans and animals are able to



perceive sounds. You start to gain an appreciation of these animals by doing this kind of research, and that, to me, is highly rewarding. But there is also the reward associated with being able to do this work with other people. In terms of what is in store for future research, we have a good understanding of auditory memories and representation, but we do not know exactly where they are stored and how they are formed. It might be useful to think about where memory is stored for words or how we are able to associate everyday phenomena with certain sounds. Hopefully, by the time I retire, I could solve at least one of these problems.

Figure 3: Results of intermediate sound perception based on primed sound.5 The x-axis indicates the composition of the intermediate sound that participants were given, in terms of how much it was composed of Sound 2. The three curves indicate whether participants were primed with Sound 1, Sound 2, or neither (baseline). The y-axis indicates the proportion of responses where participants indicated that the morph sounded more like Sound 2.

REFERENCES

1. Frédéric Theunissen [Photograph]. Center for Neural Engineering and Prostheses. http://www.cnep-uc.org/faculty/ucb-faculty/frederic-theunissen/. Image reprinted with permission.
2. Lawton, M. (2009). Zebra Finches [Photograph]. https://www.flickr.com/photos/michaellawton/5712718319
3. Mello, C. (2014). The Zebra Finch, Taeniopygia guttata: An Avian Model for Investigating the Neurobiological Basis of Vocal Learning. CSH Protocols. https://doi.org/10.1101/pdb.emo084574
4. Elie, J. E., & Theunissen, F. E. (2018). Zebra finches identify individuals using vocal signatures unique to each call type. Nature Communications, 9, 4026. https://doi.org/10.1038/s41467-018-06394-9
5. Piazza, E. A., Theunissen, F. E., Wessel, D., et al. (2018). Rapid Adaptation to the Timbre of Natural Sounds. Scientific Reports, 8, 13826. https://doi.org/10.1038/s41598-018-32018-9

* Though this study was conducted under the supervision of Professor Theunissen, the concept was developed and study performed by graduate student Elise Piazza.



How Five Thousand Robots Are Mapping The Expanding Universe

BY IBRAHIM ABOUELFETTOUH

As night falls on the Western Hemisphere, half of the planet prepares to end its day. At the same time, somewhere in Arizona, on top of a mountain, five thousand little robots are only starting theirs. With a job description fit for a science fiction novel, these five thousand robots are collecting data from the far reaches of deep space in an attempt to map out the universe through space and time. They are in search of answers to questions that have plagued the human species for as long as we have been able to look up. They are in search of dark energy.

Figure 2: The composition of the universe.2 Credit: NASA

Figure 1: Nicholas U. Mayall Telescope, Kitt Peak National Observatory.1 Home to the Dark Energy Spectroscopic Instrument.

THE MOTIVE

The universe is split into three components. First, there is baryonic matter: the measurable matter that we interact with every day, from the finest grain of sand to the air we breathe to the fusion core of the sun. As abundant as it may seem, this constitutes only five percent of the observable universe. Second, dark matter, the so-called "missing mass" of the universe, makes up twenty-seven percent. Finally, there is dark energy, obscure and seldom detectable to humans, which fills a whopping sixty-eight percent. Where, then, is all the dark energy, the missing majority of the universe?


The existence of dark energy was confirmed in 1998 by UC Berkeley's own Saul Perlmutter, who shared a Nobel Prize for the discovery that something we cannot detect is accelerating the expansion of the universe.3,4 Its properties are strange: it is thought to interact through gravity alone. Dark energy works against gravity, and thus works against the natural clumping of matter that yields planets, stars, and eventually galaxies like our own Milky Way. For reasons unknown to us, there is enough dark energy in the universe to overpower gravity's attraction and cause the universe to expand at an accelerated rate. Currently, dark energy is thought to occupy the universe uniformly, and so despite its abundance, it is spread too thin to render itself detectable. What do we mean when we say the universe is expanding? This is not to say that objects are moving apart from one another, but that the space between matter itself is expanding. Think of a balloon with points drawn on it. If we inflate the balloon, the points grow further apart, even though they are not in themselves moving across the surface. More importantly, physicists have discovered that this expansion has been accelerating over the last 9 billion years, meaning that the expansion rate has increased over time.5




that the more it is inflated, the faster it will inflate. Because scientists do not entirely know why this is happening, they have defined "dark energy" as the undetermined cause of the universe's accelerating expansion. This theory raises important questions. Why did dark energy only start accelerating the universe's expansion 9 billion years ago? What is the nature of this interaction? What does dark energy tell us about the fate of the universe? Perhaps the most important question is: what can we find out? To this last question, we might have an answer.
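To make the balloon picture concrete, the short Python sketch below (an illustration added here, not part of the original article; all numbers are arbitrary) stretches a one-dimensional "balloon" by a fixed factor and shows that points farther from a chosen home point recede faster: the same proportionality between distance and recession speed that astronomers read off as redshift.

# Toy model of the "interstellar balloon": uniform stretching of space.
# All numbers are illustrative, not measured values.
home = 0.0                      # our own dot on a 1D "balloon"
dots = [1.0, 2.0, 5.0, 10.0]    # positions of other dots (arbitrary units)
stretch_per_step = 1.02         # every distance grows by 2% per time step

for d in dots:
    receded = d * stretch_per_step - d   # extra separation gained in one step
    print(f"dot at {d:4.1f}: recedes by {receded:.3f} per step")

# The dot at 10.0 recedes ten times faster than the dot at 1.0, even though
# no dot moves across the balloon's surface: space itself is stretching.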

Figure 3: Expanding universe on a balloon.6 *

THE EXPERIMENT

Researchers at Lawrence Berkeley National Laboratory (LBNL) are building the largest 3D map of the universe to date.7 The Dark Energy Spectroscopic Instrument (DESI) is an experiment attached to Arizona's Nicholas U. Mayall Telescope,8 and over the course of five years it will measure the effect of dark energy on the acceleration of the universe's expansion. Other than its relative amount and its expansive properties, we know very little about the nature of dark energy. Because of this, as we look into the vast expanse of deep space, we do not know exactly what we are looking for. As such, scientists want to be as thorough, extensive, and precise as possible, gathering as much data as our technologies allow. DESI plans to use spectroscopy of over 30 million distant galaxies and clusters to shed some needed light on the direction and future of our universe, making this the most ambitious survey of the sky in history.

"...many stars we see in the night sky are gone by now, yet we still see them. DESI uses this principle to take time-based data points of the universe's history."

So how does it work? Having started in early 2020, DESI is collecting data to infer, with extreme precision, when the universe's expansion began accelerating. It can do this by exploiting the finite and constant speed of light, the universal speed limit. For a star that is one light-year away, it takes one year for its light to reach Earth. The Sun is eight light-minutes away, meaning that if it were to suddenly "turn off," it would take eight minutes for us to stop seeing its light. Thus, contrary to the name, a light-year is a unit of distance, equivalent to roughly 5.9 trillion miles. Objects that are light-years away are effectively lagging behind in the past, as their light takes time to reach us here on Earth. This means that by looking farther and farther into space, we are travelling back in time. Thus, many stars we see in the night sky are gone by now, yet we still see them. DESI uses this principle to take time-based data points of the universe's history. A galaxy that is 10 billion light-years away appears to us as it was 10 billion years ago. By surveying millions of galaxies, we get snapshots in time of their physical properties. By measuring the acceleration of the universe's expansion across these millions of galaxies, we can get an accurate overview of the evolution of dark energy and its effect on our universe without ever needing to observe it directly.

So how does DESI do this? The answer is robots. DESI is equipped with five thousand robot positioners, each with its own capability to move around and look at a list of pre-determined galaxies in its field of view. The robots need to be choreographed to ensure they do not bump into one another, and as they dance, they use fiber optics to collect data from galaxy to galaxy. As the Mayall Telescope shifts its perspective, the robots will survey one third of the night sky. To us, each slice of the sky the robots survey is a tiny, seemingly empty region of space. But in reality, each of these seemingly empty regions contains hundreds of thousands of galaxies, as Figure 6 illustrates. Throughout its five-year operation, DESI will collect an estimated 50 TB of data, using spectrograph detectors to record the light reaching each robot's optic fiber "eye" from each galactic object.
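The light-travel arithmetic behind this "lookback" idea is simple enough to check directly. The following sketch (illustrative only, using standard physical constants; it is not DESI software) reproduces the eight-minute delay for sunlight and the trillions-of-miles length of a light-year mentioned above.

# Light-travel-time arithmetic behind "looking back in time."
# Standard constants; an illustrative sketch, not DESI code.
C_KM_PER_S = 299_792.458         # speed of light
SUN_DISTANCE_KM = 149_600_000    # roughly one astronomical unit

sun_delay_min = SUN_DISTANCE_KM / C_KM_PER_S / 60
print(f"Sunlight takes about {sun_delay_min:.1f} minutes to reach Earth")

seconds_per_year = 365.25 * 24 * 3600
light_year_km = C_KM_PER_S * seconds_per_year
print(f"One light-year is about {light_year_km / 1.609e12:.1f} trillion miles")

# A galaxy 10 billion light-years away is therefore seen as it was
# 10 billion years ago: each distance is also a lookback time.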

Figure 4: The DESI (top) fitted on the Mayall Telescope (blue).9 *

Figure 5: Five thousand robots laid out on the DESI Focal Plane.10 *





Figure 6: The Hubble telescope zooming into deep, seemingly empty space near the Sombrero Galaxy (left).11 The plethora of stars and galaxies detected in the square regions (right). Credit: NASA, STScI.

Figure 7: DESI robots close-up. Each robot contains an optic fiber (dark blue dot) and can rotate independently.12 *

THE SCIENCE


Scientifically, how can DESI use these tiny points of light to map out the universe? One way is to use redshift distortion. Redshift is the concept that while light waves are moving towards us, the universe is expanding at the same time, and thus the waves get stretched. This is an example of the Doppler Effect. You experience this effect with sound waves when an ambulance is moving towards you: the sound is higher in pitch as it gets closer, and then lower in pitch as it moves away. Redshift distortion is the Doppler Effect for objects moving away from us.

Figure 8: The effect of the universe's expansion on light waves, resulting in redshift.13 *

The farthest galaxies exhibit the highest redshift, which means that they seem to be moving away faster than nearer objects. The reason behind this is one of the original pieces of evidence for the expansion of the universe. If we are one dot on our interstellar balloon, then given a constant expansion rate, we would notice the near dots moving away from us more slowly. Dots on the other side of the balloon would appear to be moving faster, as the surface area of the balloon increases and distorts our perspective. The implications of this tie the expansion of the universe to redshift distortion, distance, and time, all of which we can measure. Assuming a constant expansion rate, scientists expected this redshift-to-distance relationship to be linear: a straight line. Surprisingly, they measured a higher redshift-to-distance relationship than expected: a line curved upwards. This startling discovery was one of the first indications that the universe was not only expanding but accelerating too. The universe's own stretching causes the light we detect to be shifted and stretched. In essence, space is making more space and distorting our perception of light. We can use the redshifts of millions of galaxies to map their relative positions at different times, which tells us how the universe is expanding and in what way it is accelerating. Why and how the acceleration occurred in the first place, we do not know. This is one of the major questions that DESI hopes to answer.

A natural question that follows would be: "How do we know what the furthest possible object is?" We have managed to detect a lingering noise across the universe called the Cosmic Microwave Background Radiation (CMBR). The light waves from the Big Bang stretched so much as the universe expanded over time that they became microwaves. Watching TV without tuning into anything, you get white noise; part of that white noise is this very background radiation, and we are effectively tuning into the universe's history. Because we know that farther away means longer ago, we can calculate, using the Doppler Effect, the age of the universe itself.
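As a concrete, schematic example of the redshift bookkeeping described above (not DESI's actual analysis; the Hubble-constant value is an assumed round number), the sketch below turns an observed spectral line into a redshift, a recession speed, and a distance under the straight-line expectation that the 1998 measurements overturned.

# Schematic redshift bookkeeping (illustrative only, not DESI's pipeline).
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # assumed Hubble constant, km/s per megaparsec

def redshift(observed_nm, emitted_nm):
    """Fractional stretching of a light wave."""
    return (observed_nm - emitted_nm) / emitted_nm

# Hydrogen-alpha is emitted at 656.3 nm; suppose we observe it at 721.9 nm.
z = redshift(721.9, 656.3)
print(f"z = {z:.3f}")              # ~0.10: the wave arrives about 10% longer

v = C_KM_S * z                     # Doppler-like speed, valid only for z << 1
d = v / H0                         # distance implied by a constant expansion rate
print(f"~{v:,.0f} km/s, ~{d:,.0f} Mpc under a constant rate")

# The surprise of 1998: real galaxies at large distances show more redshift
# than this straight-line relation predicts, so the expansion is accelerating.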




“DESI is hoping to peer 11 billion years into the past, where galaxies and other interstellar objects are less common and harder to detect.”

Figure 9: Predictions about the fate of the universe. The acceleration of the universe's expansion is shown as a curving of the scale of the universe.14 Credit: NASA/CXC/M. Weiss

Scientists calculated the universe to be somewhere around 13.7 billion years old simply because it takes light a maximum of 13.7 billion years to be stretched to the wavelength of this CMBR. DESI is hoping to peer 11 billion years into the past, where galaxies and other interstellar objects are less common and harder to detect. We use other relics in this space archeology to determine the acceleration driven by dark energy and its effects. The other principal method DESI will use is measuring the Baryon Acoustic Oscillations of distant galaxies.15 Though it is known that there is no sound in space because it is a vacuum, this was not always the case. Back when the universe was young and hot, matter was unable to become neutral, which resulted in a charged plasma floating around. Whenever gravity pulled this matter into patchier, denser clumps, fluctuations formed in the density of the plasma. These fluctuations are essentially sound waves, arising from a periodic compression and rarefaction of matter flowing in the plasma. DESI measures these oscillations around old galaxies, and they can tell us how the universe was expanding at that time.
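The "standard ruler" logic behind Baryon Acoustic Oscillations can be sketched in a few lines. The frozen ripple left by those early sound waves has a known physical size (roughly 150 megaparsecs in today's comoving units, a commonly quoted figure used here as an assumption, not a number from the article), so the angle it subtends in a galaxy sample reveals how far away, and therefore how long ago, that sample is.

# Baryon Acoustic Oscillations as a standard ruler (illustrative sketch).
import math

SOUND_HORIZON_MPC = 150.0   # assumed comoving BAO scale, ~150 megaparsecs

def distance_from_bao(observed_angle_deg):
    """Small-angle estimate: known ruler size / angle it subtends on the sky."""
    return SOUND_HORIZON_MPC / math.radians(observed_angle_deg)

# Example: a BAO ripple spanning about 2.5 degrees in a galaxy sample.
print(f"~{distance_from_bao(2.5):,.0f} Mpc away")

# Repeating this at many redshifts gives distance versus cosmic time,
# i.e., the expansion history that dark energy has been shaping.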

Figure 10: Baryon Acoustic Oscillations (artist's depiction) surrounding the early universe's galaxies.16 *



Our knowledge of dark energy, as the name suggests, is crude so far. DESI will gather data to build a true 3D map of tens of millions of galaxies. This data will help us understand more about how dark energy is accelerating the universe's expansion, refining our results and paving the way for new theories and groundbreaking physics. DESI and its next-generation grandchildren have the potential to answer age-old questions in physics that tell us about our place in the universe. Where are we? When are we? Where are we going? These are questions that kept our ancestors awake, looking up at the night sky and wondering in awe at its mystery.

REFERENCES

1. Sargent, M. Kitt Peak & Mayall Telescope, DESI Image Gallery. Retrieved 9 February 2022, from https://photos.lbl.gov/bp/#/folder/4478366/
2. Wiessinger, S., & Lepsch, A. (2016). GMS: Content of the Universe Pie Chart. Retrieved 10 February 2022, from https://svs.gsfc.nasa.gov/12307
3. Dark Energy, Dark Matter | Science Mission Directorate. Retrieved 10 February 2022, from https://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy
4. The Nobel Prize in Physics 2011. (2019). Retrieved 10 February 2022, from https://www.nobelprize.org/prizes/physics/2011/perlmutter/facts/
5. Lewton, T. (2020). What Might Be Speeding Up the Universe's Expansion? Retrieved 10 February 2022, from https://www.quantamagazine.org/whyis-the-universe-expanding-so-fast-20200427/
6. Kaiser, N. (2016). The Physics of Gravitational Redshifts in Clusters of Galaxies. Retrieved 10 February 2022, from https://cosmology.lbl.gov/talks/Kaiser_16_RPM.pdf
7. Berkeley Lab — Lawrence Berkeley National Laboratory. Retrieved 10 February 2022, from https://www.lbl.gov
8. Dark Energy Spectroscopic Instrument (DESI). Retrieved 10 February 2022, from https://www.desi.lbl.gov
9. Kitt Peak & Mayall Telescope, DESI Image Gallery. Retrieved 10 February 2022, from https://photos.lbl.gov/bp/#/folder/4478426/
10. LBL Newscenter. Retrieved 10 February 2022, from https://newscenter.lbl.gov/wp-content/uploads/sites/2/2019/10/DESI-focal-plane-5K-robots.jpg
11. Beyond the Brim, Sombrero Galaxy's Halo Suggests Turbulent Past. (2020). Retrieved 10 February 2022, from https://www.nasa.gov/feature/goddard/2020/beyond-the-brim-sombrero-galaxy-s-halo-suggests-turbulent-past



Figure 11: DESI's robots pointing at the night sky, taking spectroscopic light data (top). Example of data point from the Triangulum Galaxy (bottom).17 *

12. DESI Focal Plane. Retrieved 10 February 2022, from https://news.fnal.gov/wp-content/uploads/2021/05/focal-plane-section-hi-res-desi.png
13. Unveiling The Dark Energy. (2000). Retrieved 10 February 2022, from https://enews.lbl.gov/Science-Articles/Archive/SNAP-2.html
14. Imagine the Universe!. (2006). Retrieved 10 February 2022, from https://imagine.gsfc.nasa.gov/science/featured_science/tenyear/darkenergy.html
15. Wright, E. (2014). Baryon Acoustic Oscillation Cosmology. Retrieved 10 February 2022, from https://www.astro.ucla.edu/~wright/BAO-cosmology.html
16. Baryon Acoustic Oscillations. Retrieved 10 February 2022, from http://1t2src2grpd01c037d42usfb.wpengine.netdna-cdn.com/wp-content/uploads/sites/2/BOSS-BAO.jpg
17. DESI Opens Its 5K Eyes to Capture the Colors of the Cosmos. (2019). Retrieved 10 February 2022, from https://newscenter.lbl.gov/2019/10/28/desi-opens-its-5000-eyes-to-capture-thecolors-of-the-cosmos/

* Credit: Lawrence Berkeley National Laboratory (LBNL), © The Regents of the University of California, Lawrence Berkeley National Laboratory.





Dual Imaging: A New Frontier in MRI INTERVIEW WITH DR. ASHOK AJOY BY ANDREW DELANEY, LEXIE EWER, AND ESTHER LIM

Ashok Ajoy, PhD, is an assistant professor of chemistry at the University of California, Berkeley. His research team focuses on utilizing physical chemistry to develop “quantum-enhanced” NMR and MRI technologies, pushing past the current resolution and signal limitations. Beyond his research, Dr. Ajoy is very enthusiastic about his students and emphasizes the importance of the contributions made by his graduate and undergraduate researchers. Having become a professor during the SARS-CoV-2 pandemic, he is especially grateful for his students and expressed that the multiple papers published by his lab are due to the hard work of everyone on his team. Sophie Conti, one of Dr. Ajoy’s research assistants who works on the nitrogen vacancy center magnetometry in microfluidics project, said of the Ajoy lab, “I’ve really loved working in the Ajoy lab thus far because of the supportive community and amazing opportunities for learning. I think our lab is really unique in that undergraduates are really encouraged and supported by the other lab members to further their own learning and research if it interests them.” In this interview, we explore how the use of diamond microparticles can enhance MRI and optical imaging, resulting in a form of dual imaging that has revolutionary impacts for the fields of medicine, biology, and the physical sciences.



BSJ

: Can you explain the concept of hyperpolarization in imaging, including how it is achieved and how it increases the clarity of a magnetic resonance image (MRI)?

AA

: MRI is an extremely versatile and powerful imaging tool, which has become so mature that now a doctor can just press a button and get an image. In reality, you are imaging water in the human body, and within the water, you are imaging the proton nuclear spin. In each hydrogen atom of water, there is a nucleus which contains a proton, and this proton has a spin which is essentially a little magnet itself. The tunnel that you go into in an MRI machine is a superconducting magnet, which is 100,000 times stronger than a weak refrigerator magnet. When you go into this magnetic field, the proton spins align in accordance with the magnet. Then, you tip the protons, and the spins start precessing to generate a signal. This alignment is called polarization, and it depends linearly on the strength of the magnetic field. It is an amazing quantum mechanical effect that generates a signal which leads to the images, but the amount of alignment of the spins in a magnetic field is very low and inefficient. That is because the nuclei are in the center of the atom, and they cannot interact very strongly with their environment. If you increase the magnetic field, you will get more and more polarization, but from one Tesla to 20 Tesla, there is only a twenty-fold increase. Additionally, a 20 Tesla magnet is very expensive and very large. You cannot put a human being into it due to the physiological effects of high magnetic fields. As such, there is a big push in the community to improve the polarization and to see if we can generate this alignment of the nuclei without a magnet; for instance, by shining a laser on a sample. Hyperpolarization is the process of aligning spins without using a magnet, or aligning them much more than you could with a magnet. In our paper, we showed that for a special class of materials—diamonds—shining laser light on the material can align these spins about a million times more than in a normal magnetic field. You only need slightly more than the power of a laser pointer to accomplish this. This is remarkable because now the MRI signals will be significantly brighter and faster. Additionally, this laser-assisted hyperpolarization has many other advantages. Devices using this technology can potentially be made at a very low cost, and once you make a device out of it, you can get very high spatial resolution in addition to the improvement in signal. The current frontier of MRI asks whether you can make an MRI machine that images down to the molecular scale, since current MRI is at a millimeter to centimeter length scale. You can see full three-dimensional imaging through a person, cell, or tissue. But it still has very poor resolution and signal, so many people are interested in trying to improve MRI on both of these fronts.
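For a rough sense of why this thermal alignment is so feeble, and why it grows only linearly with the magnet, the back-of-the-envelope sketch below evaluates the standard spin-1/2 Boltzmann polarization for protons at room temperature (standard constants; this is an illustration, not the lab's code or exact figures from the interview).

# Thermal (Boltzmann) polarization of proton spins in a magnetic field.
# Back-of-the-envelope sketch with standard constants.
import math

HBAR = 1.054571817e-34    # J*s
KB = 1.380649e-23         # J/K
GAMMA_P = 2.675e8         # proton gyromagnetic ratio, rad/s/T

def proton_polarization(field_tesla, temp_kelvin=300.0):
    """Fraction of spin-1/2 nuclei aligned with the field at equilibrium."""
    x = HBAR * GAMMA_P * field_tesla / (2 * KB * temp_kelvin)
    return math.tanh(x)

for b in (1.0, 3.0, 20.0):
    print(f"{b:4.0f} T -> ~{proton_polarization(b) * 1e6:.1f} ppm")

# Roughly 3 ppm at 1 T and 68 ppm at 20 T: a 20x bigger magnet buys only a
# 20x gain, which is why aligning the spins with light is so attractive.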



Figure 1: Increased Signaling of Hyperpolarization. Graph illustrates the increase in signal intensity of hyperpolarized imaging (red line) as compared to magnetic resonance imaging (black line), with a 7 Tesla magnet. For reference, the magnets commonly used in clinical imaging are 1.5 T and 3 T.

BSJ

: You mentioned in your paper that we may be able to study other hyperpolarized solids, such as silicon and silicon carbide, using a similar mechanism to how you studied diamond microparticles. What is the significance of studying other hyperpolarized solids in this manner?

AA

: To understand this, we have to step back and ask, “If you can shine a laser’s light on something, will it be polarized?” For example, if you take a bath of water and shine laser light on it, the spins of the water molecules are not going to align because optically there is no transition that these spins are reacting to. There are special materials, however, which you can excite with light and polarize; for instance, diamond. The advantage is that once the spins of diamond are aligned, you can transfer this alignment to other molecules that come into contact with it. Therefore, we became interested in asking: if you can make a diamond sponge in which you shine light on the diamond to align its spins and flow water through it, then the water will come out polarized. If someone drinks the water and then goes into an MRI machine, will the image be hundreds of thousands of times brighter? This is a frontier direction, and it is still not clear whether diamond is the only material—or the best material—for this task. Other materials have been discovered, and many people are interested in first


exploring a large number of these molecules and making scaffolds out of such porous materials, so that they can bring analytes—water, carbon dioxide, or anything else in contact with them—and spin-polarize them.

Figure 2: Nitrogen vacancy centers and hyperpolarization. Top diagram depicts the chemical structure of the diamond nanoparticles. As shown, the lattice of 13C nuclei contains infrequent nitrogen atoms adjacent to lattice vacancies. These nitrogen vacancy centers result in the distinct purple color of the diamond and its disposition to be hyperpolarized. Bottom diagram is a schematic of a hyperpolarization experiment, displaying the laser (green line) used to hyperpolarize the diamond nanoparticles and align their spins.

BSJ

: You also mentioned that the imaging of microdiamonds may lead to further applications for physical, chemical, and biological analysis. Could you elaborate more on how microdiamond imaging will lead to developments in each of these fields?

AA

: We showed that it is particularly easy to hyperpolarize nanodiamonds or microdiamonds. Interestingly, it turns out that these particles also fluoresce. Before we got into this field, many people used diamond fluorophores as biological markers. Due to a point defect in diamond, called a nitrogen vacancy center, when you shine a small amount of laser light on the diamond, it will fluoresce forever. Diamond nanoparticles were used as fluorescent tags for many years, and now, we show that by shining light on them, the spins can be spin-polarized. Now, you can image the diamond nanoparticles optically and through MRI, which is the “dual-mode imaging” we describe in the paper. You may ask, what is the big deal? It turns out that optical imaging is amazing, and all the things that we see around us are




mostly optical images, which are fast and cheap. The detectors are also very good. However, optical imaging has one major problem, which is that it works poorly in scattering environments. Say you have a bottle of milk, you put something inside it, and you try to image through it. You cannot do it, because optical imaging normally has a spatial resolution that is set by the wavelength of light. All of microscopy uses the fact that optical imaging has very high spatial resolution, but, if you look at these optical images, very rarely will you see an image that is taken in a centimeter of tissue or a centimeter deep in something that is scattering. There is a big challenge in the community about deep tissue imaging. How do you image not just in one slice of cell or tissue but actually reasonably deep? It turns out that MRI is completely immune to scattering. This is because what you are seeing in an MRI is actually the nuclear spins. You can image through solid, opaque things such as bone, blood, and fatty tissue and be able to see through the person. The technical reason for this is very interesting: MRI does not image in real space. Actually, the image is in the conjugate space to real space, called k-space. It is somewhat similar to X-ray imaging where you image in a Fourier reciprocal space. These two spaces are connected with a Fourier transform. The big picture is that these diamond particles can act as fluorescent tags to indicate a particular place in a cell to image optically. We have now learned that you can image them through MRI. We are hoping to use this facility to image slightly deeper. For example, you can tag things in the body or parts of the cell and try to do some

chemical imaging in the tissue. We are still not there yet, but it is a potential approach that could allow you to do deep tissue imaging.

BSJ

: What are the advantages and drawbacks of single imaging, via visible-wavelength optics, and MRI?

AA

: Optical imaging has a resolution that is set by the wavelength of light, which is called the Rayleigh limit. Ultimately, optical imaging is very fast, cheap, and high resolution. Wherever you can use optical imaging, you should. In some cases, however, the resolution of optical imaging is not good enough. For instance, if you want to see what happens inside a molecule, then the resolution of optical imaging is sub-par. This is because the resolution of optical imaging is around half a micron, and, within a cell or in molecules, the objects are a few nanometers across. While optical imaging is generally the workhorse for biological imaging, it suffers from the issue that in a scattering medium the signal is strongly attenuated and you lose resolution. For that reason, if you try to image through skin, you have to use something like ultrasound or MRI. But MRI suffers from low resolution and low sensitivity. It is also expensive because it needs big magnets. You can trace the reason why the signal and resolution are low to the fact that the polarization is low. So, if you can polarize the spins without a magnet, then you can increase the signal by several fold, approaching a million fold, and you can therefore increase the resolution by the same factor. The advantage of using these small diamond microparticles is that you can increase their polarization with light, and then you can image them through MRI very efficiently.
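The "half a micron" figure can be recovered from the standard Rayleigh criterion; the wavelength and numerical aperture below are assumed example values, not numbers from the interview.

# Rayleigh diffraction limit for visible-light microscopy (example values).
WAVELENGTH_NM = 550        # green light (assumed)
NUMERICAL_APERTURE = 0.9   # a good air objective (assumed)

resolution_nm = 0.61 * WAVELENGTH_NM / NUMERICAL_APERTURE
print(f"~{resolution_nm:.0f} nm")   # about 370 nm, i.e., roughly half a micron

# Molecules and many sub-cellular structures are only a few nanometers
# across, far below what visible-light optics can resolve.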

Figure 3: Background Suppression. To mimic the background that is present in optical imaging and MRI, Alexa dye and 13C methanol were deposited in the center of a ring of diamond microparticles in a phantom. Upon imaging, the circle of diamond microparticles is indistinguishable from the background; however, background subtraction can be used to increase the contrast of the image and allow the diamond phantom to be seen.





BSJ

: You used multiple regimes to manipulate the nanoparticles in your experiment. What are the differences between these regimes, and why is each of these regimes needed?

AA

: We ask the question: if you can image an object through optics or MRI, could you image it through both? It turns out you can. Optical imaging is real-space imaging, which means if I look at a point, I can see that point. On the other hand, MRI images are in Fourier reciprocal space, which means that each point you acquire has information about all of the real-space pixels. Ultimately, this means that the MRI image you get must then go through a Fourier transformation to obtain the real image. The image that you see is not what you are actually sampling; you are actually sampling in k-space. For instance, the first k-pixel, where k is equal to zero, is the mean of the image, and the second pixel gives you the first spatial wavelength. This tells us the spatial wavelengths that make up an object. To know what your MRI image is in real space, you use a Fourier transform, which is a linear mathematical transformation widely used in computer science, physics, and electrical engineering. Essentially, you reconstruct the structure of the object and you get it in real space. However, things get very complicated if you are trying to image things that are sparse. Say you are trying to image stars in the sky, and the light is very low. Normally, you will focus your detector on only one part of the night sky and try to increase the exposure time. But, by definition, there are very few objects, so most of the time you are focusing your camera on parts of the sky where there is nothing. Optical imaging does not work well when the number of photons is low and images are sparse. But in k-space, wherever you look, every pixel will give you information about all the stars. Because it is a Fourier reciprocal space, you can use this property to combine both real-space and k-space images and improve the speed of imaging. This has never been done before, as far as I am aware. We have not done it in this paper yet, but we postulate that this can be done, and we calculate that we can accomplish that incredible speed in the future.
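To make the k-space picture concrete, the small NumPy sketch below (an illustration, not the group's reconstruction code) builds a sparse toy image, transforms it to k-space, and checks three of the statements above: the k = 0 sample is the image's total brightness, every k-space sample carries information about all of the real-space sources, and an inverse Fourier transform recovers the image.

# Real space versus k-space for a toy 2D image (illustrative, not MRI software).
import numpy as np

image = np.zeros((8, 8))
image[2, 3] = 1.0          # two sparse bright "sources"
image[6, 5] = 0.5

kspace = np.fft.fft2(image)                          # what an MRI-style measurement samples

print(np.isclose(kspace[0, 0].real, image.sum()))    # True: k = 0 is the summed brightness
print(abs(kspace[3, 1]) > 0, abs(kspace[5, 7]) > 0)  # True True: every sample "sees" both sources

recovered = np.fft.ifft2(kspace).real                # Fourier transform back to real space
print(np.allclose(recovered, image))                 # True: the image is reconstructed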

BSJ

: What is the importance of 13C and nitrogen vacancy centers on the imaging of diamond microparticles?

AA

: Nitrogen vacancy centers are essentially defective diamonds. Diamond is made up of carbon atoms in a lattice. If you remove one carbon atom, replace it with nitrogen, and have a vacancy in the neighboring site, that is a nitrogen vacancy center. This property was discovered by accident by the diamond industry. Colored diamonds are often deemed to be much more expensive than normal diamonds, and this color is achieved by bombarding diamonds with nitrogen atoms. For example, we have diamonds that are deep purple in our lab. It is these defects and the nitrogen atoms inside that cause the diamond particles to have very nice spin properties. The nitrogen vacancy centers also have spin, and they can be polarized when you shine light on them. Then, we transfer the spin alignment to the neighboring nuclei, which in this case are carbon, and that is how we get these images.

I should emphasize that diamond is not the only material of this class. Although diamond is expensive, it is good because it is relatively inert. Other molecules that have similar properties to diamond are often carcinogenic, such as pentacene and anthracene. There is a growing effort to try to discover more molecules that, like diamond, have the property of spin-photon coupling.


BSJ

: Since their inception in 1977, MRI machines have revolutionized the diagnosis of many conditions in medicine. In what ways do you expect studies in this area to impact what can be visualized in animals?

AA

: We do not intend to ever put diamonds into people. Instead, we want to use diamond as a conduit to transfer spin polarization from the diamond into something else. But, for imaging cells, or in vitro samples, there is still a place for it. For live animal imaging, or human imaging, it is mostly that you are going to create porous materials where you trap water. These materials could be diamonds or some of these other molecular centers. You shine light onto this material, the spins in the water molecules get polarized, and you can take it into the body. We are interested in a future where we have a machine in which you flow water through it, you shine laser light into it to polarize the water, and you inject it into a person. The applications of this would be very far-reaching. Additionally, you can use molecular markers such as pyruvate for cancer imaging, or you can image metabolism to diagnose cancer since cancer cells metabolize faster. It is going to have a revolutionary impact because most people diagnosed with cancer are already in the late stage. But, what if you can devise an imaging technology that is selectively imaging cancer so that you can find it at an early stage? You can spot cancer before you normally would, which would make a world of difference. Hyperpolarized imaging is a key component of this because it can give you selective information. Currently, the many other forms of medical imaging require quite a significant dose of radiation, and there is only a certain limited number of such scans allowable. MRI has a very big advantage in that you can see things and it is radiation-free.

BSJ

: Beyond its impact on healthcare and biology, how do you expect the development of dual imaging technology to contribute to our understanding of the physical sciences?

AA

: I think one of our biggest contributions is demonstrating the usefulness of nuclear spins. Nuclear spins exist in everything; you and I are made up of atoms and the center of atoms are nuclei, many of which have spins. Usually, these spins are weak and difficult to observe, so people do not study them very much. However, we are interested in coupling these spins with light. Thus, we shine light on atoms and then make their nuclear spins “glow” so that you can then study them, and, through them, you can probe matter. Thus, we are very interested in making chemical sensing probes. This will report not just color, but nuclear magnetic resonance (NMR) information and chemical information. We are trying to imagine a sensor where you put it into a cell, and it will report on all the molecules around it. You can do this through NMR spectroscopy, which gives you very chemically specific information—a fingerprint of the




molecule from which you can extract information about the molecule—and the core of this is hyperpolarization technology. There are many applications of this chemistry in technology.2


BSJ

: Do you have anything else to add about your current research or upcoming projects?

AA

: We recently published a paper showing some of these applications of spins, especially in the quantum sciences. It turns out that these nuclei are some of the most coherent quantum mechanical objects in the universe. This means that we were able to keep them precessing for something like one thousand and ten oscillations. It is so surprising to us that these defective diamonds have these remarkable nuclear spins that are very coherent. We are interested in using these nuclear spins in various tasks in fundamental physics, such as quantum information science, quantum simulation, or quantum sensing. My point is, NMR and MRI are very central to many sciences because they create an image ultimately generated from spins, and spins are quantum mechanical objects. Thus, there is chemistry there, because the spins can report on their local environment. But the spins themselves are quantum mechanical, so you can do a lot of fundamental physics. Furthermore, you can exploit these resources for quantum computing, which makes computers much faster. This is the amazing part of this research field.

REFERENCES

1. Lv, X., Walton, J. H., Druga, E., Wang, F., Aguilar, A., McKnelly, T., Nazaryan, R., Liu, F. L., Wu, L., Shenderova, O., Vigneron, D. B., Meriles, C. A., Reimer, J. A., Pines, A., & Ajoy, A. (2021). Background-free dual-mode optical and 13C magnetic resonance imaging in diamond particles. Proceedings of the National Academy of Sciences, 118(21). https://doi.org/10.1073/pnas.2023579118
2. Beatrez, W., Janes, O., Akkiraju, A., Pillai, A., Oddo, A., Reshetikhin, P., Druga, E., McAllister, M., Elo, M., Gilbert, B., Suter, D., & Ajoy, A. (2021). Floquet prethermalization with lifetime exceeding 90 s in a bulk hyperpolarized solid. Physical Review Letters, 127(17). https://doi.org/10.1103/physrevlett.127.170603

IMAGE REFERENCES


1. Headshot: [Photograph of Ashok Ajoy]. Ajoy Lab. https://chemistry.berkeley.edu/faculty/chem/ajoy. Image reprinted with permission.
2. Figure 1: Lv, X., Walton, J., Druga, E., Nazaryan, R., Mao, H., Pines, A., Ajoy, A., & Reimer, J. (2020). Imaging sequences for hyperpolarized solids. Molecules, 26(1), 133. https://doi.org/10.3390/molecules26010133
3. Figure 2: See reference #1.
4. Figure 3: Lv, X., Walton, J. H., Druga, E., Wang, F., Aguilar, A., McKnelly, T., Nazaryan, R., Liu, F. L., Wu, L., Shenderova, O., Vigneron, D. B., Meriles, C. A., Reimer, J. A., Pines, A., & Ajoy, A. (2021). Background-free dual-mode optical and 13C magnetic resonance imaging in diamond particles. Proceedings of the National Academy of Sciences, 118(21). https://doi.org/10.1073/pnas.2023579118




Blazing a New Trail: Wildfire Suppression in California BY LUYANG ZHANG

CALIFORNIA IN FLAMES

Glowing orange skies hang above smothering smoke as air quality alerts flare red, warning against outdoor activity. California's 2020 summer was an apocalyptic scene: an outbreak of dry lightning prompted a record-breaking 10,000 wildfires that burned through nearly 4.2 million acres of land in one of the most severe fire seasons in state history.1 Unfortunately, 2020's record-breaking wildfire season was not an isolated incident — in fact, it is the most recent addition to the state's trend of increasingly severe fire events. Eighteen of the twenty largest fires in California's history have occurred in the last two decades. In 2020 and 2021 alone, the state is projected to have poured 1.3 and 1.13 billion dollars, respectively, into the thousands of planes, helicopters, bulldozers, fire engines, water trucks, and firefighters needed to quell wildfires throughout the year — the first fiscal years in which the state has ventured into billion-dollar wildfire suppression budgets.1,2 Whether the metric is dollars invested in suppression or the proportion of acres burned at high severity, the pattern is obvious: the increasing rate of larger, more severe fires requires increasing amounts of suppression resources and threatens to outstrip California's capacity to effectively manage wildfires.

This pattern may be most obvious to Thom Porter, UC Berkeley forestry alumnus and Director of the California Department of Forestry and Fire Protection (Cal Fire). With over two decades of forestry and firefighting experience, Porter has seen more than his fair share of wildfires, giving him the unique ability to see an evolution in California's wildfire seasons. In 2018, the Camp Fire resulted in a total of 18,804 burnt structures and 85 deaths; it was the deadliest and most destructive fire in California's history. After the Camp Fire, Porter explained, "In the first two-thirds of my career, the old-timers would talk about career fires," or unusually large, destructive fires. "We do not talk about that anymore. It is almost gone from our vocabulary. We are finding times where we are having multiple career fires every single season, sometimes at the same time."1,3

SPARKING ECOLOGICAL CHANGE

The acceleration of large fires cannot be attributed to a single cause. Climate change, increasing urbanization, and the implications of historic forest management, to name just a few, are all influencing the rising frequency and extent of fires.4 One thing these factors have in common is that they are amplified by a faltering wildfire management system based around fire suppression. Recent increases in suppression budgets are often the result of public pressure to take direct, decisive action against wildfires in the interest of safeguarding developed structures, valued natural resources, and human safety.2




Figure 1. An airplane drops fire retardant on a wildfire.

"The pattern is obvious: the increasing rate of larger, more severe fires requires increasing amounts of suppression resources, and threatens to outstrip California's capacity to effectively manage wildfires."

Fire suppression involves dousing flames with water and chemical fire retardants and using bulldozers to clear large swaths of brush and trees before they are consumed by fire.2 However, historic policies of fire suppression have contributed to a buildup of woody, flammable material in



California ecosystems. This excess of readily available fuel has resulted in an increased likelihood of more frequent and severe wildfires similar to those that have occurred in the last few decades.5 Additionally, assuming that suppression alone will effectively mitigate wildfires forever oversimplifies California's ecosystem ecology, which is deeply intertwined with the presence of fire. For some wildlife, fires mean rebirth, not destruction. Fires can create patches of variation and diversity between burned and unburned areas, creating a wealth of ecological niches that support California's rich species diversity.6 Fire exclusion eliminates many of these essential processes from the ecosystem, causing long-term issues for humans as well as for a wide array of California organisms. Take Ceanothus, for example, a genus of nitrogen-fixing shrubs and small trees native to California. The leaves on Ceanothus are coated with flammable resin, which provides the heat required for seed germination when set alight. In addition, the roots of Ceanothus trees are resistant to fire — thus, the plant can regrow in burned areas.7 These "pyrophiles" use fires as a key aspect of their life cycles and as a strategy to edge out competing, non-fire resistant

plants. Conversely, the way that suppression has fueled a turbulent cycle of fires has also caused increasingly severe burns in certain areas that experience the greatest buildup of organic material. These wildfires benefit hardy, non-native species that thrive under the pressure of regular wildfires, such as perennial pepperweed and poison hemlock, which have already started to replace more vulnerable, native species such as sycamores, cottonwoods, and willows.8 Even native species like those in Ceanothus, which appear as large shrub fields following high severity fires, are growing at rates that are not necessarily natural and can have dire ecological consequences. With more erratic fires and poor fire management, growing fires will cause significant shifts in these species and, as a result, in California's landscapes.

“Assuming that suppression alone will effectively mitigate wildfires forever oversimplifies California’s ecosystem ecology, which is deeply intertwined with the presence of fire.”



Figure 2. Firefighters apply a prescribed burn to fallen branches in Grand Canyon National Park, Arizona, for the South Rim Piles Project.

Figure 3. A species of Ceanothus.

FIGHTING FIRE WITH FIRE

Luckily, ecological and human health are not mutually exclusive. Prescribed burning, a stewardship strategy originally used


by Indigenous peoples, combines the ecological benefits of fire with mitigating the risks fire poses to human safety. Prescribed burning involves intentionally setting controlled fires to reduce hazardous fuels. These fuels include dead grass, fallen tree branches, dead trees, smaller-diameter trees, and brush — all major contributors to the current fuel loads driving California's unprecedented wildfires. Prescribed burns can also cycle nutrients from the burned material back into the soil as well as slow the spread of insect infestations and some invasive species. They can also assist with the growth and proliferation of pyrophiles and other plants that incorporate wildfire into their life cycle.9

But for now, California will continue to combat fires mainly with suppression. This is in part due to the extensive preparation required for safe prescribed burns. Fire managers must have a clear idea of burn objectives, which can influence the types of vegetation burned and the size and intensity of burn operations. They must also navigate planning, permitting, and implementing prescribed fires within "burn windows" that have ideal weather and air quality conditions. Additionally, the labor and resources needed to implement prescribed fires at the necessary scales are often diverted to suppression activities, thus hindering the application of proactive fuels treatments.2,9 Other concerns include the potential adverse respiratory health effects of adding particulate matter from prescribed fires to the atmosphere.10

Despite these institutional challenges, there is hope that fire management may look different in the future. Recently, California passed bills SB-332 and AB-642, which establish legal protection for prescribed burn practitioners, establish cultural burning liaisons, and propose the development of a prescribed fire burning center.11,12 Hopefully, this return to historical land management strategies, at a time when the future bodes unprecedented ecological turmoil and more smoke-filled summers to come, will plant the first seeds of change in California's charred landscapes.




REFERENCES

1. Welcome to Stats & Events. (n.d.). Retrieved November 6, 2021, from https://www.fire.ca.gov/stats-events/
2. McDonald, B., Burrous, S., Weingart, E., & Felling, M. (2021, October 11). Inside the Massive and Costly Fight Against the Dixie Fire. The New York Times. https://www.nytimes.com/interactive/2021/10/11/us/californiawildfires-dixie.html
3. ABC10. (2019, May 29). Cal Fire Chief explains the state of California wildfires and how to prepare | Extended Interview. https://www.youtube.com/watch?v=Qv0inMS4Occ
4. Magazine, S., & Daley, J. (n.d.). Study Shows 84% of Wildfires Caused by Humans. Smithsonian Magazine. Retrieved November 8, 2021, from https://www.smithsonianmag.com/smart-news/study-shows-84-wildfirescaused-humans-180962315/
5. Society, N. G. (2020, January 15). The Ecological Benefits of Fire. National Geographic Society. http://www.nationalgeographic.org/article/ecological-benefits-fire/
6. Rocca, M. E. (2009). Fine-scale patchiness in fuel load can influence initial post-fire understory composition in a mixed conifer forest, Sequoia National Park, California. Natural Areas Journal, 29(2), 126-132. https://bioone.org/journals/natural-areas-journal/volume-29/issue-2/043.029.0204/Fine-Scale-Patchiness-in-FuelLoad-Can-Influence-InitialPost/10.3375/043.029.0204.full
7. Eldorado—Fire Management. (n.d.). Retrieved November 6, 2021, from https://www.fs.usda.gov/detail/eldorado/fire/?cid=fsbdev7_019091
8. Bell, C. E., Ditomaso, J. M., & Brooks, M. L. (2009). Invasive plants and wildfires in Southern California. https://escholarship.org/uc/item/3tk834s7
9. Controlled Burning | National Geographic Society. (n.d.). Retrieved November 6, 2021, from https://www.nationalgeographic.org/encyclopedia/controlled-burning/#
10. Haikerwal, A., Reisen, F., Sim, M. R., Abramson, M. J., Meyer, C. P., Johnston, F. H., & Dennekamp, M. (2015). Impact of smoke from prescribed burning: Is it a public health concern? Journal of the Air & Waste Management Association, 65(5), 592-598. https://pubmed.ncbi.nlm.nih.gov/25947317/



11. Bill Text—SB-332 Civil liability: Prescribed burning operations: Gross negligence. (n.d.). Retrieved November 8, 2021, from https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220SB332
12. Bill Text—AB-642 Wildfires. (n.d.). Retrieved November 8, 2021, from https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202120220AB642

IMAGE REFERENCES

1. Eric In SF. (2019, November 8). Ceanothus cuneatus [Photograph]. Wikimedia Commons. https://docs.google.com/document/d/1u3zz4PXED8kvObDf_reYObJaWSQdfzfor1tbzhjdm8s/edit
2. Grand Canyon NPS. (2019, May). Grand Canyon National Park- Prescribed Pile Burning, May, 2019 1107 - 47969074817 [Photograph]. Wikimedia Commons. https://commons.wikimedia.org/wiki/File:Grand_Canyon_National_Park-_Prescribed_Pile_Burning,_May,_2019_1107_-_47969074817.jpg
3. McGuire, R. M. (2013, August 19). [Bomber plane drops red fire retardant]. Richard McGuire Photo. https://richardmcguire.ca/2013/08/fighting-fires-from-the-air/
4. Swanson, D. S. (2021, August 21). [The French Fire burns in Kern County, California, on Aug. 21, 2021]. Time. https://time.com/6092810/californiawildfire-legal/



EXAMINING THE ROLE OF AVAILABILITY HEURISTIC IN CLIMATE CRISIS BELIEF BY GUNAY KIRAN

Every year, 725,000 people die from mosquito-borne diseases.1 This fact may not come as a surprise because the effects of malaria and other mosquito-caused diseases are well known due to their high coverage across global news networks. What about worldwide deaths caused by dogs? When a room full of UC Berkeley students was asked, estimates did not exceed a couple thousand. However, on average, rabid dogs account for 25,000 human deaths per year.1 Although many people would classify dogs as harmless compared to most other animal species, dogs are third on the world's deadliest animals list.1 Human perception of the world is based on the information available to us at any point in time. Since news about dog-caused deaths is rarely covered on media platforms, we as humans classify


these types of canines as safe creatures. On the other hand, mosquito-caused deaths have more pervasive media coverage, creating an accurate belief about the dangers of the species. This unconscious cognitive distortion of the human mind also applies to shark and hippo attacks. Since shark attacks appear more frequently in the media than hippo attacks do, many people tend to classify sharks as more dangerous. In reality, hippo-attack-caused deaths are roughly fifty times more common. This unconscious bias, known as a heuristic, is called the Availability Effect and is defined in Daniel Kahneman's book, Thinking, Fast and Slow, as "the process of judging frequency by the ease with which instances come to mind."4 One of the best examples Kahneman gives when explaining this phenomenon relates to Hollywood divorces. Since such divorces




"Availability Heuristic", by The Decision Lab

generate interest, they are at the forefront of people's consciousness and are easy to recall. Thus, people think that marriages fail in Hollywood more frequently than they fail in real life. Considering all of this, why do some people not see global warming as hazardous or as a problem to prioritize, even though there is a great abundance of scientific evidence related to human-caused environmental pollution? Aren't scientific data about the climate crisis shown on TV and social media platforms as frequently as in magazines? It seems that some people do not even believe that global warming is an abnormal event.2 This is illustrated in the recent survey data gathered by Yale and George Mason Universities. Based on their data, 58% of Conservative Republicans, 52% of all Republicans, and 42% of Liberal Moderate Republicans claim that global warming occurs mostly due to natural changes, not due to human activities. Yet, nearly all Liberal Democrats and more than half of Moderate Conservatives think that global warming occurs mostly due to human activities. According to Jing Shi, one of the authors of Public Perception of Climate Change: The Importance of Knowledge and Cultural Worldviews from the ETH Zurich Institute for Environmental Decisions (IED), the source of this difference in beliefs is the level of people's knowledge related to climate, which also correlates with their level of concern for climate change. Shi and his team surveyed a diverse group of people from Canada, China, Germany, Switzerland, the UK, and the US. They found a direct correlation between an individual's level of climate-crisis-related knowledge and their concern towards the climate crisis. As can be seen in Figure 1, as citizens' knowledge about the causes of global warming rises, they become more concerned about the issue. This is because as people learn about climate-related issues, human-caused climate change starts to become a piece of information that is easily recollected. Then, as they are able



to recall the instances of climate crisis, the issue appears to be more frequent, causing them to be more concerned about the topic. Thus, “their likelihood of accepting the reality of human-caused global warming and their support of policies to solve the problem” increases proportionally.2 Shi’s paper thereby demonstrates that the amount of “climate-relevant knowledge is important for people’s willingness to change behaviors, [and] to accept climate change policies.”3 Furthermore, in Shi’s experiment, Americans are the ones that know the least about climate change and, in correlation, demonstrate the least concern compared to the citizens of other countries. For those who do not have much information about a subject, the Availability Heuristic plays a greater role in structuring their belief system. Therefore, Americans’ low levels of knowledge can explain their low levels of concern. Professor Norbert Schwarz, who is a professor of psychology at the University of Southern California, illustrates this phenomenon with a pertinent experiment. In his study, Professor Schwarz asked the students to recall behaviors in their routine that could influence their cardiac health.4 In the group of participants, half of the students had a history of cardiac disease in their families while the other half did not. When Schwarz asked the group with no cardiac disease history to memorize eight examples of healthy behavior, they had a hard time retrieving eight full events. Since the frequency with which the events came to their minds was low, they felt greater danger compared to the other group. Also, when the group was asked to retrieve eight examples of risky behavior, the students’ responses followed the same pattern. They had a hard time retrieving eight full events. Since in the minds of the students with no family history of cardiac disease, the frequency of them conducting risky behaviors for their cardiac health was low, they felt safe. With a high probability, this is the case for Americans in Shi’s study, who do not have the necessary amount of information about climate change. Since they know little, they can not retrieve enough instances of climate change-caused disasters. Therefore, due to the Availability Effect, when their minds judge the frequency of climate crisis caused events by the ease with which instances come to their minds, they feel safe about the climate crisis and think that it is not a problem.4 Evidently, individuals who are well-educated on climate change and its driving factors are able to recall them, are able to understand the risks associated with the issue, and thus are more likely to support eco-friendly policies. However, people who do not have climate-specific knowledge cannot recall any causes or instances of climate crisis, making them more likely to feel safe, and therefore, more prone to believe that this crisis is nonproblematic. In other words, the lack of information about the causes or instances of climate change makes people prone to the Availability Heuristic, which, in this case, makes them trivialize the climate crisis. Thus, awareness of the availability bias can help us question our beliefs and realities. It can point out if our belief in the climate crisis is just based on our perception or the truth. With increased awareness, we can live in a reality built by research and proven facts, not just our opinions. Therefore, we can objectively see what is important and act on these significant matters – in this case, saving our one and only planet.



Figure 1: The illustration is original, inspired by Dana Nuccitelli's graph in The Guardian article "Scientists are figuring out the keys to convincing people about global warming".

REFERENCES

1. What are the world's deadliest animals? (2016, June 15). Retrieved October 29, 2021, from https://www.bbc.com/news/world-36320744
2. Nuccitelli, D. (2016, May 04). Scientists are figuring out the keys to convincing people about global warming | Dana Nuccitelli. Retrieved October 29, 2021, from https://www.theguardian.com/environment/climate-consensus-97-percent/2016/may/04/scientists-are-figuring-out-the-keys-toconvincing-people-about-global-warming
3. Shi, J., Visschers, V. H., & Siegrist, M. (2015). Public Perception of Climate Change: The Importance of Knowledge and Cultural Worldviews. Risk Analysis, 35(12), 2183-2201. doi:10.1111/risa.12406
4. Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
5. Nuccitelli, D. (2014, August 07). Facts can convince conservatives about global warming – sometimes | Dana Nuccitelli. Retrieved October 29, 2021, from https://www.theguardian.com/environment/climate-consensus-97-per-cent/2014/aug/07/facts-can-convince-some-conservatives-about-global-warming


IMAGE REFERENCES

1. (Cover Image) Creative Commons Licence. (n.d.). [Environment, disaster, global warming, climate change]. Pixabay. https://pixabay.com/illustrations/climate-change-global-warming-2254711/
2. "Why Do We Tend to Think That Things That Happened Recently Are More Likely to Happen Again?" The Decision Lab, https://thedecisionlab.com/wp-content/uploads/2020/06/availability-heuristic-the-decision-lab.png. Accessed 22 Nov. 2021.

Acknowledgements: This article was peer reviewed by Maximilian Auffhammer, who is the George Pardee Jr. Professor of International Sustainable Development and Associate Dean of Social Sciences at UC Berkeley.




Engineering Longevity and the Reversibility of Aging INTERVIEW WITH DR. IRINA CONBOY BY QIANKUN LI, MICHAEL XIONG, AND ESTHER LIM

Dr. Irina Conboy, PhD, is a professor in the Department of Bioengineering at the University of California, Berkeley. Her research focuses on the physiological basis of aging and potential therapeutic solutions to aging, with the goal of applying these treatments to various degenerative diseases. Dr. Conboy's lab aims to "engineer longevity" by way of rejuvenation of tissues through plasma exchange and through gene editing with CRISPR technology.

BSJ

: What is the physiological basis behind aging?

IC

: Some factors of aging are still under dispute, and many of them are unknown. Some say that aging is due to shortening of the telomeres. Others say that it is due to damage to the DNA throughout the genome, or accumulation of reactive oxygen species which damage proteins, or accumulation of senescent cells (clusters of cells throughout the body that make the rest of the tissue unhealthy). Very interestingly, there is no definitive proof that any of those factors, or a combination of them, is what drives aging.

BSJ

: How does your research contribute to our understanding of aging?

IC

: If you start with a very old animal, such as a two year old mouse, which is analogous to a 75-80 year old person, and you apply our “engineering longevity” approach, that animal becomes rapidly young with respect to tissue repair. There is improvement in the muscle, liver, brain, cognitive capacity, agility and strength, which means that whatever cumulative damage happened—telomere attrition, accumulation of DNA damage, mitochondrial damage, or something else—is not the driver of aging, because we were able to reverse it. Once again, it points to the conclusion that aging is determined by the rate of tissue repair, so if we increase the rate of tissue repair, not only can we stop aging or slow it down, we can also gradually reverse it.

BSJ: What does it mean to “engineer longevity”?

IC: Some people perceive the aging process of humans as something similar to how a new car ages—as you use it, it becomes less functional. What we started to realize is that aging depends on the efficiency of damage repair and not how much damage can accumulate. Since it is a process of repair, we can try to extend the process and then tune that to our advantage, and that is synonymous with “engineering longevity.” Our hope is that we will find ways to not only treat age-related diseases like Alzheimer’s, but to prevent them.

Dr. Irina Conboy, PhD, is a professor in the Department of Bioengineering at the University of California, Berkeley. Her research focuses on the physiological basis of aging and potential therapeutic solutions to aging with the goal of applying these treatments to various degenerative diseases. Dr. Conboy’s lab aims to “engineer longevity” by way of rejuvenation of tissues through plasma exchange and through gene editing with CRISPR technology.


BSJ: What is therapeutic plasma exchange, and what is its significance in medical treatments?

IC: Therapeutic plasma exchange (TPE) has been used in the medical field for around 35 years. The main goal of TPE is to purify blood plasma of toxins or autoreactive antibodies that can attack the person’s body. In general, it can purify the circulatory system. For example, if someone ingested a toxin or if they overdosed on drugs and no other treatment works, TPE could save their lives by replacing part of the blood plasma with saline and albumin. It also works in the case when the body generates an antibody which attacks its own proteins. For example, in multiple sclerosis, in which the antibodies attack the myelin sheaths that wrap the nerves, TPE works by removing those overactive antibodies from the blood. This scheme also applies to most autoimmune diseases. TPE returns blood cells back to the person, but the cells are resuspended in physiologic solution, like saline and albumin. Albumin is an abundant protein in our blood which is needed for protein transport and blood rheology, which allows our blood vessels to keep their shape. When blood plasma is removed, some of the albumin is lost, so we have to replenish it. That is the entire procedure of TPE.

Figure 1: Blood Rejuvenation by Plasmapheresis. Pro-geronic factors that accumulate with age can be removed by plasmapheresis treatment.

“All the typical regulators of health—proteins that maintain tissues, repair tissues, and make blood vessels better— which typically declined with aging, now came back.”

BSJ: You found that TPE decreased the levels of proteins that accumulate with age, and surprisingly, also elevated the levels of certain proteins. What is the significance of this observation?

IC: When we decided to apply TPE to “engineer longevity,” we thought that we were simply removing accumulated senescent cells or inflammatory proteins. When people grow older, they have multi-tissue inflammation and fibrosis, which comes from damage of organs and tissues over time and the inability to repair them. The immune system becomes chronically activated instead of having a productive immune response that eliminates bacteria and viruses. What we thought is that TPE will be useful for people who are older because it will simply remove excessive inflammatory proteins, but what we discovered about one month after procedure and control is that all the typical regulators of health—proteins that maintain tissues, repair tissues, and make blood vessels better—which typically declined with aging, now came back. Not only was there the first wave when we diminished inflammatory proteins, there was also a second wave when the age-diminished proteins again became restored. What we realized is that when many of the proteins are elevated with disease/age, they suppress the productive homeostatic gene expression and, consequentially, healthy blood proteome. All the genes in our body are interconnected for us to function as organized systems, and when some of them are expressed in excess, they suppress many of the other genes/proteins that we need for healthy tissues. Therefore, an acute large plasma dilution allowed those other proteins to come back.

BSJ: What are the physiological processes that accelerate aging? And how does TPE alleviate the effects of these processes?

IC: The main mechanisms are cellular senescence, immunosenescence, and systemic chronic inflammation. Cellular senescence is an interesting concept that was pioneered by Judith Campisi, a professor at Buck Institute for Research on Aging. She published works describing senescent cells. Senescence evolved as an anti-cancer phenomenon where if a cell’s normal functions are impaired, such a cell does not divide. At the same time, senescent cells produce senescence-associated secretory phenotype (SASP), such as harmful inflammatory proteins. These senescent cells affect tissue around them in many ways, including making the tissue become more prone to cancer spreading. Senescent cells themselves do not turn into cancer, but they allow cancer cells that spontaneously appear in the tissue to metastasize and grow better. As for the role of TPE, we found that TPE reduces tissue senescence. For example, for the brain, we show that there are fewer cells with a particular marker of senescence, senescence-associated (SA) Beta-Gal. That is quite interesting because we did not use the drugs called senolytics; instead, we studied senescence in parallel with the plasma dilution procedure. It is interesting that lowering the levels of excessive systemic proteins could attenuate senescence. We do not know if plasma dilution removed them or perhaps made them healthier and less senescent.

BSJ: A potential treatment for aging is the attenuation of senescence-associated secretory phenotype (SASP). Why did you choose to attenuate SASP, instead of directly removing senescent cells?

IC: Direct removal of senescent cells can lead to bad things happening, such as wound healing becoming worse. That was also published by Judith Campisi’s lab. When they looked at skin and wound healing with and without senescent cells, healing without senescent cells did not take place as well as with senescent cells. Additionally, some markers of senescent cells are similarly expressed on normal cells in young animals when cells differentiate. Cells that regenerate tissue start as stem cells and differentiate into precursor or progenitor cells before differentiating further into the final tissue. During that differentiation process, cells express p16, another marker of senescent cells. What we published is that p16 is expressed even in young animals in tissues that are very healthy. So it will not work if you start ablating cells based on p16. In contrast, if you make cells less senescent or healthier through plasma dilution, this might be a milder and safer approach.

BSJ: What further research needs to be conducted to establish attenuation of SASP as an actual treatment for aging?

IC: In the lab, we work with neutral blood exchange (NBE) instead of TPE, which is done by our collaborator, Dr. Dobri Kiprov. He thought that TPE could be used for rejuvenation back in 2014. What needs to be done is reputable clinical trials with placebo controls. There will be some subjects between 60 and 80 years old undergoing TPE but also a placebo group, and we would then measure numerous parameters of tissue health, inflammation, regeneration, fibrosis, degeneration, and other hallmarks of what are called the comorbidities of aging. Through this clinical trial, one will see whether TPE could be repositioned to treat diseases that increase with aging. There are numerous diseases without cures right now, and there is a suggestion that TPE, which is already approved by the FDA, could be prescribed to those patients. Right now we are trying to fund-raise for that clinical trial.

Figure 2: Model of the dilution effect in resetting of circulatory proteome. A induces itself and C; A represses B; C represses A. Dilution of an age-elevated protein, A, breaks the autoinduction and diminishes the levels of A. The secondary target of A, B, becomes de-repressed and elevated. The attenuator of A, C, has a time-delay of being diminished, as it is intracellular and was not immediately diluted, and some protein levels persist even after the lower induction of C by A. C is no longer induced by A and decreases, and a reboot of A results in the re-induction of C by A, leading to the secondary decrease of A signaling intensity/autoinduction and a secondary upward wave of B.

BSJ: In addition to this study, you also developed a graphene-based biosensor for detection of bio-orthogonally labeled proteins, which aims to identify circulating biomarkers of aging during heterochronic parabiosis. What is the purpose of heterochronic parabiosis? Are there any challenges or ethical concerns associated with this procedure?

IC: Heterochronic parabiosis is a very ancient approach that was introduced 200 years ago. In this approach, different animals are surgically sutured together. The animals are of different ages, or they could have different diseases or health status. It was applied to the idea of rejuvenation by trying to see if connecting one old rat with six young rats can “dilute” the aging. However, these approaches are not well controlled. Although animals are monitored every day and are given analgesics, it is a very poorly controlled procedure because you do not know when the positive and negative effects take place. There is organ sharing and environmental adaptations, not just blood exchange. But, overall, this procedure helps us to understand the general concept of aging and rejuvenation. An old mouse which has already accumulated intrinsic damage is rapidly rejuvenated through parabiosis to a young partner while the young mouse ages when it has no previous intrinsic accumulation of tissue or cell damage.

Figure 3: Function of Click-A+Chip. Binding of the bio-orthogonally labeled proteins with a linker molecule on the graphene surface can be detected by the sensor.
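
The qualitative circuit in Figure 2 can be explored numerically. The sketch below is a toy simulation written for illustration only; the rate laws, constants, and function names are our own assumptions, not the authors' actual equations. It encodes the caption's logic (A activates itself and C, A represses B, C represses A) and applies a one-time dilution to the plasma species A and B while leaving the intracellular species C untouched.

```python
# Toy ODE model of the Figure 2 circuit. All rate constants and Hill-type
# terms are illustrative assumptions, not parameters from the study.

def act(x, k):            # saturating activation term
    return x / (k + x)

def rep(x, k):            # saturating repression term
    return k / (k + x)

def simulate(t_end=200.0, dt=0.01, t_dilute=100.0, dilution=0.1):
    A, B, C = 2.0, 0.2, 1.0          # start in an "aged" state: A high, B low
    history, t, diluted = [], 0.0, False
    while t < t_end:
        if not diluted and t >= t_dilute:
            A *= dilution            # plasma proteins A and B are diluted;
            B *= dilution            # C is intracellular and is not
            diluted = True
        dA = 1.5 * act(A, 0.5) * rep(C, 1.0) - 0.5 * A   # autoinduction, repressed by C
        dB = 1.0 * rep(A, 0.5) - 0.3 * B                 # de-repressed when A falls
        dC = 0.8 * act(A, 0.5) - 0.1 * C                 # induced by A, slow turnover (time delay)
        A, B, C = A + dA * dt, B + dB * dt, C + dC * dt
        history.append((t, A, B, C))
        t += dt
    return history

if __name__ == "__main__":
    for t, A, B, C in simulate()[::2000]:    # coarse time course
        print(f"t={t:6.1f}  A={A:5.2f}  B={B:5.2f}  C={C:5.2f}")
```

In this sketch, the slow decay of C is what produces the delayed attenuation the caption describes; varying the constants lets one check whether a secondary rebound of A and an upward wave of B appear for a given parameter choice.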

BSJ: What is bio-orthogonal non-canonical amino acid tagging (BONCAT), and how does it allow us to label proteins of interest?

IC: BONCAT was developed by Professor David Tirrell from Caltech, and I did my sabbatical in his lab where I learned about this technology. He developed it in bacteria, and we then applied it to the field of mammalian aging in vivo. In this technology, there is a mutant enzyme of methionine tRNA synthetase. Methionine tRNA synthetase is the enzyme that attaches the amino acid methionine to its specific tRNA for translation. If this enzyme is mutant, it can incorporate a non-canonical amino acid instead of methionine during the synthesis of polypeptides. One single mutation, L274G, in the sequence of the enzyme allows one to incorporate azido-nor-leucine (ANL) instead of methionine and, hence, metabolically label cells or tissues with ANL. If you have two mice that are parabiotically connected and exchanging blood with each other, but only one of them is expressing the transgene, only the proteins of that one animal will be tagged with ANL. One can then see how the proteins go through the shared heterochronic blood circulation and where they end up, whether in the muscle, brain, or liver. Moreover, one can specifically identify the proteins which came from the young animal into the old animal, or from the old animal into the young animal. That is what we did and published in 2017 in Nature Communications, and with the help of Professor Kiana Aran, we then developed the more sophisticated, digital graphene-based biosensor called Click-A+Chip.

BSJ: How does Click-A+Chip work, and what are some of its key features?

IC: The chip is a graphene device where graphene is used as the electro-conductive material to detect an increase in resistance through binding of analytes to its surface. It is transistor-based, so the current and resistance can be tuned. If anything is bound to the surface of the graphene, resistance to the circuit is introduced, which one can detect as a drop in electric current. The device was developed by Professor Kiana Aran. With these collaborative efforts, it is possible to accurately identify every single “young” and “old” circulating protein that is important for tissue aging and rejuvenation.

BSJ: What were some critical observations when you applied Click-A+Chip to heterochronic parabiosis?

IC: We found that rejuvenation is not based on a silver bullet, a single blood protein. Why would it be? Instead, there are numerous young proteins that traverse from the blood of young mice to the tissues of the old mice that work together and interact with each other. Some of them diminish inflammation, some of them participate in remodeling of the extracellular matrix, and others activate muscle stem cells to divide and differentiate. Some of these proteins, such as Leptin and Lif1, were previously implicated in aging and rejuvenation but not in the effects of heterochronic parabiosis.

“With these collaborative efforts, it is possible to accurately identify every single “young” and “old” circulating protein that is important for tissue aging and rejuvenation.”

BSJ: Aside from identifying factors of aging, are there other potential applications of Click-A+Chip?

IC: In our study, young mice expressed the mutant methionine tRNA synthetase, so we looked at young proteins in old tissues. Since aging is driven by an excess of old proteins, we are also aging the methionine tRNA synthetase transgenic animals for the reciprocal study: for example, to identify the systemic proteins of old animals that make young tissues pro-geronic.

BSJ: You have expertise in various scientific fields of immunology, bioengineering, therapeutics, and more. How do these fields intersect in your research?

IC: I think immunology is something that everybody should learn because it is one of the most well-developed and oldest areas of science that is quite complex and easy to misunderstand. It also has technological applications in the area of immune engineering. Knowledge of immunology contributes to our current studies on how TPE rejuvenates the old immune system, making individuals more resilient to viral illnesses such as COVID-19. Old people succumb to COVID-19 more than young people, so TPE could be used to improve recovery from COVID-19. There are also many ways to combine BONCAT with understanding immune system responses, such as looking at what cancer cells make in vivo; for instance, how they change immune responses in young versus old animals. Another example is a recently published paper on a blood-brain barrier organ chip, which allows screening of all secretory molecules either using BONCAT or Click-A+Chip. Instead of doing very difficult studies in vivo to see what happens in the brain, they use chips which could be humanized.

REFERENCES

1. Headshot: [Photograph of Irina Conboy]. Irina Conboy. https://bioeng.berkeley.edu/wp-content/uploads/conboy_crop1.jpg. Image reprinted with permission.
2. Figure 1: Mehdipour, M., Etienne, J., Liu, C., Mehdipour, T., Kato, C., Conboy, M., Conboy, I., & Kiprov, D. D. (2021). Attenuation of age-elevated blood factors by repositioning plasmapheresis: A novel perspective and approach. Transfusion and Apheresis Science, 60(3), 103162. https://doi.org/10.1016/j.transci.2021.103162
3. Figure 2: Mehdipour, M., Skinner, C., Wong, N., Lieb, M., Liu, C., Etienne, J., Kato, C., Kiprov, D., Conboy, M. J., & Conboy, I. M. (2020). Rejuvenation of three germ layers tissues by exchanging old blood plasma with saline-albumin. Aging, 12(10), 8790–8819. https://doi.org/10.18632/aging.103418
4. Figure 3: Sadlowski, C., Balderston, S., Sandhu, M., Hajian, R., Liu, C., Tran, T. P., Conboy, M. J., Paredes, J., Murthy, N., Conboy, I. M., & Aran, K. (2018). Graphene-based biosensor for on-chip detection of bio-orthogonally labeled proteins to identify the circulating biomarkers of aging during heterochronic parabiosis. Lab on a Chip, 18(21), 3230–3238. https://doi.org/10.1039/c8lc00446c



PROVISIONAL TRUTHS: THE HISTORY OF PHYSICS AND THE NATURE OF SCIENCE

BY JONATHAN HALE


CANNONBALLS AND CONTROVERSY

In Aristotle’s Physics, the text to which the contemporary scientific discipline owes its name, the Greek philosopher claimed that the greater the mass of an object, the faster it would fall.1 For approximately two millennia after its proposal in the fourth century B.C.E., Aristotle’s theory was considered law. Around 1590 C.E., a mathematics professor at the University of Pisa named Galileo Galilei sought to prove otherwise. According to an account by his pupil Vincenzio Viviani, Galileo simultaneously dropped cannonballs of varying weights from the top of the Tower of Pisa to test Aristotle’s prediction that they would reach the ground at different times.2 The cannonballs hit the ground in unison. “‘To the dismay of all the philosophers,’” wrote Viviani, “‘very many conclusions of Aristotle were proven [false]… conclusions which up to then had been held for absolutely clear and indubitable.’”2 Galileo’s refutation of Aristotle’s centuries-old theory of gravity sent shockwaves through the budding scientific community, starting a chain reaction of discovery and falsification that has left an enduring mark on the way we think about science. But while Galileo had succeeded in shaking up the physics of his day, he was unable to explain what caused objects of different masses to fall at the same rate. It was not until almost a century later that Isaac Newton was able to provide a solution to Galileo’s puzzle. In early 1685, Newton formulated the law of universal gravitation: All particles are attracted to one another by a force directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers.3 In other words, the force of gravity is greater between objects that are larger and closer together. Newton also postulated that force is equal to mass times acceleration in what is now known as Newton’s second law of motion. This law states that acceleration is equal to force divided by mass. So in the case of Galileo’s cannonballs, a larger cannonball would produce a greater gravitational force, but this force would be acting on a greater mass. More force divided by more mass would result in an acceleration identical to that of the smaller cannonball (or any object for that matter). The law of universal gravitation appeared alongside Newton’s other laws of motion in the 1687 publication Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) and radically transformed the way that physicists viewed the natural world.
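
The cancellation at the heart of Newton’s resolution can be written out in two lines. This is a standard textbook derivation included here for illustration, not a formula taken from the article’s sources; the symbols are the usual ones, with M⊕ denoting Earth’s mass and r the distance to Earth’s center (approximately Earth’s radius at the surface):

\[
F \;=\; G\,\frac{m\,M_{\oplus}}{r^{2}}, \qquad F \;=\; m a
\;\;\Longrightarrow\;\;
a \;=\; \frac{F}{m} \;=\; \frac{G\,M_{\oplus}}{r^{2}} \;\approx\; 9.8\ \mathrm{m\,s^{-2}}.
\]

The cannonball’s own mass m cancels: doubling m doubles the gravitational force but also doubles the inertia, so the acceleration, and hence the fall time, is unchanged.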

A WRINKLE IN SPACETIME

In the early 20th century, Newton’s theory was challenged by Albert Einstein’s theory of general relativity. Einstein realized that it would be possible to replicate Earth’s gravity in outer space. Think about what it feels like to jump in a rapidly accelerating elevator — you feel heavier because more initial force is required to overcome the elevator’s upward acceleration. You also stay in the air for less time because after you jump, the car’s floor accelerates up to meet your feet. Now consider what would happen if you were to emerge from unconsciousness in a windowless container accelerating at 9.8 m/s² through zero gravity — you would feel no different than if you were standing on Earth’s surface. This is because 9.8 m/s² is the average rate of gravitational acceleration on Earth. So, if you were to toss a ball to the other side of the container, it would appear to “fall.” The ball would be travelling in a straight line as the bottom of the container accelerated upwards to meet it, creating the illusion of Earth’s gravity. The same principle could be applied to light. If you shined a flashlight from one end of the room to the other, the trajectory of the beam of light would appear slightly curved with reference to the walls of the room.4 To Einstein, the equivalence between gravitational and non-gravitational states was proof that gravity was not a force at all, but a mere illusion of perspective. But what, then, could possibly explain gravity’s effects? In his theory of general relativity, Einstein posits that large celestial bodies such as the Earth cause a distortion in spacetime, affecting the paths of objects in their gravitational fields. According to this theory, light does travel in a straight line through a gravitational field, albeit not from our frame of reference.4 Whereas Newton’s theory held that gravity was a force generated by objects because of their mass, Einstein argued that according to general relativity, gravity is not a force at all, just a byproduct of the imprint that large objects make in spacetime. Objects are not being attracted to each other as Newton thought, but travelling in straight lines through a distorted universe. General relativity was mere speculation until May 29th of 1919, when a solar eclipse offered British astronomers Frank Dyson and Arthur Stanley Eddington the opportunity to test Einstein’s theory. With the sun completely obscured by the moon, stars in the sun’s immediate vicinity would be visible. If general relativity was correct, then the starlight would “bend” as it passed the sun, causing a discrepancy between the stars’ predicted and perceived positions.5 But Einstein himself seemed untroubled regarding the possibility that his theory might be falsified, going so far as to declare that “the most beautiful fate of a physical theory is to point the way to the establishment of a more inclusive theory, in which it lives on as a limiting case.”6

Figure 1: (Left) Isaac Newton. (Right) Newton’s law of universal gravitation. Force (F) is equal to Newton’s gravitational constant (G) multiplied by the product of the masses of each object (m1 and m2) and divided by the square of the distance between their centers (r²).

Figure 2: In (a), the beam of light appears curved because of the upward acceleration of the elevator car. In (b), the beam of light appears curved because of the distortion in spacetime caused by Earth’s large mass. The beam of light is traveling in a straight line in both instances.

“Einstein himself seemed untroubled regarding the possibility that his theory might be falsified, going so far as to declare that ‘the most beautiful fate of a physical theory is to point the way to the establishment of a more inclusive theory, in which it lives on as a limiting case.’ ”

THE METHOD TO THE MADNESS

The observations of Dyson and Eddington supported the predictions of general relativity and launched Einstein into international fame. However, equal in magnitude to Einstein’s impact on the world of physics was his impact on the methodology and practice of science. Einstein was willing to be wrong. The celebrated philosopher Karl Popper, who had attended Einstein’s lectures as a teen and regarded him as a significant influence, expressed this sentiment in his principle of demarcation.7 Popper argued that the boldness of science to subject itself to rigorous testing and the willingness of scientists to accept refutation is what demarcates science from pseudoscience.8 According to Popper, the goal of science should not be to prove theories right, but to prove them wrong.9 From Aristotle to Galileo and from Newton to Einstein, the development of our understanding of gravity demonstrates the potency of this ideal.

Figure 3: According to Einstein’s theory of general relativity, extremely massive objects cause distortions in spacetime, depicted here as a two-dimensional grid. Each line in the grid “bends” as it passes close to the Earth or Sun. Similarly, the path of light’s travel “bends” even as the light continues moving in a straight line.

“Popper argued that the boldness of science to subject itself to rigorous testing and the willingness of scientists to accept refutation is what demarcates science from pseudoscience.”

The constant undermining of our understanding of gravity helps illustrate Popper’s claim that at no point will we arrive at an end to science. This is because the aim of science in the Popperian sense is not to make any definitive, incontestable claims about the nature of the world, but rather to establish provisional truths to be questioned and falsified by the next generation of scientists. The process of proposing and rigorously testing falsifiable hypotheses that challenge these provisional truths forms the basis of the scientific method we know today.

Whether or not science is successful in obtaining objective truth is, by Popper’s account, beside the point. The value of science and its method is in its embodiment of our capacity to turn an inquisitive eye to the world around us and make thoughtful claims about how it works. Science is both daring and humble in its willingness to try and fail. Just as Einstein was willing to abandon general relativity should it have proved incorrect, science should never relinquish its commitment to boldly exploring the unknown without fear of being wrong. Or, as Popper suggests, “‘Do not try to evade falsification, but stick your neck out!’”.7 When Galileo falsified Aristotle’s theory of gravity, our perception of the world changed forever; when Einstein proposed general relativity, our perspective was altered once again. The history of scientific discovery tells us that we should be prepared to embrace further changes still when our provisional truths are inevitably falsified.

Figure 4: A reproduction of an image captured during the 1919 total solar eclipse. By measuring the relative positions of stars in the constellation Taurus, astronomers discovered that the Sun’s gravity altered the path of light’s travel. The 1919 solar eclipse failed to falsify Einstein’s theory of general relativity, launching the theory and its creator into international fame.

REFERENCES

1. Rovelli, C. (2015). Aristotle’s Physics: A physicist’s look. Journal of the American Philosophical Association, 1(1). https://doi.org/10.1017/apa.2014.11
2. Segre, M. (1989). Galileo, Viviani and the tower of Pisa. Studies in History and Philosophy of Science Part A, 20(4), 435–451. https://doi.org/10.1016/0039-3681(89)90018-6
3. Cohen, I. B. (1981). Newton’s discovery of gravity. Scientific American, 244(3), 166–179. https://doi.org/10.1038/scientificamerican0381-166
4. Isaacson, W. (2018). Einstein. Simon & Schuster.
5. Cervantes-Cota, J. L., Galindo-Uribarri, S., & Smoot, G. F. (2019). The legacy of Einstein’s eclipse, gravitational lensing. Universe, 6(1), 9. https://doi.org/10.3390/universe6010009
6. Holton, G. (1986). The advancement of science, and its burdens: The Jefferson lecture and other essays. Cambridge University Press.
7. Popper, K. (1974). Unended quest: An intellectual autobiography. Open Court Publishing Co.
8. Popper, K. R., & Miller, D. (1985). Popper selections. Princeton University Press.
9. Thornton, S. (2021). Karl Popper. The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/fall2021/entries/popper/

IMAGE REFERENCES

1. Sustermans, J. (1635). Portrait of Galileo Galilei [Oil on Canvas]. In le Gallerie degli Uffizi. https://www.uffizi.it/en/artworks/portrait-galileo-galilei-by-justus-sustermans
2. College, O. (n.d.). Einstein’s thought experiment. In Lumen Learning. https://courses.lumenlearning.com/physics/chapter/34-2-general-relativity-and-quantum-gravity/
3. Haymond, B. (n.d.). The Unification of Mass-Energy and Spacetime? [Online Image]. In Thy Mind, O Human. https://www.thymindoman.com/the-unification-of-mass-energy-and-spacetime/
4. ESO, Landessternwarte Heidelberg-Königstuhl, F. W. Dyson, C. Davidson, & A. S. Eddington. (2019). Highest resolution image of the 1919 solar eclipse [Online Image]. In European Southern Observatory (ESO). https://www.eso.org/public/images/potw1926a/
5. Subastan Carta de Albert Einstein en 135 mil dólares. (2019). [Online Image]. In Presencia Noticias. https://presencianoticias.com/2019/04/02/subastan-carta-de-albert-einstein-en-135-mil-dolares/
6. Pensamientos de Karl Popper. (n.d.). [Online Image]. In La Ventana Ciudadana. https://laventanaciudadana.cl/pensamientos-de-karl-popper/



BY SIDDHANT VASUDEVAN

One by one, five red lights illuminate above a sea of cars that lie impatiently on the asphalt. As the lights shine, the sound of roaring engines is replaced with a moment of silence; fans and drivers all holding their breaths—waiting. And then, just as quickly as it came, the silence is shattered, and all five lights disappear. “It’s lights out and away we go!” This iconic phrase marks the beginning of every race in Formula 1, the highest level of motorsport, in which 10 teams—or constructors—design cars every year for their two drivers. The result is twenty cars and drivers racing on tracks across the world. While motorsport fans in the United States may be more familiar with races around an oval track, operated by The National Association for Stock Car Auto Racing (NASCAR), Formula 1 is known for its complex tracks with a mix of sharp turns, sweeping high speed curves, and narrow corridors that drivers must navigate at speeds of up to 320 km/h (200 mph). In order to make these daring feats possible, teams must engineer a car not only capable of conquering these conditions, but one that can do so while jockeying for positions in a race. As a result of the sport’s competitive nature, each and every car embodies the forefront of automotive technology and design. More specifically, attempts to make Formula 1 cars faster have prompted some of the world’s leading aerodynamics research. Since the first Grand Prix in 1950, Formula 1 cars have experienced some of their most dramatic improvements as engineers and researchers have gained a deeper understanding of aerodynamics.1 However, the cars are also becoming increasingly dependent on their aerodynamics. The cars have trouble following one another, reach cornering speeds that many consider dangerous, and are becoming so fast that we may be nearing the human limit. In recent years, the sport has witnessed an increase in rules and regulations that make racing closer—rather than faster.2 The focus is now on making the racing more competitive, and as a result, the rate at which Formula 1 cars are getting faster may be plateauing.3 The science of motorsport is confined by the necessity of a delicate balance between entertainment and pushing engineering limits. Now, that balance is beginning to be tested.

THE BIRTH OF DOWNFORCE

At the beginning of Formula motorsport, the cars were not a complex amalgamation of wings, panels, and shapes that manipulate how air flows around the car. In order to minimize aerodynamic drag (the resistive force on a body as it moves through air), early Formula 1 car designers worked to make their cars smaller and more streamlined, which led to a lot of thin, rounded, bullet-like cars.4 This concept was already well established in the design of planes, but it had a fundamental flaw when it came to getting cars around a race track—cars must interact with the ground.5 While these streamlined shapes allowed the car to cut through the air with minimal resistance, they contrastingly made the cars struggle to go around any corners. At times, the air traveling under the cars generated lift—lifting the cars into the air and flipping them.6 Throughout the 1960s, one team began developing a new approach. Colin Chapman, the founder of the automotive company Lotus, is largely credited with changing the philosophy of aerodynamics on Formula 1 cars from ‘attempting to go fast despite the air passing the car,’ to ‘going faster with the help of air passing the car.’ His work began from the understanding that cars become significantly faster around the corners when they make better contact with the ground. Since the friction between a tire and the ground pulls a car inward during a corner, one can increase the cornering speeds by increasing the force with which the car is pushed downward.5 In order to increase this force, Chapman could have added weight to the car; however, it takes more energy to accelerate, decelerate, and change the direction of a heavy body than it takes for a lighter one. Therefore, Chapman looked to aerodynamics.

On a plane, the curvature of an aerofoil creates a pressure gradient above and below the wing since the air following the curved upper surface will decrease in pressure. Because there is more pressure below the wing, the wing is pushed upward.7 Since Chapman wanted the opposite effect, he flipped the conventional airplane wing upside down so that instead of generating lift, it generated what is known as downforce.8 Though these wings added drag, the benefits around corners were able to offset any decrease in straight-line speed.3 Thus the Lotus 49, first driven at the 1968 Monaco Grand Prix, was the first Formula 1 car to incorporate “wings,” marking the beginning of an aerodynamics revolution in the sport.9 Continuing to optimize the downforce of their cars, Chapman and his team raced the Lotus 78 for the first time in 1977.

Figure 1: An inverted airplane aerofoil generates downforce. The curvature of a wing creates a region of lower pressure and higher pressure; by orienting the low pressure region below the wing, the wing is pulled downward.

By lowering the car to be just barely above the ground, shaping the bottom of the car into a smooth wing profile, and placing walls on either side, the car was able to create narrowing tunnels for the air under the car, a concept known as “ground effect downforce.”8 In other words, the car was suctioned to the ground because the air flowing underneath traveled much faster at a low pressure.10 The Lotus 78 and its successor, the Lotus 79, were extremely successful, with the latter winning 8 of the 16 races in the season and securing the championship for the team and driver by a comfortable margin.11 However, Lotus was no longer the only ones developing downforce, and in 1978, the Brabham team created a car that pushed the limit of how far teams were willing to go. The Brabham BT46B, designed by Gordon Murray, took ground effect to the extreme by attaching a fan to the back of the car that sucked more air underneath the car, thereby generating even more downforce than the natural flow of air around the car.12

If you have ever wondered why golf balls have indents in them, the answer is aerodynamics. When golfers realized that older, more dented balls traveled further than their smooth counterparts, researchers investigated the phenomenon. Normally, a sphere moving through the air creates a pocket of low pressure behind it which acts as a vacuum and pulls the sphere backward. However, the dimples on a golf ball make the boundary layer of air more turbulent, giving it more energy to stay on the surface of the ball and allowing the air to fill in more of the low pressure zone behind the ball.13 Formula 1 cars have been designed in a similar fashion; they have been “dented” in all the right places in just the right way to make them move through the air and produce downforce efficiently.14 In fact, the cars today can produce more than their weight in downforce, which means that theoretically—and at high enough speeds—they could be driven on the ceiling.15
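
The qualitative picture above can be made concrete with the standard relations for aerodynamic force. These are textbook expressions offered as an illustrative aside, not formulas from the article’s sources. Along a streamline, Bernoulli’s principle ties faster flow to lower pressure, and the resulting downforce grows with the square of speed, which is also why a car whose downforce exceeds its weight could, in principle, drive on the ceiling:

\[
p + \tfrac{1}{2}\rho v^{2} = \text{const}, \qquad
F_{\text{down}} = \tfrac{1}{2}\,\rho\, v^{2}\, C_{L}\, A, \qquad
F_{\text{down}} > m g \;\iff\; v > \sqrt{\frac{2 m g}{\rho\, C_{L} A}},
\]

where ρ is the air density, v the car’s speed, C_L the downforce coefficient, A a reference area, m the car’s mass, and g the gravitational acceleration. The quadratic dependence on v is why aerodynamic grip matters most in fast corners and contributes little at low speed.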

CAUGHT IN DIRTY AIR

Although aerodynamic innovation has driven the sport for years, many incredible innovations of the past have fallen. Years ago, each car looked wildly different from the next, as each constructor paved their own radical design; however, cars today look almost identical since innovations that are extreme are likely to be removed so that racing remains competitive.15 For example, ground effect was greatly limited from the sport even though it was the most significant aerodynamics innovation of the time, and the BT46B “fan car” was banned from competing, even though it won its first—and only—race.12 Starting next year, even some of the basic aerodynamic surfaces on the cars are coming under the scrutiny of Formula 1’s governing body. Currently, Formula 1 car aerodynamics have become increasingly complex as the teams strive to outdo one another. But, the issue with this approach is that in creating the most optimized aerodynamic setup, the quality of the racing is decreasing. For instance, F1 cars today struggle to overtake each other, because when one car comes behind another, it is hit by what is known as “dirty air.”16 The aerodynamic surfaces have all been designed to work in clean, smooth and non-turbulent airflow; the air behind a car is anything but that.17 All the layers of complexity and innovation leave a trail of turbulence behind each car.14 In attempting to overtake, a car can lose grip and ultimately struggle to remain competitive. A large part of the allure of racing is to see the cars overtaking each other in daring, brilliant moves—shaking up the outcome of each race. And thus, to address dirty air, 2022 marks a major change in Formula 1: the cars are being completely redesigned.

Figure 2: The Lotus 78 was the first car which utilized “ground effect” downforce. The channels on either side of the car led air into large curved surfaces which acted as wings across the entire underside of the car.

THE FUTURE OF FORMULA 1

The aerodynamic surfaces on the car are being reshaped—and ultimately simplified—in a bid to make the cars more capable of coping with turbulent air. The current cars create significant amounts of turbulent air, especially from the wheels. Since the cars are meticulously designed to work in smoothly flowing air, all of this turbulence that is pushed out from the sides and behind the car can cause cars behind to experience downforce losses of up to 45%. For 2022, teams and the governing body claim to have found ways to drop that number down to 14%.18 The front wing and new additional wings over the wheels plan to control the wake from the wheels and keep it over the body of the car, while the aerodynamic components throughout the body and rear wing will push the air up and over the cars behind.19 With the simplified geometry of the car comes more resilience to turbulence, which therefore allows the cars to closely follow behind each other.14 Though many other major components such as the engines are unlikely to have major changes, the aerodynamic regulations mark a new era of Formula 1, and as one of the largest changes since 2014, it will likely reset the playing field. Currently, improvement comes at the cost of entertainment, so as part of the redesign for next year, Formula 1 is taking a step back. With all these aerodynamic simplifications and new regulations considered, the cars next year are almost certainly going to be slower than the ones this year. But the simple fact is that Formula 1 is a sport; it is entertainment as much as it is engineering. Perhaps that is its allure. There is something so fundamentally beautiful in putting engineering on the competitive world stage—inspiring the next generation of engineers and demonstrating the cutting edge of human capabilities. So when “it’s lights out and away we go”—when we take the plunge in this new direction—we can do so with an understanding. An understanding that for many, these cars are more than just a source of entertainment: they are beautiful symbols of humanity’s engineering potential.


Figure 3: The aerodynamics of a golf ball. The dimples on the ball allow the air to follow the surface of the ball for longer. This creates a narrower region of low pressure behind the ball than is created by a smooth sphere.



Figure 4: The 2022 Formula 1 cars are more sleek with simplified aerodynamic components.

Acknowledgements: This article was peer reviewed by Chris Ohanian, who has worked on aerodynamic development at Ferrari F1.

REFERENCES

1. Codling, S. (2017). Speed read F1: The technology, rules, history and concepts key to the sport. Motorbooks.
2. Mafi, M. (2007, November 21). Investigation of turbulence created by ... - f1-forecast.com. Congress Center Dresden, Germany. Retrieved February 17, 2022, from https://www.f1-forecast.com/pdf/F1-Files/Investigation%20of%20Turbulence%20Created%20by%20Formula%201%20Cars%20with%20CFD.pdf
3. Katz, J. (2021). Aerodynamics in motorsports. Proceedings of the Institution of Mechanical Engineers, Part P: Journal of Sports Engineering and Technology, 234(4), 324–338. https://doi.org/10.1177/1754337119893226
4. Fields, Joshua, “How Advancements in Aerodynamics Improves the Performance of Formula 1 Racecars” (2015). Honors Theses. 300. Retrieved February 17, 2022, from https://digitalworks.union.edu/theses/300
5. Toet, W. (2013). Aerodynamics and aerodynamic research in Formula 1. The Aeronautical Journal (1968), 117(1187), 1–26. doi:10.1017/S0001924000007739
6. Perkins, C. (2020, July 28). Why some race cars kept backflipping in the late 1990s. Road & Track. Retrieved February 17, 2022, from https://www.roadandtrack.com/motorsports/a25949868/mercedes-clr-le-mans-crash-analysis/
7. Babinsky, H. (2003, November). How do Wings Work? IOP. Retrieved February 17, 2022, from http://www3.eng.cam.ac.uk/outreach/Project-resources/Wind-turbine/howwingswork.pdf
8. Ayushman. (2020, June 29). Aerodynamics in formula 1. The GSAL Journal. Retrieved February 17, 2022, from https://thegsaljournal.com/2020/06/28/aerodynamics-in-formula-1/
9. Hasanovic, V. (2018, April 3). Formula 1 Aerodynamics - Introduction. F1technical.net. Retrieved February 17, 2022, from https://www.f1technical.net/features/21555
10. Ogawa, A., Mashio, S., Nakamura, D., Masumitsu, Y., Minagawa, M., & Nakai, Y. (2009). Aerodynamics analysis of Formula one. f1-forecast.com. Retrieved February 17, 2022, from http://f1-forecast.com/pdf/F1-Files/Honda/F1-SP2_21e.pdf
11. Tech Tuesday: The Lotus 79, F1’s Ground Effect Marvel. Formula 1® - The Official F1® Website. (2018, August 21). Retrieved February 17, 2022, from https://www.formula1.com/en/latest/technical/2018/8/tech-tuesday-retro-lotus-79.html
12. Somerfield, M. (2020, October 7). Banned: The full story behind Brabham’s F1 ‘fan car’. Retrieved February 17, 2022, from https://us.motorsport.com/f1/news/banned-tech-brabham-bt46b-fan/4808235/
13. Veilleux, T., & Simonds, V. (2005, September 19). How do dimples in golf balls affect their flight? Scientific American. Retrieved February 17, 2022, from https://www.scientificamerican.com/article/how-do-dimples-in-golf-ba/
14. Hasanovic, V. (2018, August 14). Formula 1 Aerodynamics – Basics of Aerodynamics and Fluid Mechanics, part II. F1technical.net. Retrieved February 17, 2022, from https://www.f1technical.net/articles/
15. BBC. (2019, March 15). Formula 1: The secret aerodynamicist reveals design concepts. BBC Sport. Retrieved February 17, 2022, from https://www.bbc.com/sport/formula1/47527705
16. Toet, W. (2019, April 18). Willem Toet explains...the 2019 F1 aerodynamic dilemma: Race Tech Magazine. Race Tech Magazine. Retrieved February 17, 2022, from https://www.racetechmag.com/2019/03/willem-toet-explains-the-2019-f1-aerodynamics-dilemma/
17. Neborn, J. J., & Somiy, R. G. (n.d.). The International Vehicle Aerodynamics Conference. Durham University.
18. Elson, J., Forster, K., Oxley, M., Williams-Smith, J., & Hughes, M. (2021, July 26). How F1’s 2022 rules should bring closer racing: Aero changes explained. Motor Sport Magazine. Retrieved February 17, 2022, from https://www.motorsportmagazine.com/articles/single-seaters/f1/behind-the-scenes-of-f1s-new-2021-rules-and-why-they-could-work
19. Miller, C. (2021, July 16). New 2022 F1 car promises better aerodynamics, closer racing. Car and Driver. Retrieved February 17, 2022, from https://www.caranddriver.com/news/a37038474/2022-f1-car-revealed/

IMAGE REFERENCES

20. Car Aerodynamics Basics and How-To Design Tips cont…. (n.d.). Build Your Own Race Car.
21. Mario Andretti, Lotus 78, Monza. (2020). Flickr. Retrieved February 17, 2022, from https://www.flickr.com/photos/52605354@N06/50033136942.
22. Gu, J. (n.d.). Microscopic Surface Textures Created by Interfacial Flow Instabilities. SemanticScholar. Retrieved February 17, 2022, from https://www.semanticscholar.org/paper/Microscopic-Surface-Textures-Created-by-Interfacial-Gu/4139173fbaf8ff0f397c238a3cf3c5a14b676321.
23. racesimstudio. (2021). Livery Templates - Formula Hybrid X 2022 & EVO - Assetto Corsa.


Rewriting Textbooks With Single-Particle Tracking Microscopy

INTERVIEW WITH DR. ROBERT TJIAN BY ELIZABETH CHEN, LAURENTIA TJANG, AND ANANYA KRISHNAPURA

Dr. Robert Tjian is a professor of biochemistry and molecular biology at the University of California, Berkeley. He received his PhD from Harvard University in 1976 and joined UC Berkeley as faculty in 1979. During his decades at Berkeley, his research interests have revolved around gene regulation through transcription factors, which has led him to study cancers and cell differentiation. Dr. Tjian was named a Howard Hughes Medical Institute investigator in 1987 and served as president of the institute from 2009 to 2016. He also served as the director of the Berkeley Stem Cell Center and the faculty director of the Li Ka Shing Center for Biomedical and Health Sciences. In this interview, we discuss Dr. Tjian’s current research on novel transcription factor mechanism modeling found through single-particle tracking microscopy.

BSJ: What are chromatin loops, and how do they fit into the model of DNA extrusion?

RT: One of the things that we discovered back in the ‘80s was that humans have a gene body, a promoter, and an enhancer. In simple organisms like bacteria or phages, the promoter, the enhancers, and everything that controls the gene are very close to each other on the DNA. In higher organisms like humans, the enhancer, which activates the promoter, can be thousands of kilobases away. How does that enhancer know which promoters it is supposed to talk to? The model that has come around to answer this question is referred to as “DNA extrusion.” In this model, DNA, which is flexible, is presumed to form a “loop” that enables promoter-enhancer communication. The bigger the loop is, the further apart the distance between the enhancer and promoter. Many papers from many labs write about how two proteins in particular, CTCF (CCCTC-binding factor) and cohesin, work together to form a protein complex that wraps around DNA and works as a kind of doorstop to stop DNA from looping continuously. When Joe Decker first discovered this mechanism, it was initially thought that this might explain enhancer-promoter relations. Our lab and Dr. Anders Hansen, the senior author of a paper we collaborated on, also wanted to better understand this possibility, so we started testing the importance of this function by making mutations to knock out cohesin or CTCF.1 Surprisingly, when we did that, we got rid of the DNA loops, but the transcription was not affected. We now know that these loops are just structural components that help us condense chromatin. If you get rid of those loops, there is no effect on function. This is a really important lesson for people to understand: correlation does not give you causality.

BSJ: How did you discover the interaction between cohesin and CTCF?

RT: Here is where novel technology comes into play. When your BSJ predecessors interviewed me in 2000, my lab was working on in vitro biochemistry. Back then, there were only a few ways you could use to try to understand the biology within cells. One was to do a mutational analysis to assess the importance of the gene to a certain function. The other way was in vitro reconstitution, where we tear the cell apart, purify the components that make up the biological machinery we are interested in, put the machinery together in a test tube, and then observe the reaction that it causes. Until recently, that is what I did to understand functionality at a biochemical level. However, I began to realize that through the reconstitution method, you cannot exactly replicate what is happening inside a cell because you have taken the biological machinery out of the context of its normal situation. I really wanted to study this machinery in the context of the living cell. In other words, instead of pulling it out of the living cell, getting rid of everything else and looking at it in isolation, what I really wanted to ask is: “How does this machine work in the context of the whole living cell or even in the whole organism, in vivo?” Based on reconstitution experiments and genetics, we had found that cohesin and CTCF work together. They comprise a complex together that binds to DNA, but we did not really understand the dynamics of the reaction. In other words, we could measure the reaction based on where it started and ended, but we did not know the pathway in between. Based on the limits of available techniques, we had to study the biology in snapshots since we did not have a “camera” that could capture the movement of biological molecules in action like a movie. Nobody thought that we could ever achieve that because the methods available, like X-ray crystallography and cryo-electron microscopy, allowed you to see molecules, but those molecules had to be in dead samples that were frozen and blasted with an X-ray, or hit with an electron. As a result, Eric Betzig, who was a physicist at the Howard Hughes Medical Institute (HHMI), understood that what we really wanted was a microscope that could measure the movement of molecules in live cells. In 2010, Dr. Betzig and I got together, and we figured out how to make such a microscope. I did not think that in my lifetime, I would ever be able to do that. For the first time in my life, I could actually watch transcription factors moving around, including CTCF and cohesin.

“We had to study the biology in snapshots since we did not have a ‘camera’ that could capture the movement of biological molecules in action like a movie. . . . what we really wanted was a microscope that could measure the movement of molecules in live cells.”

BSJ: How did this development affect the study of protein complexes?

RT: Later, Anders came from Harvard to study with me and Xavier Darzacq because he knew we had the one microscope in the world that could actually do this. He then measured the dynamic movement of CTCF and cohesin using the embryonic stem cell system. He observed residence time of the DNA binding event, which is basically how long something is bound to DNA before it leaves. We suspected that cohesin would probably bind in a pretty stable manner due to its ring-like structure that surrounds DNA, and we measured the residence time of cohesin to be 25-30 minutes. We expected the same range of residence time by CTCF because they are in the same complex, but he got a shocking finding: the residence time of CTCF was one minute. That was a revolutionary and foundational discovery, and it started to change our entire view of how protein complexes bind. Everything we ever thought we understood about protein and macromolecular interactions is probably wrong. The differences in timescales are probably in the order of several magnitudes. Certain transcription factors that we thought had a residence time of 20 minutes, 30 minutes, an hour, or a day instead bound to DNA for 300 milliseconds. It is as if we can just take the textbook and throw it out the window.
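
Residence time, as described above, is typically estimated from how long individual tracked molecules remain bound before they leave or disappear. The snippet below is a minimal, generic sketch of that idea, written for this article; it is not the analysis code used by the lab, and real pipelines additionally correct for photobleaching and tracking losses. The dwell times are invented for illustration.

```python
def mean_residence_time(dwell_times_s):
    """Maximum-likelihood estimate of the mean dwell time, assuming bound
    periods are (approximately) exponentially distributed."""
    return sum(dwell_times_s) / len(dwell_times_s)

def survival_curve(dwell_times_s, t_grid):
    """Fraction of molecules still bound at each time point in t_grid."""
    n = len(dwell_times_s)
    return [sum(1 for d in dwell_times_s if d >= t) / n for t in t_grid]

# Hypothetical dwell times (seconds) from imaginary CTCF-like tracks.
dwells = [12.0, 45.3, 80.1, 5.2, 60.7, 33.3, 150.4, 22.8, 71.9, 9.6]
tau = mean_residence_time(dwells)
print(f"estimated mean residence time: {tau:.1f} s (~{tau / 60:.1f} min)")
for t, s in zip([0, 30, 60, 120], survival_curve(dwells, [0, 30, 60, 120])):
    print(f"P(still bound at {t:>3} s) = {s:.2f}")
```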

BSJ: In your paper with Dr. Anders Hansen, “Distinct Classes of Chromatin Loops Revealed by Deletion of an RNA-Binding Region in CTCF,” you discuss how chromatin loops are controlled by an internal RNA-binding region (RBRi). What is the exact role of this region?

RT: It is general knowledge that most molecules in aqueous solution move by Brownian motion. That means any molecule has total freedom to diffuse throughout the volume of its vessel. There should be no constraints on the movement of a molecule; it should be able to travel and cover the entire volume of its vessel, the cell. The speed with which the molecule moves depends on its size, temperature, and the viscosity of the solution. That is classical Brownian motion, and everything that we ever imagined about molecules in aqueous solution is governed by this principle. However, when Anders tracked the movement of CTCF, he was shocked—it was moving by non-Brownian motion. This means that when we mapped out the angles and trajectories of the way CTCF travels in the live cell, it preferred to go back to where it came from rather than go somewhere else. This is called anisotropic diffusion. Very few people have ever seen such a thing in a living cell. Anders then did the classical genetic experiments to find the part of the molecule that is causing it to travel in this fashion, and it was the RNA binding domain.

Figure 1: RNA binding region (RBRi) on CTCF mediates clustering of CTCF in DNA.1 CCCTC-binding factor (CTCF) is a highly conserved transcription factor that binds DNA and brings it together to form DNA loops in the DNA extrusion model. Both models above refer to the RBRi-dependent CTCF loop class.

Figure 2: Gain of function mutations in ENL protein promote the development of Wilms’ tumor.2 Eleven nineteen leukemia (ENL) proteins allow for appropriate transcription levels for normal kidney development. When ENL is mutated in a certain position, it increases self-association and activation of RNA Polymerase II, resulting in aberrant gene activation that contributes to the development of Wilms’ tumor.
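
The “angles and trajectories” analysis described above can be illustrated with a small sketch: for each consecutive pair of displacement vectors in a single-particle track, compute the turning angle; an excess of angles near 180° indicates the back-and-forth, anisotropic motion Dr. Tjian mentions. This is a generic illustration written for this piece, with an invented example track, and it is greatly simplified relative to published single-particle tracking pipelines such as the one in reference 1.

```python
import math

def turning_angles(track):
    """Angles (degrees) between successive displacement vectors of a 2D track,
    given as a list of (x, y) positions from consecutive frames."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(track, track[1:], track[2:]):
        v1 = (x1 - x0, y1 - y0)
        v2 = (x2 - x1, y2 - y1)
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue                      # skip zero-length steps
        cos_t = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
        angles.append(math.degrees(math.acos(cos_t)))
    return angles

def fold_anisotropy(angles, width=30.0):
    """Ratio of 'backward' (near 180 deg) to 'forward' (near 0 deg) steps.
    Roughly 1 for isotropic Brownian motion; above 1 when the particle tends
    to return toward where it came from."""
    back = sum(1 for a in angles if a >= 180.0 - width)
    fwd = sum(1 for a in angles if a <= width)
    return back / fwd if fwd else float("inf")

# Hypothetical track positions (microns), for illustration only.
track = [(0.0, 0.0), (0.1, 0.0), (0.05, 0.01), (0.12, 0.0), (0.2, 0.01)]
angles = turning_angles(track)
print("turning angles:", [round(a, 1) for a in angles])
print("180/0 fold anisotropy:", round(fold_anisotropy(angles), 2))
```

Published analyses pool many thousands of tracks and account for localization error; the sketch only captures the core geometric idea of the anisotropy measurement.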

BSJ: Your results demonstrated that the loss of CTCF RBRi deregulates 5,000 genes after four days, possibly leading to the development of diseases such as cancer. Are there any functional similarities among these genes?

RT: Our results demonstrated that there were certain classes of genes that were really dependent on CTCF and its RNA binding domain, and there were some genes that were not. This indicates to us that there is probably more than one mechanism involved in chromatin loop control. The thing about biology is that, due to billions of years of evolution, things are never as simple as you think. Currently, this research is being continued by Anders at MIT. In time, I think this research is going to have a definite impact on processes like drug discovery, but we are still too far from being able to say that right now. There will need to be more research into what other complex is taking over the RNA binding function outside of CTCF’s RBRi.

BSJ

: You have also published research discussing the role of a chromatin reader mutation in causing Wilms’ tumors.2 What are Wilms’ tumors, and how do they relate to leukemia?

RT

BSJ

: What is special about the target genes of ENL gain of function mutations that their upregulation promotes oncogenic activity?

RT

: ENL is basically functioning like an oncogene. It is turning on genes that are causing the cells to replicate faster and causing carcinogenic functions. ENL is a chromatin reader, so I think a lot of people are interested in how it controls gene expression and how chromatin marks are read by ENL. It is measuring methylation or acetylation on chromatin, specifically on histones. There is a relationship between the marks and how genes are either turned on or repressed, but it is still very mysterious as to why, in a particular cell type—in this case the kidney—this particular set of mutations has oncogenic effects, but in other cells, it does not happen.

BSJ

: We interviewed you in our journal’s 2000 issue, “Special Report on Biotechnology,” about your new biotechnology company Tularik, Inc. In your opinion, how do you think the field of biotechnology has evolved since then?


RT

: The field of biotechnology continues to explode. The first wave of biotechnology was in the late ’70s and was led by Genentech and Chiron. At the time, the field was called biologics, where you express and purify proteins like insulin or growth hormone. Tularik was founded in the ’90s during the second wave, where we began using molecular biology to find bioavailable drugs. Since then, of course, the development and use of so many other techniques, such as antibody treatments, CRISPR, RNA inhibition, and non-coding RNAs, have resulted in many different modalities of drugs.





The latest company that I started is called EIKON Therapeutics, which was formed in 2019. Within months after the start of the company, COVID happened, but it did not slow us down. EIKON Therapeutics is now booming. The company centers on single-particle tracking microscopy, which is what allows us to watch individual molecules move in live cells. When drugs hit their target, they can change the target’s speed or binding capability. Thus, single-particle tracking microscopy gives us the ability to directly observe whether the drug is actually hitting your target. We are the only company in the world that can do that right now. I think of EIKON as Tularik 2.0. Tularik changed the whole model of drug discovery, and EIKON is doing it again, but with completely different technologies. Most current pharma companies are based on biology, but half of EIKON consists of engineers, because we have to build a microscope, have robots, and use machine learning and AI to interpret the data. Single-molecule tracking data is not something you, as a human being, can analyze, so machines have to do the interpretation as you are generating, literally, terabytes of data per day. Data processing is four orders of magnitude faster than what we did in the lab. The field of biotechnology is changing very dramatically.

BSJ

: In our previous interview with you in 2000, you also expressed that the scientific field is completely dominated by speakers of the English language. Considering the growing emphasis on diversity more recently, has this changed at all?

RT

: That is a tough question to answer because I live in a privileged bubble here in the Bay Area and at Cal, the number one public university in the U.S. So it is really hard for us to understand the challenges faced by people, including scientists, in other parts of the world. The field is still dominated by English. Even though my partner is French, everything we do is in English. I am afraid that English is still the dominant scientific language. I think this is true in many science fields even outside biology, such as physics, chemistry, and computer science.

BSJ

: In your opinion, what is the most important step you have taken in your scientific research role?

RT

: I have been an independent scientist ever since I started here when I was 28 years old. When I came back to Cal from being president of HHMI, I did something very unusual. I combined my lab completely with a young faculty member, Xavier Darzacq. There are huge advantages to this fusing process. One might see an imbalance, since I am a senior faculty member while he is a junior one, but we turned this unbalanced situation to our advantage because we completely trust each other. It allows him to learn about Cal much faster and allows him a much bigger budget when we fuse our lab budgets into one. On my side, I have a young colleague who brings a completely new skill set to the table; he understands machine learning and the microscopes we hope to use and develop. All the grad students and postdocs in my lab now have two mentors who teach them very different things, and the diversity of technologies we have in the lab has expanded by a factor of three. I think co-mentoring and teamwork like this is a trend for the future of academia. Our combined science is greater than the sum of each of our work separately by a large measure. We are also able to recruit better people and inspire different generations of scientists. Fusing our labs is probably one of the most revolutionary things I have done.

REFERENCES

1. Hansen, A. S., Hsieh, T. S., Cattoglio, C., Pustova, I., Saldaña-Meyer, R., Reinberg, D., Darzacq, X., & Tjian, R. (2019). Distinct classes of chromatin loops revealed by deletion of an RNA-binding region in CTCF. Molecular Cell, 76(3), 395–411. https://doi.org/10.1016/j.molcel.2019.07.039
2. Wan, L., Chong, S., Xuan, F., Liang, A., Cui, X., Gates, L., Carroll, T. S., Li, Y., Feng, L., Chen, G., Wang, S. P., Ortiz, M. V., Daley, S. K., Wang, X., Xuan, H., Kentsis, A., Muir, T. W., Roeder, R. G., Li, H., Li, W., … Allis, C. D. (2020). Impaired cell fate through gain-of-function mutations in a chromatin reader. Nature, 577(7788), 121–126. https://doi.org/10.1038/s41586-019-1842-7



Let’s Take a TRIP into Mental Health
BY ANNA CASTELLO


HISTORY OF PSYCHEDELICS

In 1943, Swiss scientist Albert Hofmann voyaged into a lysergic acid diethylamide, or LSD, “acid trip.” Upon taking approximately 250 μg of LSD, about twice the typical dose one would take today, Hofmann recorded feeling dizzy and having the desire to laugh, as well as a “most severe crisis.”1,2 His experience inspired him to distribute LSD to any researcher interested in studying the compound. As interest in the compound grew, researchers began to investigate similarly shaped molecules that have been used for thousands of years in many Indigenous cultures: psilocybin, found in mushrooms; mescaline, found in cacti; and N,N-dimethyltryptamine (DMT), found in the psychoactive drink ayahuasca (see Figure 1). They found that these molecules have chemical structures similar to serotonin, a neurotransmitter in our brain responsible for the regulation of mood, sleep, and other processes. Due to this structural similarity, these compounds can enter the bloodstream and act as agonists—chemicals that bind to a receptor (in this case, mainly the 5-HT2A and 5-HT2B receptors in the brain) and incite a biological response.3 These hallucinogenic drugs, known as the “classic psychedelics,” have the power to drastically alter one’s perception of reality for anywhere between three to twelve hours, depending on the drug, dose, and person.

On May 13th, 1957, Life magazine published an article titled “The Discovery of Mushrooms That Cause Strange Visions,” introducing many Americans to psychedelics for the first time. Featured in the article, a New York banker recounts his experience with a medicine woman named María Sabina, who performed a mushroom-based healing ritual.4 The story spread, and by the 1960s the government had opened a federally funded trial at the Spring Grove Mental Health Facility in Catonsville, Maryland to test whether LSD could help with treating mental disorders.5 News of the trial’s promising results spread across the United States, fascinating scientists and the public alike. This sentiment, however, abruptly changed as psychedelics became entwined with the counterculture movement. The movement’s strong anti-Vietnam and anti-government ideals threatened Nixon’s government, leading to a mass demonization of psychedelics in the media

and by the government. These compounds quickly became classified as Schedule I drugs: a category of drugs considered to have “high potential for abuse” and “no currently accepted medical use,” making them entirely illegal.6 Even Life magazine reversed its stance on psychedelics, labeling them an “exploding threat.”7

This vilification of psychedelics included “educational” and government-funded films warning of the risks of chromosomal damage, birth defects, fatal accidents, suicide, psychosis, and brain damage—most of which have been disproved.8 In fact, research suggests that psychedelics are neither toxic nor physically or chemically addictive, and there seems to be no evidence suggesting that chromosomal damage and birth defects are caused by psychedelic use.8 In reality, the most common danger currently associated with these compounds comes from illegally purchased psychedelics laced with drugs such as PCP or methamphetamine. Very infrequently, LSD psychosis, which mirrors symptoms of schizophrenia, can develop in individuals who are predisposed to psychosis. However, a study surveying 20,000 psychedelic users found no significant association between psychedelic use and mental illness, demonstrating how rare such a psychotic break is.9,10

THE DMN AND DEPRESSION

Psychedelics are thought to affect a set of brain regions called the default mode network (DMN). The DMN is composed of several brain regions that are active when an individual is not focused on stimuli from the outside world.

Figure 1: Chemical structure of classic psychedelics compared to serotonin. These are the chemical structures of the classic psychedelics. Due to their similar shape, they bind to 5-HT, or serotonin, receptors in a similar manner. Depending on the psychedelic, they can be eaten, smoked, or drunk in a brew.



Figure 2: Connectivity in a placebo brain and one on psilocybin. The image on the left represents the connections a brain makes when not on psilocybin, while the image on the right is that of someone on psilocybin. The brain on psilocybin shows significantly more connections, which may help rewire old thoughts and habits and thereby help combat mental illness.

These regions have been studied using functional MRI (fMRI), which measures the small changes in blood flow that occur with brain activity. The DMN can be thought of as the self, unaffected by a person’s surroundings, and is responsible for mind-wandering, planning the future, and looking at the past. Dysfunction of the DMN has been found in people suffering from major depressive disorder, bipolar disorder, schizophrenia, and other mental illnesses.11 With depression being one of the leading mental health issues globally, psychedelics’ effects on the DMN make them promising therapeutic drugs.

In fact, a 2017 study looked at the brains of 19 patients with treatment-resistant depression by using fMRI to examine changes in blood flow before and after treatment with psilocybin. All 19 patients reported feeling less depressed after one week of treatment, and nearly half reported less depression after five weeks. The study also found decreased cerebral blood flow in the temporal cortex, including the amygdala—which controls anger, fear, sadness, and aggression.12 Interestingly, however, this study found increased activity in the DMN, a surprising result given that increased activity had previously been found to be a marker of depressed mood.13 Multiple studies have in fact linked depression to either higher or lower levels of DMN activity.13,14,15,16 It may be irregular DMN activity levels, in either direction, that depress mood, which could explain why studies of how psychedelics affect DMN activity are inconsistent. Clearly, the mechanism by which psychedelics alter the brain to help with depression in a therapeutic setting is not yet fully understood. Fortunately, as the stigma associated with psychedelics decreases, many more research institutions are starting to take a closer look at the neural mechanisms affected by these drugs.

PSYCHEDELICS AND ADDICTION

Depression is not the only affliction psychedelics may remedy; they are also being explored as a treatment for addiction. Addictive compounds work by short-circuiting the reward system in the brain through the release of “feel-good” neurotransmitters such as dopamine.


These neurotransmitters saturate the nucleus accumbens, a region in the brain responsible for pleasure and impulse control.17 Psychedelics seem to hijack this process, rewiring the brain and making new neural connections—and thus developing in individuals the ability to overcome alcoholism and other forms of addiction (see Figure 2).18 Though more research on the mechanisms by which psychedelics accomplish this is necessary, studies suggest that psychedelics can aid addiction recovery tremendously. For instance, a Johns Hopkins study looked at 15 psychiatrically healthy cigarette smokers who had all previously attempted to quit. They went through a 15-week program in which they ingested increasing amounts of psilocybin accompanied by cognitive behavioral therapy, with the goal of quitting smoking. Eighty percent of the participants remained smoke-free six months after the study, and a follow-up study found that high levels of smoking abstinence were maintained even a year and a half after the treatment.

"Surprisingly, one component that seems to be crucial for psychedelics to have a therapeutic effect is the desire for the trip to be transformative" tools to quit smoking, both pharmacological and behavioural, provide, on average, a mere 35% success rate. Though this pilot study was not perfect, it highlights the possibilities that psychedelics can offer.19,20 Surprisingly, one component that seems to be crucial for psychedelics to have a therapeutic effect is the desire for the trip to be transformative. There is substantial evidence from the ‘60s and ‘70s when individuals commonly used both psychedelics and smoked cigarettes that demonstrates the importance of intention. Because these individuals had no intention to quit smoking, they remained cigarette smokers long after stopping the use of hallucinogens. Researchers believe psychedelic users must make a con-



Researchers believe psychedelic users must make a conscious decision to use these compounds as a tool to stop smoking in order for the trip to have the desired effect. In fact, many experts agree that the original method of consuming psychedelics with guides and shamans might indeed be the best way to reach psychological breakthroughs. Though today’s psychologists, psychiatrists, and mental health workers cannot be directly compared to shamans and light workers, as shamans have extensive roles in the community that go beyond guiding people through a psychedelic experience, the importance of guidance is not being ignored in today’s clinical setting.

MANY MORE TRIPS TO GO

Research is not yet at the point where psychedelics can be fully prescribed outside clinical studies, since there is still a lot to be discovered about how these drugs interact with the brain. However, we are on the edge of a psychedelic renaissance, one that promises an intriguing new understanding of the brain and one’s psyche.

REFERENCES

1. Holze, Vizeli, P., Ley, L., Müller, F., Dolder, P., Stocker, M., Duthaler, U., Varghese, N., Eckert, A., Borgwardt, S., & Liechti, M. E. (2021). Acute dose-dependent effects of lysergic acid diethylamide in a double-blind placebo-controlled study in healthy subjects. Neuropsychopharmacology, 46(3), 537–544. https://doi.org/10.1038/s41386-020-00883-6
2. Hofmann, A., & Ott, J. (1980). LSD, my problem child. McGraw-Hill.
3. Romano, G. (2019). Neuronal receptor agonists and antagonists. Materials and Methods, 9. https://doi.org/10.13070/mm.en.9.2851
4. Wasson, R. G. (1957, May 13). Seeking the magic mushroom. Life, 49(19), 100–120.
5. Neher, J. (1967). LSD: The Spring Grove experiment (54 minutes, black and white, 1967). CBS Reports. https://doi.org/10.1176/ps.18.5.157-a
6. United States Drug Enforcement Administration. (2003). Drug Enforcement Administration: A tradition of excellence, 1973–2003. U.S. Dept. of Justice, Drug Enforcement Administration.
7. Moore, G., Schiller, L., Farrell, B., et al. (1966, January 01). LSD: The exploding threat of the mind drug that got out of control: Turmoil in a capsule, one dose of LSD is enough to set off a mental riot of vivid colors and insights, or of terror and convulsions. Life, 60, 12.
8. Nichols. (2016). Psychedelics. Pharmacological Reviews, 68(2), 264–355. https://doi.org/10.1124/pr.115.011478
9. Vardy, & Kay, S. R. (1983). LSD psychosis or LSD-induced schizophrenia?: A multimethod inquiry. Archives of General Psychiatry, 40(8), 877–883. https://doi.org/10.1001/archpsyc.1983.01790070067008
10. Johansen, & Krebs, T. S. (2015). Psychedelics not linked to mental health problems or suicidal behavior: A population study. Journal of Psychopharmacology (Oxford), 29(3), 270–279. https://doi.org/10.1177/0269881114568039
11. Whitfield-Gabrieli, & Ford, J. M. (2012). Default mode network activity and connectivity in psychopathology. Annual Review of Clinical Psychology, 8(1), 49–76. https://doi.org/10.1146/annurev-clinpsy-032511-143049
12. Carhart-Harris, Roseman, L., Bolstridge, M., Demetriou, L., Pannekoek, J. N., Wall, M. B., Tanner, M., Kaelen, M., McGonigle, J., Murphy, K., Leech, R., Curran, H. V., & Nutt, D. J. (2017). Psilocybin for treatment-resistant depression: fMRI-measured brain mechanisms. Scientific Reports, 7(1), 13187. https://doi.org/10.1038/s41598-017-13282-7
13. Bluhm, Williamson, P., Lanius, R., Théberge, J., Densmore, M., Bartha, R., Neufeld, R., & Osuch, E. (2009). Resting state default-mode network connectivity in early depression using a seed region-of-interest analysis: Decreased connectivity with caudate nucleus. Psychiatry and Clinical Neurosciences, 63(6), 754–761. https://doi.org/10.1111/j.1440-1819.2009.02030.x
14. Mulders, van Eijndhoven, P. F., Pluijmen, J., Schene, A. H., Tendolkar, I., & Beckmann, C. F. (2016). Default mode network coherence in treatment-resistant major depressive disorder during electroconvulsive therapy. Journal of Affective Disorders, 205, 130–137. https://doi.org/10.1016/j.jad.2016.06.059
15. Zhu, Wang, X., Xiao, J., Liao, J., Zhong, M., Wang, W., & Yao, S. (2012). Evidence of a dissociation pattern in resting-state default mode network connectivity in first-episode, treatment-naive major depression patients. Biological Psychiatry, 71(7), 611–617. https://doi.org/10.1016/j.biopsych.2011.10.035
16. Chen, Wang, C., Zhu, X., Tan, Y., & Zhong, Y. (2015). Aberrant connectivity within the default mode network in first-episode, treatment-naïve major depressive disorder. Journal of Affective Disorders, 183, 49–56. https://doi.org/10.1016/j.jad.2015.04.052
17. Facing addiction in America: The surgeon general’s report on alcohol, drugs and health. (2016). U.S. Department of Health and Human Services, Office of the Surgeon General.
18. Peters, & Olson, D. E. (2021). Engineering safer psychedelics for treating addiction. Neuroscience Insights, 16, 26331055211033847. https://doi.org/10.1177/26331055211033847
19. Johnson, Garcia-Romeu, A., Cosimano, M. P., & Griffiths, R. R. (2014). Pilot study of the 5-HT2AR agonist psilocybin in the treatment of tobacco addiction. Journal of Psychopharmacology, 28(11), 983–992. https://doi.org/10.1177/0269881114548296
20. Johnson, Garcia-Romeu, A., & Griffiths, R. R. (2017). Long-term follow-up of psilocybin-facilitated smoking cessation. The American Journal of Drug and Alcohol Abuse, 43(1), 55–60. https://doi.org/10.3109/00952990.2016.1170135

IMAGE REFERENCES

1. Banner: made by author.
2. Figure 1: made by author.
3. Figure 2: Petri, Expert, P., Turkheimer, F., Carhart-Harris, R., Nutt, D., Hellyer, P. J., & Vaccarino, F. (2014). Homological scaffolds of brain functional networks. Journal of the Royal Society Interface, 11(101), 20140873. https://doi.org/10.1098/rsif.2014.0873



THE SUNSET OF TWILIGHT SLEEP
A story about how one drug cocktail changed the course of American obstetrics.

BY JONATHAN KUO

On quiet nights, residents strolling within Boston’s Fenway neighborhood may have heard chilling screams echo through the equally chilly air, punctuated by the softer wails of newborns taking their first breaths. A particularly inquisitive—or concerned—person might trace the ruckus back to 197 Bay State Road, on an estate flush with the Charles River where, in 1914, a private hospital was established.1,2 This was not your typical hospital. Here, expecting mothers gathered under the care of the reputed Dr. Eliza Taylor Ransom and a fleet of nurses, eager to experience the novelties of “twilight sleep” and the painless childbirth they had heard it could deliver.2

At the time, twilight sleep described a cocktail of minute amounts of two drugs: morphine and scopolamine. Morphine, as we know now, floods opioid receptors scattered across the brain, spinal cord, gut, and every other location where neurons and electrical impulses assemble, delivering signals that diminish the experience of pain, at least temporarily.3 Scopolamine, meanwhile, targets and blocks a special type of receptor called the muscarinic acetylcholine receptor, which can be found in the brain, in junctions where neurons speak to muscles, and in clusters of neurons scattered across the body called ganglia.4 In the case of twilight sleep, when scopolamine acts on the brain, it interferes with the signaling that encodes thoughts and experiences into neural circuits, causing short-term amnesia.5 Administered in this cocktail, the drug pair blocked pain and memories, allowing mothers to go through the excruciating process of labor without remembering a single thing.

Figure 2: Jane Erin Emmett, said to be the first American child born in Freiburg, Germany, with the twilight sleep method. In Freiburg’s Frauenklinik, where the technique was created, doctors called it Dämmerschlaf.


Figure 1: A sketch of a mother holding up her baby, born by the twilight sleep method. The sketch was published in The Ladies’ World, a magazine for women’s interests said to print “over one million copies monthly.”

For a generation of expecting mothers with few options for pain relief during childbirth, twilight sleep seemed in many ways like a miracle. Existing treatments—comprising primarily either ether, chloroform, or nitrous oxide (more commonly called ‘gas’)—were not particularly reliable, after all, nor were they available to every patient. According to historian Judith Walzer Leavitt, “anesthesia use revealed wider practitioner variation than any other obstetric intervention” throughout the latter half of the nineteenth century.6 Some physicians used these agents in every birth they oversaw; others decried them as dangerous and deadly. Others still reserved them for the fraction of cases when “patients have demanded it with an emphasis which could not be resisted.”6 And even when a patient could get access to anesthetics, there was no standard regimen for dosing these drugs. Physicians administered them through whichever means they found comfortable, whether that be by a laced cloth held to the nose, or a glass filled with drug-soaked cotton, or a series of needle pricks, or some other method entirely.

Dr. Ransom’s twilight sleep hospital was the first hospital of its kind in America, and Dr. Ransom was one of the first twilight sleep practitioners in the States. You would be hard-pressed to find another person better suited for the job. Ransom had learned its precise method, which detailed a series of timed injections, in Germany, under the direct supervision of the two physicians who had developed the practice. A renowned specialist in mental and nervous diseases, she had credentials from Boston University’s School of Medicine (where she was top of her class and one of the first female graduates), Johns Hopkins Medical School, Harvard University, and the Neurological and Pathological Institute of New York.7 As a mother of two children, she was also intimately familiar with the pain of childbirth—she called it “needlessly [going] through hell”—and, consequently, the immense relief that something like twilight sleep could offer to mothers anxious about the birthing process.8



Over the next two years, Ransom would deliver over three hundred babies with the twilight sleep method. “None of them was attended by the slightest mishap,” she would proudly tell The Boston Sunday Post.9 In hospitals across the nation, hundreds of doctors would join her in delivering thousands of babies under the marvelous magic of twilight sleep.

The method received coverage in popular magazines and hundreds of newspapers, local and national. Movement leaders rallied in department stores—where women commonly shopped—and created organizations and associations to publicize and praise the treatment. For a brief moment, twilight sleep seemed ascendant, poised to revolutionize the course of American obstetrics and transform the birthing experience from one of pain to one of preference.

And it did, but perhaps not in the way that its advocates might have wanted. The technique engendered critique from some mainstream physicians as fierce as the female support underpinning its spread. Some derided the technique as “pseudo-scientific rubbish” and “quackish hocus-pocus”—as nonsense spouted by women who did not know what they were talking about.10,11,12 Some claimed that the method would cause babies or their mothers to be “sickly,” “weak-minded,” or “insane.”8 One doctor called for censures from the American Medical Association, charging that twilight sleep’s organizers’ “first and only aim is the money part attached to it.”13 Supporters would vehemently claim that negative effects during childbirth were due to physicians executing twilight sleep in the wrong way, or that they were simply unrelated. But as scattered case studies and press coverage reported infants falling ill, or mothers dying during twilight sleep labor, the old medical guard slowly but surely began to reject twilight sleep.

Figure 4: A photograph of Dr. Eliza Taylor Ransom, ca. 1916.

Over the following decades, twilight sleep lost its grasp on American obstetrics, although the drug cocktail continued to be used in other contexts, like as a general anesthetic in a wide variety of surgeries.14,15 By the 1960s, few, if any, obstetricians used the method at all. Now, while morphine is still widely used for pain relief, scopolamine rarely makes its way to the clinic. While it is sometimes used to reduce nausea, it is especially cautioned against in the first trimester of pregnancy because it can cause “limb and trunk deformities.”16


Figure 3: An advertisement for Dr. Ransom’s hospital.

So, who was right? Was it the physicians who insisted that twilight sleep was untested and dangerous, or was it the women who believed that twilight sleep was, in Ransom’s words, “the greatest boon for motherhood the world has ever known”? The historical record seems to indicate that both perspectives have some credence. The pain relief offered by twilight sleep was not always complete: “observers witnessed women screaming in pain during contractions, thrashing about, and giving all the outward signs of ‘acute suffering,’” explains Leavitt, even though twilight sleep was supposed to offer painless labor.6 And, when administered callously, both scopolamine and morphine can have dangerous side effects: both drugs cause depression, or reduced activity, of the central nervous system, so an overdose of either can cause a patient to stop breathing.17,18 Even so, the usual alternatives of chloroform, ether, and gas had their medical problems too: like with twilight sleep, too low a dosage could simply fail to offer pain relief, and too high a dosage could cause death. Without statistics to compare their efficacy—an impossible demand of the archives, considering that treatment at the time was generally unstandardized and that we still lack a robust method to measure the intensity of pain, anyway—it is difficult to discern whether twilight sleep or its alternatives were physiologically best for a mother in labor.

But unlike its alternatives, twilight sleep was especially potent in the way it socially transformed American medicine. Before twilight sleep, pain management during labor was primarily determined by either midwives, who had limited access to pharmaceutical treatments, or by physicians, who generally conceived of pain relief as a medical decision to be made by the physician, rather than as an ongoing dialogue between patient and doctor. As part of the first-wave feminist movement, twilight sleep offered a medical birthing experience centered around the desires of the mother, rather than the demands of the practitioner, at a time when feminists were fighting for rights like suffrage and access to education, too. Like suffrage and education, twilight sleep gave women a choice: to be relieved from and forget the pain of childbirth, rather than to be subject to the whims of the physician, who likely viewed women’s pain with “an aura of distrust” that, historian Elinor Cleghorn has argued, has “been enfolded into medical attitudes over centuries.”19 At a moment when pain and fear often overshadowed the birthing process, twilight sleep offered feminists the right to forget the process, and to be free from an experience that medical knowledge at the time had deemed largely inescapable. That, I think, was the true power of twilight sleep.



Figure 5: A chart that might be used to keep records during twilight sleep labor.

REFERENCES

1. Twilight sleep babies captivate. (1915, May 25). The Boston Globe, 16.
2. Painless childbirth. (1914, October 16). The Boston Globe, 10.
3. Brownstein, M. J. (1993). A brief history of opiates, opioid peptides, and opioid receptors. Proceedings of the National Academy of Sciences of the United States of America, 90(12), 5391–5393. https://doi.org/10.1073/pnas.90.12.5391
4. Renner, U. D., Oertel, R., & Kirch, W. (2005). Pharmacokinetics and pharmacodynamics in clinical use of scopolamine. Therapeutic Drug Monitoring, 27(5), 655–665. https://doi.org/10.1097/01.ftd.0000168293.48226.57
5. Caine, E. D., Weingartner, H., Ludlow, C. L., Cudahy, E. A., & Wehry, S. (1981). Qualitative analysis of scopolamine-induced amnesia. Psychopharmacology, 74(1), 74–80. https://doi.org/10.1007/BF00431761
6. Leavitt, J. W. (1986). Brought to bed: Childbearing in America, 1750 to 1950. Oxford University Press.
7. Bacon, E. M. (1916). The book of Boston: Fifty years’ recollections of the New England metropolis. Book of Boston Company.
8. Early, E. (1930, September 19). Unusual party of 200 twilight sleep babies. Miami News-Record, 13.
9. Two twilight sleep babies before camera. (1916, April 9). Boston Sunday Post, 25.
10. ‘Motherhood without fear’. (1914). Journal of the American Medical Association, 63(25), 2233–2234. https://doi.org/10.1001/jama.1914.02570250063025
11. Gillespie, W. (1915). Analgesics and anaesthetics in labor: Their indications and contra-indications. The Ohio State Medical Journal, 11(October), 611–615.
12. Leavitt, J. W. (1980). Birthing and anesthesia: The debate over twilight sleep. Signs, 6(1), 147–164. https://doi.org/10.1086/493783
13. Hallarman, H. (1915). The “Twilight Sleep.” Journal of the American Medical Association, 64(5), 459. https://doi.org/10.1001/jama.1915.02570310079036
14. Neville, W. S. T. (1929). Morphine-scopolamine narcoanæsthesia in nasal operations. Proceedings of the Royal Society of Medicine, 22(11), 1431–1434. https://doi.org/10.1177/003591572902201103
15. Wade, H. (1938). Vesical exclusion: President’s address. Proceedings of the Royal Society of Medicine, 31(3), 277–292. https://doi.org/10.1177/003591573803100335
16. Herrell, H. E. (2014). Nausea and vomiting of pregnancy. American Family Physician, 89(12), 965–970.
17. Schiller, E. Y., Goyal, A., & Mechanic, O. J. (2022). Opioid overdose. In StatPearls. StatPearls Publishing.
18. Corallo, C. E., Whitfield, A., & Wu, A. (2009). Anticholinergic syndrome following an unintentional overdose of scopolamine. Therapeutics and Clinical Risk Management, 5(5), 719–723. https://doi.org/10.2147/tcrm.s6732
19. Cleghorn, E. (2021, June 17). Medical myths about gender roles go back to ancient Greece. Women are still paying the price today. Time.

IMAGE REFERENCES

1. Jacobs, W. L. (February, 1915). [Illustration of a woman in dark dress holding up baby]. In A Twilight Sleep Talk by Twilight Sleep Mothers, The Ladies’ World, 36(1).
2. Rion, H. (February, 1915). First American child born in Freiburg [Photograph]. In The truth about twilight sleep. McBride, Nast, and Company.
3. [Scanned image of advertisement for Twilight Sleep Maternity Hospital]. (1918). The New England Medical Gazette, 53, 634.
4. See reference #7, page 306.
5. Hellman, A. M. (1915). Chart for recording details of labor [Scanned image]. In Amnesia and analgesia in parturition (twilight sleep). Paul B. Hoeber.


DR. DETLEF SCHUPPAN: Innovating Unprecedented Treatments for Celiac Disease
BY MARINA ILYAS, JACOB MARTIN, ESTHER LIM

Detlef Schuppan, MD, PhD, is a professor of Medicine, Gastroenterology, and Hepatology at the Medical Center of the Johannes Gutenberg University and at Beth Israel Deaconess Medical Center, Harvard Medical School, and the founding director of the Institute for Translational Immunology. He is recognized as a leading expert in celiac disease, and his research into celiac disease, fibrotic diseases, cancer, and autoimmunity has led to numerous developments in the field. He discovered tissue transglutaminase as an autoantigen in celiac disease in 1997, which led to a paradigm shift in celiac disease research and the development of a highly reliable diagnostic test. In this interview, we hear his perspectives on celiac disease treatment and learn about exciting progress in the field.

BSJ

: What is celiac disease, and how does it affect the human body?

DS

: Celiac disease is a nutritional disease that has a fairly well-defined genetic basis. It is an immunological intolerance to gluten proteins in wheat, barley, rye, and related cereals, and these gluten proteins represent about 90% of the cereal proteins present. Gluten is digested to a smaller extent than other food proteins, so everyone has fragments of somewhat intact gluten peptides that reach the small intestine. Part of this is taken up by the gut mucosa, the lining of the gut containing the lamina propria, a connective tissue lining that harbours the largest immune system of the body. Normally, little bits of certain nutrients, like the gluten peptides, get into the lamina propria of the gut wall where the immune system senses them but still maintains a level of active immunosuppression. This means that the gut is primed to have tolerance for foods, which is important for the maintenance of the organism. Hence, normal people without celiac disease will not have an adverse response to gluten. However, it is much harder for people with celiac disease to be tolerant to gluten. Due to a certain genetic predisposition, their bodies recognize gluten peptides as something bad that has to be fended off. After ingestion of gluten, an immune reaction is triggered in the upper small intestine, which leads to intestinal inflammation and the classical signs of celiac disease. When you take biopsies, you see various degrees of atrophy of the villi, which are finger-like protrusions in the small intestine that are important for nutrient uptake.


Consequently, their intestines cannot adequately absorb nutrients like minerals, vitamins, and amino acids, which can result in malnutrition, anemia or osteoporosis. This can even lead to intestinal cancer in adults and growth problems in children.


Figure 1: Atrophy of villi in the small intestine. T-cell activation and the subsequent release of cytokines lead to inflammation of the epithelial cells and shortening of the villi.

Adults diagnosed with late-onset celiac disease often also suffer from other intestinal problems, joint pain, difficulty in concentration, and associated autoimmune diseases (from multiple sclerosis to rheumatoid arthritis to thyroid diseases) and not necessarily just severe diarrhea.

BSJ

: What is the genetic basis of the disease?

DS

: The genetic background of celiac disease is in DQ molecules. They are immunological molecules that present gluten to T-cells, thus activating the T-cells. Human leukocyte antigen (HLA)-DQ2 and -DQ8 molecules are necessary but not sufficient genetic predispositions for celiac disease. What that means is that everyone who has celiac disease has HLA-DQ2 or -DQ8, but not everyone with HLA-DQ2 or -DQ8 has celiac disease. About 30-40% of most populations have HLA-DQ2 or -DQ8, but only a small percentage of this group develops celiac disease. If you belong to the 60-70% who do not have HLA-DQ2 or -DQ8, you will not develop celiac disease. There are also many other extrinsic factors that may lead to celiac disease and its symptoms. The gut microbiome plays a role, and antibiotic treatments can alter the microbiome in early childhood. Certain intestinal infections by viruses can trigger a higher sensitivity to gluten. Non-steroidal anti-inflammatory medications, like Advil, if consumed in large amounts, can disturb the intestinal barrier and make it more sensitive to anything that comes in.
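To put the “necessary but not sufficient” point in rough numbers (an illustrative back-of-the-envelope calculation, not a figure from the interview): celiac disease is commonly estimated to affect on the order of 1% of the population, so if roughly 35% of people carry HLA-DQ2 or -DQ8, then only about 1 in 35 carriers, or roughly 3%, ever develops the disease.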


BSJ

: What are the current methods to diagnose celiac disease?

DS

: We have developed a non-invasive autoantibody blood test, now used worldwide, that can diagnose active celiac disease on a population level. Autoantibodies are antibodies that are directed against the body’s own proteins and, more rarely, non-protein molecules. In celiac disease, you have a very specific autoantibody which is directed against an enzyme called tissue transglutaminase (TG2) that is ubiquitous in the body and also present in the gut lining. We discovered TG2 as the celiac autoantigen in 1997. If you do upper endoscopy and take small biopsies, you are able to see the intestinal lesions typical of celiac disease, namely villous atrophy, crypt hyperplasia, and lymphocytic infiltration. If the autoantibody test is positive and you get confirmation of intestinal lesions by biopsy, you can confirm that the person has celiac disease. For population studies, it is enough to just do the blood test, which is cheap and quickly done, and then confirm the diagnosis by upper endoscopy and biopsy.

BSJ

: Currently, what are the most viable and cost-effective options for treatment in mild to moderate cases of celiac disease? What about for cases of refractory celiac disease?



DS

: The best treatment is a gluten-free diet. A Dutch pediatrician, Willem-Karel Dicke, found that patients’ conditions improved when wheat was scarce in supply during the Second World War. With a chemist, van de Kamer, Dicke found that it was the gluten component of wheat that caused villous atrophy. Hence, the gluten-free diet was established and has been the mainstay of therapy since the early 1950s, as it was highly effective in the majority of patients. Refractory celiac disease is celiac disease that is not responsive to a strict gluten-free diet, and there are two types. Type I is a very high sensitivity to minute amounts of gluten. In gluten-free products, a minor amount of 20 milligrams of gluten per kilogram is allowed. However, patients with type I refractory celiac disease have ongoing complaints and elevated levels of antibodies in the blood and will have an immune reaction to even a few milligrams of gluten per day, sometimes even leading to symptoms like severe diarrhea. Most of these type I patients will never develop malignancy, but it is still difficult to remedy because minute amounts of gluten cannot be avoided even in strict gluten-free environments. Type II refractory celiac disease is pretty rare, and type II patients have signs of an autonomous and malignant process of immune cell clones which, like in cancer, start to proliferate and cause inflammatory damage. You can treat milder forms of type II refractory celiac disease for a long time with drugs directed against T-cells. 50% of patients will not have a disease progression in five years, while 50% will progress to overt malignancy in the form of T-cell lymphoma of the gut. This is very difficult to treat; it usually leads to death within a few months, with the only possible treatment being a bone marrow transplant.

BSJ

: You recently published a study on the efficacy of a transglutaminase inhibitor, ZED1227, at reducing small intestinal damage in celiac patients. Why was there a need for new treatment options for celiac disease, in addition to a gluten-free diet?

DS

: The reason is that the gluten-free diet is difficult to maintain in everyday life.


It will be very important for people with celiac disease to be able to have a standby medication, which would protect them from the ingestion of the minor amounts of gluten that are usually unavoidable in social settings. There have been trials of enzymes that you can take with your meal to degrade the rest of the undigested gluten peptides that cause an immune reaction. These enzymes can be quite efficient in vitro or in some well-controlled in vivo scenarios. But in real life, they were not very efficient, due to mechanical problems of mixing with the ingested food and also because the reactions are pH-dependent. Some very efficient gluten-degrading enzymes have been developed, but they are not yet sufficient to completely get rid of the gluten. Up to now, these approaches have not been successful.

BSJ

: How is transglutaminase involved in the pathogenesis of celiac disease? How did you identify this pathway?

DS

: Tissue transglutaminase (TG2) is the celiac disease autoantigen and an enzyme that modifies the gluten peptides that enter the gut, changing their biophysical properties via induction of a negative charge in the immunogenic (HLA-DQ2/8-presented) gluten peptides. When certain neutral glutamine residues in the gluten peptides are modified into acidic glutamate residues by the action of intestinal TG2, it changes their properties with regard to the immune system, allowing increased binding to HLA-DQ2 and -DQ8 molecules on immune cells in the gut, which in turn causes increased T-cell activation. Hence, we decided to develop a drug based on inhibiting TG2 to prevent this potentiation of the immune response.

Since TG2 is centrally involved in the pathogenic process of celiac disease in almost all cases where the patient has elevated TG2 autoantibodies, such treatment should be effective to very specifically attenuate the inflammatory response to gluten. The idea was that if we have an inhibitor of TG2, we can prevent its reaction with the gluten peptides and thus their immunogenic activation also in vivo.
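As a toy illustration of the charge change described above: the sketch below swaps glutamine (Q) for glutamate (E) at chosen positions and recomputes a crude net charge. The peptide, the targeted positions, and the charge model are simplified assumptions for illustration, not data from the study.

# Crude charge model: only D/E count as -1 and K/R as +1 (near-neutral pH, termini ignored).
NEGATIVE, POSITIVE = set("DE"), set("KR")

def net_charge(peptide: str) -> int:
    return sum((aa in POSITIVE) - (aa in NEGATIVE) for aa in peptide)

def deamidate(peptide: str, positions) -> str:
    """Replace glutamine (Q) with glutamate (E) at the given 0-based positions."""
    chars = list(peptide)
    for i in positions:
        if chars[i] == "Q":
            chars[i] = "E"
    return "".join(chars)

before = "PQPQLPYPQP"              # made-up gluten-like fragment, not a real epitope
after = deamidate(before, [1, 3])  # pretend TG2 deamidates these two glutamines
print(before, net_charge(before))  # PQPQLPYPQP 0
print(after, net_charge(after))    # PEPELPYPQP -2
# The added negative charge is what increases binding to HLA-DQ2 and -DQ8 in the
# mechanism described above.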



Figure 2: Pathogenesis of Celiac Disease. Gluten peptides are transported to the subepithelial lamina propria where they react with TG2. The deamidated peptides are presented to T-cells via HLA-DQ2 or HLA-DQ8, initiating an immune response against the mucosa of the small intestine.

Before the identification of TG2, it was unclear what the autoantigen of celiac disease was, and many renowned research groups tried to find the assumed extracellular matrix component to which the autoantibodies were directed. I was quite knowledgeable about extracellular matrix (connective tissue) components by that time, because I did my PhD thesis on the identification and sequence analysis of basement membrane proteins. I thought it would be possible to find and identify the matrix component that reacts with these autoantibodies. After some erroneous pathways, we used radiolabeled cell cultures of fibroblastic cells and managed to immunoprecipitate TG2 with patients’ autoantibodies, and thus identify it as the autoantigen of celiac disease. Unexpectedly, TG2 is not a matrix protein, but it can associate with the matrix. When it is secreted from cells, it can bind to fibronectin in the matrix, and this causes the pattern that you see with the autoantibodies on tissue sections. That is how our research into TG2 in celiac disease started. We published our findings about TG2 as the autoantigen of celiac disease in Nature Medicine in 1997, and this propelled many research groups to explore how TG2 could be linked to the pathogenesis of gluten-driven celiac disease. My colleagues in Norway and the Netherlands found that TG2 can deamidate gluten peptides in vitro. Since then, research on the pathogenesis of celiac disease has exploded, with thousands of papers published. When I relocated from the US to Germany in 2011, I worked with a clinical developer and a company specializing in transglutaminases to develop a molecule that specifically targets TG2 and, eventually, to produce a clinical drug.


BSJ

: In your study, patients received varying doses of ZED1227 as treatment for celiac disease. What were the most significant findings of your study?

DS

: Our TG2 inhibitor drug is targeted mainly to the intestine. It does not significantly go into the circulation and is very specific to this process of celiac disease. We could not be 100% sure if it would work, so it was a bit courageous to do this. In the so-called phase 1 testing, more than 100 healthy subjects were given increasing doses of the drug for up to a one-week period. Fortunately, they had no side effects. Before that, we did many animal experiments for safety and tolerability, and there were no genetic or other adverse changes observed. In our recent phase 2 clinical study, we did endoscopies on 160 volunteer celiac patients, who were in remission on the gluten-free diet, before and after a daily challenge with three grams of gluten (20% of a usual daily dose) in the form of a cookie for six weeks. Biopsies were examined and assessed for villous atrophy and inflammation by an expert pathologist. Such exact assessment of the biopsies is a complex process because you have to correctly orient the sections to measure the villus height. The blinded evaluation showed that only one quarter of the patients had worse outcomes, while all the others were similar in terms of retained villus height.



Only one quarter of the patients were on the placebo pill and the other three quarters got three different doses of the drug, so this indicated that the drug could work. Upon unblinding, it was then confirmed that the retained villi heights observed in patients were indeed due to ZED1227, and we were extremely happy because, in a way, it became the first drug with proven efficacy.
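A rough way to see why this was encouraging even before unblinding (an illustrative reading of the numbers, not a statement from the study): of the 160 patients, about 40 were on placebo and about 120, roughly 40 per dose group, received ZED1227. If the drug offered no protection, the six-week gluten challenge would be expected to damage the mucosa in all four groups, so the observation that only about a quarter of patients deteriorated, matching the size of the placebo arm, already hinted that the damage was concentrated among those not receiving the drug.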

BSJ

: In 2009, you proposed that TG2 inhibitors and DQ2-blocking peptides could potentially be used to prevent inflammation in celiac disease, since the deamidation of gluten peptides by TG2 and their subsequent presentation by HLA-DQ2/8 are the processes that initiate the adaptive immune response. Now, you have actualized this possibility in your 2021 clinical trial. Could you share with us your thoughts on your research journey in celiac disease?

DS

: As you can see, it took a long time, from the discovery of the enzyme and the input of many other researchers who further explored the mechanisms of this enzyme, to developing this first clinically-proven drug. We discovered the enzyme in 1996, published our findings in 1997, established the clinically highly useful antibody assay in 1998, and finally successfully concluded our phase 2 trial in 2021. That took 25 years.

I think if you work in research, you would know that the process of completing an experiment and confirming those results can take many years. There are many failures, repetitions, and confirmations, and it is even more difficult if you want to get it published in a good journal. You have review processes that can last for two years, usually requiring many more experiments to be done. Research, and especially translational trajectories, are very different from what many people might think.

Figure 3: Reduction of Gluten-Induced Mucosal Damage. The three different doses of the transglutaminase-inhibiting drug, ZED1227, led to a decrease in gluten-induced mucosal damage.



BSJ

: What are some projects you are currently working on in your lab? What is the future direction of your research in the field of celiac disease?

DS

: We have several areas in the lab now that are quite interesting and also promising. We have various projects and clinical studies on autoimmune diseases. Regarding the role of nutrition, and especially wheat, we have just finalized a multiple sclerosis clinical study with and without wheat. We also just completed a study on primary sclerosing cholangitis and familial Mediterranean fever and started studying rheumatoid arthritis and lupus, all being autoimmune diseases that are apparently exacerbated by wheat. Here the trigger is not gluten but the amylase-trypsin inhibitor proteins of gluten-containing grains, a discovery that we made in my group in Boston and first published in the Journal of Experimental Medicine in 2012. Another interesting area is what we call atypical, or type 2, food allergies. That is a novel entity that is the primary contributor to irritable bowel syndrome (IBS), which is very prevalent in many populations. There is a connection between the brain and IBS, between the brain and the gut, and vice versa. Whatever you have in the gut (food components and microbiota, as well as microbial metabolites) influences your well-being dramatically, so we are very interested in the immunological and metabolic communication between the gut and the periphery, and we have several projects on this. Another very big area is fibrotic diseases, where we work on blood markers of fibrosis progression, which is well reflected in certain serum markers, and on the development of antifibrotic therapies, in view of organ scarring (fibrosis) being responsible for roughly 50% of chronic diseases worldwide. We are also working on novel therapies for solid cancers, such as anti-cancer therapeutics that address both cancer cells and the immune system around the cancer, to convert the usual cancer-tolerant local immune response into an active anti-cancer immune response. If we combine such therapies with direct cancer cell growth-inhibiting drugs, we can obtain very good effects. As for celiac disease and medical research, I believe that there is still a good amount of positive challenges and possibilities for good translational research, which has always been my vocation, being both a basic scientist and a clinician. I think the very important thing for the future is to maintain interest in the life sciences, natural sciences, and medicine, and to have people who are intrinsically motivated to contribute to the field. I consider this a priority for myself and a prominent task for senior scientists, something we owe to the next generation and to society.

REFERENCES

1. Headshot: [Photograph of Detlef Schuppan]. Obtained from Dr. Detlef Schuppan, image reprinted with permission.
2. Figure 1 & Figure 3: Schuppan, D., Mäki, M., Lundin, K. E. A., Isola, J., Friesing-Sosnik, T., Taavela, J., Popp, A., Koskenpato, J., Langhorst, J., Hovde, Ø., Lähdeaho, M.-L., Fusco, S., Schumann, M., Török, H. P., Kupcinskas, J., Zopf, Y., Lohse, A. W., Scheinin, M., Kull, K., … Greinwald, R. (2021). A randomized trial of a transglutaminase 2 inhibitor for celiac disease. New England Journal of Medicine, 385(1), 35–45. https://doi.org/10.1056/nejmoa2032441
3. Figure 2: Schuppan, D., Junker, Y., & Barisani, D. (2009). Celiac disease: From pathogenesis to novel therapies. Gastroenterology, 137(6), 1912–1933. https://doi.org/10.1053/j.gastro.2009.09.008
4. Fritscher-Ravens, A., Pflaum, T., Mösinger, M., Ruchay, Z., Röcken, C., Milla, P. J., Das, M., Böttner, M., Wedel, T., & Schuppan, D. (2019). Many patients with irritable bowel syndrome have atypical food allergies not associated with immunoglobulin E. Gastroenterology, 157(1). https://doi.org/10.1053/j.gastro.2019.03.046
5. Junker, Y., Zeissig, S., Kim, S.-J., Barisani, D., Wieser, H., Leffler, D. A., Zevallos, V., Libermann, T. A., Dillon, S., Freitag, T. L., Kelly, C. P., & Schuppan, D. (2012). Wheat amylase trypsin inhibitors drive intestinal inflammation via activation of toll-like receptor 4. Journal of Experimental Medicine, 209(13), 2395–2408. https://doi.org/10.1084/jem.20102660
6. Verdu, E. F., & Schuppan, D. (2021). Co-factors, microbes, and immunogenetics in celiac disease to guide novel approaches for diagnosis and treatment. Gastroenterology, 161(5). https://doi.org/10.1053/j.gastro.2021.08.016
7. Dieterich, W., Ehnis, T., Bauer, M., Donner, P., Volta, U., Riecken, E. O., & Schuppan, D. (1997). Identification of tissue transglutaminase as the autoantigen of celiac disease. Nature Medicine, 3(7), 797–801. https://doi.org/10.1038/nm0797-797



CULTURED MEAT: GROWING MEAT IN THE LAB
BY JANE LI

Humans and meat consumption have an extensive evolutionary relationship. Homo sapiens are omnivores with natural preferences for both meat and plant consumption, owing to a history of hunting and eating meat that helped humans survive food scarcity. Nowadays, meat continues to play an important role in many cultures by contributing to a balanced diet thanks to its nutritional richness—meat provides protein, vitamin B12, and more.1 However, meat needs to be farmed and processed before entering the market, and here a problem arises: the conventional production process of meat negatively impacts the environment.


This process requires large crop quantities for animal feed and a high proportion of agricultural land for raising animals—more than three-quarters of the world’s arable land.2 Expansion of animal agriculture has posed challenges to the environment due to the emission of massive amounts of greenhouse gases into the air, a problem the Food and Agriculture Organization (FAO) dubbed “Livestock’s Long Shadow” in 2006. Indeed, recent statistics from the FAO show that total emissions from livestock account for 14.5% of all anthropogenic greenhouse gas emissions.3 The conventional production process, focusing on production efficiency, fails to recognize other impacts such as climate change, animal welfare, and sustainability.

To mitigate these adverse effects, scientists have developed a biotechnology that advocates for sustainable meat production. Implementing this biotechnology, which uses stem cell culturing techniques, could address the significant global threats of industrial livestock farming.4

THE COMPLEXITY OF MEAT

Cultured meat is also referred to as in-vitro meat, lab-grown meat, cell-based meat, and clean meat. Although the technology of producing cultured meat for human consumption is relatively new, the idea appeared in many science fiction stories during the nineteenth and twentieth centuries.

FALL 2021 | Berkeley Scientific Journal

67


Figure 1. Derivation of Embryonic stem cells

“It will no longer be necessary to go to the extravagant length of rearing a bullock in order to eat its steak. From one ‘parent’ steak of choice tenderness it will be possible to grow as large and as juicy a steak as can be desired.”

Figure 2. Satellite cells (ASCs)

of rearing a bullock in order to eat its steak. From one ‘parent’ steak of choice tenderness it will be possible to grow as large and as juicy a steak as can be desired.” 5 However, the structural complexity of meat itself, consisting of diverse tissue types and intricate systems, is difficult to replicate outside of the body of animals. It is these unique qualities of meat that give it its distinct taste and texture—accordingly meat from mammals, poultry, and seafood differs based on muscle, fiber type, and thus quality and taste.6 Thus, replicating different types of meat in a laboratory requires a comprehensive understanding of meat’s structure and its components. It is due to this intricacy of meat that the idea of cultured meat did not receive significant attention in the scientific community until recent decades. In May 2005, the first comprehensive article elaborating on cultured meat was published in the journal Tissue Engineering. With continuing research around the world, scientists have gained a better understanding of meat structure and have found that necessary components of meat include skeletal muscle cells and adipocyte, or fat, tissues. Today, the possibility of growing meat in a lab is made possible by a very unique type of cell that could be used to culture muscle and fat tissues—stem cells.

STEM CELL TECHNOLOGY FOR CULTURED MEAT

Stem cells are unspecialized cells in the body that possess the ability to self-renew, differentiate, and develop a specialized functionality.7 The two main groups of stem cells, embryonic and adult stem cells, differentiate at the stage of derivation—when they are derived for culturing. Embryonic stem cells (ESC), derived during early development (blastocyst stage), can be cultured in laboratories and differentiated into any cell type in the body (pluripotent). In contrast, adult stem cells (ASC), found in adult tissues, are rarer and have a limited ability to differentiate into specialized tissues (multipotent or unipotent).8 Due to their ability to differentiate into different cell types, especially skeletal muscle and adipocyte tissues, stem cells from various development stages have been proposed as viable cells for meat culturing. Satellite cells, the adult stem cells associated with skeletal muscle, which can easily differentiate into myotubes and myofibrils (skeletal muscle fibers), were identified by Mauro (1961) as functional in muscle regeneration after injury. Since these cells tend to mature and specialize as skeletal muscle fibers, they are considered to be ideal cells for skeletal muscle engineering.10 Besides ASCs and ESCs, induced pluripotent stem cells (iPSCs)—cells derived from reprogramming cells isolated from somatic tissue to a pluripotent state—are alternative, possible starting sources for cultured meat due to their pluripotency. In 2018, the first stable culture of bovine ESCs was reported to potentially differentiate into cell types necessary for skeletal muscle development.11 More excitingly, the culture could form a stable cell bank that prevents further biopsy—the removal of tissue—from animals. Mesenchymal stem cells (MSC), multipotent adult stem cells that are present in many tissues and which play vital roles in muscle development, could also serve as good starting cells.12

After cell selection, stem cells taken from muscle tissues or directly from embryos proliferate and differentiate into muscle cells, which are then transferred to a scaffold where they grow and develop into larger tissues. During this process, scientists need to provide the cells with an ideal growing environment known as cell culture medium. However, scientists face dilemmas in choosing an ideal culture medium.

Figure 3. Production process of cultured meat

Figure 4. Lab-grown burger created by Mark Post at Maastricht University. Credit: PA IMAGES/ALAMY STOCK PHOTO

CHALLENGES FACING CULTURED MEAT

Currently, cultured meat mainly uses animal-derived components such as fetal bovine serum (FBS) from dead calves, which is unsustainably sourced and contradicts the ethical principle of


slaughter-free cultured meat. As a universal supplement containing hundreds of different proteins and thousands of small-molecule metabolites in unknown concentrations, FBS is both difficult and costly to replace with a synthetic medium.4 Aiming for cell-specific media could be more cost-effective, since FBS could be substituted with components such as hormones and growth factors that are specific to the cell line being grown.13 Once scientists can produce such media safely on an industrial scale, meat can be grown in the laboratory from a biopsy of an animal sample. But scientists are still working to increase the efficiency and accuracy of cultured meat production. A more advanced tissue engineering approach, such as using textured soy protein as a scaffold to support cell attachment and proliferation, may be required to more accurately replicate meat.14 Today, efficiently producing meat safe for consumption in a lab is not yet possible, but it may be on the horizon. As a novel approach, lab-grown meat has the potential to address animal welfare concerns and mitigate climate change through the use of stem cell technology. Stem cell research has gained much attention in the scientific community due to its great potential—applications of stem cell technology and tissue engineering are not limited to cultured meat and have already played vital roles in fields including medicine. Cultured meat research has made much progress in the past decade. In 2013, the world's first biosynthesized burger was produced from bovine stem cells. Although it was produced at a high cost, the success was a milestone in cultured meat research. Still, this area is in its infancy and requires more research to address current challenges before cultured meat can appear in our diets in the coming decades.
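To get a feel for the scale-up challenge behind the culture-medium question, the sketch below works through a rough doubling calculation. The starting cell count, per-cell mass, and target batch size are illustrative assumptions, not figures from the studies cited in this article; real yields depend heavily on the cell line and culture conditions.

```python
import math

# Illustrative assumptions only (not from the cited studies):
starting_cells = 1e6   # cells recovered from a small biopsy (assumed)
cell_mass_g = 1e-9     # roughly one nanogram per muscle cell (order-of-magnitude guess)
target_mass_g = 1000   # one kilogram of cell mass

target_cells = target_mass_g / cell_mass_g                       # cells needed for the batch
doublings = math.ceil(math.log2(target_cells / starting_cells))  # population doublings required

print(f"Cells needed: {target_cells:.1e}")
print(f"Population doublings required: {doublings}")
# Under these assumptions: about 1e12 cells, i.e. roughly 20 doublings,
# each of which consumes culture medium—one reason cheap, animal-free media matter.
```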



REFERENCES

1. Pereira, P. M. de C. C., & Vicente, A. F. dos R. B. (2013). Meat nutritional composition and nutritive role in the human diet. Meat Science, 93(3), 586–592. https://doi.org/10.1016/j.meatsci.2012.09.018
2. Foley, J. A., Ramankutty, N., Brauman, K. A., Cassidy, E. S., Gerber, J. S., Johnston, M., Mueller, N. D., O'Connell, C., Ray, D. K., West, P. C., Balzer, C., Bennett, E. M., Carpenter, S. R., Hill, J., Monfreda, C., Polasky, S., Rockström, J., Sheehan, J., Siebert, S., … Zaks, D. P. M. (2011). Solutions for a cultivated planet. Nature, 478(7369), 337–342. https://doi.org/10.1038/nature10452
3. FAO - News Article: Key facts and findings. https://www.fao.org/news/story/en/item/197623/icode/
4. Post, M. J., Levenberg, S., Kaplan, D. L., Genovese, N., Fu, J., Bryant, C. J., Negowetti, N., Verzijden, K., & Moutsatsou, P. (2020). Scientific, sustainability and regulatory challenges of cultured meat. Nature Food, 1(7), 403–415. https://doi.org/10.1038/s43016-020-0112-z
5. Treich, N. (2021). Cultured meat: Promises and challenges. Environmental and Resource Economics, 79(1), 33–61. https://doi.org/10.1007/s10640-021-00551-3
6. Zhang, X., Owens, C. M., & Schilling, M. W. (2017). Meat: The edible flesh from mammals only or does it include poultry, fish, and seafood? Animal Frontiers, 7(4), 12–18. https://doi.org/10.2527/af.2017.0437
7. Zakrzewski, W., Dobrzyński, M., Szymonowicz, M., & Rybak, Z. (2019). Stem cells: Past, present, and future. Stem Cell Research & Therapy, 10, 68. https://doi.org/10.1186/s13287-019-1165-5
8. Prochazkova, M., Chavez, M. G., Prochazka, J., Felfy, H., Mushegyan, V., & Klein, O. D. (2015). Embryonic versus adult stem cells. In A. Vishwakarma, P. Sharpe, S. Shi, & M. Ramalingam (Eds.), Stem Cell Biology and Tissue Engineering in Dental Sciences (pp. 249–262). Academic Press. https://doi.org/10.1016/B978-0-12-397157-9.00020-5
9. Mauro, A. (1961). Satellite cell of skeletal muscle fibers. The Journal of Biophysical and Biochemical Cytology, 9(2), 493–495.
10. Zhu, H., Park, S., Scheffler, J. M., Kuang, S., Grant, A. L., & Gerrard, D. E. (2013). Porcine satellite cells are restricted to a phenotype resembling their muscle origin. Journal of Animal Science, 91(10), 4684–4691. https://doi.org/10.2527/jas.2012-5804
11. Yuan, Y. (2018). Capturing bovine pluripotency. Proceedings of the National Academy of Sciences, 115(9), 1962–1963. https://doi.org/10.1073/pnas.1800248115
12. Du, M., Yin, J., & Zhu, M. J. (2010). Cellular signaling pathways regulating the initial stage of adipogenesis and marbling of skeletal muscle. Meat Science, 86(1), 103–109. https://doi.org/10.1016/j.meatsci.2010.04.027
13. van der Valk, J., Brunner, D., De Smet, K., Fex Svenningsen, A., Honegger, P., Knudsen, L. E., Lindl, T., Noraberg, J., Price, A., Scarino, M. L., & Gstraunthaler, G. (2010). Optimization of chemically defined cell culture media—Replacing fetal bovine serum in mammalian in vitro methods. Toxicology in Vitro, 24(4), 1053–1063. https://doi.org/10.1016/j.tiv.2010.03.016
14. Ben-Arye, T., Shandalov, Y., Ben-Shaul, S., Landau, S., Zagury, Y., Ianovici, I., Lavon, N., & Levenberg, S. (2020). Textured soy protein scaffolds enable the generation of three-dimensional bovine skeletal muscle tissue for cell-based meat. Nature Food, 1(4), 210–220. https://doi.org/10.1038/s43016-020-0046-5

IMAGE REFERENCES

15. Figure 1: Hynes, R. O. (2008). US policies on human embryonic stem cells. Nature Reviews Molecular Cell Biology, 9(12), 993–997. https://doi.org/10.1038/nrm2528
16. Figure 2: Muñoz-Cánoves, P., & Huch, M. (2018). Definitions for adult stem cells debated. Nature, 563(7731), 328–329. https://doi.org/10.1038/d41586-018-07175-6
17. Figure 3: Treich, N. (2021). Cultured meat: Promises and challenges. Environmental and Resource Economics, 79(1), 33–61. https://doi.org/10.1007/s10640-021-00551-3
18. Figure 4: Dolgin, E. (2020). Will cell-based meat ever be a dinner staple? Nature, 588(7837), S64–S67. https://doi.org/10.1038/d41586-020-03448-1
19. Cover image: Dolgin, E. (2020). Will cell-based meat ever be a dinner staple? Nature, 588(7837), S64–S67. https://doi.org/10.1038/d41586-020-03448-1



Interview with Dr. Brandon Collins
BY GRACE GUAN, ALLISUN WILTSHIRE, AND ANANYA KRISHNAPURA

Brandon Collins, PhD, is an adjunct professor in the Department of Environmental Science, Policy, and Management at the University of California, Berkeley. Dr. Collins is also a research scientist in a partnership with the U.S. Forest Service, Pacific Southwest Research Station, and UC Berkeley Center for Fire Research and Outreach. Additionally, he is an Associate Editor with the Journal of Forestry. In this interview, we discuss Dr. Collins' research on the history and trajectory of fire management strategies in California as well as factors driving forest change.

BSJ: What sparked your interest in forestry and fire science?

BC: I did not have an "Aha!" moment where I knew that was exactly what I wanted to do, but an influential moment was back in 1991 when I was in middle school and we had the Oakland Hills fire. Although I was not anywhere near the fire—at the time, I was living in Alameda, about 10 miles or so from it—it was still something that hit close to home for me. I remember being on a football field to pick up my brother from practice and ash was visibly falling. From then on, I was really interested in fire and forestry. My dad had an undergraduate degree in forestry from the University of Alberta, so my dad's explanation of the fire coupled with my firsthand experience is probably what got me into it.

BSJ: How does your work as a research scientist at UC Berkeley differ from your work with the U.S. Forest Service?

BC: Currently, I am a partnering scientist with the Forest Service, although I used to be full-time. The hybrid-type position that I am now in allows me to also teach ESPM 134, an upper-division class on the importance of fire, insects, and disease as agents of forest disturbance, as well as at the Forestry Field Camp for UC Berkeley. Overall, though, from a work standpoint there has not been a considerable change. The transition has been relatively seamless. There are several scientists in different research stations at the Forest Service that have affiliations with universities, so it can be a fluid thing to go back and forth between these institutions. It is really a neat partnership and something I wish there was more of.

BSJ: What should the Bay Area expect in terms of the frequency and severity of wildfires in the coming years?

BC: Unfortunately, I cannot offer a great prediction on that. The one thing we have learned, especially in the last four or five years, is that fire is a lot more likely than perhaps it was in the past. We used to have these one-off events like the Oakland Hills fire in the 90s that I mentioned earlier; before that, I think the last big fire in the Berkeley Hills was back in the 20s. Now, I think these kinds of topics are constantly on people's minds because there always seems to be either a fire near us or smoke from something much further away which reinforces this idea of the inevitability of fire. For a long time, people had no direct experience with fire or smoke, so they would not even think about or plan for it. Now, however, I think it is something we can plan for. We cannot say for sure that next year we will have a terrible fire in the Oakland Hills again; there are random components to these events, and sometimes, getting an ignition that we cannot get crews to quickly enough is almost all it takes. My take-home would be that people ought to be thinking about that inevitability—not that fire will happen tomorrow or next year, but that it is more likely than not in our lifetime.

BSJ: What are the different fire management strategies California has employed in the last few centuries?

BC: In pre-Euro-American settlement times, there is plenty of evidence that Native Americans lit fires intentionally to manipulate the forest for different characteristics that were beneficial for them. Some early California settlers also adopted these strategies, particularly for the purposes of improved grazing. There are even some accounts of private foresters who owned timberland and saw a benefit to prescribed burning. At the time, the norm for foresters was to say that fire, in general, was not good. They believed it was hurting regeneration. The majority of foresters did not want fires at all, but there was this small contingent that recognized that if they kept putting fires out, they would eventually have this problem of fuel buildup and overly dense forests. By the 1930s to 1940s, we had pretty much adopted a policy, both in-state and nationally, of full suppression. Any of those contingent foresters that were using fire were either displaced or had their programs completely shut down. We suppressed fire for many decades, and we were very successful at it up until about the 1980s to 1990s. More recently, we are starting to see some of the consequences attached to putting fires out for so many decades. However, we are simplifying it when we say that we went into full suppression as soon as Euro-Americans came; there is a little more nuance to the story. For instance, around the late 1960s and early 1970s, there were a couple of areas in California's national parks where they started to use naturally ignited fire. When lightning started fires, they let them burn and resume their natural ecological role in these forests. Today, while we still practice fire suppression, there is much more recognition of the need for fire. The key issue that we struggle with is how to reimplement this process, especially when we have taken fire out for so long. The forests are not in a condition where we can just flip a switch and automatically decide to let fires burn. Ultimately, while we have a broad agreement that fire is ecologically important, we have not made the leap into actually implementing it at a meaningful scale.

BSJ: In your paper, "A quantitative comparison of forest fires in central and northern California under early (1911–1924) and contemporary (2002–2015) fire suppression," you discovered "no statistical difference in annual number of human-caused fires between the early suppression and contemporary time period." In your opinion, why has this number stayed constant?

BC: That was an interesting finding. The paper was based on a data set we found in this giant ledger sitting at the Forest Service research office in Redding. It supposedly had a comprehensive account of all fires from 1911 to 1924 that were either on or adjacent to national forest land. That data was interesting because I think some people have speculated that fires have gotten so bad recently since there have been so many more human issues; the population is much greater than it was historically, and we have people out camping in the woods that are not very familiar with forests and thus are not as careful as they should be. However, based on the data set in the ledger, that was not the case. We hypothesized that there were people back then setting fires on purpose, perhaps not legally. For example, one of the easiest ways to clean up debris for people working in the woods after a timber harvest was to burn it. There may have also been some carelessness at play; some of the equipment that they would use back then did not have the same ability that our technology has now to arrest sparks. This is all speculation, though. We do not have a great explanation for why the numbers are the way they are. Importantly, what the data does highlight is the fact that we cannot just blame the larger population and more human-caused fires of today as reasons for why we are seeing the types of fires we are seeing.

BSJ: What factors have caused the number of larger-sized fires to increase earlier in the year?

BC: Based on the ledger's data, if today's fires are not due to human ignitions (at least not as frequently as in the past), then we are looking at fuels, climate, or both as possible drivers. These recent fires are well outside of anything seen historically from that data set or from several other reconstructions of historical fires. From a fuel standpoint, forests have changed drastically compared to the past. If we think about forest conditions, there are always inputs (for fire ignition) on the forest floor due to needles dropping, branch breakage, and dead trees. The only way that material is being dealt with is through decomposition, but there is no way the microbes are keeping up with the input rates. That is where fire used to come in historically. Fire would come fairly frequently in our forests every five to ten years or so and consume some of those fuels, generally keeping the density of trees in check. Since we shut fires out and changed care for the forests, we inadvertently increased the ability of fires to spread at a higher intensity. The next factor leading to large-scale fires is climate. The fact that we have longer, warmer dry seasons just gives more opportunity for fire to burn. We couple that with the fuel condition, and we see that we are really in a bad way with regards to the way fires are spreading. Thinking about where we are going to be in the near future with climate tells me we need to go full speed on this problem. That means doing large-scale forest restoration, something we have not yet proven that we can do. There is an element of inevitability to wildfires now; we cannot pretend that we will be able to put them all out.

BSJ: How do active restoration methods differ from each other?

BC: There are two main ways that we can do forest restoration. On the mechanical side, we can cut down trees, later using them for timber or for energy by burning them. On the other side, we can use active fire, which involves a combination of strategies where we use either prescribed fire or "managed wildfire," allowing naturally ignited wildfires to burn under conditions where we could have put them out. This can be risky.

On the mechanical side, the main problem we face is that this method can elicit a lot of distrust from the public, particularly from an environmental protection standpoint. In part, this stems from how decades ago, there were several instances of poorly executed logging where the biggest trees were repeatedly cut. These big trees are really resistant to fire; they are tall, have thick bark, and their crowns are well off the ground. By logging big trees repeatedly for decades, in conjunction with fire suppression, we shifted the character of the forest, harming especially those species that are threatened and endangered. Now, when people see equipment in the woods for mechanical restoration, they think that it is just a euphemism for logging. This is one of the issues with which the Forest Service, in particular, is struggling. We now have many safeguards in place environmentally; on most Forest Service grounds, they generally will cut smaller trees that are most prone to fire while avoiding trees bigger than 30 inches in diameter. These are some of the struggles that we face, but I want to stress that both methods of restoration can be pretty effective. Mechanical restoration is likely a little more precise—we can cut exactly the trees we want. We could write it out and say, "This is how I want the forest structured," and we could have that plan translated almost perfectly. When we use fire, we can have ideas for how we want to use it, but there is some randomness to the process. We have to be willing to take a little bit of uncertainty. However, mechanical thinning is also not a true replicate of what actually went on in the ecosystem. We could think that we wrote a perfect prescription, but the randomness of fire is exactly what some of these forests need to create these varied habitat types. The reality is that it is not a debate between which of the methods is the better strategy; both are necessary.

Figure 1: Percentage of total recorded large (>2,024 ha) fires by month in the early suppression (1911–1924) and contemporary (2002–2015) periods. Note the significant increase in large fires during June and July in the present day versus what was recorded a century ago.

BSJ: You have mentioned that landscape-level analyses could potentially inform a prioritization scheme for achieving large-scale forest resilience to fires. How feasible would implementation of this project be, and what are potential complications?

BC: We struggle with doing true landscape treatments and projects now because of our historical use of something called the "stand scale" for forest management. This method looks at specific parts of the forest (between 20 to 50 acres) at a time to determine what each "block" needs. As such, we have not quite put it all together at a large scale to think about how we can manage for different characteristics across the landscape. We need a change in mindset where the landscape is considered a unit instead of individual pieces of land. Yes, they may be differentiated by us into different blocks, but they are connected at an ecological level, and the presence of fire would influence that entire area. I am not exactly sure if we also need better planning tools for forest managers or if we just require this mindset and cultural shift. I know that people are already talking about this shift, and I think it is happening to a certain extent in some areas, such as the Plumas National Forest. We just struggle sometimes with adapting for different constraints on implementation. For example, we might need to consider protecting or not causing a disturbance within certain areas dependent on the species living there. When we start considering all of those different constraints on the landscape, we sometimes force ourselves into blocking it up into these little chunks. Hopefully, that is something we can overcome.

BSJ: In your paper "Impacts of different land management histories on forest change," you state, "While it is clear that climate has a role in [the recent increase of large fires], the role of forest change cannot be ignored." In the media, climate change is frequently presented as the root cause of forest fires, with far less attention attributed to forest change. What are your thoughts on this disparity?

BC: This has been an issue we have been trying to get across, and our efforts lately have been fairly effective since media attention to fire has increased in the last couple of years, particularly in California. Recently, we have had two unique fire seasons. Last year, we had massive amounts of fire all over the state. This year, in Northern California, we mainly had a couple of massive individual fires that included fires down in giant sequoia country—another focus of the media. It has been a good opportunity for us to highlight the impact of forest change. It is easy to blame the recent fires on climate because things have changed so rapidly. Indeed, the climate has increased the dryness of forests, and the frequency of lightning is perhaps another related issue that has a climate "piece" to it. From a media standpoint, though, just mentioning climate may be the easiest sell because it does not require much accompanying explanation. I am not saying that these individuals in the media are lazy; it takes time to understand the complexity of this issue, factoring in the varied impacts of fire suppression, logging, and accumulation of fuels. When people go to these forests, they mistakenly assume they are looking at the forests' natural conditions since these forests have always appeared the same to them. It is really difficult for them to gain perspective of what these forests looked like historically. It is a harder message to get across how much forests have changed, but, to a certain extent, I think recent efforts are helping. We are now able to use many different data sources to demonstrate some of this change. The point is, it is not climate or fuels individually, but both together that are impacting forest fires. We do not have to have an argument about which one is more important; they both have happened, and they both are still happening. It is easier to manage the forest in the short term rather than the climate, although we should have goals to address the latter issue over the next few decades. However, since we do not have this amount of time to wait with regards to the way fires are burning, my argument is that we have to commit to forest management in the near term while at the same time trying to deal with some kind of climate mitigation over the long term. For some folks, the climate piece alone, unfortunately, meets a narrative that they want to tell. There are a lot of factors that come into trying to tell that story properly, but I think we are making headway on getting both the fuels and the climate piece into it.

BSJ: In January of 2020, you testified in front of the House of Representatives regarding the impact of wildfires on the environment and energy infrastructure. What was this experience like, and what impacts do you hope this hearing has on future policy?

BC: That was a really interesting experience that I was fortunate to be able to have. It was nerve-wracking and a little overwhelming speaking to lawmakers since I was surrounded by people staring at me from stadium-like seating. It was a lot of pressure but a great opportunity to try to get the message out about the role of forest change and the fact that we can actually do something about it with large-scale restorations. I talked a little bit about some of our experiences with doing forest restoration treatments at the university's Blodgett Forest. While some representatives in that subcommittee meeting truly wanted to engage in a conversation, some people were clearly coming at it from a climate standpoint (perhaps to help their constituency) and others were clearly coming at it from the angle that our forest management has been terrible. It was so funny to see that right in front of me. We see it on TV with how politicized everything has been lately. But to see it right in front of your face, about a topic on which you are the expert, and have them telling you their spiel about it was somewhat interesting and funny.


Figure 2: Figure 2a (above) is an area treated for fuel reduction and forest restoration in the early 2000s, which later burned in the 2007 Antelope Complex and again in the 2019 Walker Fire. Figure 2b (left) is an area that severely burned in the 2000 Storrie Fire. The photograph was taken in 2010 following no forest management for a decade following the fire. Both are sites of active research.

BSJ: What are your thoughts on increasing or resuming the implementation of traditional indigenous burning practices?

BC: The idea of how Native Americans historically managed the landscape has always been intriguing to people. Many like to think that indigenous peoples lived in harmony and peace with nature. Although I am sure that was partly true, these communities actually were actively managing forests with fire as their main tool. People are now referring back to the knowledge of these tribes as a way to scale up our implementation of fire. This is a neat development as it makes some people more willing to use fire. Your average firefighter from the Forest Service saying, "We need to burn this land since it has too much fuel," is not as palatable to people compared to someone with a tribal background saying "Here is how we have used fire historically across our landscape. What do you think about doing a similar burn on your land?" I fully support anything that gets more good fire on the landscape. One of the areas where this is already being demonstrated is up in the Klamath Mountains in Northwest California, where tribes have their own licenses to burn as well as great existing partnerships with the Forest Service. It is a neat model that has proven to be highly effective in that area, and it could be expanded with great gains.

REFERENCES

1. Headshot: [Photograph of Brandon Collins]. Brandon Collins.
2. Figure 1: Collins, B. M., Miller, J. D., & Sapsis, D. B. (2019). A quantitative comparison of forest fires in central and northern California under early (1911–1924) and contemporary (2002–2015) fire suppression. International Journal of Wildland Fire, 28, Article 2. https://doi.org/10.1071/WF18137
3. Figure 2a: [Photograph of forest burned in Antelope Complex and Walker Fires]. Brandon Collins.
4. Figure 2b: [Photograph of forest burned in Storrie Fire]. Brandon Collins.



WHERE IS EVERYONE?

THE SEARCH FOR LIFE IN THE VAST UNKNOWN

BY SHREYA RAMESH

Aliens are everywhere in modern science fiction. From H.G. Wells's first description of alien life in The War of the Worlds to the diverse landscapes in George Lucas's Star Wars universe, these foreign lands help us escape life on Earth and imagine one of space travel to galaxies far, far away. Yet sometimes we forget that these bustling metropolises filled with diverse creatures and landscapes are only fictional since life outside of Earth has not yet been discovered. Despite scientists' efforts, we still have not discovered organisms—even as simple as tiny bacteria—on planets that our rovers, like Perseverance and Curiosity on Mars, now call home. We live in a universe with trillions of galaxies, many of which have solar systems with the ideal conditions to sustain life, yet we have not discovered any signs of extraterrestrial life. So, where is everyone?

THE PARADOX

This very question is the same one Italian physicist Enrico Fermi supposedly asked his colleagues at Los Alamos National Laboratory in New Mexico1. In fact, his query became the motivation behind the Fermi paradox. The Fermi paradox is merely a question of probability and highlights that despite the age of our universe and the almost infinitely large number of planetary systems that could inhabit life, humans have yet to encounter extraterrestrial life forms.2 Fermi is not the only one to ponder this existential question.3 Dr. Frank Drake, a prominent astronomer and astrophysicist, developed a formula to estimate how many alien societies could theoretically exist. In his formula, Drake theorized that there are a multitude of astronomical factors that influence where and how civilizations would form. For example, most civilizations most likely require stars of a certain size to provide enough light and thermal energy for biological functions, such as photosynthesis. Additionally, planets must have the proper conditions to sustain life, which include having enough carbon to build biomolecules and water for organisms to survive. However, the lack of observed life forms does not match up with the high numbers of civilizations that the Drake equation could theoretically yield. Consequently, astrophysicists have pondered this question of probability over the years and have developed different explanations for this disparity.
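Because the Drake equation is simply a product of the factors listed in Figure 1 below, it is easy to see how strongly the estimate swings with the inputs. The short sketch that follows is a hedged illustration: the two sets of parameter values are placeholders chosen only to show the spread, not Drake's own estimates and not figures from this article.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L (see Figure 1 for the meaning of each factor)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Placeholder inputs chosen purely for illustration:
optimistic  = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0, f_i=0.5, f_c=0.5, L=10_000)
pessimistic = drake(R_star=1, f_p=0.5, n_e=0.1, f_l=0.1, f_i=0.01, f_c=0.01, L=100)

print(optimistic, pessimistic)  # about 1.5e3 civilizations in one case, ~5e-5 in the other
```

The gap of several orders of magnitude between the two runs is essentially the disparity the article describes: plausible-sounding inputs can imply either a crowded galaxy or an almost empty one.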


Figure 1: The Drake Equation. A formula that can be used to estimate the number of possible extraterrestrial civilizations, where N = number of civilizations with which humans could communicate, R* = average rate of star formation, fp = fraction of stars with planets, ne = number of planets that could host life, fl = possible planets that could develop life, fi = planets where intelligent life could develop, fc = planets where life could communicate, L = amount of time that civilizations can communicate.

ANSWERS TO AN UNANSWERABLE QUESTION

A seemingly obvious answer is that extraterrestrials simply do not exist in our universe. Developed by Frank Tipler and Michael Hart in the early 1980s, the Tipler-Hart solution postulates that aliens have not contacted humans because they simply do not exist.4 Another explanation is that they exist but are not technologically advanced enough to detect our presence or to reach out to us yet. For all we know, life outside of Earth could consist merely of simple microorganisms. Alternatively, some scientists speculate that aliens have already visited us in the past, but life on Earth was not advanced enough to understand or recognize them as extraterrestrial organisms. However, there is no evidence that such advanced civilizations have visited Earth, and we still have not detected any signs of alien life from advanced radio telescopes. One of the most recognizable solutions to this paradox is known as "The Great Filter".5 Robin Hanson, the researcher who coined this idea, speculated that fundamental barriers prevent some civilizations in the universe from expanding. Potential barriers could include uncontrollable ones such as the initial atmospheric composition of a planet and its distance from the sun. But they could also include problems that civilizations can control such as pollution and global conflict. These barriers would then theoretically prevent civilizations from advancing past a certain point of their development. Perhaps humanity is the only civilization in our universe's 14 billion-year history that has overcome the Great Filter. Or, perhaps, we are the only civilization left in the universe that has yet to encounter the Great Filter.

SEARCHING

As we have developed new technologies to explore space, we have also created new projects that may begin to help us answer these hard questions that space exploration has created. One early example of such a project is the Voyager missions. In the 1970s, NASA launched the first probes that were to reach interstellar space beyond the reaches of our solar system.6 With this in mind, scientists included records onboard which could provide any alien civilization with proof that other life in the universe exists. These "Golden Records" contain greetings in various languages, music from around the world, and photos representing our planet and the accomplishments of those living on it, such as images of technological innovations and of the Earth itself taken from outer space. On the cover of the record, NASA included valuable information that aliens could use to learn more about life on Earth, including our location relative to 14 known pulsars and visual directions which could be used to better understand the contents of the record.7 Hopefully, one day, the Voyager probes will be found by some distant extraterrestrial civilization who will discover that they are not alone in the universe—and who may then in turn provide humans with a similar sense of clarity.

Figure 2: The cover of the Golden Record on board the Voyager missions.

Another ongoing project that NASA and other organizations have been working on is the search for exoplanets—which are planets that inhabit star systems outside of our solar system. Currently, there are nearly four thousand confirmed planets in our galaxy, although astronomers predict that trillions more may exist in our galaxy alone.8 Astronomers are currently searching for planets with conditions similar to those of Earth since these conditions have clearly proven the possibility to sustain life. For example, astronomers have analyzed the atmosphere of countless planets by examining the light they emit or absorb, splitting this light into a spectrum of different bands of colors, and then using these band arrangements to determine the atmosphere's chemical composition.9 Any signs of oxygen, carbon dioxide, or methane in these bands are regarded as potential signatures of life or as indicators that life could develop on these planets in the future. Similarly, astronomers analyze the star systems hosting these exoplanets; stars that are slightly cooler than our Sun burn for slightly longer, meaning that they can shine for tens of billions of years, increasing the chances that life has time to evolve.10 If a planet is the ideal distance away from these types of stars, it may be a potential host for life.

An alternative method to search for life is by scanning the skies for radio waves. Radio waves are a great way to communicate across space since they travel at the speed of light and are not absorbed by the dust in space.11 As a result, the Search for Extraterrestrial Intelligence (SETI) Institute has set up various projects that use this technology to detect alien life as advanced as our own.12 For example, the Allen Telescope Array, located in Hat Creek, CA, is one of SETI's first arrays of radio telescopes. It includes a series of forty-two large antennas containing specialized receivers designed to detect any radio signals that other civilizations may transmit from their own technology or activities. These antennas are set up in such a way that internal mirrors within the large dishes reflect and amplify the incoming signals that scientists can use to make observations.13 With this arrangement of radio telescopes, researchers can observe multiple star systems at any time of day, maximizing the chance of observing life. Additional SETI projects, such as the Breakthrough Listen project, use similar technologies to survey nearly a million nearby stars and star systems as well as nearly 100 galaxies that are close to the Milky Way.14 The radio telescopes used for this project are located all around the world, from West Virginia to Australia. These telescopes are linked together in order to conduct a deep and comprehensive search of all the various radio waves that can be found throughout outer space. Despite not yet having found any signs of life, these projects symbolize scientists' optimism for discovering that we are not alone in the universe.

ANSWERS


Searching for life in our universe is difficult, especially when the only evidence of life that we have is our own existence. There are numerous research projects to address Fermi’s original question, yet, after nearly seventy years of research, we still have no answer. In the end, we may never get answers in our own lifetimes or even during humanity’s existence simply due to factors beyond our control and the sheer vastness of outer space. Is humanity doomed for a solitary existence in our universe? Or can we revel in the fact that our presence may be truly special? Our existence in the universe may be something extraordinary; against all the odds and barriers that the universe may throw at us, humanity still exists and thrives.

Figure 3: Allen Telescope Array, a SETI initiative to detect radio signals.

Acknowledgements: I would like to acknowledge Steven Giacalone and Tyler Cox for their helpful feedback about SETI's research initiatives and general expertise about our current understanding of the search for life in our universe.



REFERENCES

1. Jones, E. (1985). "Where is Everybody?" An Account of Fermi's Question. https://doi.org/10.2172/5746675
2. Howell, E. (2018, April 27). Fermi Paradox: Where Are the Aliens? Space.com. https://www.space.com/25325-fermi-paradox.html
3. Shostak, S. (2021, July). Drake Equation. SETI Institute. https://www.seti.org/drake-equation-index
4. Hart, M. (n.d.). An Explanation for the Absence of Extraterrestrials on Earth. Quarterly Journal of the Royal Astronomical Society. https://articles.adsabs.harvard.edu//full/1975QJRAS..16..128H/0000128.000.html
5. Hanson, R. (1998, September 15). The Great Filter—Are We Almost Past It? https://mason.gmu.edu/~rhanson/greatfilter.html
6. Voyager—Mission Overview. (n.d.). Retrieved December 6, 2021, from https://voyager.jpl.nasa.gov/mission/
7. Lewin, S. (2017, September 5). Dear E.T.: Math on Voyager's golden record tells a story. Space.com. https://www.space.com/38024-math-of-voyager-golden-record.html
8. The Search for Life. (n.d.). Exoplanet Exploration: Planets Beyond Our Solar System; NASA. https://exoplanets.nasa.gov/search-for-life/can-we-find-life
9. Brennan, P. (2018, September 10). Other skies, other Suns: The search for exoplanet atmospheres. NASA. Retrieved November 15, 2021, from https://exoplanets.nasa.gov/news/1522/other-skies-other-suns-the-search-for-exoplanet-atmospheres/
10. The Search for Life: The Habitable Zone. (2021, April 2). Exoplanet Exploration: Planets Beyond Our Solar System; NASA. https://exoplanets.nasa.gov/search-for-life/habitable-zone
11. SETI observations. (n.d.). SETI Institute. https://www.seti.org/seti-observations
12. SETI. (n.d.). SETI Institute. https://www.seti.org/seti-institute/Search-Extraterrestrial-Intelligence
13. Shostak, S. (n.d.). Allen Telescope Array Overview. SETI Institute. https://www.seti.org/ata
14. Breakthrough Initiatives. (n.d.). https://breakthroughinitiatives.org/initiative/1

IMAGE REFERENCES

15. Nature Galaxy Sky Stars People Dark Man Night. (2020). [Photograph]. https://www.maxpixel.net/Nature-Galaxy-Sky-Stars-People-Dark-Man-Night-2601716
16. Gill, K. G. (2014). Europa Rising - Drake Equation [Graphic]. https://www.flickr.com/photos/kevinmgill/14486519161/
17. Jet Propulsion Laboratory. (n.d.). Making of the Golden Record [Photograph]. https://voyager.jpl.nasa.gov/galleries/making-of-the-golden-record/
18. Gutierrez-Kraybill, C. G. K. (2008, May 9). Allen Telescopes Soda Blasting [Photograph]. https://commons.wikimedia.org/wiki/File:C_G-K_-_Allen_Telescopes_Soda_Blasting_(by).jpg



The Effect of Conflict on Healthcare Workers in Syria: Results of a Qualitative Survey
By: Sarah Abdelrahman, Rohini Haar, MD MPH

ABSTRACT

The purpose of this study is to understand how the conflict in Syria, having devastated the healthcare system, has affected Syrian healthcare workers. We provide a secondary analysis of a summer 2019 survey from Physicians for Human Rights (PHR) conducted with 82 Syrian healthcare workers living in neighboring countries as well as in Northeast and Northwest Syria. Our descriptive analysis found that 48 participants reported an average of 16.52 hours of work per day, and 40 participants reported caring for an average of 43 patients per day while working in Syria during the conflict. 68 participants reported facing barriers to performing their work, and 59 participants reported facing risks as a medical professional. 71 participants experienced traumatic events during their work as a medical professional, and 70 participants experienced stress in the month prior to the interview. This analysis illustrates the negative effect that armed conflict has on healthcare workers through disruptions in their workload, limited resources, risks faced, insecurity, and mental health outcomes. These factors require long-term consideration in order to improve security, training, and resources for healthcare workers.

Sarah Abdelrahman is a graduate student in the Division of Epidemiology and Biostatistics at the School of Public Health at the University of California, Berkeley. Rohini J. Haar is an adjunct professor in the Division of Epidemiology and Biostatistics at the School of Public Health at the University of California, Berkeley and an emergency medicine physician in Oakland, California.

INTRODUCTION Since 2011, Syria has been engulfed in a complex civil war marked by both targeted and indiscriminate attacks on civilians and civilian infrastructure. The ongoing conflict and resulting humanitarian crisis have left over 5.6 million Syrians as refugees, 6.2 million internally displaced, and a documented 380,636 dead by the start of 2020—with the true death toll estimated to be much higher.1,2 The ongoing armed conflict in Syria has severely impacted the country’s healthcare system. Since the conflict began in 2011, the health sector has suffered from systematic and widespread attacks against healthcare facilities and medical workers.3 Not only have healthcare professionals in Syria been directly targeted and killed, but they have also been systematically persecuted through legalized and extra-judicial means, including forced disappearance, detention, torture, and execution.4 Physicians for Human Rights (PHR), a U.S.-based international advocacy organization, has documented the killing of 923 medical professionals since 2015. Moreover, Syrian healthcare workers suffer from frequent threats, the destruction of health infrastructure, limited medical supplies and resources, and lack of surveillance and monitoring capacity.3,6,7 All of these factors, along with the complexities of working in a dynamic and insecure context, have affected the ability of Syrian health professionals to treat their patients. There is mounting evidence indicating the severe health impacts of the Syrian conflict on the population at large, including the rise of infectious diseases, non-communicable diseases, and mental health issues.6-9 Past research has also examined the conflict’s effects on medical workers, specifically in besieged areas, as well as attacks on healthcare.4,5,10 The contributions of past research have demonstrated a heavily deteriorated health sector with a significant impact on the ability and willingness of healthcare workers to


continue practicing their profession. Due to the restrictions and safety concerns that exist in this climate, there is a critical need for information about the experiences of Syrian healthcare workers themselves and their perspectives on how the conflict has impacted them and their work. This study aims to explore how the conflict has affected Syrian healthcare workers and their ability to provide their services. In particular, we aim to elucidate: 1. The workload, training, and resources of Syrian healthcare workers during the conflict from 2011 to 2019. 2. The practical barriers and risks faced by Syrian healthcare workers. 3. The mental health and security concerns of these workers during the conflict. METHODS We collected secondary survey data from PHR that was used for another purpose. We compiled the data into an excel sheet and selected 32 of the 46 survey questions relevant to our research question. We did not include questions relating to detention, as these findings were already published in a separate PHR report.5 After obtaining the data and translating the interview answers from Arabic to English, we categorized the data into healthcare worker characteristics and conflict experiences. Healthcare worker characteristics categories include the participant’s type of health specialty, gender, location of origin and work, and type of work setting while they were working in Syria. The questionnaire included open-ended questions; therefore, we categorized the responses based on an inductive analysis of the concepts and themes that emerged. We utilized a deductive approach for specific, frequently used terms—i.e., when terms were used in four or more participant’s answers, that theme became a



subcategory. Conflict experiences were categorized into three categories:

1. Healthcare worker workload, training, and resources.
2. The barriers and risks they faced.
3. Their personal health and security.

See appendix for definitions of each category and subsequent subcategory.

Inclusion and Exclusion Criteria: The selection criteria for participants included being a Syrian national and having worked as a healthcare worker (HCW) in Syria during the conflict (2011–2019). Healthcare workers were defined as those professionally involved in the search for, collection, transportation, or diagnosis or treatment—including first-aid treatment—of the wounded and sick, and in the prevention of disease. Healthcare workers could include physicians, nurses, paramedics, ambulance drivers, search-and-rescue personnel, and others. The exclusion criteria included being under the age of 18 or being unable to provide consent due to linguistic barriers, cognitive impairment, or other disability.

Sampling and Data Collection: PHR surveyed Syrian healthcare workers from July 2019 to November 2019, initially to identify healthcare workers who had been detained during the conflict.5 Through PHR's network, members of the research team who were connected to displaced Syrian healthcare workers invited them to participate in the study. Those who gave their informed consent participated in the initial surveys. At the end of each survey, the participants were asked to recommend another potential participant for the study (snowball sampling). Survey participants were based in Turkey, Lebanon, Jordan and Northwest and Northeast Syria.

Survey Design: PHR's survey sought to collect the following information: demographic, biographical, professional, trauma-related, and detention-related, as well as data about HCW's professional and personal experiences in Syria during the conflict and information on their mental health. Questions were asked in a variety of formats, including yes/no, Likert scale, checklist, and open-ended. The questionnaire was translated from English into Arabic and then back-translated. The survey questionnaire consisted of a total of 46 questions and was conducted via phone, with the exception of some surveys in Jordan that were conducted in-person. The survey was administered verbally (phone or in-person) and data was noted on paper forms, then later entered onto an encrypted, secure platform (Kobo) by PHR researchers.

Data Analysis: We compiled the data to determine the total responses for each relevant interview question: the numerical count (percent) for each category of healthcare worker characteristics and conflict experiences. For any numerical categories, we also found the minimum, maximum, and mean. We graphed the data into bar graphs to give a visualization and comparison of each subcategory count. All descriptive analyses were conducted using R.

Ethical Considerations: PHR interviewed Syrian healthcare workers residing in Northeast and Northwest Syria, as well as outside of the country, because these populations are less exposed to danger and risk of reprisals than those remaining within areas controlled by the Syrian government. The researchers minimized the risks by obtaining oral informed consent from survey respondents and recording no identifying data. Researchers also took all necessary precautions to ensure confidentiality of records and of interview sites, subjects, and times. PHR's Ethical Review Board (ERB) approved this research.
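As a concrete illustration of the descriptive analysis described under Data Analysis, the sketch below tabulates counts, percentages, and simple summary statistics for a toy set of responses. The category labels and values are invented for illustration only, and the sketch is written in Python for brevity; the study's actual analysis was conducted in R on the PHR survey data.

```python
from collections import Counter
from statistics import mean

# Invented toy data, for illustration only (the real analysis used R on PHR survey responses)
specialty = ["doctor", "nurse", "doctor", "pharmacist", "doctor", "paramedic"]
hours_per_day = [16, 12, 20, 18]  # a numerical question answered by a subset of participants

# Count (percent) per category, using the number of respondents to this question as the denominator
counts = Counter(specialty)
for category, n in counts.items():
    print(f"{category}: {n} ({100 * n / len(specialty):.2f}%)")

# Minimum, maximum, and mean for a numerical category
print(f"hours/day: min={min(hours_per_day)}, max={max(hours_per_day)}, mean={mean(hours_per_day):.2f}")
```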


RESULTS PHR interviewed a total of 82 Syrian healthcare workers. We included data from 71 of the 82 participants. 11 reported having been directly involved in armed conflict and were excluded from further analysis because the research team could not contextualize what that involvement meant or how it could potentially bias the findings. We utilized data for 32 (out of 46) of the interview questions as they were relevant to the research question. For these questions, an average of 65 participants answered each question, with a minimum of 41 and a maximum of 71. Healthcare Worker Characteristics: Of the 71 participants, there were 29 (40.85%) doctors, 13 (18.31%) nurses, eight (11.27%) pharmacists, seven (9.86%) paramedics, and 14 (19.72%) other types of healthcare workers. There were 62 (87.32%) men and nine (12.68%) women among the participants. Of the 69 participants who provided that information, 33 (47.83%) worked in field hospitals, 33 (47.83%) worked in hospitals, 15 (21.74%) worked in clinics, nine (13.04%) worked in humanitarian organizations, and 17 (24.64%) worked in other

Table 1: Healthcare Worker Characteristics



types of health-related work settings. During the time participants were interviewed, 24 participants remained working as healthcare workers, while the other 47 participants were either unemployed or worked outside of healthcare. The three main governorate locations of origin of the 71 participants include Damascus (35.21%), Daraa (15.49%), and Aleppo (15.49%). Table 1 provides information on healthcare worker characteristics. Conflict Experience: Workload, Training and Resources Subcategories: 48 participants reported an average of 16.52 hours of work per day while they were health workers in Syria. 40 participants reported tending to an average of 43 patients per day, with 68.1% being direct patient care. 44 (80%) of 55 participants had other healthcare workers with their specialty in their work location and 32 (72.73%) of 40 participants reported that they treated war wounds. 30 (42.86%) of 70 participants reported they were not trained to conduct the work that they performed and 34 (61.43%) of 70 participants did not have the appropriate resources at their disposal to perform their work. Barriers and Risks Subcategories: Participants were asked in an open-ended question what the three main barriers were to perform their work, to which 68 participants responded. As previously stated, when terms were used in four or more participant’s answers, that theme became a subcategory. This open-ended question had five subcategories: limited Staff and Resources, Targeted, Bombardment, Insecurity, and Violence. Under the ‘Staff and Resources’ subcategory, 33 (48.53%) of 68 participants reported having limited medical supplies, 23 (33.83%) reported having limited qualified specialists, five (7.35%) reported having no funding, and three (4.41%) reported having inadequate health facilities, no training, and flight of healthcare workers each. Under the ‘Targeted’ subcategory, 13 of 68 participants (19.12%) experienced attacks on health facilities and 12 (17.65%) were directly targeted. Under the ‘Bombardment’ subcategory, 23 of 68 participants (33.82%) experienced civilian bombardment and blockades and eight (11.76%) experienced limited transportation of medical supplies and patients. Under the ‘Insecurity’ and ‘Violence’ subcategories, 26 of 68 participants (38.24%) felt unsafe and 10 (14.71%) experienced violence respectively.11 (See Figure 1). Participants were asked, in an open-ended question, what the three main risks were as a medical professional, to which 59 participants responded. This open-ended question had four subcategories: Bombardment, Detention, Insecurity, and Targeted. Under the ‘Bombardment’ subcategory, 25 (42.37%) of 59 participants experienced civilian bombardment and blockades. Under the ‘Detention’ and ‘Insecurity’ subcategories, 21 (35.59%) were detained or arrested, and 17 (28.81%) felt unsafe respectively. Under the ‘Targeted’ subcategory, 15 (25.42%) faced death threats, 10 (16.95%) were directly targeted, and nine (15.25%) experienced attacks on health facilities. Mental Health and Security Subcategories: 65 participants were forcibly displaced an average of three times, with 16 (24.62%) displaced once, 21 (32.31%) displaced twice,

RESEARCH

Figure 1: HCW Barriers faced in performing their work. 15 (23.08%) displaced three times, nine (13.85%) displaced four times, and four (6.15%) displaced five times. Participants were asked in an open-ended question what their reasons were for departing Syria, from where they worked as healthcare workers, to which 66 participants responded. From this open-ended question, six subcategories emerged: Insecurity, Violence, Bombardment, Detention, Targeted, and Recapture of Area. Under the ‘Insecurity’ subcategory, 24 (36.36%) of 66 participants left due to feeling unsafe, including fear for themselves and their family. Under the ‘Violence’ subcategory, 13 (19.70%) left due to armed forces, shootings, the participants’ desertion of the army and their fear of being drafted. Under the ‘Bombardment’ and ‘Detention’ subcategories (22.73%) of the 66 participants that departed from Syria left due to civilian bombardment and blockades, and 10 (15.15%) left due to past detention or arrest respectively. Under the ‘Targeted’ and ‘Recapture of Area’ subcategories, 16 (24.24%) left due to being wanted by the Syrian government and fear of being arrested, and six (9.09%) left due to the Syrian government’s recapture of the area in which the participant worked respectively. Participants were asked whether they resided as practicing healthcare workers in Syrian government-controlled, oppositioncontrolled, or ISIS-controlled areas before they departed Syria. Of the 67 participants that responded, 22 (32.84%) resided in Syrian government-controlled areas, 39 (58.21%) resided in oppositioncontrolled areas, and eight (11.94%) resided in ISIS-controlled areas. Participants were asked a series of yes/no questions about experiencing traumatic events during their work as healthcare workers, to which all 71 participants (100%) responded. Of these, 66 (92.96%) said yes to facing attacks, including direct bombardments, battles, or other types of large-scale violence; 65 (92.86%) said yes to feeling that their life was threatened because they were working as a medical professional in Syria; 50 (71.43%) said yes to personally being threatened with death or injury; 52

FALL 2021 | Berkeley Scientific Journal

81


Figure 2: Medical professionals that experienced traumatic events during their work.

Figure 3: Health professionals that experienced stress over the past month.

(73.24%) said yes to feeling intense fear, helplessness, or horror; and 22 (31%) said yes to being physically injured. (See Figure 2). Participants were asked a series of yes/no questions about experiencing stress over the past month when interviewed, to which 70 participants responded. Of these, 51 (72.86%) said yes to trying hard not to think about traumatic events or avoiding situations that reminded them of the events; 42 (60%) said yes to having nightmares about the events; 40 (57.14%) said yes to being on constant guard, watchful, or easily startled; 36 (52.17%) said yes to feeling numb or detached from people, activities, or their surroundings; and 31 (44.29%) said yes to feeling guilty. (See Figure 3). DISCUSSION
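As a concrete illustration of how open-ended answers were tallied into subcategories (a theme became a subcategory once it appeared in four or more participants' answers) and then expressed as percentages of respondents, the sketch below shows one way such a tabulation could be reproduced in pandas. The column names and theme labels are hypothetical placeholders, not the study's actual codebook or data.

```python
import pandas as pd

# Hypothetical coded responses: one row per participant, listing the themes
# mentioned in their answer to the open-ended "main barriers" question.
responses = pd.DataFrame({
    "participant_id": range(1, 7),
    "barrier_themes": [
        ["limited medical supplies", "felt unsafe"],
        ["civilian bombardment"],
        ["limited medical supplies", "directly targeted"],
        ["felt unsafe", "no funding"],
        ["limited medical supplies"],
        ["limited medical supplies", "civilian bombardment"],
    ],
})

# Count how many distinct participants mentioned each theme at least once.
theme_counts = (
    responses.explode("barrier_themes")
    .drop_duplicates(["participant_id", "barrier_themes"])
    .groupby("barrier_themes")["participant_id"]
    .nunique()
)

# A theme becomes a subcategory once four or more participants mention it.
subcategories = theme_counts[theme_counts >= 4]

# Express counts as percentages of all respondents to this question.
n_respondents = responses["participant_id"].nunique()
percentages = (theme_counts / n_respondents * 100).round(2)
print(subcategories)
print(percentages)
```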

DISCUSSION

This study provides a deeper understanding of healthcare worker experiences during conflict through a descriptive analysis exploring healthcare workers' heavy workloads, the barriers and violence they faced, and the impacts of such traumatic experiences on mental health. To our knowledge, these interviews with 24 current and 47 former Syrian healthcare workers represent the largest sample of healthcare workers in Syria to report on their experiences. We found that 48 participants reported an average of 16.52 hours of work per day and 40 participants reported tending to an average of 43 patients per day, with 68.1% of their time dedicated to direct patient care. Comparing these figures to physicians in the US, who work an average of 51 hours per week (10.20 hours per day) and tend to an average of 20 patients per day, illustrates the severe work burdens of these healthcare workers.12 Limited medical supplies, attacks on health facilities, and limited transportation of medical supplies and patients leave many healthcare workers without the appropriate resources to treat their patients and further exacerbate the burden that disease places on the healthcare system. Many of the healthcare workers also reported that they were not trained to perform their work, and most had to treat war wounds due to civilian bombardment and violence. This suggests that task-shifting was not concomitant with appropriate training.

All the participants interviewed experienced traumatic events during their work as medical professionals, including being personally threatened with death or injury; feeling intense fear, helplessness, or horror; experiencing the loss of colleagues or family members; and being physically injured. Those who were directly targeted and threatened because they were healthcare workers, and who were detained or arrested as a result, felt unsafe and departed from their place of residence; many in this group were displaced more than once. Their experiences working as medical professionals during a conflict have impacted their mental health, with 70 of 71 participants reporting having experienced nightmares and avoiding situations that reminded them of their experience, in addition to experiencing feelings of guilt.


Limitations

There are several important limitations to this study. Although larger than other studies of healthcare workers in conflict, the sample size was small. Participants were selected through PHR's network and secondary snowball sampling; thus, the sampling was not random, which raises the risk of selection bias. The sample was also not representative of the whole population due to the limited geographic distribution of participants and the lack of gender diversity, as the majority of participants were men. Furthermore, secondary data was used in this study, and the interviews were initially conducted to identify respondents who had specific experiences of detention or targeting, which limits the number and type of questions asked. While the study provides insights into the experiences of healthcare workers, we are not able to probe deeper into how or why conflict experiences resulted in these responses. Further qualitative work could explore how conflict, or attacks on health specifically, impacted healthcare workers, and more randomized or community-based quantitative work may help avoid the biases associated with the homogeneity of this dataset.



CONCLUSION

This study explores how the conflict in Syria has had a negative effect on healthcare workers through their workload, training and resources, the barriers and risks they faced, and their mental health and security. The flight of medical professionals, attacks on health facilities, bombardment of civilians and civilian structures, and overall violence have left many healthcare workers overworked while managing with little to no training and limited medical resources to perform the work they are tasked with, all of which impacts their mental health. Without additional and meaningful resources, training, and protection, healthcare workers will continue to face enormous stress, high work burdens, and mental health consequences that ultimately weaken the entire health system.

Healthcare workers are crucial in conflict settings and play a critical role in providing care to all people, regardless of political affiliation or other factors. Now, after nine years of brutal conflict, both the Syrian government and international stakeholders must adopt measures to combat the severe shortage of, and prevent the continued flight of, medical professionals. This can be done by providing adequate training and salaries to all healthcare workers, as well as ensuring their safety regardless of their location, the affiliation of the medical facilities where they work, their political affiliations, or the civilian populations they serve. The Syrian government and the international community should also maintain an equal and adequate supply chain of resources such as medical equipment, medication, and vaccines, as well as safe transportation of medical supplies and patients. This includes ensuring the successful provision of humanitarian-aid deliveries to populations living outside of government-controlled areas and the hospitals that serve them through all available avenues, including by restoring essential cross-border aid mechanisms through both the Northeast and Northwest. In recently reconciled areas, such as Eastern Ghouta and Daraa, which have changed from opposition to government control, the Syrian government needs to provide safety and access to work for medical professionals who reside there to ensure that they are not subject to retaliation for their work.

The international community should hold the Syrian government accountable for protecting healthcare workers and health facilities. The systematic attacks on civilians and civilian infrastructure, the targeted attacks on the health sector, and other violations of international humanitarian law have not only contributed to the flight of medical personnel out of Syria, but have also severely eroded the professional capacities as well as the mental and physical health of those who have remained within the country. The effect of this on the health system and on the communities they serve has been catastrophic.

ACKNOWLEDGEMENTS

We thank Rayan Koteiche and other staff at Physicians for Human Rights for conducting this unique healthcare worker survey and allowing us to use their data. We thank Joseph Leone for assisting us on the manuscript writing. Finally, we thank all the healthcare workers that participated in the questionnaire.

DECLARATION OF INTEREST STATEMENT

The authors declare that there is no conflict of interest.

REFERENCES

1. Internally Displaced People. UNHCR Syria. Accessed November 14, 2021. https://www.unhcr.org/sy/internally-displaced-people
2. Ri S, Blair AH, Kim CJ, Haar RJ. Attacks on healthcare facilities as an indicator of violence against civilians in Syria: An exploratory analysis of open-source data. PLOS ONE. 2019;14(6):e0217905. doi:10.1371/journal.pone.0217905
3. Haar RJ, Risko CB, Singh S, et al. Determining the scope of attacks on health in four governorates of Syria in 2016: Results of a field surveillance program. PLoS Med. 2018;15(4):e1002559. doi:10.1371/journal.pmed.1002559
4. Fouad FM, Sparrow A, Tarakji A, et al. Health workers and the weaponisation of health care in Syria: a preliminary inquiry for The Lancet–American University of Beirut Commission on Syria. The Lancet. 2017;390(10111):2516-2526. doi:10.1016/S0140-6736(17)30741-9
5. Koteiche R. "My Only Crime Was That I Was a Doctor." Physicians for Human Rights. Published 2019. Accessed November 14, 2021. https://phr.org/our-work/resources/my-only-crime-was-that-i-was-a-doctor/
6. Abbara A, Blanchet K, Sahloul Z, Fouad F, Coutts A, Maziak W. The Effect of the Conflict on Syria's Health System and Human Resources for Health. World Health & Population. Published September 15, 2015. Accessed June 17, 2020. https://www.longwoods.com/content/24318//the-effect-of-the-conflict-on-syria-s-health-system-and-human-resources-for-health
7. Siege and Death in Eastern Ghouta. Assistance Coordination Unit; 2018:1-7. Accessed November 14, 2021. https://www.acu-sy.org/en/wp-content/uploads/2018/03/Ghouta_En_010318_.pdf
8. Taleb ZB, Bahelah R, Fouad FM, Coutts A, Wilcox M, Maziak W. Syria: health in a country undergoing a tragic transition. Int J Public Health. 2015;60(1):63-72. doi:10.1007/s00038-014-0586-2
9. Jefee-Bahloul H, Barkil-Oteo A, Pless-Mulloli T, Fouad FM. Mental health in the Syrian crisis: beyond immediate relief. The Lancet. 2015;386(10003):1531. doi:10.1016/S0140-6736(15)00482-1
10. Fardousi N, Douedari Y, Howard N. Healthcare under siege: a qualitative study of health-worker responses to targeting and besiegement in Syria. BMJ Open. 2019;9(9):e029651. doi:10.1136/bmjopen-2019-029651
11. Violence and the Use of Force. International Committee of the Red Cross; 2011. Accessed August 25, 2020. https://www.icrc.org/en/doc/assets/files/other/icrc_002_0943.pdf
12. Elflein J. Key information on physicians in the United States 2018. Statista. Published 2018. Accessed November 14, 2021. https://www.statista.com/statistics/397863/doctors-portrait-in-the-us/


APPENDIX

Definition of Subcategories

Workload, Training and Resources Subcategories:
'Hours/Day' refers to the number of hours a day the participant worked on average.
'Nb Patients/Day' refers to the average number of patients a day the participant tended to.
'Percent Care' refers to the percentage of time that was dedicated to providing direct patient care.
'Training and Resources' including:
a. 'Other HCWs with your specialty' refers to the number of participants that had other medical professionals with the participant's specialty/skill set in their location.
b. 'Treated wounds' refers to the number of participants that treated war wounds.
c. 'Trained to Conduct Work' refers to the number of participants that were trained to conduct that type of work/provide that type of treatment.
d. 'Resources' refers to the number of participants that felt they had the appropriate resources at their disposal to perform their work.

Barriers and Risks Subcategories:
'Barriers' refers to the number of participants that faced barriers to performing their work, where the barriers included: (Open-ended question)
a. 'Limited Staff and Resources' which refers to limited medical supplies (medication, equipment, resources), qualified specialists and training, inadequate health facilities, flight of HCWs, and no funding.
b. 'Bombardment' which refers to civilian bombardment and blockades and limited transportation of medical supplies and patients.
c. 'Insecurity' which refers to feeling unsafe.
d. 'Targeted' which refers to HCWs being targeted and attacks on health facilities.
e. 'Violence' which refers to armed forces, shootings, the participants' desertion of the army and their fear of being drafted.
'Risks' refers to the number of participants that faced risks as a medical professional, where the risks included: (Open-ended question)
a. 'Targeted' which refers to HCWs being targeted, attacks on health facilities, and death threats.
b. 'Bombardment' which refers to civilian bombardment and blockades.
c. 'Insecurity' which refers to feeling unsafe and random kidnappings.
d. 'Detention' which refers to being detained or arrested.

Personal Health and Security Subcategories:
'Amount of Times Displaced' refers to the number of participants that were displaced, categorized into numerical categories.
'Departure Reason' refers to the number of participants whose reason for departure included: (Open-ended question)
a. 'Insecurity' which refers to feeling unsafe and fear for themselves and their family.
b. 'Violence' which refers to armed forces, shootings, the participants' desertion of the army and their fear of being drafted.
c. 'Bombardment' which refers to civilian bombardment and blockades.
d. 'Detention' which refers to being detained or arrested.
e. 'Targeted' which refers to being wanted by the Syrian government and fear of being arrested.
f. 'Recapture of Area' which refers to the Syrian government's recapture of the area the participant worked.
'Control of Area' refers to the number of participants that worked in an area controlled by the Syrian government, opposition groups, or ISIS.
'Traumatic Events' refers to the number of participants that, during their work as a medical professional, experienced:
a. 'Attacks' which refers to direct bombardments, battles, or other types of large-scale violence.
b. 'Felt Threatened' which refers to feeling that their life was threatened because they were working as a medical professional in Syria.
c. 'Felt Fear' which refers to feelings of intense fear, helplessness, or horror.
d. 'Threatened' which refers to being personally threatened with death or injury.
e. 'Injured' which refers to being physically injured.
'Stress Indicators' refers to the number of participants that experienced stress over the past month including:
a. 'Avoid Thinking' which refers to trying hard not to think about traumatic events or avoiding situations that reminded them of the events.
b. 'Nightmares' which refers to having nightmares about the events or thinking of the events when they did not want to.
c. 'On Guard' which refers to being constantly on guard, watchful, or easily startled.
d. 'Felt Detached' which refers to feeling numb or detached from people, activities, or their surroundings.
e. 'Felt Guilty' which refers to feeling guilty or unable to stop blaming themselves or others for the events.
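To make the coding scheme above concrete, the following sketch shows how these subcategory definitions might be represented as a simple Python codebook for tagging free-text interview responses. The keyword lists and the example answer are illustrative stand-ins; the study's actual coding was done by the research team from the interview transcripts.

```python
# Illustrative codebook mapping each departure-reason subcategory to example
# keywords. These keyword lists are hypothetical, not the study's codebook.
CODEBOOK = {
    "Insecurity": ["unsafe", "fear for family"],
    "Violence": ["armed forces", "shooting", "desertion", "drafted"],
    "Bombardment": ["bombardment", "blockade"],
    "Detention": ["detained", "arrested"],
    "Targeted": ["wanted by government", "death threat"],
    "Recapture of Area": ["recapture", "area retaken"],
}

def tag_response(response: str) -> list[str]:
    """Return the subcategories whose keywords appear in a free-text answer."""
    text = response.lower()
    return [
        category
        for category, keywords in CODEBOOK.items()
        if any(keyword in text for keyword in keywords)
    ]

# Example with a fabricated answer, purely for illustration:
print(tag_response("I felt unsafe after being detained and arrested twice."))
# -> ['Insecurity', 'Detention']
```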



Wildfire Significance within the San Francisco Bay Area's Air Quality

By: Scott Hashimoto, Rohith A Moolakatt, Amit Sant, Emma Centeno, Ava Currie, Joyce Wang, and Grace Huang
Research Sponsor (PI): Professor Amm Quamruzzaman

ABSTRACT

Due to the increase in frequency and severity of wildfires in the San Francisco Bay Area, wildfire smoke has become a significant public health hazard linked with lung morbidity and increased mortality in exposed populations. Wildfire smoke consists of many types of particles, each with its own set of adverse effects. This study focuses on particulate matter smaller than 2.5 microns (PM 2.5), one of the main contributors to adverse health effects, especially to the pulmonary and cardiovascular systems. In particular, we examine trends in 21 years of publicly available data from the Environmental Protection Agency on PM 2.5 air pollution by county, comparing PM 2.5 levels in the wildfire season months (May through October) to the rest of the year. Upon initial review, our findings may seem counterintuitive: on the three examined features of the dataset (mean, median, and maximum), the non-wildfire season generally yielded higher mean and maximum PM 2.5 concentrations than the wildfire season. However, over time, the gap between the seasons has shrunk, which we propose is partially due to the PM 2.5 maximums driven by recent wildfires. Although the historic gap in PM 2.5 levels between the wildfire and non-wildfire seasons should be explored further, the acute maximums pose a compelling climate and public health threat to the San Francisco Bay Area region as a whole. Given the severe public health consequences of exposure to PM 2.5 and wildfire smoke, we urge policymakers to take additional preventative and mitigative action during, and in preparation for, annual wildfire seasons.

Major and Year: Hashimoto (Molecular Cell Biology, Junior); Moolakatt (Molecular Environmental Biology & Interdisciplinary Studies Field, Junior); Sant (Computer Science, Junior); Centeno (Ecosystem Management and Forestry & Data Science, Junior); Currie (Ecosystem Management and Forestry, Junior); Wang (Molecular Environmental Biology, Sophomore); Huang (Rhetoric & Environmental Economics and Policy, Junior). Department: Interdisciplinary Studies Field

INTRODUCTION

Wildfire smoke has quickly become one of the most apparent and pressing climate issues for the San Francisco Bay Area. The western United States has seen consistent and rapid increases in wildfire activity since the 1980s, characterized by a rise in the frequency, severity, size, and total burned area associated with wildfires. California recognizes emerging wildfires as one of the significant threats to be expected under climate change;1 aggregate fire indices have risen by 20% following temperature increases and precipitation decreases in autumn over the last four decades.2 California wildfires from the last five years (Carr, Camp, Lightning Complex, and Dixie) highlight the necessity of wildfire prevention and mitigation. Wildfire smoke, in particular, has been a public health risk for vulnerable populations and is projected to become increasingly severe amidst high-emissions climate change scenarios.3

Smoke is notoriously difficult to measure and model, and data on it is scarce, relatively recent, and often incomplete. By studying levels of particulate matter (PM), a metric tracked by the United States Environmental Protection Agency (EPA) since the 1990s, we can draw some relevant conclusions from the incomplete picture the data provides. In particular, we focused our study on particulate matter sized about 2.5 micrometers or smaller (PM 2.5), as it is one of the most harmful forms of pollution from wildfires that can be tracked. PM can come from multiple sources; however, as one of the primary components of wildfire smoke, it is widely used to measure levels of smoke during wildfire events.4,5 Additionally, since PM 2.5 is a particularly hazardous form of air pollution to human health, monitoring its levels is a valuable metric.


LITERATURE REVIEW

Altogether, the consequences of climate change continue to worsen wildfire conditions yearly. Impacts such as alterations to precipitation cycles, increased drought, and more frequent extreme heat events all contribute to more favorable conditions for extreme wildfire events. Recent climate studies indicate that precipitation within California will increasingly fall as rain rather than snow, in shorter but heavier storm events later into autumn.6 This delayed wet season exacerbates wildfire risk as September, October, and November (typically the wettest months) become drier, thereby extending the wildfire season before the shorter rain season begins. Furthermore, research suggests that wildfires can reburn similar areas more quickly when drought conditions have been active within the region, although this relationship needs to be explored further in fire-prone areas, such as chaparral.7

Wildfire smoke is composed of many hazardous pollutants, including carbon monoxide, nitrogen dioxide, ozone, volatile organic compounds, and particulate matter (PM).8 The exact health effects of smoke, particularly those associated with long-term exposure, are difficult to measure and have become an active area of research today. However, there are clear dangers with regard to lung function and mortality. Within the Amazon, exposure to high levels of PM 2.5 from biomass burning was associated with reductions in lung function among schoolchildren.9 In Russia, wildfire smoke exposure in association with a 44-day-long heat wave was estimated to have caused 2,000 more deaths than otherwise projected.10 A 13-year study on bushfire and dust exposure in Sydney, Australia yielded similar results on the heightened dangers of wildfire mortality during, and the day after, extreme heat events.11



These studies highlight a growing link between the threat of smoke exposure and extreme heat, a critical intersection of climate threats, as both are expected to increase in frequency under climate change. In recent years, wildfires are estimated to make up half of PM 2.5 emissions in the western United States and about a quarter nationwide.12 Other primary sources of PM 2.5 include vehicle emissions, secondary sulfate, biomass combustion, and secondary organic aerosol (SOA).13 Historically, PM is associated with high mortality: in the London air pollution episode of 1952, 3,000-4,000 deaths were attributed to the dramatic levels of PM.14 PM 2.5 is dangerous for vulnerable populations, particularly the elderly, infants, and persons with chronic cardiopulmonary disease, influenza, or asthma,15 and increases in PM 2.5 lead to higher rates of emergency room visits, hospitalizations, and inpatient spending.16 The latter study also estimates that a 1 μg·m⁻³ increase in PM 2.5 exposure for one day causes 0.69 additional deaths per million elderly, and that a 1 μg·m⁻³ increase over 3 days is expected to cause a loss of about 3 life-years per million beneficiaries. To put these numbers in context, wildfires often push PM 2.5 past 35 μg·m⁻³ (roughly the AQI threshold considered safe) and have the potential to reach levels greater than 250 μg·m⁻³ (the AQI's hazardous level).17 One study found that, since the mid-1900s, PM levels have been decreasing by an average of 0.66 ± 0.10 μg·m⁻³ per year around the United States, likely due to more stringent environmental regulation. The exception to this trend was the Pacific Northwest, which saw the largest increase, 0.97 ± 0.22 μg·m⁻³ per year, in the Sawtooth National Forest; the authors attributed this to increased wildfire occurrence in the Rocky Mountains.18 This study indicated that wildfires have a significant effect on the long-term air quality of a region, although it highlights that a regional analysis is necessary to find accurate results. The Bay Area Air Quality Management District released a comprehensive report about the effect PM has on public health in November 2012, and it provides a strong public health framework for our wildfire discussion. Among other air pollutants, PM was only recognized as a severe public health threat beginning in the mid-1990s, due to a series of compelling health studies regarding exposure.

METHODS

We focused our study on the levels of PM 2.5, or particulate matter sized about 2.5 micrometers or smaller, as it is one of the most harmful forms of pollution we can track that is directly related to wildfire smoke. Our objective for this study was to chart the relationship of PM 2.5 levels to wildfire activity within the San Francisco Bay Area using all of the available data from the United States Environmental Protection Agency. We tracked the nine counties that constitute the San Francisco Bay Area (Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma) to determine whether the trends we gathered are consistent across this region. We aimed to determine how PM 2.5 levels have changed since 1990, especially in regard to recent wildfires.

Our data was collected from the United States Environmental Protection Agency Air Quality System (AQS). We analyzed daily PM 2.5 AQI values from 1999-2020 for each of the nine San Francisco Bay Area counties. We focused our attention on PM 2.5 to measure the persistence of wildfire smoke, as wildfire smoke is one of the major contributors to PM 2.5 levels in the United States. Furthermore, to analyze trends between wildfires and AQI values, we split each year into two categories: wildfire season (May 1st to October 31st) and non-wildfire season (November 1st to April 30th). We arrived at this set of dates by referencing a similar study which analyzed wildfire-specific particulate matter, using these dates to distinguish wildfire season from non-wildfire season.19 In order to interpret the data, we plotted the mean, median, and maximum PM 2.5 AQI value trends using linear regressions during the wildfire season and non-wildfire season for each San Francisco Bay Area county. We also ran two-sample independent t-tests on the differences between the wildfire and non-wildfire seasons over various periods of time. To do this, we processed the data in a Jupyter Notebook using the following Python libraries: pandas to manage the data tables, matplotlib to generate the graphs, and scipy to run the statistical tests.
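For readers who want to see the shape of this workflow, here is a minimal sketch of the season split, trend regression, and t-test steps using the same libraries (pandas and scipy). The file name and column names are placeholders for however the AQS daily PM 2.5 AQI export is stored; they are assumptions, not the exact pipeline used for this study.

```python
import pandas as pd
from scipy import stats

# Placeholder file and column names for a daily EPA AQS PM 2.5 AQI export.
df = pd.read_csv("bay_area_pm25_daily_aqi.csv", parse_dates=["date"])
df["year"] = df["date"].dt.year
# Wildfire season: May 1 - Oct 31; non-wildfire season: Nov 1 - Apr 30.
df["season"] = df["date"].dt.month.map(
    lambda m: "wildfire" if 5 <= m <= 10 else "non-wildfire"
)

county = df[df["county"] == "Alameda"]

# Annual maximum PM 2.5 AQI per season, then a linear trend over time.
annual_max = county.groupby(["year", "season"])["aqi"].max().reset_index()
for season, grp in annual_max.groupby("season"):
    fit = stats.linregress(grp["year"], grp["aqi"])
    print(season, round(fit.slope, 2), round(fit.rvalue, 2), round(fit.pvalue, 3))

# Two-sample t-test comparing daily AQI between the two seasons.
recent = county[county["year"] >= 2009]
t, p = stats.ttest_ind(
    recent.loc[recent["season"] == "wildfire", "aqi"],
    recent.loc[recent["season"] == "non-wildfire", "aqi"],
    equal_var=False,  # Welch's t-test; the paper does not specify the variant used.
)
print("t =", round(t, 2), "p =", round(p, 4))
```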

RESULTS

The linear regression trends for mean and median values were not found to be statistically significant for either the non-wildfire season or the wildfire season. However, during the non-wildfire seasons from 1999-2020, all nine San Francisco Bay Area counties exhibited a significant negative correlation between maximum PM 2.5 AQI values and time, but during the wildfire seasons this trend does not hold (the r-value is, in fact, slightly positive). This suggests that there has been an increase in extreme events causing maximum PM 2.5 values to rise over time, possibly influenced by wildfires. Due to the unpredictable nature of wildfires and wildfire smoke, as well as the significant length of the wildfire season, there are often periods during the wildfire season without wildfires influencing the San Francisco Bay Area's air quality. Moreover, wildfire smoke is typically relatively short-lived in a specific region following a wildfire event. These factors make it difficult to know how much of the daily PM 2.5 AQI values during the wildfire season is due to wildfires, especially when dealing with mean and median values.

T-tests revealed a significant difference between the wildfire and non-wildfire seasons from 1999-2020, showing that, in terms of mean and maximum PM 2.5 AQI, the non-wildfire season in general had higher levels of PM 2.5. However, as we exclude the earlier years (1999-2008) and focus on 2009-2020, the results show that the difference between the wildfire and non-wildfire seasons is less significant, indicated by larger p-values (p > 0.05). In other words, more recently, the difference in PM 2.5 AQI between the wildfire season and non-wildfire season is becoming less noticeable.

Figure 1: "Wildfire Season Max PM 2.5 AQI Alameda County": This graph displays a non-significant correlation between maximum PM 2.5 AQI and time during the wildfire season (May 1st to October 31st) in Alameda County.

Figure 2: "Non-Wildfire Season Max PM 2.5 AQI Alameda County": This graph displays a negative correlation between maximum PM 2.5 AQI and time during the non-wildfire season (November 1st to April 30th) in Alameda County.

Figures 3 and 4: 1999-2020 T-Test Results and 2009-2020 T-Test Results: These charts display the p-values for t-tests comparing the wildfire and non-wildfire seasons. Napa County only has data since 2007. For this study, we used p < 0.05 as the cutoff for significance; shaded (gray) cells have p-values ≤ 0.05 and therefore represent a statistically significant difference between the wildfire and non-wildfire seasons, while white cells denote a statistically insignificant difference.
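The per-county panels in Figures 1 and 2 can be reproduced, at least in spirit, with a few lines of matplotlib on top of the seasonal maxima computed in the earlier sketch. This is an illustrative sketch with assumed column names (`year`, `aqi`, `season`), not the exact plotting code behind the published figures.

```python
import matplotlib.pyplot as plt
from scipy import stats

def plot_seasonal_max(annual_max, season, county_name):
    """Scatter the annual max PM 2.5 AQI for one season and overlay its trend line."""
    grp = annual_max[annual_max["season"] == season]
    fit = stats.linregress(grp["year"], grp["aqi"])
    plt.figure()
    plt.scatter(grp["year"], grp["aqi"], label="annual max PM 2.5 AQI")
    plt.plot(grp["year"], fit.intercept + fit.slope * grp["year"],
             label=f"trend (r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f})")
    plt.title(f"{season.title()} Season Max PM 2.5 AQI, {county_name} County")
    plt.xlabel("Year")
    plt.ylabel("Max PM 2.5 AQI")
    plt.legend()
    plt.show()

# Example usage with the annual_max DataFrame built in the earlier sketch:
# plot_seasonal_max(annual_max, "wildfire", "Alameda")
# plot_seasonal_max(annual_max, "non-wildfire", "Alameda")
```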

DISCUSSION

Our findings show that, for most of the counties in the San Francisco Bay Area, mean and maximum PM 2.5 AQI has reliably been higher during the non-wildfire season. However, by analyzing the average differences of the median, mean, and maximum PM 2.5 levels, we observed that the gap between the two seasons has been shrinking. Essentially, PM 2.5 levels have historically been higher during the non-wildfire seasons, although in the last 5-10 years the difference between the two seasons has decreased, suggesting an influence by recent phenomena. We believe this influence is driven by the increased frequency of wildfire affecting the San Francisco Bay Area and the accompanying wildfire smoke of each fire. In addition, it is logical that median values do not show significant differences between wildfire and non-wildfire seasons if the difference is due to extreme weather events, such as wildfires. Given that the dramatic increase in the frequency of wildfire is fairly recent, we expect the heightened PM 2.5 levels to continue to increase without severe wildfire prevention efforts under worsening climate change.

We call for further analysis of PM 2.5 levels in accordance with demographic data for Bay Area communities, given the skewed health effects on young and elderly populations, to gauge vulnerability to extreme smoke events. Although PM 2.5 is one of the most dangerous common air pollutants for health, wildfire smoke is composed of many air pollutants, and its exact health consequences, although known to be negative, are difficult to determine. The link between extreme heat events and smoke mortality highlights a severe climate risk, as both are expected to increase in frequency under climate change. Seasonal variations in air temperature and wind patterns may have influenced the PM 2.5 levels recorded; however, the specifics of these dynamics are particular to each region and lie beyond the scope of this paper. Additionally, more specific geospatial data on air quality is needed within the San Francisco Bay Area, as it is difficult to pinpoint the areas with the worst air quality. For future research, innovation in detecting wildfire smoke levels independently of existing air pollution levels would greatly improve the capacity of future health research regarding the effects of wildfire smoke on the population.



In this study we use particulate matter as a metric to track wildfire smoke levels; however, background pollution from other sources, such as automobile commuting and wood burning in homes, may have skewed the data relative to the effects of wildfire smoke. We call on lawmakers within the San Francisco Bay Area to recognize wildfires as a major contributor to local air pollution, and therefore a major threat to the public health of the Bay Area. Measures to mandate and increase access to more stringent air filtration within buildings will contribute to building more resilient and safer spaces for communities. In working to adapt communities to the threats of wildfire, this paper aims to highlight wildfire as both an urban planning issue and an acute climate impact that requires regional, national, and global mobilization to adequately address.

REFERENCES

1. Bedsworth, L. (2018). California's Fourth Climate Change Assessment Statewide Summary Report. 133.
2. Goss, M., Swain, D. L., Abatzoglou, J. T., Sarhadi, A., Kolden, C. A., Williams, A. P., & Diffenbaugh, N. S. (2020). Climate change is increasing the likelihood of extreme autumn wildfire conditions across California. Environmental Research Letters, 15(9), 094016. https://doi.org/10.1088/1748-9326/ab83a7
3. Mills, D., Jones, R., Wobus, C., Ekstrom, J., Jantarasami, L., St. Juliana, A., & Crimmins, A. (2018). Projecting Age-Stratified Risk of Exposure to Inland Flooding and Wildfire Smoke in the United States under Two Climate Scenarios. Environmental Health Perspectives, 126(4), 047007. https://doi.org/10.1289/EHP2594
4. Kundu, S., & Stone, E. A. (2014). Composition and sources of fine particulate matter across urban and rural sites in the Midwestern United States. Environmental Science: Processes & Impacts, 16(6), 1360–1370. https://doi.org/10.1039/C3EM00719G
5. Holder, A. L., Mebust, A. K., Maghran, L. A., McGown, M. R., Stewart, K. E., Vallano, D. M., Elleman, R. A., & Baker, K. R. (2020). Field Evaluation of Low-Cost Particulate Matter Sensors for Measuring Wildfire Smoke. Sensors, 20(17), 4796. https://doi.org/10.3390/s20174796
6. Swain, D. L. (2021). A Shorter, Sharper Rainy Season Amplifies California Wildfire Risk. Geophysical Research Letters, 48(5), e2021GL092843. https://doi.org/10.1029/2021GL092843
7. Parks, S. A., Parisien, M.-A., Miller, C., Holsinger, L. M., & Baggett, L. S. (2018). Fine-scale spatial climate variation and drought mediate the likelihood of reburning. Ecological Applications, 28(2), 573–586. https://doi.org/10.1002/eap.1671
8. Reid, C. E., Brauer, M., Johnston, F. H., Jerrett, M., Balmes, J. R., & Elliott, C. T. (2016). Critical Review of Health Impacts of Wildfire Smoke Exposure. Environmental Health Perspectives, 124(9), 1334–1343. https://doi.org/10.1289/ehp.1409277
9. Jacobson, L. da S. V., Hacon, S. de S., Castro, H. A. de, Ignotti, E., Artaxo, P., & Ponce de Leon, A. C. M. (2012). Association between fine particulate matter and the peak expiratory flow of schoolchildren in the Brazilian subequatorial Amazon: A panel study. Environmental Research, 117, 27–35. https://doi.org/10.1016/j.envres.2012.05.006
10. Shaposhnikov, D., Revich, B., Bellander, T., Bedada, G. B., Bottai, M., Kharkova, T., Kvasha, E., Lezina, E., Lind, T., Semutnikova, E., & Pershagen, G. (2014). Mortality related to air pollution with the Moscow heat wave and wildfire of 2010. Epidemiology (Cambridge, Mass.), 25(3), 359–364. https://doi.org/10.1097/EDE.0000000000000090
11. Johnston, F., Hanigan, I., Henderson, S., Morgan, G., & Bowman, D. (2011). Extreme air pollution events from bushfires and dust storms and their association with mortality in Sydney, Australia 1994–2007. Environmental Research, 111(6), 811–816. https://doi.org/10.1016/j.envres.2011.05.007
12. Burke, M., Driscoll, A., Heft-Neal, S., Xue, J., Burney, J., & Wara, M. (2021). The changing risk and burden of wildfire in the United States. Proceedings of the National Academy of Sciences, 118(2). https://doi.org/10.1073/pnas.2011048118
13. Zou, B.-B., Huang, X.-F., Zhang, B., Dai, J., Zeng, L.-W., Feng, N., & He, L.-Y. (2017). Source apportionment of PM2.5 pollution in an industrial city in southern China. Atmospheric Pollution Research, 8(6), 1193–1202. https://doi.org/10.1016/j.apr.2017.05.001
14. Bernard, S. M., Samet, J. M., Grambsch, A., Ebi, K. L., & Romieu, I. (2001). The potential impacts of climate variability and change on air pollution-related health effects in the United States. Environmental Health Perspectives, 109(suppl 2), 199–209. https://doi.org/10.1289/ehp.109-1240667
15. Pope, C. A. (2000). Epidemiology of fine particulate air pollution and human health: Biologic mechanisms and who's at risk? Environmental Health Perspectives, 108(Suppl 4), 713–723. https://doi.org/10.1289/ehp.108-1637679
16. Deryugina, T., Heutel, G., Miller, N. H., Molitor, D., & Reif, J. (2019). The Mortality and Medical Costs of Air Pollution: Evidence from Changes in Wind Direction. The American Economic Review, 109(12), 4178–4219. https://doi.org/10.1257/aer.20180279
17. Aguilera, R., Corringham, T., Gershunov, A., & Benmarhnia, T. (2021). Wildfire Smoke Impacts Respiratory Health More than Fine Particles from Other Sources: Observational Evidence from Southern California. Nature Communications, 12(1), 1493. https://doi.org/10.1038/s41467-021-21708-0
18. McClure, C. D., & Jaffe, D. A. (2018). US particulate matter air quality improves except in wildfire-prone areas. Proceedings of the National Academy of Sciences, 115(31), 7901–7906. https://doi.org/10.1073/pnas.1804353115
19. Liu, J. C., Wilson, A., Mickley, L. J., Dominici, F., Ebisu, K., Wang, Y., Sulprizio, M. P., Peng, R. D., Yue, X., Son, J.-Y., Anderson, G. B., & Bell, M. L. (2017). Wildfire-specific Fine Particulate Matter and Risk of Hospital Admissions in Urban and Rural Counties. Epidemiology (Cambridge, Mass.), 28(1), 77–85. https://doi.org/10.1097/EDE.0000000000000556



Special thanks to our donors:
Michael Delaney
Wenbin Jiang
Hassan and Humaira Khan
Sophia Liu
Clifton and Norma Russo
Jack Yin
Nancy Yin




