JOURNYS Issue 12.2


Journal of Youths in Science


Volume 12 Issue 2

WOMEN IN SCIENCE Hana A. Chang

CALIFORNIA REAL ESTATE Renee Wu


ENHANCING CAT LITTER Julia Choi


Torrey Pines High School, San Diego, CA | Scripps Ranch High School, San Diego, CA

Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments. Website: www.journys.org // Email: eic@journys.org // Mail: Journal of Youths in Science, Attn: Mary Anne Rall, 3710 Del Mar Heights Road, San Diego, CA 92130


Table of Contents

Journal of Youths in Science, Issue 12.2 - Spring 2021

Finding the Constant to an Accelerating Universe: the Hubble Constant - Anagha Ramnath
Quantum Cryptography: Is it Perfectly Secure? - Christopher Taejoon Um
Using Nanodrugs to Break Through the Blood Brain Barrier - Amy Oh
Women In Science: Draw, Lose, or Win - Hana A. Chang
Polygence Student Alex Finds Her Passion for Medicine through Alzheimer's Disease Research - Ashleigh Provoost
Monastrol and Dihydropyrimidines: the Future of Small Molecule Kinesin Eg5 Inhibitors - Krithikaa Premnath, Ria Kolala, Tyler Shern, Ansh Rai, Ishani Ashok, Audrey Kwan
Machine Learning for a Greener Planet - Suryatejas Appana
Analysis of the Duration of California Real Estate on the Market - Renee Wu
Alzheimer's Disease Diagnosis Using Deep Learning with CycleGAN for Data Augmentation - Sunny Wang
Enhancement of Cat Litter Using Probiotic & Bacterial-Limiting Solutions - Julia Choi


Finding the Constant to an Accelerating Universe: The Hubble Constant

by Anagha Ramnath | art by Claire Hwa
INTRODUCTION

Have you ever heard that the universe is expanding? We may just be floating into the abyss of space until the universe tears itself apart. As terrifying as that sounds, scientists are busy trying to figure out the rate at which the universe expands: the Hubble Constant [1]. This number has been debated since the 1920s, when Edwin Hubble, along with Georges Lemaitre [2], discovered one of the most crucial aspects of cosmology: that the universe is expanding. It opened up a whole world of theories and mysteries, like dark energy [3] and the ultimate fate of the universe [4-5]. Georges Lemaitre theorized the expansion, whereas Hubble confirmed it by discovering that all galaxies are red-shifted [6], which means that they are moving away from us. He also discovered that there is a linear relationship between the distance to a galaxy and its velocity (the speed at which it is receding). In the late 1990s, another discovery about the expansion was made: that it is accelerating. Saul Perlmutter, Brian Schmidt, and Adam Riess are the 2011


Nobel Prize winners for this terrifying but incredible discovery [7]. This accelerated expansion is described by the Hubble Constant: velocity (in km/s) divided by distance (in megaparsecs), whose value is still being disputed to this day. We have come a long way since the number Hubble calculated, which turned out to be around ten times too high due to inaccurate data. Now we have two main methods of calculating it. One uses standard candles (objects with known luminosities that are used to measure the distances to galaxies, which, together with their redshifts, are needed to find the Hubble Constant), like Type Ia supernovae [8-9] and Cepheid variables [10]. Cepheid variables are incredibly bright stars that can be used as reference points to measure large distances due to their high luminosities. Their luminosities are related to their pulsation (the period-luminosity relation), which can be used to measure their magnitudes and, therefore, their distances. A Type Ia supernova is a kind of stellar explosion that happens at the end of a star's lifetime (when


they run out of fuel in their core), and they usually occur in white dwarf stars that are part of a binary system. A supernova's luminosity is measured in optical light in the form of light curves [11]. The magnitude is plotted against time, and most supernovae have the same or a similar basic light curve in the B band (a wavelength range of visible light that is part of the photometric system used in astronomy). Although we have tried to standardize other bands of light, the B band works best, as it usually has only one peak luminosity, whereas other bands may not have a clearly defined peak. But since there is a lot of scatter within this band of light, other methods need to be applied to make the measured distances more accurate. The estimate for the Hubble Constant using the standard candles method comes to around 74.0 km/s/Mpc. New methods, like using gravitational lensing [12-13] and pulsars to measure distances, also come up with an estimate closer to the standard candles value. On the other hand, the second method uses the early universe [14], which helps us calculate the age of the universe and how it has evolved and expanded since the Big Bang. A detailed map of the cosmic microwave background radiation (CMB) [15], which is remnant heat from the Big Bang, helps us calculate the constant. This is done by finding temperature fluctuations and patterns and using the standard cosmological model, which assumes a flat, homogeneous, and isotropic universe. Using its estimates for the matter and energy in the universe as well as the temperature patterns from the CMB, the Hubble Constant is calculated at around 67.4 km/s/Mpc. The same number is calculated through "Baryonic Acoustic Oscillations" (BAO) [16], which uses the clustering and distribution of galaxies to study the history of the universe. These patterns work as "standard rulers" (similar to standard candles) [17] for measuring the expansion and age of the universe. Since this method uses the standard model of cosmology [18] as a reference for how the universe began, the value is very much subject to change if the model changes. The difference between these two numbers has sparked one of the biggest controversies [19] in cosmology over what the true rate of expansion is. Although these numbers may not seem far apart, the difference makes a huge impact in figuring out the age of the universe. There is still some speculation concerning the accuracy of the measurements, since there isn't a proper consensus among scientists as to what the uncertainties of these numbers are. Nonetheless, we may even have to change the standard model of cosmology [20], depending on what the true rate is.
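In equation form, the linear relation described above is

$$v = H_0\,d$$

where v is a galaxy's recession velocity in km/s, d is its distance in megaparsecs, and H_0 is the Hubble Constant in km/s/Mpc. Each of the standard-candle methods described next comes down to measuring pairs of v and d for many host galaxies.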

Methodology

For my estimate of the Hubble Constant, I used the standard candles method with the B band light curves of Type Ia supernovae. From the Supernova Catalog [21], I randomly picked 50 Type Ia supernovae and used their data to find the distances to their host galaxies. Using the Supernova Catalog, I was also able to find the peak apparent magnitude for all of the supernovae (through the B band light curves), which I needed for my distance calculation. For the radial velocities of the host galaxies, I obtained the data from Simbad (a database that collects measurements of galaxies from other publications) [22], along with their error bars. Simbad also helped me compare the distances I found with my methodology against the independently verified distances to the host galaxies of these supernovae. Within the standard candles method, I tried three ways of finding the constant [23]: using the Phillips Law equation, finding an average absolute magnitude, and using the luminosity decline rate relation. All of these approaches help account for the small variation among Type Ia supernova light curves in the B band and give the peak absolute magnitude, which is what I needed for the distance calculation [24]. The Phillips Law was discovered by Mark Phillips, an American astronomer who researched all classifications of supernovae. This law [25] describes a relationship between the peak luminosity of a supernova and the rate at which the luminosity declines after the peak (typically over the 15 days following it). Pskovskii, a Soviet astronomer, was the first to propose a decline rate, "β", the mean rate of decline within a hundred days after the peak, which was used to find the peak absolute magnitude of a supernova. But once more supernova light curve data became available, Mark Phillips was able to derive a different, simpler rate and equation to calculate the peak absolute magnitude, Mmax(B):
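$$M_{\max}(B) = -21.726 + 2.698\,\Delta m_{15}(B)$$

(the numerical coefficients here are those reported in Phillips [25])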

where Δm15(B) is the peak apparent magnitude subtracted from the apparent magnitude 15 days after the peak. To calculate the constant, I took the peak absolute magnitudes from the Phillips equation and plugged them into the distance modulus equation:
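$$m - M = 5\log_{10}(d) - 5$$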



where “m” is the apparent magnitude, “M” is the absolute magnitude, and “d” is the distance in parsecs. The equation helped me find the distance in parsecs since I already had the peak apparent magnitudes from the Supernova Catalog’s data. I then converted this to megaparsecs:
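$$d_{\mathrm{Mpc}} = d_{\mathrm{pc}} / 10^{6}$$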

to fit the standard units for the Hubble Constant. The errors for all three methods were calculated using covariance matrices (which capture the uncertainties and correlations in my data) produced by the code I used to find the lines of best fit. As shown in Figure 1, the Hubble Constant using this method came out to be 6.82 ± 6.39 km/s/Mpc, which is off by a factor of about 10. The primary reason it is so far off is that the original Phillips Law equation has become outdated now that newer, more accurate data are available.
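To make the pipeline concrete, the following is a minimal sketch of this first method in Python. It is not the author's actual code (which obtained its errors from the fitting routine's covariance matrices), and the input arrays are placeholders rather than the catalog data:

```python
import numpy as np

# Placeholder inputs (not the article's data): peak apparent B-band magnitudes,
# decline rates dm15(B), and host-galaxy recession velocities in km/s.
m_peak = np.array([14.1, 15.3, 16.0])
dm15   = np.array([1.10, 1.30, 0.95])
v_kms  = np.array([1400.0, 3100.0, 2200.0])

# Phillips relation for the peak absolute magnitude (coefficients from [25]).
M_peak = -21.726 + 2.698 * dm15

# Distance modulus m - M = 5*log10(d_pc) - 5, solved for the distance in parsecs.
d_pc  = 10.0 ** ((m_peak - M_peak + 5.0) / 5.0)
d_mpc = d_pc / 1e6  # convert parsecs to megaparsecs

# Hubble's law v = H0 * d: least-squares slope of a line through the origin.
H0 = np.sum(v_kms * d_mpc) / np.sum(d_mpc ** 2)
print(f"H0 estimate: {H0:.2f} km/s/Mpc")
```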

Figure 1: Hubble diagram using the Phillips equation, with the equation of the line. There is a lot of scatter in the plot.

For my second method, finding an average peak absolute magnitude, I picked 10 of my 50 Type Ia supernovae and solved for their peak absolute magnitudes using already-verified distances and the distance modulus equation. I took the average of these 10 peaks and used that average absolute magnitude for all 50 of my supernovae. I then used the distance modulus equation again to find the distances in parsecs (now with a single peak absolute magnitude) and converted them to megaparsecs. As shown in Figure 2, the estimate of the Hubble Constant using this method came out to around 73.53 ± 23.36 km/s/Mpc, which supports the idea that most Type Ia supernovae have fairly similar magnitudes and luminosities. I did have to do a bit of point clipping: I trimmed the dataset (12 galaxies in total) to keep it consistent with the independently calibrated distances of those host galaxies, removing only points whose distances were off from the calibrated values by a factor of more than 1.5 or less than 0.75.

Figure 2: Hubble diagram found using one average peak absolute magnitude, with the equation of the line. It shows a fairly linear relationship between distance and velocity.

For the third and final method, I used the luminosity decline rate relation [26]. It describes a relationship between brighter and dimmer supernovae: brighter supernovae decline more slowly after the peak than supernovae with dimmer peaks. This relationship is described by a graph, as shown in Figure 3 [27], with the decline rate (the apparent magnitude 15 days after the peak minus the peak apparent magnitude) on the x-axis and the peak absolute magnitude on the y-axis. I used this graph to find the peak absolute magnitudes by matching my rates to the rates present in the plot. This plot is a more accurate version of the equation Mark Phillips found. Next, as in the other methods, I used the distance modulus equation to calculate the distance in parsecs (using the peak absolute and apparent magnitudes) and then converted it to megaparsecs. Using this method, my estimate for the Hubble Constant came to


approximately 52.75 ± 10.05 km/s/Mpc, as depicted in Figure 4. As with the second method, I had to do some point clipping (12 galaxies in total) to keep my dataset consistent with the independently calibrated distances. I also set a condition that removed any points with rates over two, since those fell outside the interpolation range of the graph I used to find the peak absolute magnitudes.

Figure 3: Graph showing the updated version of the Phillips Law. To find the peak absolute magnitudes, I interpolated over the data to find the magnitudes from the rates, instead of using an equation.

Conclusion

Overall, the supernova light curves I used had similar luminosities, which was supported by the second method (finding an average peak absolute magnitude), but the fact that I had to do some point clipping to get a more reasonable data set (and distances) shows that Type Ia supernovae measured in the B band still contain a lot of scatter. If we compare the Phillips Law equation method and the luminosity decline rate relation method, we can see that even though they describe the same relationship, the luminosity decline rate relation graph was more accurate. Phillips Law describes a linear relationship between the rate and the peak absolute magnitude, when in reality the graph should be a curve, reflecting that dimmer supernovae decline faster from the peak than brighter supernovae. Because of the excessive scatter within this standard candles method, other standard candles like Cepheid variables and the TRGB (tip of the red giant branch) [28] need to be used to verify the distances found with Type Ia supernovae. Since there is a lot more variation among light curves within the B band, researchers are trying to use different bands of light, like the near-infrared J band [29]. These bands may not have one clearly defined peak, but they have less variation and scatter between supernovae. With more accurate data, we can hope to standardize Type Ia supernovae better, or remove them from our list of standard candles entirely if the similarity between light curves does not hold up. It is still pretty impressive that two completely different methods of measuring the Hubble Constant, starting from opposite ends of cosmic history, come up with numbers that are fairly close to each other. But if we want to solve this conundrum and figure out the history and future of the universe, we'll need to come to a consensus on what the true rate of expansion is. Hopefully, the new methods being developed will tell us the true value of this number.

Figure 4: Hubble diagram found using the luminosity decline rate relation, with the equation of the line. Like Figure 2, it shows a fairly linear relationship between distance and velocity, and it uses a more updated version of the Phillips equation.


References

[1] December 2019, Adam Mann-Live Science Contributor 03. “What Is the Hubble Constant?” Livescience.Com, https://www.livescience. com/hubble-constant.html. Accessed 29 Aug. 2020. [2] Reich, Eugenie Samuel. “Edwin Hubble in Translation Trouble.” Nature, June 2011, p. news.2011.385. DOI.org (Crossref), doi:10.1038/ news.2011.385. [3] August 2019, Adam Mann-Live Science Contributor 21. “What Is Dark Energy?” Livescience.Com, https://www.livescience.com/what-isdark-energy.html. Accessed 29 Aug. 2020. [4] Becker, Adam. How Will the Universe End, and Could Anything Survive? https://www.bbc.com/earth/story/20150602-how-will-the-universe-end. Accessed 29 Aug. 2020. [5] April 2019, Adam Mann-Live Science Contributor 24. “How Will the Universe End?” Livescience.Com, https://www.livescience. com/65299-how-will-the-universe-end.html. Accessed 29 Aug. 2020. [6] March 2018, Elizabeth Howell 16. “What Are Redshift and Blueshift?” Space.Com, https://www.space.com/25732-redshift-blueshift.html. Accessed 29 Aug. 2020. [7] “Astronomers Win Nobel Prize in Physics for Discovering the Accelerating Expansion of the Universe.” Symmetry Magazine, https://www.symmetrymagazine.org/breaking/2011/10/04/astronomers-win-nobel-prize-in-physics-for-discovering-the-accelerating-expansion-of-the-universe. Accessed 3 Oct. 2020. [8] Pruzhinskaya, Maria Victorovna, and Sergey Mikhailovich Lisakov. “How Supernovae Became the Basis of Observational Cosmology.” ArXiv:1608.04192 [Astro-Ph, Physics:Physics], Aug. 2016. arXiv.org, http:// arxiv.org/abs/1608.04192. [9] Andrea Thompson 09 February 2018. “What Is a Supernova?” Space.Com, https://www.space.com/6638-supernova.html. Accessed 29 Aug. 2020.


[10] Williams, Matt. “What Are Cepheid Variables?” Universe Today, 5 Oct. 2016, https://www.universetoday.com/40468/what-are-cepheid-variables/. [11] Type Ia Supernova Light Curves | COSMOS. https://astronomy. swin.edu.au/cosmos/T/Type+Ia+Supernova+Light+Curves. Accessed 29 Aug. 2020. [12] Steven, K. Blau. A Gravitational-Lensing Measurement of the Hubble Constant. Feb. 2017. world, physicstoday.scitation.org, doi:10.1063/PT.5.7346. [13] Cremonese, Paolo, and Vincenzo Salzano. “High Accuracy on H 0 Constraints from Gravitational Wave Lensing Events.” Physics of the Dark Universe, vol. 28, May 2020, p. 100517. DOI.org (Crossref), doi:10.1016/j. dark.2020.100517. [14] Ivanov, Mikhail M., et al. “H0 Tension or T0 Tension?” ArXiv:2005.10656 [Astro-Ph], July 2020. arXiv.org, http://arxiv.org/ abs/2005.10656. [15] The Cosmic Microwave Background. http://cosmology.berkeley.edu/Education/CosmologyEssays/The_Cosmic_Microwave_Background.html. Accessed 29 Aug. 2020. [16] Baryonic Acoustic Oscillations | COSMOS. https://astronomy. swin.edu.au/cosmos/B/Baryonic+Acoustic+Oscillations. Accessed 29 Aug. 2020. [17] Panek, Richard. “How a Dispute over a Single Number Became a Cosmological Crisis.” Scientific American, doi:10.1038/scientificamerican0320-30. Accessed 29 Aug. 2020. [18] Riess, Adam G., et al. “Large Magellanic Cloud Cepheid Standards Provide a 1% Foundation for the Determination of the Hubble Constant and Stronger Evidence for Physics beyond ΛCDM.” The Astrophysical Journal, vol. 876, no. 1, May 2019, p. 85. DOI.org (Crossref), doi:10.3847/1538-4357/ab1422. [19] Scientists Debate the Seriousness of Problems with the Value of the Hubble Constant. https://phys.org/news/2019-07-scientists-debate-seriousness-problems-hubble.html. Accessed 29 Aug. 2020. [20] June 2020, Chelsea Gohd 14. “‘Standard Model’ of Cosmology Called into Question by New Measurements.” Space.Com, https://www. space.com/universe-standard-model-hubble-constant-new-measurements.html. Accessed 29 Aug. 2020. [21] Guillochon, James, et al. “An Open Catalog for Supernova Data.” The Astrophysical Journal, vol. 835, no. 1, Jan. 2017, p. 64. DOI.org (Crossref), doi:10.3847/1538-4357/835/1/64. [22] Wenger, M., et al. “The SIMBAD Astronomical Database: The CDS Reference Database for Astronomical Objects.” Astronomy and Astrophysics Supplement Series, vol. 143, no. 1, Apr. 2000, pp. 9–22. DOI.org (Crossref), doi:10.1051/aas:2000332. [23] Loizides, Fernando, and Birgit Schmidt, editors. Positioning and Power in Academic Publishing: Players, Agents and Agendas: Proceedings of the 20th International Conference on Electronic Publishing. Ios Press, Inc, 2016. [24] Distance Modulus | COSMOS. https://astronomy.swin.edu.au/ cosmos/d/Distance+Modulus. Accessed 29 Aug. 2020. [25] Phillips, M. M. “The Absolute Magnitudes of Type IA Supernovae.” The Astrophysical Journal, vol. 413, Aug. 1993, p. L105. DOI.org (Crossref), doi:10.1086/186970. [26] Luminosity-Decline Rate Relation | COSMOS. https://astronomy.swin.edu.au/cosmos/L/Luminosity-Decline+Rate+Relation. Accessed 29 Aug. 2020. [27] WebPlotDigitizer - Extract Data from Plots, Images, and Maps. https://automeris.io/WebPlotDigitizer/. Accessed 29 Aug. 2020. [28] McQuinn, Kristen. B. W., et al. “Using the Tip of the Red Giant Branch As a Distance Indicator in the Near Infrared.” The Astrophysical Journal, vol. 880, no. 1, July 2019, p. 63. DOI.org (Crossref), doi:10.3847/1538-4357/ab2627. [29] Dhawan, Suhail, et al. 
“Measuring the Hubble Constant with Type Ia Supernovae as Near-Infrared Standard Candles.” Astronomy & Astrophysics, vol. 609, Jan. 2018, p. A72. DOI.org (Crossref), doi:10.1051/00046361/201731501.


Quantum Cryptography: Is it perfectly secure?

by Christopher Taejoon Um | art by Parastou Ardebilianfard

Will there ever come a world where information can become perfectly secure? In today's world, the importance of cybersecurity is rapidly increasing as securing and hacking information becomes critical due to the rise of Artificial Intelligence and the Internet of Things (IoT). Cyberwarfare is growing in size and scale, and many countries are preparing for it [1]. With the emergence of quantum computing, quantum cryptography has been one of its most groundbreaking applications, leading to various algorithms and protocols starting from the early 1980s. Adversarial quantum algorithms began to emerge against the current-best public key cryptography scheme, RSA (Rivest-Shamir-Adleman), which relies on the fact that finding the factors of a large number is computationally infeasible whereas multiplying such factors is easy; this led to a quantum factorization algorithm, Shor's algorithm, in 1994 [2]. But there is no need to worry yet, as quantum computers are still in development and many corresponding quantum cryptographic protocols have already been invented to prevent such adversarial schemes. Nevertheless, in 2019 Google published a paper claiming to have realized the world's

first quantum computer to demonstrate quantum supremacy: Sycamore, which ran on 54 qubits and appeared to outperform IBM's supercomputer Summit [3]. The computer operated at a very low temperature (15 milli-Kelvin), and its 54 qubits can represent 2^54 states in Hilbert space. China is coming right after the U.S. in advancements, having built the world's first quantum satellite, Micius, which was able to securely distribute quantum keys to different cities in China (up to 7,600 km apart!) [4]. The European Commission is also preparing to launch its ten-year flagship program to join the quantum revolution [5]. However, quantum cryptography cannot fully replace public key cryptography, as certain schemes still remain unrealizable in the quantum environment. In this article, we overview the basic principles of quantum cryptography by exploring Quantum Key Distribution and post-quantum cryptography, and briefly go over current applications of quantum cryptography by diving into ID Quantique, a Swiss quantum cryptography company, and its business units. One of the major breakthroughs in quantum cryptography was achieving information-theoretic security through a method called Quantum Key


Distribution (QKD), in which it is physically impossible to ever eavesdrop on the system's information [6]. To understand this a bit better, we introduce three characters: Alice, Bob, and Eve. Alice and Bob are trying to communicate through a quantum channel, while Eve is the eavesdropper. In public key cryptography, Alice and Bob announce their public keys. Alice then encrypts her message with Bob's public key and sends it to Bob through a public channel. Bob is the only person who can decrypt the message, using his private key. Therefore, any eavesdropper has to use brute force to find the private key, as the announced encryption key is of no use. However, the eavesdropper can still impersonate the party announcing the public key by intervening in the public channel. In QKD, this sort of attack, called "spoofing," is physically impossible due to the no-cloning theorem, which says that no unknown quantum state can be copied without disturbing the system. This means that Eve cannot eavesdrop on the information without being detected by Alice and Bob. On top of that, life-long security is obtainable once Bob "measures" the information, because his measurement will collapse the state and the information will be forever gone. The BB84 protocol is a well-known QKD scheme, still referenced today for its use of probabilistic randomness to improve security, and it was a significant leap [7]. Here, Alice uses two random numbers and Bob uses one random number to secure Alice's message. Since states prepared in the X and Z bases are not mutually orthogonal, it is impossible for Eve to measure

one without disturbing the other. A measurement physically observes the information, which collapses the superposition of many quantum states into one classical state. Therefore, the states cannot be distinguished, and Eve has no guarantee of measuring a state perfectly. This is significant because Eve now has to guess, with only 50% odds of being correct for every encrypted qubit, or else Bob and Alice will know there was a wrong-basis measurement. As in Step 6 of Figure 1, if enough error (wrong-basis measurements) is introduced beyond the typical noise of the environment, Bob and Alice will abort and start the protocol again. Eve cannot avoid introducing disturbance to the system unless she guesses every basis Alice used correctly, which becomes effectively impossible as the key gets longer (a toy simulation of this basis-sifting step appears below, after the B92 discussion). B92 is another QKD scheme that tries to achieve security through randomness, but it reduces the number of states used for measurement down to two [8]. It keeps only the 45 and 90 degree polarizations while getting rid of 135 and 180 degrees in the polarization step. Because of this, the random classical bit that Alice prepares determines the classical bit that Bob prepares; however, the keys still remain secure from Eve, since the states are still non-orthogonal.
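To make the sifting step concrete, here is a purely classical toy simulation of the bookkeeping described above. It does not model real qubits, channel noise, or an eavesdropper; the key length and variable names are illustrative only:

```python
import secrets

def random_bits(n):
    # n independent, uniformly random bits
    return [secrets.randbelow(2) for _ in range(n)]

n = 32
alice_bits  = random_bits(n)   # raw key bits Alice encodes
alice_bases = random_bits(n)   # 0 = Z basis, 1 = X basis (Alice's random choice)
bob_bases   = random_bits(n)   # Bob's independent random measurement bases

# With no eavesdropper and no noise, Bob recovers Alice's bit exactly when his
# basis matches hers; mismatched positions are discarded ("sifting").
sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

print(f"kept {len(sifted_key)} of {n} bits after sifting")
# In the real protocol, Alice and Bob would then publicly compare a random subset
# of the sifted bits; an error rate above the expected channel noise signals Eve.
```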


In the EPR protocol, we introduce a new concept called quantum entanglement [9]. Quantum entanglement is a physical phenomenon that occurs when two particles are correlated in their information, so entangled states cannot be represented independently of each other. For example, when we split a photon, we get two particles whose spin numbers add up to the original spin of the photon. Therefore, if we measure one of the two states, the other state collapses as well. The importance of EPR pairs is that pure entangled quantum states always violate Bell's Inequality, which means no decoherence has been introduced. Therefore, if Bell's Inequality is not violated for certain EPR pairs, we can conclude that an eavesdropper or environmental noise caused decoherence; according to the Holevo bound, there is also a limit on the amount of information Eve can extract (in terms of the number of EPR pairs) before she gets detected. In the key distribution step, the difference compared to BB84 and B92 is that the secret key is not pre-determined by either Alice or Bob. This is because, whichever basis Alice or Bob chooses to measure, a bit of the secret key is only kept when it matches the other person's result, which is determined by the basis measured by the opposing person. Post-quantum cryptography is the study of encryption schemes that can withstand such quantum adversarial algorithms in the near future. Symmetric key cryptography, which keeps even the encryption key secret, is harder for quantum algorithms to break, as its security is not based on the same computationally hard math problems. Public key cryptography (asymmetric key cryptography) is broken by the infamous Shor's algorithm, developed in 1994, a factorization algorithm able to break RSA in polynomial time. Due to such theoretical advances in quantum adversarial attacks, NIST (the National Institute of Standards and Technology) has been requesting proposals from around the world to increase the quantum resistance of asymmetric key schemes. As of January 2019, it had received twenty-six proposals [10]. Lattice-based cryptography has been drawing attention for the past several decades as it is believed to be

computationally hard even for a quantum algorithm. This is because SVP, the shortest vector problem, a lattice-based NP-hard problem, still cannot be solved in polynomial time even on a quantum computer. The problem is particularly hard because finding the minimal-norm vector on a lattice takes time exponential in the size (bit length) of the lattice basis; a lattice basis is a compact representation of the entire lattice, since representing the lattice itself would take far too much memory. Hash-based cryptography is a low-level scheme that uses digital signatures to sign a message that has usually been reduced in size by a hash function, which means that as long as the hash function stays secure, the whole scheme remains secure. The one-time signature (OTS) scheme suggested by Lamport assigns a key pair to every message and uses it only once; because the private key is generated from random bits that are partially revealed with each signature, it becomes guessable if used a second time (sketched below). Therefore, Merkle proposed using a binary tree in which many such key pairs are run through a hash function and placed in the leaves of the tree, from which a single hashed public key is eventually produced, without the private keys being exposed. However, the signature size of this method was still too large to run efficiently on computers.
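As an illustration of the Lamport idea, here is a compact sketch using SHA-256. The parameter choices and helper names are illustrative only and are not taken from the article or from any particular standard:

```python
import hashlib
import secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen(bits: int = 256):
    # Private key: two random 32-byte secrets per digest bit.
    # Public key: the hashes of those secrets.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(sha256(a), sha256(b)) for a, b in sk]
    return sk, pk

def digest_bits(message: bytes, n: int):
    d = sha256(message)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(n)]

def sign(message: bytes, sk):
    # Reveal one secret per bit of the message digest -- safe to do only once.
    return [sk[i][bit] for i, bit in enumerate(digest_bits(message, len(sk)))]

def verify(message: bytes, sig, pk) -> bool:
    bits = digest_bits(message, len(pk))
    return all(sha256(s) == pk[i][bits[i]] for i, s in enumerate(sig))

sk, pk = keygen()
sig = sign(b"post-quantum hello", sk)
print(verify(b"post-quantum hello", sig, pk))  # True
print(verify(b"tampered message", sig, pk))    # False (with overwhelming probability)
```

Reusing the same private key for a second message would reveal more of the secrets and make forgery progressively easier, which is exactly the problem Merkle trees and schemes like XMSS are designed to manage.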



Breakthroughs have been made, such as XMSS (the eXtended Merkle Signature Scheme), which uses a slightly different structure than the standard Merkle tree (child nodes are masked, and the root nodes of L-trees are placed on the leaves instead of OTS hash outputs), but there are still weaknesses [11]. Signatures can still be faked using an old secret key, since a complete OTS does not exist. The fact that we have to keep secret keys secret, even though they will never be used again, is a critical security threat. Although widespread quantum computing technology still seems far from being realized, ID Quantique has already made large progress on quantum random number generators, quantum-safe cryptography schemes, quantum sensing hardware, and much more. In 2020, Samsung announced that it had created the world's first quantum-secured smartphone, the Galaxy A Quantum [12]. Inside the QRNG chip made by ID Quantique, an LED emits a random number of photons onto a CMOS image sensor, which feeds the number of photons received into an ordinary random number generator algorithm, increasing the true randomness of its seeds. Quantum satellites in China have also shown experimental progress on QKD transmission of keys up to 7,600 km apart and are preparing to do much more. QKD systems developed in China are now actually in use at several major banks for security, and the US, Europe, and Japan are catching up, having started to establish their own QKD networks. Although it is still extremely difficult to maintain a near-perfect physical environment that minimizes noise, QKD is widely expected to change the world in the future. We have nothing to lose by being informed of new developments in advance. Technology does not improve automatically; it is the people who wonder about and study these possible future inventions who can change the world and enhance our future.


REFERENCES

[1] “Preparing for the Next Era of Computing With QuantumSafe Cryptography.” Security Intelligence, 26 Oct. 2016, https:// securityintelligence.com/preparing-next-era-computing-quantum-safecryptography/. [2]Shor, Peter W. “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer.” SIAM Journal on Computing, vol. 26, no. 5, Oct. 1997, pp. 1484–509. arXiv.org, doi:10.1137/S0097539795293172. [3]Arute, Frank, et al. “Quantum Supremacy Using a Programmable Superconducting Processor.” Nature, vol. 574, no. 7779, Oct. 2019, pp. 505–10. www.nature.com, doi:10.1038/s41586-019-1666-5. [4]Liao, Sheng-Kai, et al. “Satellite-Relayed Intercontinental Quantum Network.” Physical Review Letters, vol. 120, no. 3, Jan. 2018, p. 030501. arXiv.org, doi:10.1103/PhysRevLett.120.030501. [5]Acín, Antonio, et al. “The European Quantum Technologies Roadmap.” New Journal of Physics, vol. 20, no. 8, Aug. 2018, p. 080201. arXiv.org, doi:10.1088/1367-2630/aad1ea. [6]Tamaki, Kiyoshi, et al. “Information Theoretic Security of Quantum Key Distribution Overcoming the Repeaterless Secret Key Capacity Bound.” ArXiv:1805.05511 [Quant-Ph], Sept. 2018. arXiv.org, http://arxiv.org/abs/1805.05511. [7]Bennett, Charles H., and Gilles Brassard. “Quantum Cryptography: Public Key Distribution and Coin Tossing.” Theoretical Computer Science, vol. 560, Dec. 2014, pp. 7–11. arXiv.org, doi:10.1016/j.tcs.2014.05.025. [8]Tamaki, Kiyoshi, et al. “Security of the Bennett 1992 Quantum-Key Distribution against Individual Attack over a Realistic Channel.” Physical Review A, vol. 67, no. 3, Mar. 2003, p. 032310. arXiv.org, doi:10.1103/ PhysRevA.67.032310. [9]Ardehali, M. “A Quantum Bit Commitment Protocol Based on EPR States.” ArXiv:Quant-Ph/9505019, June 1996. arXiv.org, http://arxiv. org/abs/quant-ph/9505019. [10]Computer Security Division, Information Technology Laboratory. “Post-Quantum Cryptography | CSRC | CSRC.” CSRC | NIST, 3 Jan. 2017, https://content.csrc.e1c.nist.gov/projects/post-quantum-cryptography. [11]Buchmann, Johannes, et al. “XMSS - A Practical Forward Secure Signature Scheme Based on Minimal Security Assumptions.” PostQuantum Cryptography, edited by Bo-Yin Yang, Springer, 2011, pp. 117– 29. Springer Link, doi:10.1007/978-3-642-25405-5_8. [12]“ID Quantique & SK Telecom Announce First QRNG 5G Smartphone.” ID Quantique, 13 May 2020, https://www.idquantique. com/id-quantique-and-sk-telecom-announce-the-worlds-first-5gsmartphone-equipped-with-a-quantum-random-number-generator-qrngchipset/.


Using Nanodrugs to Break Through the Blood-Brain Barrier

by Amy Oh | art by Julia Liu

There are numerous barriers that prevent current drugs from being able to treat severe neurological disorders, and the Blood Brain Barrier (BBB) is one of them. This highly selective barrier between the brain’s capillaries and the brain tissues prevents the entry of not only pathogens but also drugs for treating neurological disorders such as brain cancer and Alzheimer’s. By using nanotechnology, these drugs can be successfully transported into desired locations of the brain.

Structure of the blood-brain barrier

The BBB (Figure 1) is mainly composed of the endothelial cells lining the blood vessels which form tight junctions between them [1]. This can make it difficult for molecules to pass from the blood to the brain tissue.

Figure 1: Diagram of a capillary cross-section of the BBB [2]

Nanocarriers in drug delivery

The biological barriers faced by drug molecules in the gastrointestinal tract include the length of the segment, pH, mucus thickness, the time the drug remains in circulation, enzymes, and bacteria [3]. These barriers interact with the drug molecules and affect their integrity and absorption. Drugs are limited not only by barriers in the gut, but also by hepatic barriers. Finally, there are fluctuations in drug concentration due to frequent doses and rapid release. These fluctuations can cause toxicity (when the concentration is too high) or ineffectiveness (when the concentration is too low). This means that there is a need for controlled release or sustained delivery. To optimise their efficiency, nanocarriers (nanoparticles that act as transport modules for drugs) must be engineered to:
1. Only target specific cells, and not affect normal, healthy cells.
2. Reduce immunogenicity (the body's immune response to a non-self substance) [4].
3. Remain in circulation for a long period of time.
4. Cross the five main barriers from mouth to brain (hydrochloric acid in the stomach, the highly selective small intestine walls, enzymes in the liver, proteins and enzymes in the bloodstream, and the Blood-Brain Barrier) without being degraded [5].

Figure 2: Diagram illustrating that, in order to achieve the above aims, there are mainly four different features of a nanoparticle which can be engineered: size, shape, surface, and material [6].

Size and shape

The NNI (National Nanotechnology Initiative) defines

nanoparticles as being 1 to 100 nm in size in at least one dimension. Nanoparticles must be small enough to "migrate" across the capillary wall without being engulfed by immune cells in the reticuloendothelial system or filtered by the lungs, spleen, and liver. Particles larger than 200 nm are likely to be filtered in the spleen, whereas nanoparticles under 100 nm remain in the blood. If the desired aim is prolonged time in circulation, the particle should have the "critical radius" (150 nm), which requires the greatest time to reach the endothelium walls. Also, particles smaller or larger than 100 nm were found to drift more easily towards the edges of the bloodstream, which makes it easier for them to be taken up by cells [7]. Therefore, the optimum size of a nanocarrier should be around 150 nm in radius, but this could change depending on its destination and surface. The geometry of nanoparticles affects the interactions between cells and particles, and hence their transport characteristics: differently shaped particles are influenced differently by haemodynamic (blood flow) forces, electrostatic forces, and buoyancy. For example, spherical particles experience no lateral drift

and therefore tend to travel in the center of the bloodstream. However, non-spherical particles such as cuboids or ellipsoids experience more drag and move towards the capillary walls by rotating and tumbling due to unequal moments produced by haemodynamic forces. In addition, Discher, Professor of Chemical and Biomolecular Engineering at the University of Pennsylvania, and his team found that particles with very high aspect ratios (width:height) remained in blood ten times longer than spherical particles. Hence, changing the shape of the nanoparticle adjusts the forces which affect it, allowing researchers to take control of its movement in the blood [7].

Surface

There are two ways in which nanoparticles can target diseased cells or tissues: active and passive targeting. Both methods, as illustrated in Figure 3, depend on the surface of the nanocarrier. Active targeting nanoparticles have ligands on their surface, such as antibodies or peptides, which can bind to specific receptors on tissues [8]. On the other hand, passive targeting nanocarriers do not have ligands on their surfaces to bind to receptors, so they move towards desired locations using affinity and binding cues such as pH, temperature, and the biology of the cells, including the tumor's vasculature or leakiness. Active targeting is much more effective, as it reduces side effects by not affecting healthy tissue and increases the amount of drug delivered.

Figure 3: Diagram showing mechanisms of passive and active targeting [8]

Furthermore, there are a variety of substances used to coat the surface of nanocarriers to prevent them from being engulfed by the body's immune cells. The most common surface coating of nanocarriers is poly(ethylene glycol) (PEG), a hydrophilic polymer that creates a hydration layer to reduce interactions with plasma proteins and suppress RES (reticuloendothelial system) uptake. However, this is not a perfect coating material, as recent studies have found that with subsequent doses of PEGylated nanoparticles, the liver quickly clears the nanoparticles due to a phenomenon called "accelerated blood clearance" [9]. Some nanocarriers are instead disguised in a cell membrane, maintaining the physical and chemical properties of the nanocarrier while gaining the biological properties of natural cells, like a Trojan horse. The advantage of cell membrane coating is that the membrane expresses specific markers like those of host cells, which prevents the nanocarriers from being engulfed by macrophages and leads to prolonged time in circulation.

Material

When engineering nanoparticles, the material from which they are made should be suitable for the function and the diseases the drug is treating. In particular, it should be tested for success in crossing the BBB. The two diseases that will be discussed in this article are brain cancer and Alzheimer's disease.

1. Nanotechnology in Cancer Treatment

Cancer – the uncontrolled division of cells – is the second leading cause of death, following cardiovascular diseases. The predominant problem with current chemotherapy is that it targets any rapidly dividing healthy cells, such as hair follicle and stomach cells, causing numerous side effects including nausea, hair loss, and fatigue. In the context of the blood-brain barrier, glioblastoma is the most common and aggressive type of brain cancer in adults. The current treatment for glioblastomas is temozolomide, a chemotherapy drug in the form of a pill. However, this drug is ineffective for 60-70% of patients, and 15-20% of patients develop clinically significant toxicity [11] due to the inevitable


fluctuations in drug concentrations. Nanoparticles can open new doors for cancer treatments with fewer side effects on healthy tissues and organs because of their specific targeting nature. The first example of a nanocarrier material used to deliver chemotherapy drugs is liposomes. Liposomes are spherical vesicles with a lipid bilayer; they have a hydrophobic core but a hydrophilic surface, so insoluble drugs can be transported. They are coated with PEG to increase circulation time, as well as with a glucose-vitamin C complex, which increases the amount of liposomes delivered to the targeted site [12]. Another example is dendrimers, which are highly branched macromolecules that include a core and branches ending in functional groups for specific targeting. Dendrimers conjugated with glioma-homing peptides showed better uptake by glioma cells, enhanced penetration into the brain, and improved localisation of the tumor [13]. Chemotherapy drugs carried in dendrimers could potentially improve the survival rate of glioblastoma patients. Finally, one of the most recently developed nanocarrier materials is the carbon nanotube. Carbon nanotubes are cylindrical molecules consisting of rolled-up sheets of graphene, and they can be either single-walled (with diameter < 1 nm) or multi-walled [14]. Due to their hollow hydrophobic interior and large surface area for coating with targeting chemicals, they have been tested successfully, both in vitro and in vivo, for increased uptake of insoluble drugs by tumor cells and for penetration of the BBB [15]. However, there is currently more focus on the use of carbon nanotubes in photothermal therapy. Photothermal therapy (PTT) is a minimally invasive treatment that kills cancer cells using the heat energy released from a photothermal agent after it has been stimulated by a specific band of light. Carbon nanotubes are able to absorb infrared light provided by a laser and convert it into heat energy efficiently. This means they can be used not only to deliver chemotherapy drugs, but also to reduce tumor size. When modified with targeting peptides, carbon nanotubes can kill cancer cells with increased specificity [16]. Yang and his team, who researched nanomaterials in cancer therapy at the Medical School of Nanjing University, tested this on mice in vivo; the results showed 100% elimination of the tumor, survival over 100 days without a single death, and no obvious side effects [17]. Thus, various materials with hydrophobic cores and modified surfaces are used as nanocarriers for

chemotherapy drugs, but also as photothermal agents to kill cancer cells using heat energy.

2. Nanotechnology in Alzheimer’s Treatment

Nanotechnology is also used for the treatment of Alzheimer's disease (AD). AD is the most common neurodegenerative disease and is associated with the buildup of a protein called beta-amyloid. It is a devastating disease for patients and their families, with common symptoms including progressive memory loss, mood changes, cognitive deficits such as impaired judgment and decision making, and language disturbances [18]. Currently, there are only treatments that improve symptoms and delay the progression of AD. Nanocarriers play a role in the transport of these drugs, but more importantly, nanotechnology is being used to develop a neuroprotective approach through which the progression of AD could be stopped. One of the main neurochemical features of AD is the degradation of the neurotransmitter acetylcholine by the enzyme cholinesterase. Current drugs contain inhibitors of this enzyme but are challenged by the BBB and therefore require high doses, which lead to side effects [19]. In addition, there are various nanosystems, such as diamondoid derivatives, which protect neurons from beta-amyloid toxicity and the oxidative stress of free radicals. Diamondoid derivatives have been FDA approved and are already in commercial use for slowing down the progression of moderate to severe Alzheimer's disease. These nanoparticles block excessive activity of a glutamate (an excitatory neurotransmitter) receptor which causes neuronal injury or death [19]. By reducing the overactivity that causes cell death, the progression of the neurodegenerative disease can be slowed down. Nanotechnology is a new but growing field showing huge potential for treating two of the most feared and common neurological disorders: brain cancer and Alzheimer's disease. The problems faced by current drug delivery systems include numerous side effects due to non-specific targeting, toxicity caused by short circulation times and quick release, an inability to cross the BBB, and degradation by various biological barriers. These challenges can be overcome by nanocarriers, which are engineered carefully in all aspects, such as size, shape, surface, and material, in order to suit their function. Whether they are delivering drugs more effectively to the brain or being used in new treatments, nanocarriers are integral to the future of pharmacology.


At each stage of the design, important questions such as "How will it target the cells?", "How can the circulation time be lengthened?", and "How can the concentration of drugs be increased?" are considered in order to improve the effectiveness of drugs. Nanoparticles may be minuscule in size, but they have huge potential to impact future medicine.

References

[1]Richard Daneman and Alexandre Prat (2015) The Blood- Brain Barrier https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4292164/ [2]Leigh Hopper (July 2019) USC News: Healthy blood vessels may be the answer to Alzheimer’s prevention https://news.usc.edu/158925/ alzheimers-prevention-healthy-blood-vessels-usc-research/ [3]Bahaman Homayun, Xueting Lin and Hyo-Jick Choi (March 2019) Challenges and Recent Progress in Oral Drug Delivery Systems for Biopharmaceuticals https://www.mdpi.com/1999-4923/11/3/129 [4]Jayanta Kumar Patra et al. (September 2018) Nano based drug delivery systems: recent developments and future prospects https:// jnanobiotechnology.biomedcentral.com/articles/10.1186/s12951-018-0392-8 [5]Taylor Mabe (July 2015) TEDxGreensboro: Nanoscience and drug delivery-- small particles for big problems https://www.youtube.com/ watch?v=0wFwXUhHu5c [6]Sonali Pardhiya, R. Paulraj (2016) Nanotechnology in Drug Delivery https://www.semanticscholar.org/paper/ Nanotechnology-in-Drug-Delivery-21-2-Role-of-in-Pardhiya-Paulraj/ c278654a6669297a939f355d2fbc0958325c5e39 [7]Mary Caldorera-Moore et al. (April 2010) Designer nanoparticles: Incorporating size, shape, and triggered release into nanoscale drug carriers https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2845970/ [8]Mohamed F> Attia et al. (May 2019) An overview of active and passive targeting strategies to improve the nanocarriers efficiency to tumour sites https://onlinelibrary.wiley.com/doi/full/10.1111/jphp.13098 [9]Yao Liu et al. (November 2019) Cell Membrane Coating Technology: A promising strategy for Biomedical applications https://link.springer. com/article/10.1007/s40820-019-0330-9 [10]The Brain Tumour charity (n/a) Glioblastoma prognosis https:// www.thebraintumourcharity.org/brain-tumour-diagnosis-treatment/ types-of-brain-tumour-adult/glioblastoma/glioblastoma-prognosis/ [11]Chamberlain MC (October 2010) Temozolomide: therapeutic limitations in the treatment of adult high- grade gliomas https://www. ncbi.nlm.nih.gov/pubmed/20925470 [12] Syed Rizvi and Ayman Saleh (January 2018) Applications of nanoparticle systems in drug delivery technology https://www.ncbi.nlm. nih.gov/pmc/articles/PMC5783816/ [13]Gao H et al. (December 2013) Glioma- homing peptide with a cellpenetrating effect for targeting delivery with enhanced glioma localization, penetration and suppression of glioma growth https://www.ncbi.nlm.nih. gov/pubmed/24120853 [14]Michael Berger (n/a) Carbon nanotubes- what they are, how they are made, what they are used for https://www.nanowerk.com/ nanotechnology/introduction/introduction_to_nanotechnology_22.php [15]Wang S et al. (October 2017) Augmented glioma- targeted theranostics using multifunctional polymer- coated carbon nanodots https://www.ncbi.nlm.nih.gov/pubmed/28666100 [16]Zhizhou Yang et al. (July 2019) Advances in nanomaterials for use in photothermal and photodynamic therapeutics https://www.ncbi.nlm. nih.gov/pmc/articles/PMC6579972/#!po=28.9474 [17]Yang K et al. (March 2012) The influence of surface chemistry and size of nanoscale graphene oxide on photothermal therapy of cancer using

ultra-low laser power. https://www.ncbi.nlm.nih.gov/pubmed/22169821 [18]Yiannopoulou K and Papageorgiou S (January 2013) Current and future treatments for Alzheimer’s disease https://www.ncbi.nlm.nih.gov/ pmc/articles/PMC3526946/ [19]Amir Nazem and G. Mansoori (October 2011) Nanotechnology for Alzheimer’s disease detection and treatment https://www.researchgate. net/publication/215729793_Nanotechnology_for_Alzheimer’s_disease_ detection_and_treatment [20]Wilson B et al. (January 2008) Poly(n-butylcyanoacrylate) nanoparticles coated with polysorbate 80 for the targeted delivery of rivastigmine into the brain to treat Alzheimer’s disease. https://www.ncbi. nlm.nih.gov/m/pubmed/18291351



WOMEN IN SCIENCE: DRAW, LOSE, OR WIN

BY HANA A. CHANG

Everyone remembers the school science fair. There were papier-mâché volcanoes, one kid brought his guinea pigs, and all the kindergarteners did the exact same project. At the elementary school science fair a few years ago, my sister did a social experiment, and I helped her gather data. This experiment was called the "Draw-A-Scientist-Test."

She got the idea from an article our mom showed her. The original "Draw-A-Scientist-Test" started in 1966, when a sociologist named David Chambers asked over 4,000 children to draw a scientist. Only 28 of the kids drew women, and all of them were girls [1]. By the eighties, the number of female scientist drawings had already increased to about 30%, and the media today has so much to say on feminism, you'd expect that some of it has sunk in by now. One would think that we'd know better today, right? Yet this percentage has remained unchanged for the last 30 years. I didn't believe it when I first heard it. When my sister asked for help getting responses to her own test, I was so confident that I didn't take it seriously. I thought that surely, in the middle of Silicon Valley, one of the STEM capitals of the world, I could get drawings that proved people thought women could be scientists. I thought I'd just show up at my high school, get a couple of sketches, and prove that we live in an open-minded community. That confidence quickly drained when the drawings I got didn't live up to my expectations. At first I had only asked a few people, so I was sure I had gotten a biased sample. I asked more students to participate, and ended up getting sixteen drawings instead of the two or three that I had planned. Although it wasn't anything near conclusive evidence for anything, I was still disappointed when all of my interviewees, when asked to draw a scientist, drew a man: some with crazy white Albert Einstein-style hair, others wearing white lab coats, and all with funny-shaped chemistry equipment. Even though some people only spent three seconds scrawling out stick figures, while others spent five minutes on detailed portraits, it didn't change the fact that they were all drawing men.

Figure 1: A female high school student's drawing of a scientist.


ART BY LILIAN KONG

With all the new media formats and equality movements we've created, our biases and stereotypes about gender haven't changed at all. In order to really change, we should know how things got this way. Why do so many students draw male scientists? My hypothesis is that people visualize men when they hear the word "scientist," and that's why so many of them drew men. It isn't that people think women can't be scientists; they just don't associate them with science. That means we should find a way to include women in science. If we can find a way to educate people so they can visualize a female scientist, they might also start drawing them in the test. That's also what my quick "Draw-A-Scientist-Test" suggested. When I asked my classmates to describe the scientists they drew, many of them gave answers like (and I quote) "holding a science thing," "measuring stuff in a bottle," and "he's got a couple of… what do you call these? Vials? Yeah, he's using a vial." I was even more disappointed that my sister, who had surveyed her 5th grade classmates, had gotten better results than me. Of the first six students she interviewed, two drew female scientists, two drew male ones, and two didn't identify a gender. There were variants of the crazy chemist, but there was also a geneticist and an ecologist. Why would older students be more biased than younger ones? An article analyzing the original "Draw-A-Scientist-Test" claims that "at age 6, girls draw 70 percent of scientists as women, but this proportion flips around ages 10 to 11 and by 16, they draw around 75 percent of scientists as men [2]." So, what changed our minds? Why did we decide how scientists should or shouldn't look? What convinced us that women don't look like scientists?

The answer to that question doesn't need extensive research. It's found in movies, TV, books, and other media. For example: Dr. Frankenstein, Doc Ock from Spider-Man, Dr. Freeze from Batman, the six-eyed alien in Lilo & Stitch, Dr. Jekyll/Mr. Hyde, and the list goes on. We see these "scientists" everywhere, so as we grow older and are exposed to more of them, we start thinking that scientists must be men. But what about the few kids who drew women? We can find a solution by examining their examples. One of my sister's friends drew a picture of a geneticist with a Punnett square. When asked why, she told us that her sister was learning about Punnett squares in school and showed her how to use them. Thus, when we asked her to participate in our project, her impression of a scientist was her sister and the Punnett square, so she drew a female scientist using a Punnett square. Exposure to diversity helps us become more open-minded. In this case, just a child looking over her sister's shoulder and being curious was enough. This impression must be something we remember; the elementary school student remembered the Punnett square because she was intrinsically curious about it. So to reach our audience, we shouldn't just slap a picture of a female scientist on our textbooks; we should incorporate them into something that people will pay attention to and remember.

Studies by the Perception Institute show that TV shows, movies, radio programs, the Internet, music, books, comics, magazines, and even video games have already changed how people see race stereotypes and gender roles [3]. We're already so willing to embrace changes in gender representation—just look at how full the Internet is of equal rights movements and inclusive programs. Following through with that spirit of change in writing our characters and casting our movies can help make our expectations reality. Changing how we see



scientists could be as simple as just imagining them a little differently. Change their gender or their ethnicity. Change the characters you expect to see. These simple changes will cause us to realize that scientists don’t fit within a stereotype—and neither do any other jobs. And by succeeding in that tomorrow, we can also make it true in reality. We already know that women aren’t encouraged in science fields as much as men are, and UNESCO Institute of Statistics shows that less than 30% of the world’s researchers are women [4]. Therefore, girls today may not even realize that they might actually be able to pursue science. However, using the media to expose people to the possibility of scientists of different genders could encourage them to become those scientists who aren’t just men. Ideally, the real science world

would have people from every walk of life, all dedicated to finding out how our world works. To limit stereotypes, we should surround ourselves with variety: movies where women are cast in scientist roles, TV shows where male and female researchers work together and treat each other as equals, and books where women make the great inventions and innovations. Think back to the elementary school science fair. I could not tell you how many projects were done by boys or girls, but there was definitely a broad variety of genders, ethnicities, and ages, and I hope it stays that way. As far as we know, they were just interested in learning something new. That's the one thing all scientists really do have in common. Anyone with a question can become a scientist.

WORKS CITED

[1] Guarino, Ben. "Only 3 in 10 children asked to draw a scientist drew a woman. But that's more than ever." Washington Post, 20 Mar 2018, https://www.washingtonpost.com/news/speaking-of-science/wp/2018/03/20/only-3-in-10-children-asked-to-draw-a-scientist-drew-a-woman-but-thats-more-than-ever/, accessed 14 Oct 2019.
[2] Yong, Ed. "What We Learn From 50 Years of Kids Drawing Scientists." The Atlantic, 20 Mar 2018, https://www.theatlantic.com/science/archive/2018/03/what-we-learn-from-50-years-of-asking-children-to-draw-scientists/556025/, accessed 4 Jun 2019.
[3] Godsil, Rachel, Jessica MacFarlane, and Brian Sheppard. "Pop Culture, Perceptions, and Social Change." Perception Institute, Feb 2016, https://www.unboundphilanthropy.org/sites/default/files/PopJustice%20Volume%203_Research%20Review.pdf, accessed 6 Jun 2019.
[4] UNESCO Institute for Statistics, http://uis.unesco.org/en/topic/women-science, accessed 5 Jun 2019.

Figure 2: Left: A male high school student's drawing of a scientist. Right: A female elementary school student's drawing of a scientist.


Polygence Student Alex Finds Her Passion for Medicine through Alzheimer's Disease Research



by Ashleigh Provoost



Initially, when I began my Polygence project, I didn’t know what to expect. I knew that Polygence — an online research academy for ambitious high school students — would match me to a mentor who would help guide me in an independent study project, and that I would be able to pursue my passion of creative writing, something that my copious schoolwork had forced me to toss aside. Other than that, I was in the dark. Now, just two months later, I have four short stories that are almost complete, and I may even be able to pursue my lifelong dream of being published. I couldn’t be happier with how my project has turned out. I have something to show for my efforts, and I’ve been able to make new connections—not only with my mentor, but also with other Polygence students. I was given the opportunity to speak to Alex, another Polygence student, who’s written a review paper on Alzheimer’s disease. She is one of five students that were selected to present their research at the Symposium for Rising Scholars. In the interview below, I talked more with Alex about her project, the research process, and the overall Polygence experience.


How did you find out about Polygence, and what motivated you to take on a project? I had a meeting with my college counselor, and I told her that I wanted to do undergraduate research at college, since I’m interested in medicine. She recommended Polygence because it gives accepted students the chance to do intense research before college. It was something that I could do virtually, so it was a great opportunity for this summer. I went to the Polygence website and was totally blown away by the things that young kids were doing, and the experienced mentors they had the ability to work with. It was hard to think that something like Polygence could even exist, because you usually do this level of research in college, or beyond. Once I was accepted, I was paired up with my mentor Marija, and my project has been a success since then.

What was your workload like? Was it overwhelming at all? It really depended on the week. There was more work at the end with developing the paper that I had decided to write, but in the beginning, it was just reading the papers that I used for my research. The papers could be long, but it was all so interesting to me that it didn’t feel long—it was actually fun to read them. The work was never overwhelming, either. I was also taking a college class while doing my research and Marija was always accommodating to that, so I never had too much work in between sessions.

Did you have the particular topic of Alzheimer's in mind when you decided to begin your research? Did your mentor help guide you in choosing a topic? I had no idea that I wanted to do my project on Alzheimer's; I just had an interest in biology and anatomy. When I first met Marija, I knew that she had studied at Yale and had been researching Alzheimer's for two years at Stanford Medical School. That's where we came up with the idea to do the project on Alzheimer's. One of my family members was also going through early stages of dementia, and I wanted to learn more about it, so it was the perfect mix between my interest and my mentor's knowledge of the topic.

How did you and Marija decide to write a paper for your final project? What were your paper-writing sessions like? Marija and I decided to do a paper because Alzheimer’s is a very dense topic, so I thought that the best way to synthesize the information was to put it into a paper. Even though I had never written any type of scientific paper in my life, I knew that writing a paper would be the best way to present the information. At first, I would write what I thought sounded good, and I would send it over to Marija. For our next session, she would edit it, and then during the session she would explain to me how to improve it. She would help me make the paper sound more professional, while also making it sound more scientific. In my first six sessions, the paper was by no means the best, but after I learned how to craft a scientific paper and write like a scientist, it significantly improved towards the end of our sessions.

How did you go about conducting your research? Was it more independent or more collaborative with your mentor? Marija would help me with the research—she had access to a lot of the scientific papers through her medical school, and she would send PDFs for me to read. For the first couple of sessions, I learned how to read the papers, dissect the information and then put it into an outline. From the outline, I was able to form the paper, and then Marija would edit it. I was independently doing the research, reading the papers, and writing the paper myself, and then Marija and I would collaborate when we went over her edits.


What was unique about the Polygence program? Did you enjoy having Marija as your mentor? I think what’s unique about Polygence is the one-onone attention you get for a project that’s tailored to you, your interests, and your timeframe. All of the mentors are highly educated in specific fields, giving each student a unique experience. So your project is you and your mentor working on something that you’re both super passionate about. Through Polygence, I had experiences that I wouldn’t have through a high school class: I learned how to read scientific papers that I normally wouldn’t have access to, and I was able to learn how to write a paper and how to present scientific information. It was really nice to do this work on my own schedule while still having someone there with me every step of the way. There’s so many connections that you can make through Polygence, too: there’s an instant connection with your mentor, and you can reach out to other mentors as well. There’s just so many benefits to this program—it’s not just about being able to do research, but it’s also about learning the new skills to go with it. What advice would you give a student about to begin their Polygence project, or looking to apply to Polygence? I would tell a new Polygence student to not be afraid of making mistakes, and to never be afraid of being wrong. I knew absolutely nothing about Alzheimer’s disease going into my first session, but now I know the information like the back of my hand. When I was reading the papers, there were definitely things that went over my head, and that’s okay. Marija taught me how to dissect scientific papers and learn from my mistakes. Not only did I learn how to dissect scientific information, but I also became a better presenter. My first time presenting was definitely not my best, but by the end of my ten sessions, I completed a ten minute presentation and presented it to Marija with no problem. I want to tell whoever’s thinking about doing a Polygence project that your mentor will always be there to help. They’re highly educated, and they understand that you’re not an expert in your topic. I was afraid that Polygence would be too advanced for me, since all of the

mentors are extremely educated and knowledgeable, but I was absolutely wrong. Marija was able to take a dense topic like Alzheimer’s disease, bring it down to a high school student’s level, and really teach me the material. She had the patience and she took the time to walk me through every step of the way. How has Polygence influenced your plans for the future? This project definitely solidified my love for medicine. At first, I was kind of unsure about going into the medical field; I took anatomy last year in school, and I did like medicine, but I wasn’t 100% set on it. Learning from Marija and hearing about her experiences further

developed my interest in the medical field greatly. Now, I know more about what I want to do in the future. I'm deciding between three things: being a surgeon, a physician's assistant for surgery, or a medical scientist. As my own Polygence creative writing project comes to an end, I've begun to realize how gratifying an independent project can be. The project coming to fruition is much more rewarding knowing that I've been driven enough and passionate enough to make a final product that is entirely my own. I'm eager to share my work with my peers and submit it for publication, and I'm beyond excited to see Alex present her research.

art by Seobin Oh


Monastrol and Dihydropyrimidines: The Future of Small Molecule Kinesin Eg5 Inhibitors
by Krithikaa Premnath, Ria Kolala, Tyler Shern, Ansh Rai, Ishani Ashok & Audrey Kwan
ART BY SEOJIN OH

ABSTRACT

Dihydropyrimidines (DHPMs) are a group of privileged heterocycles found to have various biological effects in cells. One specific DHPM, monastrol, inhibits Eg5 kinesin – a key protein involved in cell division – and can be useful in limiting the growth of cancerous tumors. Monastrol, discovered in 1999 by Timothy Mitchison's group at Harvard Medical School, and related DHPMs are being synthesized and investigated in hopes of creating novel anticancer drugs. DHPMs can be synthesized very efficiently through multicomponent reactions (MCRs), a class of reactions that is extremely important in synthetic chemistry. Herein, we report on the pharmaceutical industry's efforts in engineering different dihydropyrimidines.

Introduction

More than a third of Americans will be diagnosed with cancer at some point in their lifetimes. In 2018, cancer and its related effects took the lives of over 9.6 million individuals worldwide [1]. This illness currently has no definite cure, and many cancer treatments are detrimental to the health of noncancerous cells. Some well-known anti-cancer drugs include taxol, doxorubicin, and etoposide (Figure 1). Taxol (1) targets rapidly growing cancer cells by attaching to their microtubules, preventing cancer cells

from further dividing [2]. Doxorubicin (2) slows the spread of cancer cells in the body by inhibiting DNA synthesis, causing tumor growth to slow or halt [3]. Etoposide (3) inhibits topoisomerase II, preventing it from resealing the breaks it makes in the DNA backbone and leading to cancer cell death [4]. While effective in combination, these drugs exhibit a long list of side effects which can be detrimental to patients' health [5]. DHPMs are biologically important, partially saturated pyrimidine rings with two separate functional groups replacing two of the double bonds (Figure 2). By exhibiting potent antiproliferative activity, some DHPMs have captured the attention of researchers hoping to create new cancer therapies [6]. One DHPM, monastrol, is capable of crossing cell membranes and halting mitosis by inhibiting kinesin Eg5 – a motor protein involved in the assembly and separation of the mitotic spindle. After long periods of mitotic arrest, monastrol induces apoptosis, or programmed cell death, in cells. In this way, monastrol and its analogs have been shown to inhibit uncontrolled cell division and cause pronounced tumor regression. Additionally, monastrol used as an anticancer agent has been demonstrated to be less cytotoxic to neighboring cells than taxol. This observed potency gives promise for the development of anticancer


Figure 1: Structure of taxol (1), doxorubicin (2), and etoposide (3)
Figure 2: General structure of a dihydropyrimidine

drugs [7]. However, monastrol is a relatively weak anticancer agent and is not completely effective. Other DHPMs structurally similar to monastrol have shown great potential in the creation of anticancer agents capable of treating aggressive cancers such as glioma, renal, and breast cancers [8].

HISTORY & DISCOVERY OF MONASTROL

Monastrol was first discovered in 1999 by the Mitchison group at Harvard Medical School in a high-throughput screen (HTS) [9]. It was found that upon allosteric binding of monastrol to its pocket on kinesin Eg5, proper microtubule engagement is disrupted and the motor's basal ATPase activity is inhibited through slower ADP release. Monastrol also changes the conformation of Eg5, which decreases the motor's affinity for microtubules and leads to malformation of the spindle.

Figure 3: Crystal structure of monastrol bound to kinesin Eg5 (a); zoomed-in crystal structure of monastrol bound to kinesin Eg5 (b); chemical structure of monastrol (c)

This inhibition of ATPase activity is accompanied by a decreased affinity for microtubules; monastrol stabilizes a conformation that allows an easy reversal when ATP is hydrolyzed. These interactions yield a non-productive kinesin Eg5 complex that establishes a monoastral spindle, which alters mitotic function. The incorrect formation of the spindle fibers halts mitosis, making monastrol capable of stopping the cell cycle. While monastrol arrests the cell cycle, it has been only partially successful in preclinical studies [7]. Monastrol does not inhibit progression through the S and G2 phases of the cell cycle or centrosome duplication. While it is successful in some areas, the goal for researchers now is to optimize its functions and make the inhibition more effective. This can be done by modifying starting reagents and creating new analogs [10]. In the last two decades, scientists have conducted various experiments to elucidate the structure-activity relationship of monastrol and related dihydropyrimidines, creating analogues in pursuit of more potent compounds against cancer cells. Strategies to improve the efficacy of monastrol-like anticancer compounds are currently in the works [7].

BIOLOGICAL ACTIVITY OF MONASTROL

A combination of two phenotype-based screens, one assaying a specific post-translational modification and the other visualizing microtubules and chromatin, was used to select compounds affecting mitosis. Monastrol was one of those compounds, arresting mammalian cells in mitosis with monopolar spindles. In an in vitro study, monastrol was found to inhibit the mitotic kinesin Eg5, a motor protein needed for spindle bipolarity. The cytotoxic activity of monastrol was tested by examining its effect on HeLa cells, the most commonly used human cell line [11]. Monastrol inhibited and targeted kinesin Eg5 in the



mitotic spindle to stop the replication of the cell. To study the effectiveness of monastrol on HeLa cells, the researchers used time-lapse video microscopy and biochemical analysis. The mitotic outcome is controlled by the spindle checkpoint. This checkpoint remains active until all the kinetochores on the chromosomes are attached to the spindle during metaphase. The active checkpoint generates a "wait anaphase" signal, which inhibits the anaphase-promoting complex. Inhibiting this complex prevents the degradation of several key mitotic proteins, which must be degraded for anaphase to begin. The presence of unattached chromosomes, or a lack of the spindle tension generated by bipolar chromosome attachment, results in continued checkpoint activation, mitotic arrest, and eventually programmed cell death. Compounds that target the mitotic spindle are among the most effective cancer drugs in medical use. In checkpoint-compromised HeLa cells, monastrol induced apoptosis following mitotic exit into the next G1 phase, showing that Eg5 inhibition can lead to caspase activation and apoptosis in the absence of critical checkpoint components, such as BubR1 or Mad2.

Molecules that inhibit kinesin Eg5

Table 1 shows several molecules that inhibit the protein kinesin Eg5, much like monastrol.

HIGH-THROUGHPUT SCREENING

Extensive computational docking experiments of monastrol to its binding pocket on kinesin Eg5 have been conducted to shed light upon its structure-activity relationship (SAR). The (R)- and (S)-enantiomers of monastrol have previously been docked onto the active site of a Leishmania donovani PTR1 (LdPTR1) model [16]. Monastrol fits well into the binding pocket, forming key interactions with the Arg17, Asn109, Ser111, Asp181, Tyr191, Tyr194, Lys198, Leu226, and Ala230 residues. Results have indicated that the ethyl ester group of the dihydropyrimidine is necessary for tight binding to PTR1. The carbonyl oxygen of this group has been predicted to serve as a hydrogen bond acceptor, interacting with the nitrogen atoms of the Arg17 side chain and the Ala230 backbone. A hydrogen bonding interaction between Tyr194 and the hydroxyl group at the third position of the phenyl ring also contributes to the overall stability of the complex. The docking results onto PTR1 also indicated that the monastrol (R)- and (S)-enantiomers (Figure 4) have very

similar binding affinities. Monastrol (R) exhibited an IC50 of 5.23 × 10⁻⁵ mol/L with a binding free energy of −24.92 kJ/mol, while monastrol (S) exhibited an IC50 of 6.94 × 10⁻⁵ mol/L with a binding free energy of −24.20 kJ/mol. The monastrol (R) and (S) enantiomers were also docked into the active site of human kinesin Eg5. Monastrol (R) showed an IC50 of 3.42 × 10⁻³ mol/L with a binding free energy of −14.35 kJ/mol, while monastrol (S) showed an IC50 of 6.42 × 10⁻³ mol/L with a binding free energy of −12.76 kJ/mol. This study demonstrated monastrol's antileishmanial activity and provides insight into its potential in preclinical studies [16].

Table 1: Molecules that inhibit kinesin Eg5 (crystal structures and ligand interactions obtained from UCSF Chimera)
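These values are roughly consistent with treating the IC50 as an estimate of the dissociation constant and applying the standard relation ΔG ≈ RT ln(IC50): for monastrol (R) on LdPTR1, for example, RT ln(5.23 × 10⁻⁵) ≈ (2.48 kJ/mol)(−9.86) ≈ −24.4 kJ/mol at 298 K, close to the reported −24.92 kJ/mol. (This back-of-the-envelope conversion is only an illustration of how binding free energies relate to measured IC50 values, not a figure taken from the cited study.)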

Multicomponent reactions: Biginelli condensation relevance

Multicomponent reactions (MCRs), wherein different starting materials with different reactivities are mixed and matched (Figure 5), have long been important to medicinal chemistry because they offer advantages such as the ability to produce large libraries of


compounds. The Biginelli reaction [17] is a prime example of a multicomponent reaction that has been used and altered to create various analogs of dihydropyrimidines with structural similarities to monastrol. Due to the recently discovered pharmacological properties associated with dihydropyrimidines, the multicomponent Biginelli reaction has been experiencing a resurgence in scientific interest. In the past decades, a range of biological effects, such as antitumoral, antibacterial, and anti-inflammatory activity, have been attributed to synthetic dihydropyrimidines. To this day, chemical synthesis of similar compounds has expanded the field's understanding of the structure-activity relationship of such compounds in a variety of biological contexts. In 1891, Pietro Biginelli reported the synthesis of functionalized 3,4-dihydropyrimidin-2(1H)-ones (DHPMs) via a three-component, acid-catalyzed cyclocondensation of urea, ethyl acetoacetate, and benzaldehyde. The reaction was carried out by simply refluxing a mixture of the three components dissolved in ethanol with a catalytic amount of HCl. The product of this reaction was identified by Biginelli as a 3,4-dihydropyrimidin-2(1H)-one [18]. Monastrol is an example of a DHPM and was synthesized via a similar Biginelli-type condensation (Figure 6).

Figure 4: Structures of the (a) (R)- and (b) (S)-enantiomers of monastrol

Figure 5: A multicomponent reaction allows one to synthesize complex compounds in a one-step reaction

Work in DHPM-derived molecules

Scientists have been working on synthesizing DHPM analogs that could potentially serve as anti-cancer drugs. These compounds work through a variety of mechanisms: not all bind to Eg5 kinesin.

Concluding remarks

Monastrol is a small, cell-permeable molecule that inhibits Eg5, a kinesin-related motor protein involved in the assembly and maintenance of the mitotic spindle. By inhibiting Eg5, monastrol is a promising candidate in cancer therapy. Since the discovery of monastrol, there have been further investigations into other small-molecule inhibitors of kinesin Eg5, including several DHPM derivatives that have been tested and show promise as leads. The literature provides accurate and important data on the structure-activity relationships of a number of kinesin Eg5 inhibitors and other small molecules. In particular, the Biginelli condensation gives rapid access to compounds structurally similar to monastrol. Further investigation into creating more analogs could yield a wide variety of new molecules with various uses within medicine. The continued creation of new dihydropyrimidine and monastrol analogs could have a lasting effect on current and future treatments of cancer, potentially reducing the mortality rate of the illness through their activity as potent anticancer agents.

Figure 6: Biginelli condensation reaction of monastrol



Table 2: Structures and relevance of novel DHPM analogs

References

[1] Siegel, R. L., K. D. Miller, and A. Jemal. “Cancer statistics, 2018. 68, 7–30.” (2018). [2] Wani, Mansukhlal C., Harold Lawrence Taylor, Monroe E. Wall, Philip Coggon, and Andrew T. McPhail. “Plant antitumor agents. VI. Isolation and structure of taxol, a novel antileukemic and antitumor agent from Taxus brevifolia.” Journal of the American Chemical Society 93, no. 9 (1971): 2325-2327. [3] Ye, Xiang S, Li Fan, Robert D Van Horn, Ryuichiro Nakai, Yoshihisa Ohta, Shiro Akinaga, Chikara Murakata, et al. “A Novel Eg5 Inhibitor (LY2523355) Causes Mitotic Arrest and Apoptosis in Cancer Cells and Shows Potent Antitumor Activity in Xenograft Tumor Models.” Molecular Cancer Therapeutics 14, no. 11 (November 2015): 2463–72. [4] “General Cancer Information.” Doxorubicin (Adriamycin) | Cancer drugs | Cancer Research UK, December 17, 2019. https://www. cancerresearchuk.org/about-cancer/cancer-in-general/treatment/cancerdrugs/drugs/doxorubicin. [5] Meresse, Philippe, Elsa Dechaux, Claude Monneret, and Emmanuel Bertounesque. “Etoposide: discovery and medicinal chemistry.” Current medicinal chemistry 11, no. 18 (2004): 2443-2466. [6] Heath, Ester, Metka Filipič, Tina Kosjek, and Marina Isidori. “Fate and effects of the residues of anticancer drugs in the environment.” (2016): 14687-14691. [7] (a) Hoda, S.; Ahmad, E. Meysam, S. Lett. Drug Des. Discov. 2020, 17(8), 983-992. (b) Kiue, Akira, Tetsuro Sano, Aya Naito, Haruaki Inada, Ken‐ichi Suzuki, Masaya Okumura, Junko Kikuchi et al. “Reversal by two dihydropyridine compounds of resistance to multiple anticancer agents 27 | JOURNYS | SPRING 2021

in mouse P388 leukemia in vivo and in vitro.” Japanese journal of cancer research 81, no. 10 (1990): 1057-1064. [8] Asraf, Hila, Rachel Avunie-Masala, Michal Hershfinkel, and Larisa Gheber. “Mitotic slippage and expression of survivin are linked to differential sensitivity of human cancer cell-lines to the Kinesin-5 inhibitor monastrol.” PLoS One 10, no. 6 (2015). [9] Guido, Bruna C., Luciana M. Ramos, Diego O. Nolasco, Catharine C. Nobrega, Bárbara YG Andrade, Aline Pic-Taylor, Brenno AD Neto, and José R. Corrêa. “Impact of kinesin Eg5 inhibition by 3, 4-dihydropyrimidin-2 (1H)-one derivatives on various breast cancer cell features.” BMC cancer 15, no. 1 (2015): 283. [10] Mayer, Thomas U., Tarun M. Kapoor, Stephen J. Haggarty, Randall W. King, Stuart L. Schreiber, and Timothy J. Mitchison. “Small molecule inhibitor of mitotic spindle bipolarity identified in a phenotypebased screen.” Science 286, no. 5441 (1999): 971-974. [11] Cochran, Jared C., Joseph E. Gatial, Tarun M. Kapoor, and Susan P. Gilbert. “Monastrol inhibition of the mitotic kinesin Eg5.” Journal of Biological Chemistry 280, no. 13 (2005): 12658-12667. [12] Chin, Gregory M., and Ronald Herbst. “Induction of apoptosis by monastrol, an inhibitor of the mitotic kinesin Eg5, is independent of the spindle checkpoint.” Molecular cancer therapeutics 5, no. 10 (2006): 2580-2591. [13] Skoufias, Dimitrios A., Salvatore DeBonis, Yasmina Saoudi, Luc Lebeau, Isabelle Crevel, Robert Cross, Richard H. Wade, David Hackney, and Frank Kozielski. “S-trityl-L-cysteine is a reversible, tight binding inhibitor of the human kinesin Eg5 that specifically blocks mitotic progression.” Journal of biological chemistry 281, no. 26 (2006): 1755917569. [14] Huang, Yi-Wen, Li-Shu Wang, Hsiang-Lin Chang, Weiping Ye, Yasuro Sugimoto, Michael K. Dowd, Peter J. Wan, and sYoung C. Lin. “Effects of serum on (-)-gossypol-suppressed growth in human prostate cancer cells.” Anticancer research 26, no. 5A (2006): 3613-3620. [15] DeBonis, Salvatore, Dimitrios A. Skoufias, Luc Lebeau, Roman Lopez, Gautier Robin, Robert L. Margolis, Richard H. Wade, and Frank Kozielski. “In vitro screening for inhibitors of the human mitotic kinesin Eg5 with antimitotic and antitumor activities.” Molecular cancer therapeutics 3, no. 9 (2004): 1079-1090. [16] Walters, Seth H., and Edwin S. Levitan. “Vesicular antipsychotic drug release evokes an extra phase of dopamine transmission.” Schizophrenia Bulletin (2019). [17] Biginelli, P. “Ueber Aldehyduramide des Acetessigäthers” European Journal of Inorganic Chemistry, no. 24 (1891) [18] Kaur, Jaspreet, Shyam Sundar, and Neeloo Singh. “Molecular docking, structure–activity relationship and biological evaluation of the anticancer drug monastrol as a pteridine reductase inhibitor in a clinical isolate of Leishmania donovani.” Journal of antimicrobial chemotherapy 65, no. 8 (2010): 1742-1748. [19] Kappe, Oliver. “Recent Advances in the Biginelli Dihydropyrimidine Synthesis. New Tricks from an Old Dog.” Accounts of Chemical Research 33, no. 12 (September 28, 2000): 879–88. [20] Ragab, Fatama A.F, Sahar M. Abou-Seri, Salah A. Abdel-Aziz, Abdallah Alfayomy, and Mohamed Aboelmagd. “Design, Synthesis and Anticancer Activity of New Monastrol Analogues Bearing 1,3,4-Oxadiazole Moiety.” European Journal of Medicinal Chemistry 138 (September 29, 2017): 140–51. 
[21] Figueiró, Fabrício Brackmann, Franciane Frasson Mendes, Patricia Helena Farias Corbelini, Fernanda Lucia Janarelli, Elisa undefined Jandrey, Dennis undefined Russowsky, Vera undefined Eifler-Lima, and Ana undefined Battastini. “A Monastrol-Derived Compound, LaSOM 63, Inhibits Ecto-5’Nucleotidase/CD73 Activity and Induces Apoptotic Cell Death of Glioma Cell Lines.” International Journal of Cancer Research and Treatment 34, no. 4 (April 2014): 1837–42. [22] Klein, Emmanuel, Salvatore Debonis, Bernd Thiede, Dimitrios A. Skoufias, Frank Kozielski, and Luc Lebeau. “New chemical tools for investigating human mitotic kinesin Eg5.” Bioorganic & medicinal chemistry 15, no. 19 (2007): 6474-6488.


Machine Learning for a Greener Planet
by Suryatejas Appana
art by Jenny Han

Abstract

Climate change is a growing problem due to greenhouse gas emissions from contributors such as deforestation and agriculture. Quantifying the impacts of these contributors, and analyzing and interpreting the collected data, can be challenging, and new methods for data analysis are required. Machine learning using artificial neural networks provides a cutting-edge way to deal with this issue using data classification techniques. In this article, we explore how artificial neural networks and convolutional neural networks work at a basic level. We will then see how they can be beneficial when applied to detecting deforestation, supporting forest regeneration, and enabling precision agriculture.

1. Introduction

Climate change is becoming a greater problem that needs to be addressed. Greenhouse gas emissions from fossil fuels and industry have risen by approximately 90% since 1970 and are continuing to rise today [1]. Two of the largest contributors to greenhouse gas (GHG) emissions are deforestation and farms. Forests are important carbon sinks because they sequester, or absorb, carbon dioxide from the atmosphere. However, deforestation can release a large amount of the deposited carbon, making forests a source of CO2 emissions. In fact, up to 20%

of the Amazon rainforest releases more carbon than it absorbs because of deforestation [2]. Farms are also major sources of GHG emissions. For example, because agriculture requires fertile soil, nitrogen-based fertilizers are often used. Producing these fertilizers requires an immense amount of energy, and while most of the fertilizer stays within the soil for crops, some of it is converted into nitrous oxide (N2O), a very potent GHG [3]. For these reasons, finding sustainable agricultural practices and better ways of managing forests has become increasingly important in mitigating climate change. Collecting and understanding agricultural and forest cover data to find these solutions continues to be challenging, but analyzing this information can benefit from new tools and methods that utilize remote sensing data. One revolutionary way to deal with climate change is machine learning. Machine learning techniques that use remote sensing data may provide viable ways to address two critical contributors to climate change: deforestation and agriculture.

2. Machine Learning and Artificial Neural Networks

Machine learning (ML) is a part of artificial intelligence (AI) in which computer systems can learn from data through experience without being explicitly programmed. Specifically, ML algorithms use existing data to make predictions or recognize patterns in new


data. One machine learning model that has become increasingly popular is the artificial neural network (ANN). In this discussion, we will explore how ANNs and a specific type of ANN, the convolutional neural network (CNN), work for classifying data and how they can be applied in tackling the negative environmental effects of deforestation and agriculture, potentially impacting climate change in the process. ANNs are algorithms that can be conceptualized as having several different layers that are interlinked with each other by connections between neurons [4]. The first layer, called the input layer, includes a set of neurons that each take in values from the input data. These values are then passed onto the second layer of the neural network, called a hidden layer. A neural network can have several hidden layers that affect the algorithm’s prediction. Here, each neuron takes all the values from the input neurons and multiplies them by values called weights, which influence how the neural network predicts an output. Finally, the multiplied values are passed through a function called an activation function, which returns a number within a certain range of values so that the neural network can be used for classification. This process works the same for all the neurons in the hidden and output layers. In essence, each neuron multiplies the input values by weights, sums them up, and returns their values after passing them through an activation function. The final layer in the network is used to predict the final output for the input data. This prediction can be a specific class, or category, that corresponds to the input data. How do ANNs learn to predict accurate values from input data? ANNs learn from data by a process called backpropagation [4]. The basic idea of backpropagation is that the algorithm updates the values of the weights depending on the offset between each predicted value and each actual value from the dataset. Initially, the algorithm uses random values for the weights. It then updates the weights by minimizing the offset, or cost. After a neural network is trained with a lot of data, it would have optimized the weights with a minimized cost so that it can accurately make predictions.ANNs can be used for image classification by using the pixel values for the input layer of the neural network [4]. However, it turns out that image classification can be done better using convolutional neural networks (CNNs). A CNN is a type of neural network that is effective for computer vision and image classification because it is able to deal with complexities in the images without depending on the specific pixel values [5]. Moreover, a CNN is able to 29 | JOURNYS | SPRING 2021

recognize the orientation or spatial location of an object in an image. Unlike normal ANNs, CNNs include layers called convolutional layers. These convolutional layers use filters with values that allow the layer to extract high-level, or important, features. First, the filters map across rectangular sections of the input image where the pixel values are multiplied by their corresponding filter values. Then, all of these multiplied values are summed up to make a single value of a new matrix of values called a convolved feature. Figure 1 [6] demonstrates this process.

Figure 1: Convolutional filter mapping across a square section of the input image to produce a single value of the smaller convolved feature [6]
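To make the weighted-sum-and-activation step and the filter operation in Figure 1 concrete, the following minimal NumPy sketch shows both operations; the layer sizes, weights, filter, and image patch are made-up illustrative values, not taken from any of the cited models.

```python
import numpy as np

def relu(x):
    # A common activation function: returns 0 for negative inputs, x otherwise
    return np.maximum(0, x)

# --- A layer of two fully connected neurons ---
inputs = np.array([0.2, 0.7, 0.1])            # values coming from the previous layer
weights = np.array([[0.5, -0.3, 0.8],         # one row of weights per neuron
                    [0.1,  0.9, -0.4]])
outputs = relu(weights @ inputs)              # weighted sum, then activation
print(outputs)

# --- One step of the convolution illustrated in Figure 1 ---
image_patch = np.array([[1, 0, 1],
                        [0, 1, 0],
                        [1, 0, 1]])           # 3x3 section of the input image
conv_filter = np.array([[1, 0, 1],
                        [0, 1, 0],
                        [1, 0, 1]])           # 3x3 filter values
convolved_value = np.sum(image_patch * conv_filter)  # one entry of the convolved feature
print(convolved_value)                        # -> 5
```

Sliding the filter across every section of the input image in this way fills in the rest of the convolved feature.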

Figure 2: Sample process [6]

In short, the convolutional layer takes an input image and, using a filter, represents the image with a matrix of values with a smaller size, the convolved feature, while keeping the important features on the image. In order to further decrease the size of this matrix, a layer called a max pooling layer is then added, which reduces the number of pixels in the convolved feature. Figure 2 [6] shows a basic representation of max pooling. A CNN architecture may use several convolutional and max pooling layers in sequence in order to reduce the size of the image and process the image data with fewer computations [5]. The original image ends up having a much smaller size and is later passed through a fully-connected layer, where all the neurons are linked


to the neurons of the output layer. This output layer finally classifies the original image. Figure 3 [8] shows this complete process. The legend shows the different layers present in the neural network. It is important to note that the size of the image decreases as it passes through the network, making the algorithm computationally efficient.
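As an illustration of this kind of architecture (and not of the specific models used in the studies discussed below), a small image classifier with convolutional, max pooling, and fully connected layers might look like the following PyTorch sketch; the layer sizes and the two output classes are arbitrary choices for demonstration.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Convolution + pooling layers shrink the image while keeping important features
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3-channel (RGB) input image
            nn.ReLU(),
            nn.MaxPool2d(2),                              # halve the height and width
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A fully connected layer maps the flattened features to class scores
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # assumes 224x224 inputs

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, start_dim=1)
        return self.classifier(x)

model = SmallCNN()
dummy_batch = torch.randn(4, 3, 224, 224)    # four fake RGB images
print(model(dummy_batch).shape)              # -> torch.Size([4, 2])
```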

3. Applications of Machine Learning in Forestry and Agriculture

We will now explore how image processing with a CNN can be effectively used for detecting illegal deforestation, locating better areas for forest regeneration, and finding efficient ways to use agricultural resources.

3.1. Machine Learning to Mitigate Deforestation

Detecting illegal deforestation is one way machine learning can be effectively applied for climate change mitigation. Global Forest Watch (GFW) is a website application that provides real-time remote sensing data for forest change, land cover, and land use as well as climate and biodiversity [7]. GFW has become useful to data analysts because a lot of the forest data that had been scarce is now available in an open-source platform.

The website also has the feature of detecting and sending deforestation or forest change alerts based on satellite image data. One such system is the Global Land Analysis and Discovery (GLAD) alert system, which can detect deforestation in an area as small as 30m x 30m and can send alerts within a week [7]. GFW currently uses its detection systems to alert conservation organizations, such as the Amazon Conservation Association, policymakers, journalists, and companies that seek to reduce deforestation rates around the globe. In this way, monitoring deforestation using machine learning has the potential to be successful towards reducing climate change, but it has not been widely adopted by conservationists and governments. Online platforms such as Global Forest Watch that are facilitating environmental data analysis can convince these groups that implementing machine learning may be the key to monitoring deforestation and tackling climate change. Similar to Global Forest Watch, Planet is a company that provides available image data taken through remote sensing [8]. This data describes land cover in different parts of the Amazon rainforest and has separate labels corresponding to atmospheric conditions and land use. A CNN can use the image data and classify each


image into these labels. Furthermore, the CNN can be used to detect changes in the image classifications. In this way, illegal deforestation in a particular area can be mitigated in a time-effective way by alerting local government officials and conservation organizations about detected changes in the land cover [8].
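A hedged sketch of how such an alert might be produced, assuming a classifier like the one sketched earlier has already been trained on labeled satellite tiles; the function and label names here are hypothetical and are not part of Global Forest Watch's or Planet's actual tooling.

```python
def land_cover_change_alerts(tiles_before, tiles_after, classify):
    """Compare predicted land-cover labels for the same tiles at two dates.

    tiles_before, tiles_after: dicts mapping a tile ID to its image array
    classify: a trained model that returns a label such as "forest" or "cleared"
    """
    alerts = []
    for tile_id, old_image in tiles_before.items():
        old_label = classify(old_image)
        new_label = classify(tiles_after[tile_id])
        # Flag tiles that were classified as forest before but are not anymore
        if old_label == "forest" and new_label != "forest":
            alerts.append((tile_id, old_label, new_label))
    return alerts
```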

3.2. Machine Learning for Detecting Land Abandonment

Another way CNNs can be advantageous is by using images of land cover to predict whether an area of land is likely to be abandoned from agricultural or other use. This is what scientists from the University of Łódź in Poland achieved [9]. They used multispectral data from remote sensing to train a CNN and predict whether a portion of land would likely be abandoned in the near future. Achieving an accuracy of 0.78 on the test data (data the model was not trained with), the model showed that it is possible to use machine learning to successfully and accurately predict whether a forested land area will be abandoned. Abandoned land is important because it indicates potential areas where plants and trees can thrive undisturbed by humans [10]. By identifying where land is likely to be abandoned, potential sites for forest regeneration can be identified in a time-efficient way. In this way, machine learning may potentially revolutionize the process of afforestation and contribute to a positive impact on climate change.

3.3. Machine Learning for Precision Agriculture

Finally, machine learning using neural networks can be applied to improving the efficiency and sustainability of agriculture. Precision agriculture is the process by which farming can be done productively and efficiently while keeping the land healthy [3]. One way that machine learning can be applied to precision agriculture is by optimizing plant production by providing the right treatment for the crops based on crop health (whether the crop is healthy, unhealthy, or partially unhealthy) [11]. Remote sensing images of crops can be fed into a CNN which can classify the crop based on its health. This means that specific, unhealthy crops can be given special treatment using farm resources such as fertilizers and herbicides, which improves the efficiency of using these resources while reducing their environmental impact.
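As a sketch of how the classifier's output might drive treatment decisions, the snippet below flags plots whose predicted health label is not "healthy"; the labels and the decision rule are hypothetical, not taken from the cited system.

```python
def flag_crops_for_treatment(crop_images, classify_health):
    """Return the indices of crop images a trained CNN labels as needing attention.

    classify_health: a trained model returning "healthy", "partially unhealthy",
    or "unhealthy" for a single crop image.
    """
    needs_treatment = []
    for i, image in enumerate(crop_images):
        if classify_health(image) != "healthy":
            # Only these plots receive extra fertilizer or herbicide,
            # instead of treating the entire field uniformly
            needs_treatment.append(i)
    return needs_treatment
```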


4. Conclusions

We have seen how neural networks can be applied to providing solutions to critical problems in environmental science concerning climate change. While machine learning can be effectively applied to deforestation, forest regeneration, and precision agriculture, it can also be applied to problems such as energy efficiency, carbon sequestration, and climate prediction [3]. A variety of machine learning applications in climate change such as these are outlined in "Tackling Climate Change with Machine Learning" by Climate Change AI, an expert group that finds potential solutions for climate change using data science techniques [3]. By providing a time-effective, data-driven approach to solving environmental problems, machine learning may become a viable solution for dealing with climate change.

References

[1] US EPA, OAR. “Global Greenhouse Gas Emissions Data.” US EPA, 12 Jan. 2016, https://www.epa.gov/ghgemissions/global-greenhouse-gasemissions-data. [2] Gatehouse, Gabriel. “Deforested Amazon Areas ‘Net Emitters of CO2.’” BBC News, 11 Feb. 2020. www.bbc.com, https://www.bbc.com/ news/science-environment-51464694. [3] Rolnick, David, et al. “Tackling Climate Change with Machine Learning.” ArXiv:1906.05433 [Cs, Stat], Nov. 2019. arXiv.org, http://arxiv. org/abs/1906.05433. [4] Nielsen, Michael A. Neural Networks and Deep Learning. 2015. neuralnetworksanddeeplearning.com, http:// neuralnetworksanddeeplearning.com. [5] Saha, Sumit. “A Comprehensive Guide to Convolutional Neural Networks — the ELI5 Way.” Medium, 17 Dec. 2018, https:// towardsdatascience.com/a-comprehensive-guide-to-convolutional-neuralnetworks-the-eli5-way-3bd2b1164a53. [6] Wang C, Xi Y. Convolutional Neural Network for Image Classification. Johns Hopkins University, http://www.cs.jhu. edu/~cwang107/files/cnn.pdf [7] Vizzuality. Forest Monitoring, Land Use & Deforestation Trends | Global Forest Watch. https://www.globalforestwatch.org/. Accessed 30 Aug. 2020. [8] “Identifying Land Patterns from Satellite Imagery in Amazon Rainforest Using Deep Learning.” DeepAI, 2 Sept. 2018, https://deepai. org/publication/identifying-land-patterns-from-satellite-imagery-inamazon-rainforest-using-deep-learning. [9] Krysiak, Stanisław, et al. “Detecting Land Abandonment in Łódź Voivodeship Using Convolutional Neural Networks.” Land, vol. 9, no. 3, Mar. 2020, p. 82. www.mdpi.com, doi:10.3390/land9030082. [10] Wachal, Maria. “Deforestation Solutions, Anyone? Machine Learning Can Help!” Medium, 22 Apr. 2020, https://blog.softwaremill. com/deforestation-solutions-anyone-machine-learning-can-help78f6c4c17e43. [11] Abdullahi, Halimatu Sadiyah, et al. “Convolution Neural Network in Precision Agriculture for Plant Image Recognition and Classification.” 2017 Seventh International Conference on Innovative Computing Technology (INTECH), 2017, pp. 1–3. IEEE Xplore, doi:10.1109/ INTECH.2017.8102436.


Analysis of the Duration of California Real Estate on the Market
by Renee Wu

Abstract

In investment, the timing of buying and selling is crucial to profits. In the real estate community, there is often debate amongst investors about the perfect time to sell real estate. While many contend that the perfect time is during the summer or spring, there has been little statistical support for these theories. The purpose of this paper is to identify the optimal time for California real estate sales based on the length of time properties have been on the market. In this paper, various types of graphs are used in order to determine a concrete pattern that can be used as a basis for real estate sales. Cyclical functions are used for the majority of the research, as they are a common tool for analyzing monthly variances. These functions are based on a twelve-month scale and track the variation in the average days California properties are on the market. This way, the analysis can emphasize the individual changes from month to month rather than from year to year. The data is compiled across multiple years and analyzed for each month at the same time. To analyze this data, a mean, median, or even a LOESS curve is utilized to find the trend in the average length of time real estate is on the market. In a LOESS curve, the points are sectioned off into different groups. Within each group, a smoothed value is calculated around a focal point and is determined by the points closest to that focal point: the closer a data point is to the focal point, the larger its influence on the final value. This is done for every point in the model until the LOESS curve is complete, which makes the model less impacted by outliers. Through these methods, the optimal time for selling real estate, based on average days on the market, can be found.

Overall Changes

To look at the optimal sale time in California, aggregated real estate data was pulled from Zillow, ranging from January of 2010 to December of 2019 ("California Home Prices"). As seen in the following charts, the average days of real estate on the

California market has dropped over the last ten years from almost 100 days on the market to slightly above 50. This shift could have been caused by a variety of factors, including the Great Recession from the end of 2007 to 2009. While the data only tracks the changes starting from 2010, the effects are still great enough to be seen. The Great Recession was caused by the housing market booming and busting as financial institutions overlent and over-marketed mortgage-backed securities at exorbitant levels, sometimes to unqualified borrowers (Hall). After the recession, consumers were hesitant to invest in the very market that had caused the crash. The elevated average days of real estate on the market reflects these tentative investors. However, from 2012 onwards, the days on the market fell as people slowly gained confidence in real estate and began to buy or sell property, with the days on the market generally stabilizing from 2013 to 2017, followed by an even further decrease. When looking at the initial data displayed in Figure 1, one can instantly recognize the cyclical form taking place from 2013 to the end of 2019. This initial instinct led to the creation of a graph (Figure 2) depicting the percent change from one month to the next over the course of

Figure 1: Average Days on Market for Californian Real Estate 2010-2020


ten years. From this chart, my initial theory of a monthly pattern was confirmed: the average number of days real estate spends on the market indeed follows a cyclical pattern.
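The month-over-month percent change and the per-month summary statistics discussed in the next section can be computed with a few lines of pandas; this is a generic sketch with a placeholder file and column names, not the exact code used for the figures.

```python
import pandas as pd

# Hypothetical export: one row per month with the average days on market
df = pd.read_csv("ca_days_on_market.csv", parse_dates=["date"]).sort_values("date")

# Percent change from one month to the next (as in Figure 2)
df["pct_change"] = df["days_on_market"].pct_change()

# Pool the ten years of data by calendar month (as in Figures 3 and 4)
monthly = df.groupby(df["date"].dt.month)["pct_change"]
print(monthly.agg(["mean", "median", "std"]))   # e.g., compare March's std with June's
```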

Monthly Basis

After confirming a repetitive, reliable pattern from year to year, the next step was finding a way to demonstrate the percent change in the duration on the market on a month-by-month basis (Figure 3). Each year is represented on a scale of colors, with the oldest data from 2010 in darker blue and the most recent from 2019 in lighter blue. There were no significant outliers from year to year. Rather than any single year behaving differently, the points for almost all years fall in tight clusters: for a given month, the percent change is similar whether the year is 2010 or 2019. The month of December demonstrates this. From 2010 to 2019, December has stayed between zero and approximately 0.1 percent change in average days of real estate on the California market. Because the data points were usually in close clusters, the mean and median are relatively similar and consistent throughout the year. The average standard deviation of the points for each month is 0.0375. Some months varied more than others; months such as March had relatively larger standard deviations of 0.083, while months like June had smaller deviations of only 0.015. Overall, the standard deviation is still quite minimal. The green line demonstrates the mean of the percent change from year to year, which is approximately −3.4 × 10⁻⁴ percent.

Figure 2: Percent Change of Average Days on Market for Californian Real Estate 2010-2020


Figure 3: Percent Change of Average Days on Market for Californian Real Estate on a Monthly Basis

When the mean and median (red and blue) are so closely related, it can be said that the data has minimal outliers and that both the mean and median are accurate measures of this set of data points. Both the mean and median in Figure 4 display the most dramatic average percent decrease in days on the market in March, followed by a sharp increase in April and a more gradual incline through to January, where the values once again drop dramatically. This pattern has important implications; it could potentially mean that the month with the fewest days on the market for real estate is March. The large increase in the duration California properties are on the market from November to January is likely due to low demand during the holiday season, as most people are busy with familial obligations (Fuscaldo). On the other hand, the dip in March and April is most likely caused by high demand as families start to search for a home before the school year begins (Thorsby). Figure 5 shows a chart similar to Figures 3 and 4, except that instead of a mean or median, it uses a LOESS curve to find the trend in average days on the market for Californian real estate. In Figure 5, the blue line is the LOESS curve, which is based on the weights of local points, while the gray area following the blue line represents the confidence band. The confidence band expresses the uncertainty in a curve fit from limited data. As can be seen in the chart, the confidence band is fairly thin, meaning that the difference between the highest and lowest points is significant. The LOESS curve conveys similar results: it still shows that November through January have the longest average durations on the market, as well as a significant drop in the


Figure 4: Percent Change of Average Days on Market for Californian Real Estate on a Monthly Basis with Mean and Median

spring. However, recall that in Figure 4, both the mean and median of California real estate duration showed March to be the lowest point, whereas according to the LOESS model, April appears to be the lowest point. The idea that families are busy with holidays in the winter but prepared to shop for housing around springtime supports the results the LOESS model provides, but what is causing the LOESS curve to point to April as the lowest point instead of March? This all leads back to the way LOESS curves are constructed. A LOESS model can be more accurate than the mean or median because it is influenced most by the local points near the focal point, whereas the mean and median can shift quite easily due to a couple of low values, such as in Figure 3 where March is quite low compared to Figure 4. As mentioned earlier, March's standard deviation is larger than that of the other months, which could be why the LOESS curve differed from the mean and median in this month and not the others.
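For reference, a LOESS fit like the one in Figure 5 can be produced with the lowess function from statsmodels; this is a generic sketch with stand-in data, not the exact settings used for the figure.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# x: calendar month of each observation (1-12), y: percent change for that observation
x = np.tile(np.arange(1, 13), 10).astype(float)   # ten years of monthly observations
y = np.random.normal(0, 0.04, size=x.size)        # stand-in for the real percent changes

# frac controls how local the fit is: each smoothed value uses the nearest 30% of points,
# weighting closer points more heavily, so isolated outliers have less influence
smoothed = lowess(y, x, frac=0.3)
print(smoothed[:5])   # columns: sorted x values and their smoothed y values
```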

Conclusion

Although there are a few minor differences between the mean and median and the LOESS model, the general shape of the curve still suggests that the lowest point is in the spring, from March to April. This means that the average days California real estate spends on the market is lowest during those months. The decrease in days on the market could be caused by a rise in demand, driven by increased pressure to purchase in anticipation of a new job or a new school year after the summer, suggesting an increase in price as demand increases. The worst time for real estate sales, in regard to length of time on the market, is typically in the winter, when families and individuals

Figure 5: Percent Change of Average Days on Market for Californian Real Estate on a Monthly Basis with LOESS Model

are busier with the holidays and are more reluctant to visit open houses. This suggests potentially lower prices, as sellers are more eager to have properties taken off their hands. Nationally, the average chance of selling a property at its listing price is 57% if there is an offer within the first week; for week two, that percentage "drops down to 50%, then 39, 32, and so on" (Price Reduction Strategy). This means that as the time a property spends on the market increases, the chance of a sale at its listing price rapidly decreases. In fact, some real estate agents drop home prices "after its second week on the market," with reductions ranging from 5 to even 10 thousand dollars (Olick). The implications of this could be large; choosing the wrong month to sell a property could result in thousands of dollars in losses.

References

[1] Fuscaldo, Donna. “Why the Holidays Are a Good Time to Sell a House.” Investopedia, Investopedia, 18 Feb. 2020, www.investopedia. com/articles/personal-finance/102615/why- holidays-are-good-time-sellyour-house.asp. [2] Hall M. The Effect of Supply and Demand on the Housing Market. Investopedia. http://www.investopedia.com/ask/answers/040215/howdoes-law-supply-and-demand-affect-housing-market.asp. Published April 12, 2020. Accessed May 11, 2020. [3] How Long to Wait Before House Price Reduction - When to Drop Home Price Lower: Zillow. (2020, July 01). Retrieved October 05, 2020, from https://www.zillow.com/sellers-guide/when-to-reduce-house-price/ [4] Olick, D. (2017, October 27). How to know when to drop the asking price on your home. Retrieved October 05, 2020, from https://www.cnbc. com/2017/10/27/how-to-know-when-to-drop-the-asking-price-on-yourhome.html [5] Thorsby, Devon. “Why Spring Is the Perfect Time to Sell Your Home.” U.S. News & World Report, U.S. News & World Report, 17 Apr. 2019, realestate.usnews.com/real-estate /articles/why-spring-is-the-perfecttime-to-sell-your-home. [6] Zillow, Inc. “California Home Prices & Home Values.” Zillow, 9 Feb. 2020, www.zillow.com/ca/ home-values/. SPRING 2021 | JOURNYS | 34


Alzheimer’s Disease Diagnosis using Deep Learning with CycleGAN for Data Augmentation

Abstract

Alzheimer's disease is a progressive disease that causes deterioration of neurons in the brain, leading to dementia and eventually death. Diagnosis of Alzheimer's conventionally consists of a combination of neuropsychological tests and laboratory tests, and clinical diagnosis accuracy lies at around 77%. As Alzheimer's is associated with loss in brain mass, which can be discerned from MRI scans, it is a suitable task for deep learning and computer vision. An accurate and efficient machine learning model could be of great assistance to physicians, as it could reinforce their diagnoses. However, deep learning typically requires large amounts of data, and medical data is often scarce. A recent breakthrough in machine learning, the generative adversarial network (GAN), allows for the generation of realistic images, providing a potential solution to a lack of data. In this study, we construct ResNet50-based convolutional neural networks to perform Alzheimer's disease classification using MRI scans, achieving an F-1 score of 89%. Furthermore, by generating samples using CycleGAN, we demonstrate that GANs can significantly improve classification accuracy when used for data augmentation, achieving an F-1 score of 95%.

Introduction

by Sunny Wang
art by Marissa Gaut

Alzheimer’s disease (AD) is a progressive disease characterized by the loss of cognitive ability and is the sixth leading cause of death in the United States. AD is typically classified and diagnosed based on a variety of factors, including cognitive and laboratory tests. According to a study conducted by Beach et al., the overall clinical diagnosis accuracy was 77%, with a low true negative rate [1]. This is far from perfect, increasing the demand for a computer-assisted tool to reinforce physicians’ diagnoses. As AD causes the breakdown and death of neurons, the resulting changes in brain mass can be observed through technology such as magnetic resonance imaging (MRI). These scans are well suited to computer vision and deep learning algorithms, such as the convolutional neural network (CNN), which has achieved impressive results in image classification. Diagnosis of AD using machine learning can therefore serve as a powerful tool for physicians, supplying an additional metric for diagnosis.

CNNs typically require large datasets to perform effectively. However, medical datasets are often scarce and limited in size, largely due to the high standards of consistency and organization required for medical data and the cost and time required for data collection. This raises a demand for data augmentation techniques to improve medical machine learning models. One recently introduced technique is the generative adversarial network (GAN) [2], which achieves promising results in image generation. Using a relatively small dataset, GANs can generate similar but original images, as opposed to the image modifications used in classical data augmentation.

The use of CNNs for Alzheimer’s disease diagnosis has become more prominent in recent years. Hosseini-Asl et al. [3] used a deeply supervised adaptive 3D convolutional neural network (DSA 3D-CNN), achieving 94.8% accuracy in task-specific classification. Glozman et al. [4] proposed a network of 2D CNNs, applied to each of the three images extracted from each sample, which achieved a maximum of 83% accuracy. However, there have been relatively few applications of GANs to classifying AD, and none of them use GANs for data augmentation.

In this study, we investigate the potential of deep learning for Alzheimer’s disease classification by creating a convolutional neural network model. We also test the feasibility of using GANs for data augmentation, specifically using the CycleGAN architecture [6].

Methodology

Figure 1 outlines the model pipeline, consisting of dataset acquisition, preprocessing (including GAN augmentation), and classification using a CNN.

Data Acquisition

Data used in the preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). The specific dataset used for training was the ADNI1 standardized MRI dataset, categorized by severity (highest to lowest) as AD, mild cognitive impairment (MCI), and normal cognition (NC). For this study, we trained a network to classify between AD and NC. The dataset contained 705 samples labeled as NC and 476 samples labeled as AD.

Data Preprocessing

The data was stored in NIfTI files, a 3D neuroimaging format. The samples were converted into three-dimensional numpy arrays using nibabel and then matched with ground-truth labels. Because the NIfTI image data is three-dimensional, slicing was required to prepare samples for training. To capture as much information as possible from the original image, we extract three slices, one each from the axial, coronal, and sagittal orientations. Each slice is taken at the midpoint of its axis and resized to 224 x 224. Two different methods of preprocessing were then considered and tested. The first was skull stripping, the process of removing the skull from the MRI images; this isolates the brain tissue, allowing for more consistency among samples, and was done using the extractor function from the deepbrain library. The second applied RAS + ISO transforms and histogram normalization using the TorchIO library [5]; these transforms change the orientation and spacing of the MRI image, also improving consistency.
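As an illustration of the slicing step, here is a minimal Python sketch; the function name and the use of skimage for resizing are assumptions, and the skull stripping and TorchIO transforms are omitted for brevity.

import nibabel as nib
import numpy as np
from skimage.transform import resize  # any image-resizing routine would work here

def extract_mid_slices(nifti_path, size=(224, 224)):
    """Load a 3D MRI volume and return the axial, coronal, and sagittal
    slices taken at the midpoint of each axis, resized for the CNN."""
    volume = nib.load(nifti_path).get_fdata()       # 3D numpy array
    i, j, k = (d // 2 for d in volume.shape)        # axis midpoints
    slices = [volume[i, :, :], volume[:, j, :], volume[:, :, k]]
    return [resize(s, size, preserve_range=True).astype(np.float32) for s in slices]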

Figure 1: Overview of the model pipeline.

GAN-Based Data Augmentation

We constructed the CycleGAN models using the implementation from [6]. The model architecture is shown in Figure 2.



Figure 2: CycleGAN architecture.

The model consists of two generators: one is trained to convert NC samples to AD samples, and the other is trained to convert AD samples to NC samples. During training, a real NC image is passed through the first generator, and the resulting fake AD image is compared with a separate real AD image by the discriminator to compute the GAN loss. This loss, as displayed in equation (1), is the same as the one proposed in the original GAN paper [2]. G represents the generator, DY represents the discriminator for AD samples, x represents a real NC image, and y represents a real AD image.
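The rendered equation did not survive text extraction; a reconstruction of equation (1) in the standard notation of [2] and [6] is:

\mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}[\log D_Y(y)] + \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log(1 - D_Y(G(x)))] \quad (1)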

This process is repeated, starting with a real AD image, resulting in another GAN loss, represented in equation (2). F represents the generator, DX represents the discriminator for NC samples, x represents a real NC image, and y represents a real AD image.
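Equation (2) is the mirror-image adversarial loss; again, this is a reconstruction in standard notation rather than the original rendering:

\mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D_X(x)] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}[\log(1 - D_X(F(y)))] \quad (2)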

The fake images are also passed through the second generator, returning a reconstructed version of the original image. Cycle consistency loss is computed by summing the losses from comparing the original NC image x with its reconstruction F(G(x)) and comparing the original AD image y with its reconstruction G(F(y)). This is represented in equation (3), adapted from [6].
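Reconstructed from the description above and [6], the cycle consistency loss in equation (3) uses the L1 norm between each image and its reconstruction:

\mathcal{L}_{\mathrm{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\lVert F(G(x)) - x \rVert_1] + \mathbb{E}_{y \sim p_{\mathrm{data}}(y)}[\lVert G(F(y)) - y \rVert_1] \quad (3)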


The overall loss function incorporates both GAN losses and the cycle consistency loss and is represented in equation (4). λ is a constant controlling how much weight is placed on the cycle consistency loss; λ = 10 is used, as described in the paper [6]. The objective is to minimize this loss with respect to the two generators, G and F, while maximizing it with respect to the two discriminators, DX and DY.
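A reconstruction of the full objective in equation (4), following [6]:

\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\mathrm{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X, Y, X) + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G, F) \quad (4)

with the optimization G^*, F^* = \arg\min_{G,F}\max_{D_X, D_Y} \mathcal{L}(G, F, D_X, D_Y).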

The generator is based on the ResNet architecture and consists of downsampling, 9 residual blocks, and upsampling. Instance normalization and reflection padding are used as described in [6]. The tanh activation function is used in its last layer to scale the output image between -1 and 1. The generator architecture is shown in Figure 3.

Figure 3: CycleGAN generator architecture.


The discriminator is a CNN using PatchGANs, which classify whether an image is real or fake based on patches. This decreases the number of parameters needed and is effective for images with high resolutions. The model also uses LeakyReLU as its activation function and utilizes instance normalization. The discriminator architecture is shown in Figure 4.

Figure 4: CycleGAN discriminator architecture.

The original dataset was split according to the label and randomly paired. Three individual CycleGAN models were created, each trained on data from a different MRI slice. Each model used the Adam optimizer with a learning rate of 2e-4 and was trained for 100 epochs with a batch size of 1, as specified in the CycleGAN paper. The trained models were used to generate enough samples to create a balanced dataset: an AD version of each NC sample was generated and vice versa. A total of 705 AD samples and 476 NC samples of each orientation were generated, for a total of 1181 images of each class, as shown in Table 1.

Table 1: Dataset sizes after GAN augmentation.
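To make the PatchGAN idea described above concrete, here is a minimal PyTorch sketch of a PatchGAN-style discriminator; the layer sizes follow the CycleGAN paper and are assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        def block(c_in, c_out, stride):
            return [nn.Conv2d(c_in, c_out, 4, stride, 1),
                    nn.InstanceNorm2d(c_out),
                    nn.LeakyReLU(0.2, inplace=True)]
        self.net = nn.Sequential(
            # first layer has no normalization, per the CycleGAN paper
            nn.Conv2d(in_channels, base, 4, 2, 1),
            nn.LeakyReLU(0.2, inplace=True),
            *block(base, base * 2, 2),
            *block(base * 2, base * 4, 2),
            *block(base * 4, base * 8, 1),
            nn.Conv2d(base * 8, 1, 4, 1, 1),  # one-channel map of patch scores
        )

    def forward(self, x):
        return self.net(x)  # each output element judges one image patch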

Convolutional Neural Network Classifier

We used a transfer learning approach to build the model architecture, as it saves training time and is generally effective when datasets are small. We used the ResNet50 convolutional neural network (CNN) as our pre-trained model. The pre-trained ResNet50 architecture takes in 3-channel RGB images, while the MRI slices are grayscale; to match the network, each one-channel image was transformed into three channels by stacking the tensor three times across dimension 0. The last layer was also modified to act as a binary classifier. To better encapsulate the volumetric data, we used a modified CNN with multiple inputs: the architecture consists of three ResNet50 models, one per slice orientation, whose outputs are concatenated and passed through fully connected layers that return the diagnosis group. The model architecture is shown in Figure 5.
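A sketch of the three-input classifier described above, under assumptions: the hidden-layer size, dropout, and choice of ImageNet weights are illustrative, not taken from the study.

import torch
import torch.nn as nn
from torchvision import models

class TriPlaneResNet(nn.Module):
    def __init__(self, n_classes=2, hidden=256):
        super().__init__()
        def backbone():
            m = models.resnet50(weights="IMAGENET1K_V1")  # pretrained=True on older torchvision
            m.fc = nn.Identity()   # keep the 2048-dimensional feature vector
            return m
        self.axial, self.coronal, self.sagittal = backbone(), backbone(), backbone()
        self.head = nn.Sequential(
            nn.Linear(3 * 2048, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, ax, co, sa):
        # each input is a 3-channel version of one grayscale slice
        feats = torch.cat([self.axial(ax), self.coronal(co), self.sagittal(sa)], dim=1)
        return self.head(feats)

In use, each grayscale slice would first be stacked three times across dimension 0 (for example, torch.stack([s, s, s], dim=0)) before being passed to its backbone.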

Figure 5: Proposed multiple CNN architecture.

The neural network was fine-tuned using the Adam optimizer with a learning rate of 1e-4 and trained for 50 epochs with a batch size of 32. A training, validation, and testing split of 80%-10%-10% was used. CNN models were evaluated using accuracy, precision, recall, and F1 score, with the F1 score serving as the primary indicator of classification performance.
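For reference, these metrics can be computed as in the short sketch below; the variable names are assumptions, and scikit-learn is simply one convenient option.

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(y_true, y_pred):
    """Return the four metrics used to compare models; F1 is the primary one."""
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }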

Results and Discussion

Comparison of Preprocessing Methods

Two methods of preprocessing were tested: skull stripping and TorchIO transforms. Figure 6 displays the resulting images after applying each method.



Figure 6: Normal sample (left) and Alzheimer’s sample (right). The original slices are displayed in the first column, the skull-stripped samples in the second, and the TorchIO-transformed samples in the third.

Table 2 compares the results of applying the different preprocessing methods to the three-input ResNet50 network shown in Figure 5. The two transforms were not compatible with each other, so their results were obtained separately.

Table 2: Comparison of ResNet50 networks with different preprocessing.

The model using TorchIO transforms improved upon the unmodified model, increasing the F1 score from 0.863 to 0.875. However, the model using skull stripping outperformed both, with a score of 0.891. This is likely because skull stripping improves consistency among samples in the dataset, making it easier for the model to extract important features. The skull-stripped model was kept for the remainder of the study.

CycleGAN Generation Results

Examples of CycleGAN-generated images are shown in Figure 7.

Figure 7: CycleGAN image synthesis.

By observation, the synthesized Alzheimer’s sample displays more dark space throughout the brain than the normal sample it was transformed from, an indication of loss of brain mass and a characteristic of Alzheimer’s disease. In contrast, the synthesized normal sample on the bottom right has much less dark space. While the quality of our synthetic images has not been verified by experts, they exhibit many characteristics of real MRI images.

Comparison with GAN Augmentation

When CycleGAN was used for augmentation, an additional 705 AD samples and 476 NC samples of each orientation were generated, for a total of 1181 images per class. Table 3 shows a substantial increase in CNN performance when GAN augmentation is applied: the F1 score for the ResNet50 model increased from 0.863 to 0.946, and the F1 score for the ResNet50 with skull stripping increased from 0.891 to 0.951.

Table 3: Comparison of CNN models with GAN augmentation.


These results indicate that the addition of CycleGAN improves CNN classification performance. From this, it is reasonable to infer that the synthesized images had meaningful features that benefited the model. The increased size and balance among classes in the CycleGAN augmented dataset are also factors that are potentially responsible for the increase in performance. Overall, these results demonstrate the effectiveness of GANs in data augmentation.

Conclusions

In this study, we constructed CNN models based on the ResNet50 architecture to diagnose Alzheimer’s disease while also addressing the problem of size limitations in medical datasets through the use of generative adversarial networks (GANs). Using the ADNI1 dataset, we demonstrated that the addition of GANs can greatly improve the classification accuracy of deep learning models for Alzheimer’s disease diagnosis. Specifically, we used CycleGAN to generate images of one class from the other, balancing the dataset and increasing its overall size. Our results show that classification performance improved substantially, with the F1 score increasing from 0.863 to 0.946 for the standard model and from 0.891 to 0.951 for the model using skull stripping. Because many medical fields lack large datasets, the approach used in this study may generalize to other medical imaging tasks as well. Overall, with promising results in data augmentation, GANs have the potential to significantly improve classification performance across a wide variety of applications.

References

[1] Beach, Thomas G., et al. "Accuracy of the clinical diagnosis of Alzheimer disease at National Institute on Aging Alzheimer Disease Centers, 2005–2010." Journal of Neuropathology and Experimental Neurology 71.4 (2012), pp. 266–273.
[2] Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014, pp. 2672–2680.
[3] Hosseini-Asl, Ehsan, Robert Keynton, and Ayman El-Baz. "Alzheimer’s disease diagnostics by adaptation of 3D convolutional network." 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 2016, pp. 126–130.
[4] Glozman, Tanya, and Orly Liba. Hidden Cues: Deep Learning for Alzheimer’s Disease Classification. CS331B project final report, 2016.
[5] Perez-Garcia, Fernando, Rachel Sparks, and Sebastien Ourselin. "TorchIO: a Python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning." arXiv preprint arXiv:2003.04696 (2020).
[6] Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." Proceedings of the IEEE International Conference on Computer Vision. 2017, pp. 2223–2232.


Enhancement of Cat Litter Using Probiotic and Bacterial-Limiting Solutions
by Julia Choi
art by Marissa Gaut

INTRODUCTION

As levels of microdust continue to increase in several parts of the world, predominantly in Eastern Asia, concerns are also rising about its effects on humans. Several researchers have pointed out that various industrial pollutants increase the toxicity of fine dust, which can lead to adverse health effects such as lung cancer and bronchial asthma [1]. Current studies on the relationship between fine dust and health mainly target humans. However, it has been reported that cats are particularly sensitive to dust layers within homes, which can result in feline hyperthyroidism or even premature death [2]. Surprisingly, cat litter is one of the main sources of toxic dust particles [3], and since cats regularly clean their fur with their mouths, they are prone to ingesting such toxins.

Cat litter is generally classified into three types: clumping, non-clumping, and silica gel crystals. The most prevalent type is clumping litter, composed of bentonite. Clumping litter has the advantage of being inexpensive, but it has poor water absorption. Some cats prefer non-clumping litter. Silica gel litter is made of silica gel beads; it is highly absorbent, controls odor well, and tends to last longer than other litters, but it is very expensive and can be toxic if ingested in large amounts [4].

Based on this information, this research aimed to develop a solution that would increase absorbency, minimize dust particles, and remain relatively inexpensive. Polymers are materials that can do all of the above. Among such polymers, polyethylene glycol (PEG) and polyvinyl alcohol (PVA) were tested for their effects on fine dust elimination. PEG is commonly used in biological experiments, including protein crystallization, cell fusion, and virus concentration [5]. Because of its low toxicity, PEG is also used in various industrial products. PVA is a resin frequently used in medicine; due to its minimal health risks, it is often used as a coating agent for pharmaceutical and dietary supplements.

While investigating the effects of dust particles in cat litter on felines, it was found that cat litter is a breeding ground for harmful bacteria. Toxic parasites such as Toxoplasma are transmitted from cat feces to the litter box, where they proliferate at a rapid rate [6]. Probiotics were found to be the most effective means of reducing bacterial colonies in litter. The most common probiotics belong to the Lactobacillus genus; among these, Lactobacillus acidophilus (L. acidophilus) and Lactobacillus plantarum (L. plantarum) have antibacterial properties, enhancing disease control by inhibiting pathogens from entering the body and multiplying.

The steps taken during this experiment were as follows: first, the contamination levels of cat litter and cat fur were observed. Then, the levels of dust contamination and absorption after applying PEG and PVA were measured to determine the most effective concentrations and volumes. Afterwards, L. acidophilus and L. plantarum were applied to the cat litter, and changes in bacterial contamination were recorded in order to find the ideal combination of polymers and probiotics. Finally, the new solution was tested to ensure that it has no toxic effects on feline endothelial and epithelial cells.

RESULTS

1. Degree of contamination in cat litter and fur

Effects of distilled water and ethanol on the removal of microdust from cat fur

Measures were taken to remove this dust directly from the fur. After washing the fur with either distilled water or 70% ethanol, the fur was observed under a microscope to compare contamination levels before and after washing. Table 1 indicates that washing had little to no effect, suggesting that cats are likely to ingest dust from their fur even after it is washed with ordinary household supplies.

Figure 1: Effects of distilled water and ethanol on the removal of microdust on cat fur

Table 1: Changes in contamination levels of cat fur after rinsing in ethanol or distilled water

2. Effects of polymers on the extermination of microdust in cat litter

A. Efficacy of microdust extermination when using polymers

When physically observed, the turbidity of the cat litter with added polymers was significantly lower.

B. Changes in the water absorbance of cat litter with added polymers

b-i. Water absorbance of used and unused cat litter

5 g of regular cat litter was tested to see how much water it could absorb. This served as the control group; if the polymer solutions did not absorb as much water as regular cat litter, it would mean that the solutions are not effective. Figure 2 shows that, once 3500 µL was added to the cat litter, a substantial amount of water was left at the top, indicating that cat litter can absorb up to 3.5 mL of water. When observed again after three days, the unused cat litter had less water at the top compared to the used cat litter. Therefore, the used cat litter absorbed more water than the unused cat litter.

Figure 2: Water level observations after injecting distilled water into unused and used cat litter

b-ii. Effect of different polymer volumes and concentrations on water absorbance

To compare against the control group, different volumes and concentrations of PEG or PVA were prepared. The volumes tested for both PEG and PVA were 100, 200, 300, 400, or 500 µL. The concentrations tested for PEG were 10%, 20%, 30%, 40%, or 50%; PVA could not be dissolved in distilled water beyond 25%, so the concentrations tested for PVA were 5%, 10%, 15%, 20%, or 25%. The results for water absorbency are shown in Figure 3A. They indicate a general trend that as more polymer was added to the cat litter, the water absorbency increased, with 500 µL being the optimal amount. In Figure 3B, 50% PEG was used as a stock solution, and dilutions at 0.2, 0.4, 0.6, 0.8, or 1.0 of the stock produced the 10%, 20%, 30%, 40%, or 50% PEG solutions; likewise, 25% PVA was used as a stock solution, and the same dilution ratios produced the 5%, 10%, 15%, 20%, or 25% PVA solutions. There is a general increase in water absorbance as concentration increases until a peak is reached at 5.75 and 6.0 mL. For PEG, the peak is at 50% (ratio 1.0) with 5.75 mL, but there is minimal variation between 20% and 40%; these results indicate that any concentration between 20% and 50% works well for adding PEG to cat litter. PVA peaks at 20% (ratio 0.8) with 6 mL, and 20% is the only optimal concentration available. Therefore, both PEG and PVA solutions not only retain the water absorption qualities of regular cat litter but outperform it.

Figure 3: Absorbency levels based on polymer volume (A) and concentration (B)

Figure 4: Test tubes after adding PEG and PVA solutions to cat litter

3. The effects of probiotics on bacterial inhibition

A. L. acidophilus and L. plantarum extract effects on bacterial inhibition on cat fur

In order to confirm the degree of bacterial contamination in the fur samples, a bacterial culture experiment was performed on an agar plate. Figure 5 shows the images captured two days after incubation; under observation, three separate bacterial colony types were identified. Small amounts of bacteria from each colony were collected and mixed with nutrient broth, and optical density (OD) values were measured after cultivation at 30 degrees Celsius for 48 hours. Then, L. acidophilus and L. plantarum extract were added to each bacterial culture. After incubation under the same conditions, OD values were measured again to observe any substantial differences in bacterial growth. The reduced OD values shown in Figure 6 reveal that bacterial growth was inhibited by both probiotics.

Figure 5: Agar plates after bacterial culture of fur

Figure 6: OD of three different bacteria in fur based on probiotic addition

B. L. acidophilus and L. plantarum extract effects on bacterial inhibition in cat litter

The effects of the two probiotics were also measured against the contamination of cat litter. If the OD value of bacteria cultured from cat litter is lower after adding the probiotics than before, the probiotics are effective at inhibiting bacterial growth. Figure 7 shows the bacterial colonies that grew on agar plates after culturing used or unused cat litter. There were more bacterial colonies in used cat litter than in unused cat litter, indicating that bacteria from cat feces had contaminated the litter. A total of two types of bacterial colonies was found; these were cultivated and measured for OD values. The values before adding the probiotics are shown in blue in graphs A and B of Figure 8. Though L. acidophilus and L. plantarum extract seemed to increase the OD value (indicative of bacterial growth) of unused cat litter, the extract decreased the OD values of all the other bacterial colonies. It appears that the probiotics created a suitable environment for the first colony type found; however, because most bacteria are transmitted from cat urine or feces, this anomaly does not weigh heavily against the experiment. These data suggest that the probiotics L. acidophilus and L. plantarum extract are effective at inhibiting bacterial growth.


Figure 7: Agar plates after bacterial culture of unused and used cat litter

Figure 9: Microscopic picture of cell viabilities of different combinations of probiotics and polymers

Figure 8: OD of two different bacteria in unused and used cat litter based on probiotic addition

4. THE OPTIMAL MIXTURE CONCENTRATION OF POLYMERS AND PROBIOTICS

A. The relative cytotoxicity of the enhanced cat litter

When this solution is added to cat litter, it could also be ingested by cats while they are grooming. To determine whether the new solution produces any side effects when added to cat litter, its toxicity was tested on two types of cells: CPAE cells, which are endothelial cells, and MAC-T cells, which are epithelial cells. These are the ideal cell types for testing toxicity because they are the first cells affected by the polymers when ingested. A microscopic observation of the live cells is pictured in Figure 9, with the control in the top left corner. There was no significant decrease in cell number compared to the control, showing that no combination of the solutions is toxic to either cell type.

B. The optimal concentration of probiotics and polymers in terms of water absorbency

Figure 10: Cell viabilities of different combinations using PVA 25 and PEG 50

The ranges of concentration and volume needed to eliminate dust are described in Section 2 of the results for PVA and PEG, along with the finding that L. acidophilus and L. plantarum extract are effective against bacteria. This section aims to combine these two effects in the most optimal way. First, water absorbency levels were measured when mixtures of probiotics and polymers were used; only 2 g of cat litter was used to test water absorbency in Table 2. As shown, though the most effective combination was PVA at 20% concentration with MRS, PVA with L. plantarum or L. acidophilus extract performed at an acceptably high level as well. For PEG, 40% appeared to be the optimal concentration (named PEG 40), and it did not matter whether L. plantarum or L. acidophilus extract was used.

Table 2: Water absorbance levels of combined solutions of probiotics and polymers

Figure 11: Spectrophotometer readings measuring bacterial levels in PEG/probiotic combinations

Figure 12: Spectrophotometer readings measuring bacterial levels in PVA/probiotic combinations

C. Dust particles

It was confirmed that PEG 40 with L. plantarum extract and PVA 20 with L. plantarum extract were the most effective for inhibiting bacterial growth. To verify whether the new solution was equally effective at diminishing dust, these solutions were added to the cat fur. Figure 13 shows that the number of dust particles found on the cat fur decreased substantially. The PEG solution produced lower dust levels than the PVA solution, but both are relatively effective.

Figure 13: Observed dust particle changes after adding polymer/probiotic combinations to cat litter

CONCLUSION

A series of tests, including the effects of polymers and probiotics on bacterial and dust inhibition as well as water absorbency, was conducted, and the results were recorded above. In the end, it was found that the best solution was PEG at 20% concentration with L. plantarum extract. The purpose of this research was to introduce a material that enhances the common properties of cat litter while providing additional benefits such as bacterial disinfection and dust contamination control. Since the solution also increases water absorbency, less cat litter is needed to absorb more water, which is very economical. Finally, cats and humans will have a decreased chance of being affected by dust or infected by bacteria. The proposed solution offers many benefits to the subjects, meets all of the aforementioned criteria, and is therefore the optimal way to enhance modern cat litter.

References

[1] "Chemical Hope." Science History Institute, 1 Feb. 2017, www.sciencehistory.org/distillations/chemical-hope.
[2] "Common Types of Cat Litter in the Market Today." Mercola, healthypets.mercola.com/sites/healthypets/archive/2015/03/23/common-types-cat-litter.aspx. Accessed 31 July 2020.
[3] "House Cats at Risk of Death from Dangerous Chemicals Unless Homes Kept Dust Free." Express.co.uk, 27 Feb. 2017, www.express.co.uk/news/nature/772535/house-cats-dust-death-chemicals-clean-home.
[4] "Is Your Cat Litter Making You (and Your Cat) Sick?" Creative Loafing: Tampa Bay, www.cltampa.com/news-views/environment/article/20744756/is-your-cat-litter-making-you-and-your-cat-sick.
[5] Kang, Dongmug, and Jong-Eun Kim. "Fine, Ultrafine, and Yellow Dust: Emerging Health Problems in Korea." Journal of Korean Medical Science, vol. 29, no. 5, May 2014, pp. 621–22. doi:10.3346/jkms.2014.29.5.621.
[6] CDC (Centers for Disease Control and Prevention). "Toxoplasmosis – General Information – Frequently Asked Questions (FAQs)." 28 Feb. 2019, www.cdc.gov/parasites/toxoplasmosis/gen_info/faqs.html.


ACS San Diego Local Section The San Diego Local Section of the American Chemical Society is proud to support JOURNYS. Any student in San Diego is welcome to get involved with the ACS San Diego Local Section. Find us at www.sandiegoacs.org! Here are just a few of our activities and services:

Chemistry Olympiad

The International Chemistry Olympiad competition brings together the world’s most talented high school students to test their knowledge and skills in chemistry. Check out our website to find out how you can participate!

ACS Project Seed

This summer internship provides economically disadvantaged high school juniors and seniors with an opportunity to work with scientist-mentors on research projects in local academic, government, and industrial laboratories.

College Planning

Are you thinking about studying chemistry in college? Don’t know where to start? Refer to our website to learn what it takes to earn a degree in chemistry, the benefits of finding a mentor, building a professional network, and much more!

www.sandiegoacs.org


President Kevin Song

Editor-in-Chief Jessie Gan

Vice Presidents Gwennie Liu, Jade Nam

Assistant Editors-in-Chief Jason Cui, Jenny Han

Coordinators Ben Hong, Riya Irigireddy, Ashvin Kumar, Erica Wang

Copy Editor Grace Zhou

Contributing Writers Suryatejas Appana, Ishani Ashok, Hana A. Chang, Julia Choi, Ria Kolala, Audrey Kwan, Amy Oh, Krithikaa Premnath, Ashleigh Provoost, Arsh Rai, Anagha Ramnath, Tyler Shern, Christopher Taejoon Um, Sunny Wang, Renee Wu

Design Manager Kevin Song

Contributing Designers Kevin Song

Scientist Review Board Members Titan Alon, Benjamin Grinstein, Christina Hoong, Hari Khatuya, Aneesh Manohar, Luke Miller, Tapas Nag, Ceren Yardimci Tumay

Contributing Editors Marissa Gaut, Jenny Han, Jae Kim, Nandita Kodali, Grace Zhou

Graphics Manager Amy Ge

Assistant Graphics Manager Julia Liu

Contributing Graphic Artists Parastou Ardebilianfard, Marissa Gaut, Amy Ge, Jenny Han, Claire Hwa, Lilian Kong, Julia Liu, Seobin Oh, Seojin Oh

Web Designer Logan Levy

Staff Advisor Mrs. Mary Ann Rall

Dear Valued Reader,

We are pleased to present the second JOURNYS issue of the 2020-21 year, Issue 12.2! As always, we would like to gratefully acknowledge all who made it possible, from authors who worked with us to make their work publication standard, to JOURNYS staff and SRB members who graciously volunteered their time and dedication to each and every article. We have acknowledged before that this year has been a difficult one for everyone, yet we are constantly inspired and amazed by the tenacity of our peers to push through unprecedented challenges and continue to foster their love of science and journalism. We hope to "journey" on far into the future as a journal that combines scientific rigor and wonder with the spirit of collaboration. Thank you so much for your continued support!

Kevin & Jessie


Journal of Youths in Science

2020-2021

