JOURNYS Issue 9.2


VOLUME 9 ISSUE 2

03 A TALE OF TWO POISONS by Jonathan Kuo
11 ROBOTIC ASSISTED SURGERY by Jennifer Yi
39 DR. SAWREY INTERVIEW by Emily Zhang

Art by Richard Li


The Journal of Youths in Science (JOURNYS), formerly known as Falconium, is a student-run publication. It is a burgeoning community of students worldwide, connected through the writing, editing, design, and distribution of a journal that demonstrates the passion and innovation within each one of us.

Chapters:
Torrey Pines High School, San Diego, CA
Scripps Ranch High School, San Diego, CA
Mt. Carmel High School, San Diego, CA
San Diego Jewish Academy, San Diego, CA
Crean Lutheran High School, Irvine, CA
Orange County School of the Arts, Santa Ana, CA
Ardsley High School, Ardsley, NY
Mead High School, Spokane, WA

For more information about submission guidelines, please see https://www.journys.org/submit

Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments.
Website: www.journys.org // Email: eic@journys.org
Journal of Youths in Science
Attn: Mary Ann Rall
3710 Del Mar Heights Road
San Diego, CA 92130


issue 9.2 - winter 2017

03 A Tale of Two Poisons / Jonathan Kuo
05 Modern Forensic Science / Kishan Shah
07 Natural Antioxidant & Nano-Antioxidant Effects Against Oxidative Stress / Jessie Gan
11 The Heart’s Electrical System / Allison Jung
12 Robotic Assisted Surgery / Su Kim
15 The Analysis of Factors Leading to the Allergenicity of Proteins / Ronin Sharma
19 Reproductive and Oncofertility Science Academy: A Review / Leona Hariharan

MATHEMATICS
21 RSA Cryptography / Kevin Ren
23 Self-Learning Code / Daniel Liu

Psychology
25 The Effect of Pop Music on Focus / Deepika Kubsad
28 Sleep Paralysis / Alina Luk

Environment
29 Soil Textures at Various Altitudes / Melba Nuzen
33 Ocean Thermal Energy Conversion / Jennifer Yi

miscellaneous
37 GSDSEF Summaries
39 Interview with Dr. Sawrey / Emily Zhang
41 Interview with Mrs. Boardman-Davis / Jonathan Lu
43 Science Opportunities


A Tale of Two Poisons // by Jonathan Kuo // art by Richard Li

Imagine you’re walking through an airport. While walking toward your terminal, you stumble into a nondescript woman, who accidentally spills part of her drink on your torso. You quickly excuse yourself and rush to your terminal, which is boarding its final passengers. You’re a bit out of breath at this point and feel a tad nauseous, but you figure it’s just from the stress of working for the past few days and from the fact that you rarely exercise anyway. Minutes later, as you’re settling into your seat, the nausea worsens. You vomit, you find it increasingly difficult to breathe, and you eventually die from apnea, or cessation of breathing.

Although this scenario may seem reminiscent of an action movie, a similar event occurred last February when Kim Jong-un’s half-brother Kim Jong-nam was assassinated at Kuala Lumpur International Airport in Malaysia. According to the BBC, Kim was attacked by two young women, one who splashed a liquid on his face, and one who covered his face with a cloth laced with liquid. After alerting the receptionist at the airport of his distress, he was quickly rushed to the hospital, but died from a seizure en route. An autopsy revealed that the nerve agent VX was involved in his death [1].

VX is one of those nasty chemicals that governments outlaw and chemists refuse to work with. Synthesized by British scientist Dr. Ranajit Ghosh in 1952 when he was searching for an alternative pesticide to the organochloride DDT (dichlorodiphenyltrichloroethane), VX belongs to a similar class of compounds called organophosphates. Organophosphates are deadly because they inhibit an enzyme called acetylcholinesterase (AChE), which helps break down the neurotransmitter acetylcholine during synaptic transmission. AChE’s active site has two main portions that interact with molecules: an anionic site that forms electrostatic interactions and an esteratic site made of several catalytic amino acids. Both sites are located deep within a gorge consisting of aromatic amino acids, contributing to the high specificity of AChE. When AChE acts normally on acetylcholine, the quaternary nitrogen moiety of choline—basically the part of acetylcholine attached to a nitrogen atom—is held in place by the anionic site, positioning the acetyl group in the esteratic site. AChE then hydrolyzes acetylcholine, resulting in acetic acid and choline as shown in Figure 1. However, when an organophosphate such as VX interacts with AChE, the phosphate moiety of VX covalently binds to and blocks the esteratic site, preventing normal AChE function, as shown in Figure 2 [2].

Figure 1: Positioning and hydrolysis of ACh [3]

Figure 2: Inhibition of AChE by an organophosphate [3]


Typically, acetylcholine binds to muscarinic receptors, receptors found in classical neuromuscular junctions and autonomic motor fibers, and nicotinic receptors, receptors found throughout the cerebral cortex, hippocampus, and brainstem. Upon administration of VX, acetylcholine lingers in these areas, repeatedly stimulating postsynaptic neurons and resulting in symptoms typical of excess cholinergic signaling—first sweating and twitching, then nausea, vomiting, diarrhea, coma, and eventually respiratory system failure. Because acetylcholine also plays a role in the brain, cognitive effects may occur; but these symptoms have not been precisely identified, since patients exposed to VX typically die before a psychological examination.

An important detail of Kim Jong-nam’s assassination, however, is that the two women involved in the attack did not die. Acute toxins such as VX are typically characterized by a measure called LD₅₀, the minimum dose of a toxin that kills 50% of a sample population. Bajgar reports that, in humans, an oral dose of VX has an LD₅₀ of 5 mg/70 kg—which is easily less than a drop of liquid for the average woman. Because of VX’s high toxicity, as well as its extremely low volatility, it is unlikely that both women managed to avoid any contact with VX or simply wipe VX off of their skin.

VX has one more property that makes it a good assassination weapon: it’s a binary poison. In other words, VX can be synthesized from two relatively non-lethal compounds: Agent QL (a phosphonite compound) and either Agent NE (elemental sulfur) or Agent NM (a sulfur-based compound). This offers two reasonable explanations for how the women survived the attack: a) they could have self-administered VX antidotes such as atropine, which blocks muscarinic receptors to protect the nervous system from excess stimulation, or obidoxime, which stops organophosphates from binding to acetylcholinesterase; or b) each woman was given one of the two ingredients of VX, which would then react on Kim Jong-nam’s face without killing either woman. It’s more likely that Agent NM (the sulfur compound) was used rather than Agent NE (elemental sulfur), because the reaction between Agent QL and Agent NE is highly exothermic [4], and no descriptions of burns to his face have been reported [1].

Fortunately, it’s likely that VX won’t pose any human risk from mad scientists messing around in their basements or conspiracy theorists trying to destroy the world. Synthesis of VX is achieved through a four-step process known as the transester process, which is relatively short in the world of organic chemistry: phosphorous trichloride is methylated, reacted with ethanol, subjected to nucleophilic attack by an ethanolamine, and then reacted with sulfur; the mixture is left to isomerize in situ (in its original solution) to produce a mixture of two enantiomers of VX, both of which are deadly [5]. However, such a synthesis requires a well-equipped chemistry laboratory to prevent inhalation of toxic fumes at any step of the process, as well as a good knowledge of organic chemistry to work up products and actually isolate VX from any side products made during the synthesis. Additionally, the reagents to make VX require some sort of chemical license to work with, which can be hard to obtain for the average citizen.

VX and other organophosphates could pose a greater risk from governments seeking to engage in chemical warfare. Modern use of chemical warfare was first established in WWI with the utilization of chemicals such as chlorine gas (Cl₂), phosgene (COCl₂), and mustard gas (C₄H₈Cl₂S). Nerve agents soon followed these toxic agents, with a set of nerve agents known as the G-series synthesized by German scientists (hence the name G-series); this series includes well-known agents such as sarin and soman. However, partly due to the sheer destruction caused by chemical agents, they played a far less prominent role in World War II than they did in World War I. Indeed, chemical agents have been banned multiple times in the modern era, beginning with the Geneva Protocol in 1925, followed by the Biological and Toxin Weapons Convention in 1972 and further sanctions such as the 1993 Chemical Weapons Convention [6]. However, as demonstrated by the attack on Kim Jong-nam, chemical warfare has certainly not been untouched by political entities such as North Korea and thus could play a role in future armed conflicts.

In general, you shouldn’t be too worried about attacks from organophosphates such as VX unless you’re a high-profile individual at risk for assassination, in which case VX is merely one item on your bucket list of dangers. So, feel no fear: you can walk through airports safely, and chemical attacks might be one of the few scenarios where the TSA’s no-liquid policy might actually save your life.
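As a back-of-the-envelope check on that "less than a drop" estimate, here is a minimal sketch in Python; the drop volume and VX density used below are assumed reference values, not figures from this article:

    # Rough check of the "less than a drop" claim for VX's oral LD50.
    # Assumed values (not from the article): VX density ~1.01 g/mL,
    # one drop ~0.05 mL.
    ld50_mg = 5.0                    # reported oral LD50 for a 70 kg human
    body_mass_kg = 70.0
    vx_density_mg_per_ml = 1010.0    # ~1.01 g/mL
    drop_volume_ml = 0.05            # a typical pharmacy drop

    dose_per_kg = ld50_mg / body_mass_kg               # ~0.07 mg/kg
    lethal_volume_ml = ld50_mg / vx_density_mg_per_ml  # ~0.005 mL

    print(f"LD50 per kg: {dose_per_kg:.3f} mg/kg")
    print(f"Lethal volume: {lethal_volume_ml * 1000:.1f} uL, "
          f"about {lethal_volume_ml / drop_volume_ml:.0%} of a drop")

By this estimate, a lethal oral dose occupies roughly a tenth of a single drop, consistent with the article's claim.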

References
[1] North Korean leader’s brother Kim Jong-nam killed at Malaysia airport. BBC News. http://www.bbc.com/news/world-asia-38971655. Published February 14, 2017.
[2] Massoulié J, Pezzementi L, Bon S, Krejci E, Vallette F-M. Molecular and cellular biology of cholinesterases. Progress in Neurobiology. 1993;41(1):31-91. doi:10.1016/0301-0082(93)90040-y.
[3] Cholinesterase Inhibitors: Including Insecticides and Chemical Warfare Nerve Agents. Part 4, Section 11, Management Strategy 3: Medications, 2-PAM (2-Pyridine Aldoxime Methylchloride) (Pralidoxime). Centers for Disease Control and Prevention. https://www.atsdr.cdc.gov/csem/csem.asp?csem=11&po=23. Published October 16, 2010.
[4] Tucker J. War of Nerves: Chemical Warfare from World War I to Al-Qaeda. New York, NY: Anchor; 2007.
[5] Benschop HP, Jong LPAD. Nerve agent stereoisomers: analysis, isolation and toxicology. Accounts of Chemical Research. 1988;21(10):368-374. doi:10.1021/ar00154a003.
[6] Chemical and Biological Weapons. International Committee of the Red Cross. https://www.icrc.org/en/document/chemical-biological-weapons. Published April 8, 2013.


Modern Forensic Science // By: Kishan Shah // Art By: William La

A masked man wearing gloves enters a home through an open window and commits a murder, but stops in the kitchen to drink a can of soda, leaving the empty can on the kitchen counter. The decomposing body of an unidentified middle-aged woman is discovered in the woods, but there are no suspects or motive. In the past, crimes like these may have remained unsolved for years or forever. However, with recent scientific advances, such cases have become much easier to solve. Just over 30 years ago, detectives were unable to use DNA sequencing to convict crime suspects; today, sophisticated scientific testing has become a staple of modern crime investigations.

Forensics is a term used to describe the scientific tests or techniques employed to solve a crime. Until the 1970s, investigators relied on rudimentary forensic methods that were painstakingly inefficient and often produced small and rather insignificant leads. That all began to change with the introduction of newer tests based on advances in molecular biology, chemistry, and genetics, along with the development of powerful computer hardware and software. In 1987, Florida rapist Tommie Lee Andrews became the first person in the United States to be convicted using DNA technology. Today, common biological forensics tests include basic DNA testing, DNA phenotyping, RNA analysis, and ancestry informative markers.

Reliable DNA sequencing is a significant scientific breakthrough that investigators have capitalized on to solve cases. To sequence DNA, it must first be extracted from cells within a tissue or fluid sample, and then quantified and amplified for testing. Once the DNA is analyzed, its sequence can be compared to samples of known DNA profiles. This process allows investigators to compare DNA in a tissue sample from a crime scene with an individual’s DNA to confirm or rule out the individual as a suspect. If there is no suspect to compare the tissue sample DNA with, investigators can run a comparison with DNA data stored in the FBI’s National DNA Index System (NDIS) to search for a match. NDIS is part of the Combined DNA Index System (CODIS), which allows law enforcement agencies in the United States to share and compare DNA profiles [1]. In the hypothetical case of the murder by a masked man, saliva from the soda can and tissue samples from the deceased victim contain genetic material which can be extracted and analyzed. DNA sequenced from the soda can may be compared with the databases to identify the murderer, and DNA sequenced from the unidentified victim can be compared with DNA profiles voluntarily contributed by relatives of missing persons to identify the victim.

Ancestry informative markers (AIMs) are sets of polymorphisms in DNA that occur with significantly different frequencies between populations from different geographical regions. AIMs can be used to estimate the geographical origins of an individual’s ancestors and are particularly useful for distinguishing between ethnic groups [2]. However, a major challenge posed by this new technology is that it is not currently sophisticated enough to differentiate among heavily mixed populations, in which multiple ancestries have blended together over generations. Although the technology isn’t perfect, it can at least be used in some cases to rule out certain individuals [2].

DNA phenotyping is widely used by forensic specialists because it helps them piece together an image of what a suspect may look like. This relatively new technology can be used to determine traits such as geographic ancestry, eye and natural hair color, and even possible facial features. The first step in the phenotyping process is gathering and sequencing genetic material left at the crime scene. The genetic sequence is then entered into a complex computer program that further analyzes the DNA to develop a realistic image of the suspect’s face. According to Sergeant Stacy Gallant, a cold-case homicide investigator, the results of DNA phenotyping forced her team to change the suspect they had originally focused on in one case [3]. In that case, fragments of the perpetrator’s tissue were found under the victim’s nails, likely resulting from a physical struggle. Based on the available evidence, the company Parabon NanoLabs was able to develop a computer-generated portrait of the murderer. His ancestry, based on the information provided, was northern European. Prior to obtaining results from the DNA phenotyping test, investigators had believed the suspect to be of Mexican origin. Despite the promise of DNA phenotyping, investigators will need to treat its results with caution, since the computer-derived facial images of a suspect are not 100% accurate, as many aspects of a person’s appearance are not encoded in DNA. Nevertheless, 40 different law enforcement agencies and organizations have currently adopted the method.

While DNA phenotyping can be used for individual identification, it has its limitations. One such limitation is that it cannot determine what type of tissue or fluid is present in a sample. RNA profiling, on the other hand, has the ability to distinguish between different types of body tissues and fluids, since different mRNAs are found in different types of cells. Biological tissue and fluid analysis is of paramount importance in the field of forensics. For example, RNA profiles of blood can differentiate between menstrual blood and arterial/venous blood, which is helpful in identifying the origin of blood stains left at a crime scene. RNA expression profiles can also reveal the age of a blood stain, because different types of RNA degrade at different rates, and may offer possible insights into diseases that could have contributed to a victim’s death [4].

One of the most useful recently developed technologies in forensics is x-ray photoelectron spectroscopy (XPS). This technology can help investigators spot materials present at crime scenes that are easily overlooked by the human eye. For example, XPS can identify a tiny fiber on the floor of a crime scene in seconds and may also be able to discern chemical speciation of fingerprints, layers of substrates deposited on surfaces due to a fire or explosion, and particulate materials and cosmetics [5]. Another new breakthrough in the field of forensics is visible wavelength reflectance hyperspectral imaging, in which a camera equipped with a liquid-crystal tunable filter takes pictures in multiple wavelengths. This technology can differentiate between bloodstains and other dark stains at the crime scene without any physical contact, reducing the risk of tampering with evidence. It expedites the process of collecting and analyzing blood samples, since it can quickly identify bloodstains which can then be sent to the lab [6].

Anyone who has watched one of the many forensics shows on television today has seen the aforementioned technologies utilized to analyze crime scenes and convict suspects. The development of modern forensics techniques has helped convict criminals more confidently, bring justice to victims and their families, and make crime scene investigators’ jobs a little easier.
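To make the database-comparison idea concrete, here is a minimal sketch of short tandem repeat (STR) profile matching; the loci, allele values, and exact-match rule below are illustrative assumptions, not CODIS's actual search algorithm:

    # Sketch of STR profile matching against a database (hypothetical data;
    # real CODIS searches use 20 core loci and careful match statistics).
    crime_scene = {"D8S1179": (13, 14), "TH01": (6, 9.3), "FGA": (21, 24)}

    database = {
        "profile_A": {"D8S1179": (13, 14), "TH01": (6, 9.3), "FGA": (21, 24)},
        "profile_B": {"D8S1179": (12, 15), "TH01": (7, 8), "FGA": (20, 22)},
    }

    def matches(sample, candidate):
        # A candidate matches if every typed locus has the same allele pair.
        return all(sorted(candidate.get(locus, ())) == sorted(pair)
                   for locus, pair in sample.items())

    hits = [name for name, prof in database.items()
            if matches(crime_scene, prof)]
    print(hits)  # ['profile_A']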

References
[1] Norrgard K. Forensics, DNA fingerprinting, and CODIS. Nature Education. 2008;1(1):35. https://www.nature.com/scitable/topicpage/forensics-dna-fingerprinting-and-codis-736.
[2] Phillips ML. Crime scene genetics: Transforming forensic science through molecular technologies. BioScience. 2008;58(6):484-489. doi:10.1641/b580604.
[3] Pollack A. Building a Face, and a Case, on DNA. The New York Times. https://www.nytimes.com/2015/02/24/science/building-face-and-a-case-on-dna.html. Published February 23, 2015. Accessed October 5, 2017.
[4] Bauer M. RNA in forensic science. Forensic Science International: Genetics. 2007;1(1):69-74. doi:10.1016/j.fsigen.2006.11.002.
[5] Watts JF. The potential for the application of X-ray photoelectron spectroscopy in forensic science. Surface and Interface Analysis. 2010;42(5):358-362. doi:10.1002/sia.3192.
[6] Exciting New Technologies in Forensic Science. CriminalJusticeProgramsOnline.com. https://www.criminaljusticeprogramsonline.com/exciting-new-technologies-alter-the-forensic-science-landscape/. Published August 25, 2014. Accessed October 5, 2017.


Natural Antioxidant & Nano-Antioxidant Effects Against Oxidative Stress // by Jessie Gan // art by Angela Liu

abstract
Reactive oxygen species (ROS) arise during normal bodily functions, such as respiration. These highly reactive, unstable molecules carry unpaired electrons and often steal electrons from biomolecules, causing damage and havoc in the cell. The harm that ROS, a type of free radical, inflict on cells is called oxidative stress, which is a major contributor to the development of chronic diseases such as atherosclerosis, Alzheimer’s disease, diabetes, and cancer. Antioxidants can combat oxidative stress, but the effectiveness of natural antioxidants and new nano-antioxidants still needs to be fully characterized and compared. The objective of this experiment is to characterize and investigate the mechanisms of natural and nano-antioxidants, and to determine which natural and nano-antioxidant is most effective against oxidative stress in Saccharomyces cerevisiae (baker’s yeast). Four natural antioxidants—catechin in green tea, allicin in garlic, glutathione, and vitamin C—and three nano-antioxidants—MitoQ, C₆₀ (carbon fullerene), and gold nanoparticles—were tested. Green tea has been proposed as the best natural antioxidant in terms of its flexibility to donate electrons to ROS, while MitoQ is said to be the superior nano-antioxidant due to its ability to target ROS in mitochondria. Results of this experiment showed that green tea and C₆₀ were the most effective natural and nano-antioxidants against oxidative stress. Glutathione and vitamin C displayed pro-oxidative effects and were detrimental to yeast survival at high concentrations.

introduction
Oxidative stress, a major inducer of several chronic diseases worldwide including atherosclerosis, Alzheimer’s disease, diabetes (type 1 and type 2), and cancer, is caused by ROS. ROS are produced in the mitochondria during cellular respiration. As highly unstable molecules, they destroy cells by causing DNA oxidative damage, lipid peroxidation, and protein oxidation. ROS trigger the aggregation of highly oxidized low density lipoproteins (Ox-LDL), which are catalysts for atherosclerosis [1] (Figure 1), and also fuel the formation of beta-amyloid (Aβ) plaques and tau protein in Alzheimer’s disease [2]. In both type 1 and type 2 diabetes, oxidative stress increases stress signaling on certain pathways that either decrease insulin biosynthesis in pancreatic β-cells or increase insulin resistance in arterial cells [3]. Moreover, free radicals can contribute to the development of cancer by reducing DNA replication fidelity [4].

ROS are mainly created during normal physiological processes, namely cellular respiration and respiratory bursts. Cellular respiration is a series of reactions that takes place in the mitochondria to produce adenosine triphosphate (ATP), the energy currency for most living organisms. A specific part of cellular respiration called the electron transport chain is a constant source of ROS. During this stage, high-energy electrons reduce oxygen, but 2% - 4% of these electrons prematurely and incompletely reduce oxygen, creating a superoxide anion, a highly hostile free radical. Respiratory bursts are produced by the body’s immune system to purposely generate ROS and utilize them to combat pathogens. Furthermore, in the presence of oxidized iron, more free radicals may occur due to the reaction of oxidized iron and hydrogen peroxide. Cumulatively, these processes initiate oxidative stress in the body, which in turn harms cells.


Figure 1: ROS-assisted formation of oxidized LDL from LDL [1]

Oxidative stress by ROS primarily affects three main macromolecules: lipids, proteins, and deoxyribonucleic acid (DNA). When ROS oxidize lipids in cell membranes, they set off free radical chain reactions that may induce the cell membrane to lyse. Oxidative stress interferes with several levels of protein structure by modifying amino acids, altering protein bonding, and severing peptide bonds. Lastly, the oxidation of DNA occurs in either the nitrogenous bases or the sugar backbone molecules. When the sugar molecule is oxidized, the strand falls apart. However, when the nitrogenous base is oxidized, the molecular structure changes, causing it to bond with the wrong corresponding base and creating a mutation in the strand. These macromolecules are integral to the cell; when they are damaged by oxidative stress, chronic disease may develop in the body over time.

To combat oxidative stress, the body produces natural enzymatic antioxidants, which are molecules that reduce free radicals to a stable form while still remaining stable themselves. The flexibility of an antioxidant to give and receive electrons while maintaining stability is called its resonance, which can be interpreted as an antioxidant’s effectiveness. Naturally occurring antioxidants use one of two methods to reduce ROS via free radical scavenging: single-electron transfer (SET) or hydrogen atom transfer (HAT), both of which involve donating an extra electron to the ROS to stabilize it. These antioxidants are found in green tea, garlic, glutathione, and vitamin C. Green tea has polyphenols that confer stability through resonance stabilization [5] (Figure 2), allicin in garlic breaks down into a more competent antioxidant, 2-propenesulfenic acid [6], glutathione is a natural antioxidant produced by the body, and vitamin C is a unique “recyclable” antioxidant that helps regenerate other vitamin antioxidants like vitamin E.

Figure 2: Bond-line structure of epigallocatechin-3-gallate (EGCG)

There are currently two classes of antioxidants on the consumer market, both of which claim to have various health benefits, including the reduction of oxidative stress. Nano-antioxidants are synthetic antioxidants with a high surface area to volume ratio; they differ from natural antioxidants in that they are less susceptible to degradation when ingested due to their ability to penetrate cell membranes. They are the focus of new antioxidant therapy for many of the aforementioned diseases [7], and include MitoQ, C₆₀, and gold nanoparticles. MitoQ is a combination of coenzyme Q10 and a tetraphenylphosphonium (TPP⁺) ion, which enables it to penetrate mitochondria, the origin of much oxidative stress from electron transport chain leakage [8]. C₆₀ is an extremely stable molecular fullerene (buckyball) that is lipophilic and also targets mitochondria [9] (Figure 3). Gold nanoparticles are biocompatible and can act as antioxidants, and they can also be coated with other antioxidants and act as a delivery system [10].

Figure 3: 3D representation of a C₆₀ fullerene [9]

This research project tested the effects of four natural antioxidants and three nano-antioxidants against hydrogen peroxide-induced oxidative stress in Saccharomyces cerevisiae. The natural antioxidants tested were catechins in green tea extract, allicin in garlic, glutathione, and vitamin C. The nano-antioxidants tested were MitoQ, C₆₀, and gold nanoparticles. The natural antioxidants and nano-antioxidants were tested separately from each other because nano-antioxidants require a much lower concentration to be effective; even at a lower concentration, the nano-antioxidants rivaled the natural antioxidants in efficacy against oxidative stress. The hypothesis was that if Saccharomyces cerevisiae are exposed to oxidative stress, then yeast protected by an antioxidant will show higher optical density than yeast without protection. Of the four natural antioxidants, green tea was predicted to be most effective in protecting yeast from oxidative stress, since it has a large polyphenol resonant molecular structure. Of the three nano-antioxidants, MitoQ was predicted to most effectively protect yeast from oxidative stress, since MitoQ is comprised of coenzyme Q10 and a TPP⁺ salt ion which penetrates mitochondria and targets ROS effectively at their place of origin.



purpose
To find which natural antioxidants and which nano-antioxidants most effectively defend the model organism Saccharomyces cerevisiae against oxidative stress.

method
This research investigated the properties and effects of four natural antioxidants (green tea [Solaray], garlic [Trader Joe’s], glutathione [Setria], vitamin C [Nature Made]) and three nano-antioxidants (MitoQ [MitoQ], C₆₀ [Good and Cheap Carbon 60 Olive Oil], gold nanoparticles [Silver MTN Minerals]) in different concentrations against free radicals, using Saccharomyces cerevisiae as a model organism and hydrogen peroxide to induce oxidative stress.

Three trials were performed. In each trial, there were 40 test tubes for each antioxidant, with eight different mixture combinations and five samples each. Each test tube was first prepared with the same basic nutrient mixture for yeast growth: 2 μl of yeast solution (1 g yeast powder mixed with 4 mL distilled water), 30 mL water, 0.25 mL corn syrup, and 0.25 mL soup broth. The first three test tubes were prepared as controls: (1) the basic yeast mixture only (positive control), (2) the basic yeast mixture with 0.5 mL of antioxidant (positive control), and (3) the basic yeast mixture with 0.25 mL hydrogen peroxide (H₂O₂) (negative control). Positive control (1) ensured that the yeast actually proliferated, positive control (2) verified that the antioxidants did not negatively affect yeast that were not experiencing oxidative stress, and negative control (3) confirmed that oxidative stress indeed had a negative effect on yeast survival, reducing yeast optical density. The next five test tubes (4)-(8) contained the basic mixture, 0.25 mL of H₂O₂, and 0.5 mL of varying concentrations of antioxidant. Each of these test tube solutions was replicated with five samples to provide greater overall accuracy. For the natural antioxidants, the five concentrations were 1.56 mg/mL, 3.125 mg/mL, 6.25 mg/mL, 12.5 mg/mL, and 25 mg/mL. For the nano-antioxidants, a serial dilution produced the five concentrations of 0.03125 mg/mL, 0.0625 mg/mL, 0.125 mg/mL, 0.25 mg/mL, and 0.5 mg/mL, a lower range due to their superior reduction abilities.

The yeast density in each test tube was measured in platinum-cobalt units (PCU) before and after the 24-hour incubation period, using a handheld colorimeter from Hanna Instruments. PCU is the standard color scale used for measuring the color, or turbidity, of water, which is caused by dissolved or suspended particles and in this case corresponds to intact yeast cells. The PCU scale ranges from distilled water at 0 parts per million up to 500 parts per million of platinum-cobalt to water, and utilizes an LED light source and light detector to measure turbidity. The experiment thus measured how each antioxidant affected yeast optical density after 24 hours. This measure indicated the proliferation rate of the yeast and its survival when subjected to oxidative stress. The effectiveness of the antioxidants was compared within each group. Three trials were run for each antioxidant over several weeks and the data were analyzed.
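For concreteness, the nano-antioxidant concentration series corresponds to a two-fold serial dilution; a minimal sketch, assuming each step halves the previous concentration (the article reports these five concentrations but not the pipetting details):

    # Two-fold serial dilution from a 0.5 mg/mL nano-antioxidant stock.
    stock_mg_per_ml = 0.5
    n_steps = 5

    series = [stock_mg_per_ml / 2**i for i in range(n_steps)]
    print(series)  # [0.5, 0.25, 0.125, 0.0625, 0.03125]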

results
In comparing natural antioxidants, I found that green tea was the most effective natural antioxidant, surpassing the positive control yeast optical density. This is due to the catechin epigallocatechin-3-gallate (EGCG), a resonantly stable molecule with multiple hydroxyl (-OH) groups that facilitate hydrogen atom transfer to quench free radicals while remaining stable. All concentrations of green tea led to increased density of yeast. Garlic was also able to prevent yeast from dying of the H₂O₂-induced oxidative stress at all concentrations. The other natural antioxidants, glutathione and vitamin C, only prevented the yeast from dying at lower concentrations; at the higher concentrations, the yeast cells were no longer protected. This could be due to the pro-oxidative effects of glutathione and vitamin C, which may occur when high concentrations of antioxidants start to reduce transition metals and set off free radical Fenton reactions. Fenton reactions occur when reduced transition metals react with hydrogen peroxide and produce aggressive free radicals.

In comparing nano-antioxidants, C₆₀ was found to be the most powerful, surpassing the positive control yeast optical density. While MitoQ was hypothesized to be the most potent nano-antioxidant, the results showed that the C₆₀ fullerene was the best nano-antioxidant. C₆₀ possesses the ability to target mitochondria and has an extremely stable molecular structure of 60 covalently-bonded carbon atoms. This allows C₆₀ to remain resonantly stable after donating electrons to radicals. All nano-antioxidants protected the yeast cells from oxidative stress, and none had pro-oxidation effects at the concentrations used in this experiment.

The experiment demonstrated clear results and successfully identified which antioxidants are most effective; however, it was not anticipated that such detrimental pro-oxidative effects would be exhibited by glutathione and vitamin C. The nano-antioxidants were also clearly superior to the natural antioxidants for a number of reasons, such as their high surface area to volume ratio, their ability to enter mitochondria, and, for MitoQ and C₆₀, the ability to target the areas of ROS production. In the experiment, even at a significantly smaller concentration, the nano-antioxidants were superior to the natural antioxidants in terms of yeast survival measured in optical density.

discussion
The effects of oxidative stress in Saccharomyces cerevisiae are best combated by natural green tea catechins and the C₆₀ nano-antioxidant. In the experiment, pro-oxidative effects were also found, which could indicate that a high concentration of antioxidants may be harmful. This should be taken into consideration when taking high doses of antioxidants. Antioxidants are an integral part of keeping the body in redox balance (a constant state of molecules taking and giving electrons), but it is also important to be mindful of the delicate balance that can be disrupted by too many or too few antioxidants. A continuation of this research could be to experiment with a combination of green tea catechins and C₆₀ nano-antioxidant as a “super-antioxidant,” to see if two of the most potent antioxidants applied simultaneously would make a more powerful reducer, or if they would start to induce pro-oxidation. Different models, such as mice, should be tested to investigate the effects on mammalian organisms. Furthermore, the antioxidants should be tested in disease models to develop potential treatments for atherosclerosis, neurodegenerative disorders, type 1 and type 2 diabetes, and cancer. Since an abundance of ROS in the body is common for many trauma and chemotherapy patients, treatments with strong antioxidants may be beneficial in many hospitals. To create this new antioxidant, different ratios of green tea catechins and C₆₀ nano-antioxidant need to be tested to discover which combination can most effectively fight oxidative stress.

[Photos: Green tea sample, day 1; C₆₀ sample, day 1]

REFERENCES
[1] Madamanchi NR, Vendrov A, Runge MS. Oxidative stress and vascular disease. Arteriosclerosis, Thrombosis, and Vascular Biology. 2004;25(1):29-38. doi:10.1161/01.atv.0000150649.39934.13.
[2] Liu Z, Li T, Li P, et al. The ambiguous relationship of oxidative stress, tau hyperphosphorylation, and autophagy dysfunction in Alzheimer’s disease. Oxidative Medicine and Cellular Longevity. 2015;2015:1-12. doi:10.1155/2015/352723.
[3] Kaneto H, Katakami N, Matsuhisa M, Matsuoka T-A. Role of reactive oxygen species in the progression of type 2 diabetes and atherosclerosis. Mediators of Inflammation. 2009;2010:1-11. doi:10.1155/2010/453892.
[4] Preedy VR. Cancer: Oxidative Stress and Dietary Antioxidants. London, UK: Academic Press; 2014.
[5] Lambert JD, Elias RJ. The antioxidant and pro-oxidant activities of green tea polyphenols: A role in cancer prevention. Archives of Biochemistry and Biophysics. 2010;501(1):65-72. doi:10.1016/j.abb.2010.06.013.
[6] Nimse SB, Pal D. Free radicals, natural antioxidants, and their reaction mechanisms. RSC Advances. 2015;5(35):27986-28006. doi:10.1039/c4ra13315c.
[7] Sandhir R, Yadav A, Sunkaria A, Singhal N. Nano-antioxidants: An emerging strategy for intervention against neurodegenerative conditions. Neurochemistry International. 2015;89:209-226. doi:10.1016/j.neuint.2015.08.011.
[8] Apostolova N, Victor VM. Molecular strategies for targeting antioxidants to mitochondria: therapeutic implications. Antioxidants & Redox Signaling. 2015;22(8):686-729. doi:10.1089/ars.2014.5952.
[9] Chistyakov VA, Smirnova YO, Prazdnova EV, Soldatov AV. Possible mechanisms of fullerene C₆₀ antioxidant action. BioMed Research International. 2013;2013:1-4. doi:10.1155/2013/821498.
[10] Kalmodia S, Vandhana S, Rama BRT, et al. Bioconjugation of antioxidant peptide on surface-modified gold nanoparticles: a novel approach to enhance the radical scavenging property in cancer cell. Cancer Nanotechnology. 2016;7(1). doi:10.1186/s12645-016-0013-x.



The Heart’s Electrical System // By: Allison Jung // Art By: William La

Although the nervous system is typically associated with electrical signals in the human body, the heart also contains its own electrical system. This system supports the heart’s sole function: to beat constantly, day after day, through the course of a human life. The heart has four chambers through which blood passes; the two upper chambers are the left and right atria, and the two lower chambers are the left and right ventricles. The atria are responsible for pumping blood into the heart, while the ventricles pump blood out of the heart.

The heart’s electrical system, responsible for controlling the heart rate, begins with a group of cells called the sinoatrial (SA) node. Located in the upper part of the right atrium, the SA node initiates contraction of the heart by generating action potentials. These action potentials create electrical currents within cells by shuttling ions through special protein channels. When ions move down their electrochemical gradients, their movements depolarize the typically negatively charged cell and create a difference in voltage of the membrane potential that can be propagated to other cells, forming the basis of the action potential. Action potentials typically require some sort of stimulus from the environment in order to occur: in neurons, chemical signals from neurotransmitters cause the initial change in membrane potential that drives the start of the action potential. The SA node, however, is able to circumvent this requirement through its specialized cardiomyocytes (cardiac muscle cells) called pacemaker cells. These pacemaker cells have a special property called automaticity, meaning that they are able to stimulate their own action potentials without any input from the environment.

After the pacemaker cells of the SA node begin propagating an action potential, the electrical signal is transmitted through gap junctions between cardiomyocytes, travelling through Bachmann’s bundle to the left atrium. Following the SA node and Bachmann’s bundle, the impulse travels down to the atrioventricular node, which is located in the middle of the heart and serves as a major electrical connection between the atria and ventricles. The impulse stops for a brief moment to allow the atria and ventricles to refill with blood, and then travels to the Bundle of His, located in the walls of the ventricles, causing the ventricles to contract. The signal finally travels to the right bundle branch and the left bundle branch, which both split into millions of Purkinje fibers that allow the electrical signal to diverge in numerous directions.

In addition to initiating heart contractions, the SA node sets the rate at which the heart beats, which is why it is nicknamed the heart’s “natural pacemaker.” Certain factors, however, such as fear, exercise, or sleep, can also affect the heart rate: exercise can double the heart rate, while sleep slows it down. An irregular heart rate is indicative of an arrhythmia, which can be categorized based on location or heart rate. Those that occur in the ventricles, for instance, are known as ventricular arrhythmias, while those in the atria are known as supraventricular arrhythmias. Bradycardia is a slow, irregular heartbeat of typically under 60 beats per minute, while tachycardia is a rapid heartbeat of over 100 beats per minute. Fibrillation, the most dangerous form of arrhythmia, is caused by individual twitching or contraction of muscle fibers.

Today, electrophysiology studies are utilized to identify the nature of abnormal heartbeats with new research and technology. In these studies, a specialized electrode catheter is inserted through a blood vessel connected to the heart, and electrical impulses are sent to stimulate the heart and measure its activity [1]. After an arrhythmia is identified, one method to treat it is with a pacemaker, a small device placed under the skin of the chest that detects electrical signals in the heart [2]. If the device detects an irregular heart rhythm, it sends electrical impulses to the heart to stimulate regular beats. Another treatment for arrhythmias is cardiac ablation, which involves destroying the portion of cardiac tissue that is causing the abnormal heartbeat [3]. This procedure can be completed through open heart surgery, but is typically performed through a catheter. Treatments like these can help restore the heart’s vital electrical system in those with irregular heartbeats.
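As a toy illustration of the rate thresholds described above (a sketch using only the cutoffs named in this article; real arrhythmia diagnosis also considers rhythm, not just rate):

    # Classify a resting heart rate using the thresholds from the article.
    def classify_heart_rate(bpm: float) -> str:
        if bpm < 60:
            return "bradycardia (slow)"
        if bpm > 100:
            return "tachycardia (rapid)"
        return "normal range"

    print(classify_heart_rate(45))   # bradycardia (slow)
    print(classify_heart_rate(72))   # normal range
    print(classify_heart_rate(130))  # tachycardia (rapid)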

References
[1] Electrophysiology Studies (EPS). American Heart Association. http://www.heart.org/HEARTORG/Conditions/HighBloodPressure/KnowYourNumbers/Understanding-Blood-Pressure-Readings_UCM_301764_Article.jsp. Published January 2, 2018. Accessed October 3, 2017.
[2] Arrhythmia. National Heart, Lung, and Blood Institute. https://www.nhlbi.nih.gov/health-topics/arrhythmia#Treatment. Accessed October 4, 2017.
[3] Cardiac ablation. Mayo Clinic. http://www.mayoclinic.org/tests-procedures/cardiac-ablation/home/ovc-20268855. Accessed October 4, 2017.



Robotic Assisted Surgery

Today, robotic-assisted surgery allows surgeons to reduce incision sizes and conduct more accurate and precise procedures. In the future, robotics may be the answer to long distance surgeries, with doctors operating the console at a separate location from the patient. Currently, however, robotic-assisted surgery remains an underdeveloped field. Financial costs and lack of training pose challenges, and case samples are limited for the range of surgeries that exists today. To understand the risks, benefits, and future directions of robotic-assisted surgery, this paper will examine notable types of surgeries performed in the fields of pediatrics, gynecology, and cardiology, followed by training program recommendations and cost evaluation.

background
Robotic-assisted surgery was first developed in 1983 by an orthopedic surgeon in Canada. The surgeon and his team named the robot “Arthrobot,” which paved the way for advanced robots in other fields like ophthalmology, urology, and gynecology. Arthrobot was followed by PUMA 560, the first robotic arm to assist a surgeon; PROBOT, the first robot built for prostate surgery; ROBODOC, the first robot used for hip replacement; and ZEUS, the first robot used for gynecological surgery [1]. In 2003, a company named Intuitive Surgical, Inc. purchased the company that created ZEUS, and developed the da Vinci surgical robot, which was approved for gynecological surgery by the Food and Drug Administration in 2005 [1]. Since then, a large multi-institutional study on the usage of the da Vinci robotic surgical system in gynecologic oncology has been published, and robotic-assisted surgeries in other fields have quickly followed. Based on a report by Intuitive Surgical, Inc., the number of robotic surgical systems doubled between 2007 and 2013 in the United States and Europe. By 2014, approximately 570,000 da Vinci procedures were performed worldwide, 79% of which were performed in the United States [1]. Table 1 below shows the decline in the rate of growth of robotic surgeries performed in various fields from 2007 to 2011.

Year | Total % | Thoracic % | Cardiac %
2007 | 284     | 300        | 266
2008 | 124     | 17         | 259
2009 | 64      | 254        | -15
2010 | 21      | 27         | 10
2011 | 3       | 14         | -17

Table 1: Growth rate of robotic surgery compared to the previous year [4]
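For clarity, each percentage in Table 1 is a year-over-year growth rate. A minimal sketch of the computation (the procedure counts below are hypothetical, since the source reports only the derived percentages):

    # Year-over-year growth in percent, as tabulated in Table 1.
    def yoy_growth(prev_count: int, curr_count: int) -> float:
        return (curr_count / prev_count - 1) * 100

    # Hypothetical counts: 500 procedures one year, 1920 the next.
    print(yoy_growth(500, 1920))  # 284.0 -> reported as "284%"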



In pediatrics, robotic pyeloplasty (kidney reconstruction) for the treatment of ureteropelvic junction (UPJ) obstruction, a condition in which urine flow from the kidney is blocked, has experienced moderate success. Studies have found that robotic pyeloplasty, in comparison to open pyeloplasty, was associated with shorter hospital stays and reduced pain medication usage, but longer operation times. In the long run, robotic pyeloplasty showed about a 17% greater chance of complete cure of hydronephrosis, a condition characterized by excess fluid in a kidney due to a backup of urine, and about 17.6 months shorter recovery time when compared to open pyeloplasty [2].

Robotic systems have also assisted gynecological surgeons in performing hysterectomies (uterus removal), salpingectomies (fallopian tube removal), oophorectomies (ovary removal), myomectomies (uterine fibroid removal), and lymph node biopsies. Robotic surgery has been found to reduce morbidity and mortality rates of those who have gynecologic cancer. As with robotic pyeloplasty for UPJ obstruction, gynecological surgeries have shown less blood loss, faster recovery, less pain and scarring, and reduced risk of infection [1].

The Ministry of Health and Welfare in South Korea examined the national data of robotic operations provided by the National Evidence-Based Healthcare Collaborating Agency to determine the overall trends of robotic cardiovascular and thoracic surgery. Valvular heart disease was the most common condition suitable for robotic cardiac surgery in this study, followed by atrial septal defect repairs. There were no serious surgical complications or deaths among the 50 patients involved [3]. For robotic lobectomies, robotic esophagectomies, and robotic surgeries for mediastinal disease, there was a decrease in length of hospitalization and reduced complications [3].

The current studies on robotic-assisted surgeries have for the most part shown positive results. The learning curve of the technology is still being determined as leaders in the field work to devise an effective curriculum to train the next generation of medical professionals. In 2010, Bowen et al. demonstrated that an experienced open surgeon or fellowship-trained surgeon can quickly gain expertise in performing pediatric robotic-assisted laparoscopic pyeloplasty (RALP) when trained through an established robotic-surgery program [4]. Proctoring was an essential part of the training program that significantly shortened the learning curve. In the study, after five proctored RALPs, an experienced open surgeon was able to perform robotic surgeries independently with a 96% success rate [4].



There are ongoing debates on whether or not robotic surgery has more advantages than other minimally invasive surgical methods. Although the cost of robotic-assisted surgery exceeds that of the alternatives, it is usually counterbalanced by reduced post-operative expenses and the ability for patients to return to work more quickly [3]. Some still argue that robotic surgery is not worth its cost: in cases like hysterectomies, reports have shown that robotic surgeries had barely better outcomes, if any, than open or minimally invasive surgery [5]. Other cases, like robotic prostatectomies, demonstrated greater benefits over laparoscopic surgeries [5]. On average, a robotic surgery costs $2,000 more than an open surgery [5]. Because the technology is so new, there is no guarantee that paying these higher costs will provide better results. More importantly, it will take time for robotic surgeries to become widely available, since unlike conventional surgeries, they are not covered by medical insurance companies. Patients should consider their financial situation before choosing robotic surgery as a viable option.

In an era of technological advancements, robotic-assisted surgery will soon become the norm as more da Vinci Surgical Systems are purchased and utilized. Currently, robotic-assisted surgery has proven to be functionally effective in pediatrics, gynecology, and cardiology, yielding equal or better postoperative results for patients compared to traditional surgical methods. If more physicians undergo established robotic-surgery training programs and the cost of these procedures is lowered, robotic-assisted surgery may play an important role in the future world of medicine.

References
[1] Lauterbach R, Matanes E, Lowenstein L. Review of robotic surgery in gynecology—the future is here. Rambam Maimonides Medical Journal. 2017;8(2):e0019. doi:10.5041/rmmj.10296.
[2] Howe A, Kozel Z, Palmer L. Robotic surgery in pediatric urology. Asian Journal of Urology. 2017;4(1):55-67. doi:10.1016/j.ajur.2016.06.002.
[3] Kang CH, Bok JS, Lee NR, Kim YT, Lee SH, Lim C. Current trend of robotic thoracic and cardiovascular surgeries in Korea: Analysis of seven-year national data. The Korean Journal of Thoracic and Cardiovascular Surgery. 2015;48(5):311-317. doi:10.5090/kjtcs.2015.48.5.311.
[4] Bowen DK, Lindgren BW, Cheng EY, Gong EM. Can proctoring affect the learning curve of robotic-assisted laparoscopic pyeloplasty? Experience at a high-volume pediatric robotic surgery center. Journal of Robotic Surgery. 2016;11(1):63-67. doi:10.1007/s11701-016-0613-9.
[5] Wilensky GR. Robotic surgery: An example of when newer is not always better but clearly more expensive. The Milbank Quarterly. 2016;94(1):43-46. doi:10.1111/1468-0009.12178.



The Analysis of Factors Leading to the Allergenicity of Proteins // by Ronin Sharma

Allergies are caused by several environmental and genetic factors, yet whether allergies arise due to specific chemical properties of proteins is unknown. In this study, several differences between allergenic and non-allergenic proteins were analyzed: length, allergenicity levels, amino acid modifications, and hydrophobicity levels. A program in Python was created to analyze the amino acid distributions of 300 proteins (150 allergenic and 150 non-allergenic) extracted from the AllFam and Pfam databases. The amino acid modifications were extracted from the Allergome database. The Kyte-Doolittle hydropathy scale was used to identify the average hydrophobicity levels for all 300 proteins. Finally, Python was used to create matched sets based on the lengths of the proteins, creating groups of proteins that were statistically comparable. The primary matched set (proteins smaller than 200 amino acids) had significantly different lengths and amino acid modifications, but did not have significantly different hydrophobicity levels. Allergenic and non-allergenic proteins were found to be statistically different in four out of the six tests performed (p < .05), and allergenic proteins were found to have significantly different allergenicity levels. Thus, allergenic proteins and non-allergenic proteins differ significantly in the lengths of their sequences, their amino acid distributions, and their amino acid modifications. Future research may involve isolating these protein regions to help create treatments and aid drug development for millions of allergy patients.

An allergic reaction is the immune system’s response to a typically harmless substance. During an allergic reaction, antigens on an allergenic protein—which are normally harmless—stimulate the immune system’s aggressive response to foreign molecules. This reaction causes B cells to differentiate into plasma cells, which secrete antibodies called immunoglobulin E (IgE). IgE antibodies then bind to mast cells, which release granules containing histamine, a chemical that causes common allergy symptoms such as itchiness, swelling, rashes, and hives. Allergic reactions can be evaluated on a scale of one (least severe) to five (most severe) depending on the allergenic symptoms present [1, 2].

Previous research supports the claim that any protein in sufficient quantities can stimulate an immune response in individuals especially sensitive to allergenic stimuli, but more recent research has revealed that specific chemical and structural properties make certain proteins allergenic [3]. Some of these properties include protein length, amino acid distribution, amino acid modifications, and hydrophobicity levels. Common amino acid modifications include disulfide bonds, modified residues, and glycosylations; these modifications typically happen after protein biosynthesis and can impact the function of the protein. Current research focuses on the stimulation of allergic reactions through studying IgE epitopes that attach to proteins considered foreign by the immune system. Ivanciuc et al. tested whether a motif-based approach could be used to determine the allergenicity of proteins. Motifs are certain protein structures conserved in different families of proteins that could potentially impact protein folding and protein interactions. The study found that a motif-based approach could not conclusively determine the allergenicity of proteins, although predictions could be made [4].

The primary goal of previous research was to analyze data from online allergen repositories to determine what features distinguish allergenic proteins. Analysis of online databases can reveal similarities between selected proteins and allergenic proteins [5, 6]. There are five prominent protein databases: the Structural Database of Allergenic Proteins (SDAP), AllFam, Pfam, Allergome, and UniProt. SDAP includes structural and sequential information on many allergenic proteins and allows users to search individual allergenic proteins as well as allergen families, which are groups of allergenic proteins possessing similar qualities. SDAP is the first allergenic protein database that allows a user to retrieve and characterize IgE-binding epitopes of allergenic proteins [7]. AllFam hosts groups of allergenic proteins based on source and route of exposure. Pfam organizes proteins into a database of protein domain families; Pfam entries describe amino acid modifications, and a “structures” subsection details protein primary structure. A combination of AllFam and Pfam information allows determination of internal protein modifications (modifications to protein primary structure) and external protein modifications (modifications to protein tertiary or quaternary structure) for most proteins. Allergome allows users to search for allergenic proteins by name [8]. It then displays a list of closely related allergenic proteins and can be used to identify associated amino acid modifications. UniProt combines protein data from the AllFam and Pfam databases, verifying the properties of the allergenic and non-allergenic proteins.


The amino acid distributions for 150 non-allergenic proteins (experimental group) and 150 allergenic proteins (control group) were analyzed by a Python algorithm, which counted the number of each amino acid in each sequence. The AllFam and Pfam databases were used to obtain the sequences for the allergenic and non-allergenic proteins, respectively, which were saved in Excel. Next, the program determined protein hydrophobicity levels using the Kyte-Doolittle hydropathy scale [9] along with the aforementioned amino acid distribution data: a function summed the hydropathy index values of all the amino acids in each protein, then divided that total by the number of amino acids in the sequence, yielding the protein’s average hydrophobicity level. The hydrophobicity levels for the proteins were stored in an Excel file.
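A minimal sketch of the two computations described above, amino acid counting and average Kyte-Doolittle hydropathy; the toy sequence is hypothetical, and the study's actual code is not reproduced here:

    # Count amino acids and compute mean Kyte-Doolittle hydropathy.
    from collections import Counter

    # Kyte-Doolittle hydropathy index (Kyte & Doolittle, 1982).
    KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
          "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
          "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
          "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

    def analyze(seq: str):
        counts = Counter(seq)                           # distribution
        mean_kd = sum(KD[aa] for aa in seq) / len(seq)  # avg hydropathy
        return counts, mean_kd

    counts, mean_kd = analyze("MKTAYIAKQR")  # hypothetical toy sequence
    print(counts.most_common(3), round(mean_kd, 2))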

The amino acid modifications for each protein were determined with the Allergome database. The number of disulfide bonds, modified residues, and glycosylations for each protein was saved in Excel.

Protein allergenicity levels were determined in order to separate proteins into two groups: proteins with allergenicity levels greater than or equal to 50% and proteins with allergenicity levels less than 50%. These levels were obtained from the UniProt database.

Allergenic proteins were grouped into an “unknown set” containing proteins lacking amino acid modification data and a “known set” with known amino acid modification data. Non-allergenic proteins were also grouped using the same criteria. “Known sets” of proteins were then divided into further groups based on their length using Python’s matched set function, which creates groups that can be compared with one another based on a shared property; in this case, length was used as the comparable property. Next, an analysis of variance (ANOVA) was performed for the allergenicity levels, amino acid modifications, lengths, hydrophobicity levels, and the percentages of amino acids in the sequences.
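A minimal sketch of such a between-group comparison using SciPy; the group values below are hypothetical placeholders, not the study's measurements:

    # One-way ANOVA comparing a property (here, length) across two groups.
    from scipy import stats

    allergen_group_2_lengths = [315, 402, 288, 356, 290]      # assumed
    non_allergen_group_2_lengths = [242, 251, 230, 260, 238]  # assumed

    f_stat, p_value = stats.f_oneway(allergen_group_2_lengths,
                                     non_allergen_group_2_lengths)
    print(f"p = {p_value:.3f}")  # p < .05 marks a significant difference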



Five groups were created with the matched-set routine, as seen in Table 1. Note that allergen group 1 and allergen group 2 are comparable with non-allergen group 1 and non-allergen group 2, respectively, while non-allergen group 3 consists of longer proteins for which no comparable group could be found in the "known set" of allergenic proteins. Allergen group 2 and non-allergen group 2 were primary matched sets, meaning that the proteins in these two groups could be compared well, while allergen group 1 and non-allergen group 1 were secondary matched sets, meaning that the proteins in these two groups could not be compared as closely.

Allergen group 1 proteins and non-allergen group 1 proteins did not have statistically different lengths, while allergen group 2 proteins displayed a statistically greater average length than non-allergen group 2 proteins.

t-Test                                               p-Value   Result
Allergen Group 2 and Non-allergen Group 2 (Large)    < .001    Significant
Allergen Group 1 and Non-allergen Group 1 (Small)    0.703     Not Significant

Table 2: Lengths of Amino Acid Sequences

t-Test                                               p-Value   Result
Allergen Group 2 and Non-allergen Group 2 (Large)    0.149     Not Significant
Allergen Group 1 and Non-allergen Group 1 (Small)    0.561     Not Significant
Allergen Unknown and Non-allergen Unknown            0.012     Significant

Table 4: Hydrophobicity Levels


Group Name                       Protein Length (# of Amino Acids)
Allergen Group 1 (Small)         < 200
Allergen Group 2 (Large)         200 - 800
Allergen Unknown                 -
Non-Allergen Group 1 (Small)     < 200
Non-Allergen Group 2 (Large)     200 - 800
Non-Allergen Group 3 (XL)        > 800
Non-Allergen Unknown             -

Table 1: Lengths of the Groups

Allergenic proteins whose allergenicity levels were greater than or equal to 50% had significantly more (p-value = 0.029) disulfide bonds than allergenic proteins whose allergenicity levels were less than 50%.

At or Above 50%   Disulfide Bonds     Below 50%   Disulfide Bonds
Pers a 1          0                   Hol l 1     0
Phl p 1           3                   Ory s 1     3
Hev b 6           7                   Cla h 6     0
Alt a 6           0                   Rho m 1     0
Asp f 6           0                   Alt a 4     1
Pol f 5           4                   Asp f 10    1
Lol p 1           0                   Cyn d 1     0
Pru av 2          8                   Ves m 5     4
Pol e 5           4                   Asp f 22    0
Pen ch 13         0                   Pen c 22    0
Dol a 5           4                   Jun a 3     8
Dol m 5           4                   Lol p 3     0
Ves g 5           4                   Ana c 1     0
Sl i 3            4                   Ara h 5     0
Zea m 1           3                   Asp f 8     0
Ves p 5           4                   Ara h 6     5
Ves v 5           4                   Fus c 2     1
Bra r 2           3                   Cla h 5     0
Pha a 1           0                   Sola l 1    0
Dac g 3           0                   Cand b 2    0
Total             56                  Total       23

Table 3: Allergenicity Levels


Allergen group 2 proteins had significantly more disulfide bonds than non-allergen group 2 proteins, and allergen group 1 had significantly more disulfide bonds than non-allergen group 1. Allergen group 2 had significantly fewer modified residues than non-allergen group 2, while allergen group 1 and non-allergen group 1 did not have statistically different amounts of modified residues. Allergen group 2 had significantly more glycosylations than non-allergen group 2, whereas allergen group 1 had significantly fewer glycosylations than non-allergen group 1.

t-Test                                                 p-Value   Result
Disulfide Bonds:
  Allergen Group 2 and Non-allergen Group 2 (Large)    < .001    Significant
  Allergen Group 1 and Non-allergen Group 1 (Small)    0.002     Significant
Modified Residues:
  Allergen Group 2 and Non-allergen Group 2 (Large)    0.002     Significant
  Allergen Group 1 and Non-allergen Group 1 (Small)    0.537     Not Significant
Glycosylations:
  Allergen Group 2 and Non-allergen Group 2 (Large)    0.043     Significant
  Allergen Group 1 and Non-allergen Group 1 (Small)    0.027     Significant

Table 5: Amino Acid Modifications

Allergenic proteins and non-allergenic proteins had significantly different amino acid sequence lengths. Allergenic proteins with allergenicity levels greater than or equal to 50% had significantly more disulfide bonds than allergenic proteins with allergenicity levels lower than 50%. Allergenic proteins and non-allergenic proteins had significantly different amounts of disulfide bonds, modified residues, and glycosylations, but did not differ significantly in their hydrophobicity levels. Overall, allergenic proteins and non-allergenic proteins were found to be statistically different in four of the six tests performed.

The chemical and structural differences between allergenic and non-allergenic proteins are key to allergy research and the further development of allergy treatments: they help identify the distinguishing traits of allergenic proteins, which must be taken into consideration when developing treatments that target specific allergenic proteins. Future research could include isolating these protein modifications in a laboratory setting and testing their individual effects. A greater number of allergenic and non-allergenic proteins could be identified and grouped into protein families to verify the initial results, and more amino acid modifications could be identified and tested to provide a broader pool of data on the chemical distinctions of allergenic proteins. Newer data repositories and advances in technology can be used to make further progress in determining the factors that affect the allergenicity of proteins.

[1] Blom WM, Vlieg-Boerstra BJ, Kruizinga AG, Heide SVD, Houben GF, Dubois AE. Threshold dose distributions for 5 major allergenic foods in children. Journal of Allergy and Clinical Immunology. 2013;131(1):172-179. doi:10.1016/j.jaci.2012.10.034.
[2] Toit GD, Roberts G, Sayre PH, et al. Randomized trial of peanut consumption in infants at risk for peanut allergy. New England Journal of Medicine. 2015;372(9):803-813. doi:10.1056/nejmoa1414850.
[3] Radauer C, Bublin M, Wagner S, Mari A, Breiteneder H. Allergens are distributed into few protein families and possess a restricted number of biochemical functions. Journal of Allergy and Clinical Immunology. 2008;121(4). doi:10.1016/j.jaci.2008.01.025.
[4] Ivanciuc O, Garcia T, Torres M, Schein CH, Braun W. Characteristic motifs for families of allergenic proteins. Molecular Immunology. 2009;46(4):559-568. doi:10.1016/j.molimm.2008.07.034.
[5] Ivanciuc O, Schein CH, Braun W. Data mining of sequences and 3D structures of allergenic proteins. Bioinformatics. 2002;18(10):1358-1364. doi:10.1093/bioinformatics/18.10.1358.
[6] Mari A, Scala E. Allergome: a unifying platform. Arb Paul Ehrlich Inst Bundesamt Sera Impfstoffe Frankf A M. 2006;(95):29-39; discussion 39-40.
[7] Sammut SJ, Finn RD, Bateman A. Pfam 10 years on: 10,000 families and still growing. Briefings in Bioinformatics. 2008;9(3):210-219. doi:10.1093/bib/bbn010.
[8] Bateman A, Birney E, Cerruti L, et al. The Pfam protein families database. Nucleic Acids Research. 2004;30(1):276-280. doi:10.1093/nar/gkh121.
[9] Finn R, Griffiths-Jones S, Bateman A. Identifying protein domains with the Pfam database. Current Protocols in Bioinformatics. May 2003. doi:10.1002/0471250953.bi0205s01.



REPRODUCTIVE AND ONCOFERTILITY SCIENCE ACADEMY: A REVIEW

Oncofertility, a term first coined by Dr. Teresa Woodruff, is an emerging field that bridges the gap between oncology and reproductive medicine to expand fertility options for cancer survivors, whose survival rates have significantly increased in the last few decades [1]. In 2006, the National Institutes of Health funded a grant in response to the rising concern for fertility preservation. One component of this grant was the creation of an educational outreach program, the Reproductive and Oncofertility Science Academy (ROSA), to encourage teenage girls to pursue careers in science and medicine. The ROSA program at the University of California, San Diego gave 13 students, myself included, the opportunity to further their interests in reproductive biology and oncology through a rigorous seven-week program involving intensive Saturday lectures, engaging field trips, and collaborative group meetings. Throughout the summer we also worked on research posters discussing recent breakthroughs in oncofertility and presented them to a panel of scientists and medical professionals at the program's completion.

During the Saturday sessions, we participated in workshops with top doctors and researchers, gaining hands-on experience through lab work and learning about real-life applications through lectures and discussions. Our first session focused on the foundation of the academy, reproductive biology, and metabolism, specifically focusing on ovarian cycles and the anatomy of the male and female reproductive systems. Understanding these topics is crucial in studying oncofertility because they give researchers perspective on how cancer treatments like radiation and alkylating agents can damage reproductive organs, leading to infertility [2]. To complement these presentations, we had the opportunity to retrieve the ovaries of mice through a dissection and observe them under a microscope, similar to what a researcher studying fertility drugs might do. Mice are ideal specimens to study because their reproductive systems closely resemble human female reproductive systems, the only notable difference being that mice have two

Müllerian ducts while typical human females have one. Indeed, before many fertility treatments reach the human trial stage, they are typically tested in animal trials, often in mice. The knowledge we gained through this session gave us a better understanding of how human reproductive systems function and how research is applied in animal trials.

The second session focused on the pathophysiology of polycystic ovary syndrome (PCOS). PCOS is a multi-system reproductive metabolic disorder found in 5-10% of reproductive-aged women. Common markers include hirsutism, or excess facial and body hair growth, chronic anovulation, and polycystic ovaries. Anovulation, or the lack of the oocyte release that initiates a menstrual cycle, causes disruptions in hypothalamic-pituitary-ovary interactions. Specifically, the hypothalamus produces gonadotropin-releasing hormone (GnRH), which stimulates the production of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) in the pituitary gland, which in turn trigger ovulation in the ovaries. During ovulation, the follicle releases estrogen and progesterone; with PCOS, however, ovulation is interrupted and progesterone is not produced. This leads to greater risk for uterine cancer, non-alcoholic fatty liver disease, type 2 diabetes, and metabolic syndrome, common comorbidities of PCOS [3, 4]. The best way to prevent the onset of these comorbidities is through early diagnosis and treatment with birth control.

During the third session, we learned about an innovative approach to treating infertility: in vitro fertilization (IVF). In IVF, mature eggs are retrieved from the ovaries and fertilized by sperm in a laboratory setting; from there, the embryo is implanted into the uterus. During this session, we visited the Scripps Institution of Oceanography (SIO) and the UCSD Regional Fertility Center. At the SIO, we observed the fertilization of sea urchin eggs under a microscope and learned about the role of marine species in fertility research. For instance, IVF is performed in algal gel to increase the success rate of fertilization. At the fertility center, we learned more about the techniques used for IVF through a series


of hands-on activities, such as analyzing sperm morphology and performing intracytoplasmic sperm injections (using discarded eggs and sperm). We also explored the ethical implications of fertility treatments through a series of case studies, allowing us to experience directly how ethics factors into oncofertility research. In general, ethics plays a vital role in a large number of scientific fields, but bioethics is especially important because of its significant implications for human life.

The final session of the academy discussed current fertility preservation options for cancer patients as well as new techniques that may soon be introduced to the public. For example, we learned about in vitro maturation of immature follicles, a technique in which an immature follicle is isolated from the ovary and grown through hydrogel encapsulation. Once matured, these follicles can be used in IVF, allowing women to avoid ovarian stimulation. This session was helpful in reinforcing the knowledge we had gained during previous weeks, demonstrating how interdisciplinary research can allow us to solve problems like infertility in cancer patients.

In accordance with this idea, each member of the academy created a research poster presenting a potential solution to an unmet need in reproductive medicine. My research focused on the viability of whole ovary autotransplantation for women at risk for premature ovarian failure (POF). One-fourth of women diagnosed with some form of cancer are of reproductive age and may be receiving chemotherapy; this treatment greatly increases survival rates, but it may also contribute to POF, or reduced ovarian function before the age of 40. One way to potentially lower the rate of POF in female cancer patients is autotransplantation of the ovary. In this process, ovarian

tissue is extracted from the patient prior to cancer treatment and frozen; when therapy has been completed, the tissue is thawed and transplanted back into the same patient. This technique could prove beneficial because follicle atresia (death) and ischemia (inadequate blood supply) are reduced, ovarian stimulation (medication-induced maturation of ovarian follicles) is avoided, and no delay in cancer treatment is necessary. While autotransplantation of the whole ovary has not yet been performed in humans, the procedure has seen success in other animals. In an Austrian study, four out of nine sheep regained luteal function, and one of the four was able to conceive spontaneously after the procedure [5]. In addition, there has been one case in which a monozygotic twin donated her ovary to her sister, who was eventually able to reproduce [4], and there have been 26 reports of successful births through autotransplantation of ovarian tissue [6]. These cases demonstrate the viability and restorative potential of whole ovary autotransplantation.

While the ROSA program focused on reproductive medicine and oncofertility, it also empowered the 13 of us by furthering our interests in science. Along with the lectures about PCOS and bioethics, we also developed our work ethics, confidence, and communication skills. This was particularly important when presenting our research not only to scientists and medical professionals, but also to the general public on the last day of the program. Through intense group discussions, light-hearted lunch breaks, and panicky, last-minute poster-editing sessions, we formed lifelong friendships. All in all, this academy was a life-changing opportunity that solidified my interest in medicine and taught me valuable life skills.

REFERENCES
[1] Rosendahl M, Wielenga VT, Nedergaard L, et al. Cryopreservation of ovarian tissue for fertility preservation: no evidence of malignant cell contamination in ovarian tissue from patients with breast cancer. Fertility and Sterility. 2011;95(6):2158-2161. doi:10.1016/j.fertnstert.2010.12.019.
[2] Mclaren JF, Bates GW. Fertility preservation in women of reproductive age with cancer. American Journal of Obstetrics and Gynecology. 2012;207(6):455-462. doi:10.1016/j.ajog.2012.08.013.
[3] Salama M, Woodruff TK. New advances in ovarian autotransplantation to restore fertility in cancer patients. Cancer and Metastasis Reviews. 2015;34(4):807-822. doi:10.1007/s10555-015-9600-2.
[4] Silber SJ, Grudzinskas G, Gosden RG. Successful pregnancy after microsurgical transplantation of an intact ovary. New England Journal of Medicine. 2008;359(24):2617-2618. doi:10.1056/nejmc0804321.
[5] Imhof M, Bergmeister H, Lipovac M, Rudas M, Hofstetter G, Huber J. Orthotopic microvascular reanastomosis of whole cryopreserved ovine ovaries resulting in pregnancy and live birth. Fertility and Sterility. 2006;85(Supplement 1):1208-1215. doi:10.1016/j.fertnstert.2005.11.030.
[6] Macklon KT, Jensen AK, Loft A, Ernst E, Andersen CY. Treatment history and outcome of 24 deliveries worldwide after autotransplantation of cryopreserved ovarian tissue, including two new Danish deliveries years after autotransplantation. Journal of Assisted Reproduction and Genetics. 2014;31(11):1557-1564. doi:10.1007/s10815-014-0331-z.

by Leona Hariharan // art by Seyoung Lee


Keeping things secret from other people has been a challenge for much of human history. The famous general Julius Caesar needed to ensure that communications to his subordinates would not fall into his enemies' hands. He thus employed a Caesar cipher, which transforms a plain-text message in English into an unintelligible encrypted message [1]. Theoretically, only the intended recipient knows the key to decode the encrypted message

and retrieve the plain-text. Although secrecy is crucial in military operations, the general public also benefits from such privacy, whether in planning surprise birthday parties or simply creating email passwords. However, various attempts at secrecy have been rebuffed by certain effective code-cracking stratagems. For instance, suppose Alice wants to send a message to her friend, Bob. To improve the message's security and prevent possible interception by a neighborhood bully (say Carl), Alice decides to perform a Caesar shift, moving each letter forward in the alphabet by 3 (with wrap-around, i.e., the letter after Z is A).

If she and Bob agree beforehand on a shift of three, then when Bob receives the message, he can simply go back three letters in the alphabet to get the original message [1]. Carl should not know the shift, and therefore he cannot decode the message. Right? In fact, there are two simple ways Carl can obtain the original message. The first is simply trying all 26 possible shifts. Exactly one of the 26 resulting translations should make sense in the English language, and that must be the message Alice is sending to Bob. The other method is subtler: frequency analysis. Using this method, Carl can bring up a table of how often each letter appears on average in English documents. Then, he can make a table for the given encrypted message. The tables should appear similar, except shifted forwards by a few letters. With frequency analysis, Carl can find the shift and, consequently, the original message [2].
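The brute-force attack, in particular, takes only a few lines of code. The Python sketch below (an illustration, not part of the original article) encrypts a hypothetical message with Alice's shift of 3 and then prints all 26 candidate decryptions; exactly one reads as English.

import string

def caesar_shift(text, shift):
    # Shift each letter forward in the alphabet, wrapping Z back to A
    result = ""
    for ch in text.upper():
        if ch in string.ascii_uppercase:
            result += chr((ord(ch) - ord("A") + shift) % 26 + ord("A"))
        else:
            result += ch
    return result

ciphertext = caesar_shift("MEET ME AT NOON", 3)  # hypothetical plain-text
for k in range(26):
    print(k, caesar_shift(ciphertext, -k))  # one candidate makes sense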

There have been more ingenious codes created over the years, but most of them have been cracked using either brute force or simple tricks like frequency analysis, both of which have been advanced by the development of computers. However, the RSA cryptography algorithm, developed by the MIT researchers Ron Rivest, Adi Shamir, and Leonard Adleman, has proved to be difficult to break [2]. At its heart, the algorithm rests on a simple premise: multiplying numbers is computationally "easy," but factoring them is hard. For example, consider the equation:

83 * 127 = 10,541

Using pencil and paper, it is quite easy to compute 83 * 127. However, the other direction, factoring 10,541 into smaller numbers, is much harder: one would need to test whether 2, 3, 5, 7, and so on, up to √10541 ≈ 102.67, divides 10,541. This is significantly more challenging than multiplying two numbers. If we change 83 and 127 to 400-digit prime numbers, then multiplying them isn't too difficult, but factoring their product can easily take thousands of years, even with the use of 100 million personal computers [2]. Thus, to create a secure code, Bob can first choose two prime numbers p and q. (A prime number is a number with only two factors, one and itself.) Then, Bob multiplies them:

N = pq

He can then choose two other positive integers e and d such that


de = 1 (mod φ(N))

which means that the difference de - 1 is divisible by φ(N). Here, φ(N) = (p - 1)(q - 1) is the number of positive integers that are less than or equal to N and share no common factor with N other than 1 [3, 4]. For instance, let's choose p = 7 and q = 13. Then, we can compute N = 91 and φ(91) = 6 * 12 = 72, so a suitable choice can be d = 5, e = 29, since 5 * 29 - 1 = 144 is divisible by 72. The product N and the number e can be announced publicly, and so are known as Bob's public key.


However, Carl cannot use N alone to figure out p and q, which comprise Bob's private key. To send a message to Bob, Alice can first convert her message to a large number M. For instance, she can let A = 01, B = 02, ..., Z = 26, space = 27, and concatenate all the numbers (e.g., ATZ becomes 012026). Then, Alice encrypts M by computing the e-th power and taking the remainder when divided by N. This statement can be expressed as:

C = M^e (mod N)

Alice then sends this number to Bob, who raises it to the d-th power and takes its remainder when divided by N. The result is:

C^d = M^(ed) = M (mod N)

by Euler's theorem, which states that M^φ(N) = 1 (mod N) [3, 5]. Therefore, Bob can retrieve M, and to get back the message, he can simply convert each pair of digits into letters (e.g., 012026 -> 01, 20, 26 -> ATZ). Meanwhile, Carl is left in the dark, because he cannot deduce M given M^e (mod N) and e; in other words, he cannot take e-th roots. As a result, Alice and Bob now have a way to securely send messages to each other.

The consequences of this encryption algorithm are enormous. Private emails, messages, and forms can be encrypted and sent securely over the Internet without fear of interception and decoding by foreign sources. As a result, customers can trust companies such as PayPal to process their transactions accurately, politicians can be sure their private emails won't be involuntarily exposed to the public, and human rights groups can communicate freely without fear of arrest and retaliation from an oppressive regime [2]. Of course, encryption can also be misused by criminals to conceal their actions from snooping governments. Hackers also rely on encryption to hold data hostage for a ransom, knowing that only they have the power to decrypt it. While cryptography can be a blessing for privacy, it can also be a detriment to public safety [2]. Whatever the case, it is clear RSA cryptography has revolutionized not only the security industry, but also how all of us send messages and live in an increasingly connected world.

references
[1] Lyons J. Caesar Cipher. Practical Cryptography. http://practicalcryptography.com/ciphers/caesar-cipher/.
[2] Singh S. The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography. New York: Anchor Books; 2000.
[3] Weisstein EW. RSA Encryption. WolframMathWorld. http://mathworld.wolfram.com/RSAEncryption.html.
[4] Weisstein EW. Totient Function. WolframMathWorld. http://mathworld.wolfram.com/TotientFunction.html.
[5] Lehoczky S, Rusczyk R. The Art of Problem Solving - The Basics (Volume 1). Vol 1. Greater Testing Concepts; 1995.

by Kevin Ren // art by Colette Chiang


Self-Learning Code
Basic Artificial Neural Networks
by Daniel Liu // art by Daniel Kim

Background

Some of the most challenging problems in computer science include recognizing handwritten and spoken numbers or words. Though these are skills humans learn when they are only a few years old, image recognition can be challenging for computers. Because computers are based on a strict, rule-based system, it is very difficult for a computer to learn abstract ideas. However, thanks to countless contributions from computer scientists, there are now many ways for computers to recognize handwriting and voice relatively easily. One of these is the artificial neural network (ANN) [1].

Artificial neural networks are graphs that contain trainable mathematical equations. They are unique because they break free of traditional rule-based programming through the network's ability to generalize abstract ideas. They essentially learn, or generate, an abstract representation of a data set that models a general trend by grouping similar data samples together. For example, if the data set is a list of every person's name and whether that person breathes, then the ANN can generalize the data and conclude that every human breathes. ANNs have existed for a long time in varying degrees of complexity and have directly competed with other data analysis algorithms, such as support vector machines and linear or logistic regression. ANNs were initially severely limited by the relatively slow processing speed of computers, which made other machine learning and data analysis algorithms more favorable. However, with the current increase in the processing power of computers, ANNs have become feasible and more favorable than other methods due to their immense flexibility.

For an ANN to predict results, it must be trained with existing input data and corresponding correct "target" results (a process known as supervised learning), in the same way that a child is guided through practice problems. Artificial neural networks are composed of layers, and each layer contains a certain number of neurons. There are many types of neural networks, but a simple and common type is the supervised feed-forward (or forward propagation) network. These networks compute their result by firing from the input layer through any hidden intermediate layers, and finally to the output layer (Figure 1). Each neuron in each layer connects to every neuron in the next layer, and each of these connections has a parameter (a decimal number) that contributes to the final output. Each neuron also has an "activation function" that typically applies some non-linearity, scaling the output of the neuron so that more complicated functions can be modeled and limiting how large or small that output can be. For each layer, there are also bias parameters for each neuron, which act much like the y-intercept in a linear equation written in slope-intercept form (a connection parameter has an effect similar to the coefficient of x, also known as the slope), as they apply an offset to each neuron. Thus, a neural network is just a massive tangle

of (usually non-linear) equations that model a complex function. When training a supervised feed-forward neural network, the parameters are adjusted so that the actual output comes close to matching the target results, as ANNs can only approximately model the true desired function.

To predict outcomes with a neural network, the value of every neuron is set to the sum of the previous layer's neuron values, each multiplied by the parameter on the connection leading to the current neuron. The activation function, which can "squash" the number to within an acceptable, practical range and apply some non-linearity, is applied to that sum. The non-linearity makes approximating complex functions much easier, as most complex functions that need to be modeled are not linear. Essentially, every neuron and parameter changes the output of the entire neural network to some degree. The input layer's neurons are set to the input values, and the results appear in the output layer's neurons. However, the output layer's activation function can differ slightly from the other layers' in that it outputs the network's certainty that the input belongs to each resulting class.

Training a neural network is much harder than predicting results. In fact, an efficient method for training a feed-forward neural network was developed long after the concept of ANNs. There are many different training methods, but the most popular is called gradient descent. In this method, each parameter is adjusted according to the difference between the output and the target results (which is calculated using an error function, such as the sum of all the differences squared) and the contribution of each parameter to the outputs. An efficient algorithm for doing so is called backpropagation [2, 3]. The conception and development of this algorithm directly increased the speed of training ANNs, which in turn increased how large an ANN could be.
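To make the forward pass concrete, here is a minimal NumPy sketch (an illustration, not the author's library [4]) that fires a two-neuron input layer through a three-neuron hidden layer to a single output neuron, using a sigmoid activation; all weights and inputs here are hypothetical.

import numpy as np

def sigmoid(x):
    # A common activation function that "squashes" values into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])           # input layer (2 neurons)
W1 = rng.normal(size=(2, 3))        # parameters: input -> hidden (3 neurons)
b1 = np.zeros(3)                    # bias, one per hidden neuron
W2 = rng.normal(size=(3, 1))        # parameters: hidden -> output (1 neuron)
b2 = np.zeros(1)

hidden = sigmoid(x @ W1 + b1)       # weighted sum, then activation
output = sigmoid(hidden @ W2 + b2)  # fire through to the output layer
print(output)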

Figure 1: Interaction between inputs, hidden intermediate layers, and outputs in a supervised feed-forward network.


Example

Neural networks can range from hundreds of layers that take weeks to train on NVIDIA GPUs to a simple network of just two layers and two neurons. A neural network with two neurons, one parameter (technically two, if the bias is also counted as a parameter), and a "linear" activation function is in essence a linear function. The parameter is the slope of the function and the bias is the y-intercept. Because there is only one output and one input, the output is just the input x multiplied by the parameter m and then added to the bias b, exactly like a linear equation of the form y = mx + b. Because the activation function is "linear," no extra non-linearity is applied to the result y. In this case, backpropagation slowly adjusts the parameter and bias so that the resulting line converges onto the theoretically best possible line that fits the data set. Notice that the line converges onto the best possible line but never exactly matches it. This is because the neural network is adjusted in small intervals to get the final result. The interval can be very small, but it is still limited by the precision of the computer and the time it takes to adjust the result in such tiny steps. For example, five points from the equation y = 5x + 3 are used as input for the neural network; to make this more interesting, a small offset is added to each point. Using the neural network library that I developed, the points and the line (used to make predictions) that the program learns can be graphed (Figure 2) [4]. The result is similar to running a linear regression to find the line of best fit.

An artificial neural network is much more flexible and sophisticated than simple linear regression. By adding a few more neurons, changing the activation functions, and adding an extra layer, classification in 2-D can be accomplished. 2-D classification can be easily visualized through graphing, and it exemplifies the flexibility of an ANN. Four groups of normally distributed points are to be separated by the example program. The neural network contains three layers. The input is 2-D (two neurons). The hidden second layer has three non-linear neurons. The last layer, the output layer, has four neurons, which contain the percentages for each of the four categories. Using the neural network library, points in the 2-D space can be sampled, colored, and graphed to show the boundaries that the ANN learns in order to separate the four groups of data (Figure 3) [4]. This neural network can learn the boundaries for any placement of the four groups of points (Figure 4), even if the groups overlap. However, if two groups overlap, the error rate of the classification will be higher, since it is harder to differentiate between the two groups of data. After training for a thousand iterations, the neural network can fully learn, generalize, and represent the approximate boundaries that distinguish the data groups.

Just like humans, computers can (to a certain extent) learn abstract ideas and generalize data by using many different functions simultaneously in complex structures like neural networks. There are many more efficient and more powerful ANNs, and each of them is fit for a specific type of learning.
These ANNs can solve many problems that are difficult to represent with strict logical rules, such as voice recognition, handwriting recognition, and image classification. Through the research and development of better algorithms for learning and representing data in more abstract ways, computers can, and will, come closer to replicating the intelligence of humans.
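For a concrete picture of the two-neuron linear example above, the following Python sketch (an illustration, not the author's Java library [4]) trains the slope m and bias b by gradient descent on noisy points drawn from y = 5x + 3.

import numpy as np

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 5 * x + 3 + rng.normal(scale=0.2, size=x.size)  # small offset per point

m, b = 0.0, 0.0    # the parameter (slope) and the bias (y-intercept)
lr = 0.02          # learning rate: the small adjustment interval
for _ in range(5000):
    error = (m * x + b) - y
    # gradients of the mean squared error with respect to m and b
    m -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(round(m, 2), round(b, 2))  # converges near 5 and 3, as in Figure 2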

Figure 2: Modeling of a two-neuron neural network with a “linear” activation function produces results approximating a line of best fit.

Figure 3: Separation of four groups of normally distributed points with a neural network, with boundaries shown using color.

Figure 4: Separation of a different set of four groups of normally distributed points using the same neural network as in Figure 3.

references
[1] Stergiou C, Siganos D. Neural Networks. Imperial College London. https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html. Accessed September 27, 2017.
[2] Fröhlich J. Backpropagation. Neural Networks with Java. https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/. Published 2004. Accessed September 27, 2017.
[3] Nielsen MA. Neural Networks and Deep Learning. http://neuralnetworksanddeeplearning.com/index.html. Accessed September 27, 2017.
[4] Liu D. Java-Machine-Learning. GitHub. https://github.com/Daniel-Liu-c0deb0t/Java-Machine-Learning. Published 2017. Accessed September 27, 2017.



The Effect of Pop Music on Focus

By: Deepika Kubsad

Purpose

This experiment sought to compare the focus of high school students listening to popular music with the focus of high school students working in silence. The focus test was based on the Stroop effect, a psychological phenomenon in which most people process words faster than colors. Stroop tests contain names of colors printed in varying ink colors; test subjects are told to ignore the words and instead name the color of the letters. It is well documented that most people read the words instead of identifying the colors [1]. In this experiment, the test was performed once in silence and once while listening to pop music, and the number of colors identified correctly was recorded. The task of focusing on a particular task in the Stroop test mirrors learning a new skill: all participants were taught to read and to identify colors separately at a young age, so distinguishing between the two is akin to learning. Many high school students claim that listening to music helps them focus and learn in class, but teachers often disagree. This experiment gives both teachers and students empirical evidence on whether listening to music actually increases students' focus and learning.

Literature Review

Although there are studies that relate instrumental and classical music to increased comprehension [2], there is a lack of studies on the effect of pop music on concentration. A study conducted at the University of Windsor tested "The Effect of Music on Work Performance" [3]. Data was collected from 41 male and 15 female software developers, aged 19 to 55, in their work environments over five weeks. The study revealed that music generated a more productive work environment: the quality of work increased and the duration of tasks decreased in the presence of music compared to previous productivity levels without music, which the researchers attributed to the positive mood participants experienced after listening to music. Although this experiment yielded positive results with music, the "quality of work" mentioned in the paper is often subjective in the field of software development [3].

A study conducted at Liverpool John Moores University tested and compared the Stroop test results of meditators and

Art By: Yerin You

non-meditators [4]. A group of 25 Buddhist meditators was recruited as the experimental group and compared to a control group of non-meditators. The variable of age was accounted for: the mean age of the control group was 27.5 years, and the mean age of the experimental group was 28 years. Both groups took the Stroop test with the objective of examining differences in concentration levels between meditators and non-meditators. Meditators performed better than non-meditators on the Stroop test [4].

A study conducted at Marshall University examined "Risk Behavior, Decision Making and Music Genre in Adolescent Males." The study tested 33 males with two questionnaires, a computer card game, and a final questionnaire. The first questionnaire asked about participants'


personal relationships and participation in risky behaviors such as alcohol consumption and marijuana use. The second questionnaire was a Positive and Negative Affect Schedule test that asked participants to rate their emotional responses over the previous week. The participants were then required to play a gambling card game online while randomly assigned to listen to rock music, classical music, or no music. In the final questionnaire, the participants were asked about their music preference. No correlation was found among level of music enjoyment, game scores, and reaction time [5].

Researchers at Carnegie Mellon University studied the amount of brain power lost during interruption. The 136 participants, split into three groups, were each asked to read a passage and answer questions about it. One group was allowed to finish the test without interruption, while the other two groups were told that they might be contacted at a later time during their test. Interrupted groups answered 20% more questions incorrectly than the control group. In the second part of the experiment, the first group again completed the test uninterrupted. The second group was warned at the beginning of the test of a possible interruption in the middle of the test, but it was not interrupted. The third group was warned at the beginning of the test of an interruption in the middle of the test, and it was interrupted. The third group still performed worse than the uninterrupted group, but its percentage of incorrect answers dropped to 14%. The scores of the group that was told it would be interrupted but was never actually disturbed improved by 43%. The researchers concluded that once the brain has been interrupted and further interruption is expected, it is able to adapt, which may explain the improvement in test scores [6].

A study conducted in Barcelona, Spain, linked sound volume to relaxation. In this study, 144 college students were asked to determine the decibel (sound) levels they preferred during relaxation across a wide array of music genres. The volume of the music was limited to three decibel levels, classified as loud, medium, and soft [7]. Participants turned a dial right if they enjoyed the volume of the music and left if they disliked it. Music was played continuously at various volumes, and heart rate was monitored. Participants completed a survey about their level of relaxation after they had finished listening. The researchers concluded that the majority of people prefer softer music for relaxation.

Although there have been studies analyzing the effects of music in various scenarios, there is a lack of studies specifically on high school teens and concentration levels using Stroop tests. The choice of Stroop tests for measuring concentration was based on the successful Liverpool John Moores University study comparing the concentration levels of meditators and non-meditators. Results from that study were linked to cognitive flexibility, which was an accurate predictor of concentration levels in students [4].

Research Design

The sample population was randomly recruited during school hours: 25 high school males and 25 high school females were asked to volunteer for the experiment, balancing the genders. Data collection took place in an empty classroom with only the researcher and the volunteer present, to avoid possible distractions. The participants, each tested individually, were seated at a standard desk and chair to imitate a classroom atmosphere. The researcher administered three tests while timing the participant with a stopwatch. In all tests, the participant was asked to identify the colors of the words in the order that they appeared. The first was a time-determining test in which the ink color matched the stated word (Forms: Time Determining Test); the resulting time was used to set the time allotted for the Stroop tests. The participant was then told to complete as much of Stroop Test 1 as possible within the time determined by the Time Determining Test. The researcher administered Stroop Test 1, tracked the number of colors the participant named correctly, and recorded the results in a data table. Stroop Test 2 was administered while the participant listened to music. The song selected was the popular "Gangnam Style" by PSY, which most participants had heard prior to the experiment. The participant was told to set the music player to their "normal" listening volume to imitate their typical music-listening habits. The researcher again tracked the number of colors the participant named correctly. The figures and graphs used in this study are attached at the end of this article.

Statistical Techniques

The null hypothesis was that music has no effect on concentration levels. The alternative hypothesis was that listening to pop music would result in poorer Stroop test results in high school teens compared to Stroop tests completed by high school teens in silence. A paired, one-tailed t-test was used to determine statistical significance, as one group was tested at two different times. The results were statistically significant (p < .05). In other words, pop music was a distraction to teens trying to concentrate on the given task.
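For readers who wish to reproduce this kind of analysis, below is a minimal Python sketch of a paired, one-tailed t-test; the scores are hypothetical, not the study's data.

from scipy import stats

silence = [42, 38, 45, 40, 37, 44, 41, 39]  # correct colors, in silence
music = [39, 35, 44, 36, 33, 42, 38, 36]    # correct colors, with music

t_stat, p_two_sided = stats.ttest_rel(silence, music)
# Halving gives the one-tailed p-value, valid here because the t statistic
# falls in the hypothesized direction (silence > music)
p_one_sided = p_two_sided / 2
print(f"t = {t_stat:.3f}, one-tailed p = {p_one_sided:.4f}")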

Analysis

The results of this experiment supported the alternative hypothesis that pop music decreases concentration levels in high school teenagers. The average decrease in scores while listening to music was 8.30%. Females were less affected by music, with an average decrease in scores of 3.71%; males were more affected, with an average decrease of 9.49%. Regardless of gender, listening to music hindered the participants' ability to concentrate on a given task. Real-life applications of this experiment could include limiting students from listening to music when completing tasks that


require concentration, such as classroom assignments. Possible sources of error include that some participants were familiar with the test and may therefore have been expecting the variation between color and word; their previous exposure to this test may have contributed to the results. A possible extension of this experiment would be to test the effect of country music, or another genre, on concentration levels using Stroop tests.

Results

Forms

References
[1] French J, Quagliata A. The Stroop Effect. http://faculty.mercer.edu/spears_a/studentpages/StroopEffect/stroopeffect.htm.
[2] Padnani A. The Power of Music, Tapped in a Cubicle. The New York Times. http://www.nytimes.com/2012/08/12/jobs/how-music-can-improve-worker-productivity-workstation.html?_r=0. Published August 11, 2012.
[3] Lesiuk T. The effect of music listening on work performance. Psychology of Music. 2005;33(2):173-191. doi:10.1177/0305735605050650.
[4] Moore A, Malinowski P. Meditation, mindfulness and cognitive flexibility. Consciousness and Cognition. 2009;18(1):176-186. doi:10.1016/j.concog.2008.12.008.
[5] Hampton JE. Risk behavior, decision making, and music genre in adolescent males. 2009.
[6] Sullivan B, Thompson H. Brain, Interrupted. The New York Times. http://www.nytimes.com/2013/05/05/opinion/sunday/a-focus-on-distraction.html?_r=0. Published May 3, 2013.
[7] Staum MJ, Brotons M. The effect of music amplitude on the relaxation response. Journal of Music Therapy. 2000;37(1):22-39. doi:10.1093/jmt/37.1.22.


by Alina Luk

Imagine waking up in the middle of the night and realizing you are unable to move. What if you found yourself gasping for breath every time you tried to fall asleep? These are symptoms of sleep paralysis, the inability to perform voluntary movements upon awakening from sleep. Symptoms of sleep paralysis include hallucinations, muscle paralysis, chest pressure, breathing difficulties, and the inability to move body parts despite being consciously awake. Sleep paralysis is not a new condition; the earliest records of it appear in cultural folklore and in historical paintings like Henry Fuseli's "The Nightmare" (1781), which depicts a demon sitting on a sleeping woman's chest [1].

So why does sleep paralysis occur? In normal sleep, a person's brain activity slows down before they enter rapid eye movement (REM) sleep. In the REM stage, the brain inhibits the release of neurotransmitters, allowing the body to enter a stage of paralysis. This stage usually ends before one wakes up, but during sleep paralysis, the person wakes in the middle of this stage, conscious yet paralyzed. Furthermore, because respiration is tightly controlled during the REM stage, someone experiencing sleep paralysis may suffer breathing difficulties. Breathing is further constricted by fear arising from hallucination-driven activity in the amygdala, the part of the brain responsible for emotions [1].

Early studies suggested that the inhibitory neurotransmitter glycine was solely responsible for the sensation of paralysis during sleep. However, this conclusion was proven wrong by a 2012 study conducted by Patricia Brooks and John Peever and published in The Journal of Neuroscience [2]. The study found that the paralysis is an effect of both gamma-aminobutyric acid (GABA), an inhibitory neurotransmitter, and glycine acting on two types of motor neuron receptors, preventing muscle movement. In addition, sleep paralysis is believed to be influenced by factors such as stress, poor sleeping habits, sleeping position, possible sleep disorders, age [3], and the time at which the person falls asleep.

Sleep paralysis does not affect everyone, which is one reason it is particularly interesting to study. Most often, sufferers sense a presence in the room while they are sleeping, but since they are unable to move or speak, the whole experience becomes

extremely frightening. In an article published in Clinical Psychological Science, James Cheyne and Gordon Pennycook surveyed 293 people who showed symptoms of sleep paralysis to measure the post-episode distress they experienced and how these feelings affected their functioning the next day. They discovered that post-episode distress was elevated after an episode of sleep paralysis and that a significant percentage of patients reported impaired functioning the next day, indicating that sleep paralysis can contribute to a significant reduction in productivity [4]. Sleep paralysis also more commonly affects those with traumatic stress, panic disorder, anxiety, and depression. Another study, conducted on 862 siblings and twins, revealed that sleep paralysis may be hereditary [3].

Sleep studies have advanced human knowledge of sleep disorders and related issues, but many mysteries about sleep paralysis remain unsolved. As of now, REM sleep disorders are treated with drugs such as clonazepam, a tranquilizer. Less severe cases of sleep paralysis can be resolved by maintaining a regular sleeping schedule and reducing stress; it is also possible to manage sleep paralysis by practicing better sleeping habits and seeking professional help. It is important to note that many REM sleep disorders may be associated with diseases such as narcolepsy, a disorder that causes one to lose control over sleeping habits. The knowledge gained from sleep studies can promote further understanding of sleep disorders and of how the human brain functions during sleep.

References

[1] Morton K, Mandell S. Paralyzed at Night: Is Sleep Paralysis Normal? End Your Sleep Deprivation. http://www.end-your-sleep-deprivation.com/sleep-paralysis.html. Published 2010. Accessed September 22, 2017.
[2] Brooks PL, Peever JH. Identification of the transmitter and receptor mechanisms responsible for REM sleep paralysis. Journal of Neuroscience. 2012;32(29):9785-9795. doi:10.1523/jneurosci.0482-12.2012.
[3] Bradford A. Sleep Paralysis: Causes, Symptoms & Treatment. LiveScience. http://www.livescience.com/50876-sleep-paralysis.html. Published September 13, 2017. Accessed September 22, 2017.
[4] Cheyne JA. What Predicts Distress After Episodes of Sleep Paralysis? Association for Psychological Science. http://www.psychologicalscience.org/news/releases/what-predicts-distress-after-episodes-of-sleep-paralysis.html. Published March 1, 2013. Accessed September 22, 2017.


SOIL TEXTURES

At Various Altitudes

BY Melba Nuzen

Abstract

Though they are very different, abiotic and biotic factors are often involved in intricate relationships; their interactions form the basis of many ecological communities. For example, previous studies suggest a relationship between soil texture and granularity and the flora that occupies the soil. Factors such as moisture retention and nutrient retention can be affected by different types of soil and consequently hinder or help the growth of vegetation. To further study this relationship, particularly in forest biomes, this study examined abiotic soil and biotic conifer species. Samples of topsoil were collected from three conifer communities: Lodgepole Pine (Pinus contorta var. latifolia), Subalpine Fir (Abies lasiocarpa), and Whitebark Pine (Pinus albicaulis). These species grow at varying altitudes: the Lodgepole Pine generally grows at the lowest elevation of the three and the Whitebark Pine at the highest. Samples were collected along the Lupine Meadows Trail near Grand Teton in Grand Teton National Park, Wyoming, during the summer of 2017. At each conifer stand, soil was collected in 25-gram samples, and the ribbon test was used to determine soil texture. Ten ribbon tests were conducted along a 30-meter transect in each community, for a total of 30 samples, ten from each conifer community. Although initial research suggested that increasing elevation correlates with coarser soil texture, the results of this study revealed the opposite: as elevation increased, soil texture transitioned from relatively coarse to fine (p < .005). Despite this contradiction, future research in this field can help further define the relationship between soil, flora, and possible human impact on these organisms.

ART BY RICHARD LI


Introduction

This study was conducted to further investigate the relationship between abiotic and biotic factors, specifically in the Jackson Hole area. Because of the interdependent nature of biotic and abiotic factors, there are many ecological connections that involve all three forest types (Lodgepole Pine, Subalpine Fir, and Whitebark Pine) and the environments in which they thrive. Though all three trees are conifers, they thrive at different mountainous altitudes. The Lodgepole Pine grows at lower elevations, starting at around 6,000 feet above sea level, and the Whitebark Pine at higher elevations, beginning at around 9,000 feet above sea level [1]. As the species differ in the elevations at which they grow, initial research suggests the species also differ in the types of soil in which they thrive. Previous research indicates that the Subalpine Fir thrives in medium-texture soils, while the Whitebark Pine lives in coarser soils [1, 2]. In turn, this suggests that coarser-textured soils should be found at higher elevations, since the Whitebark Pine grows at higher elevations. This conclusion is also supported logically: rain and other water run-off typically deposit heavier sediment first at higher elevations, leaving finer nutrients and minerals to trickle down and fall out later at lower elevations. Additionally, finer soils generally retain less moisture and more nutrients: rainfall and water in areas of fine soil are not absorbed as well as they are in larger, coarser soils, and consequently evaporate faster. The Lodgepole, which grows at lower elevations, is normally the first to regrow after forest fires [3]. This supports this study's hypothesis that lower elevations with finer soils may offer more nutrients for the Lodgepole to regrow quickly.

In this study, soil texture refers to the size of soil particles. Soil as a whole is composed of three main particles: sand, silt, and clay. The smallest and finest of these particles is clay, and the largest and coarsest is sand [4]. The amount of sand, silt, and clay found in any particular sample of soil determines its specific type of soil texture (Figure 3).

All three of the conifer species provide valuable habitats and food sources for many animals, including Clark's Nutcracker, a keystone species in the Greater Yellowstone Area, and protect soil from erosion with their root systems. However, the conifers and soils are at risk from human interference, particularly from outbreaks of blister rust and mountain pine beetle in the case of the Whitebark. Therefore, studies of all these forest types, and specifically of the soil textures in which they thrive, are crucial to understanding and preserving the ecosystems surrounding Jackson Hole.

Material and Methods

Three sites were chosen in close proximity along the Lupine Meadows Trailhead in Grand Teton National Park, Wyoming.

Figure 1: Hiking trail with sample collection sites at various elevations

Site Characteristics

Over the course of two days, samples from the three sites were collected: Subalpine on the first day, and Whitebark and Lodgepole on the second. For consistency, each site was chosen within 100 yards of the trail, and a 30-meter transect was drawn perpendicular to the trail. All sample sites were located on North-facing aspects (the aspect of a slope indicates the way the slope faces). North-facing aspects in the Northern hemisphere generally receive less sun and retain more moisture; therefore, vegetation growing on North-facing aspects in the Northern hemisphere typically thrives much better than South-facing vegetation.

Sampling Protocol

At each conifer stand, transects were drawn perpendicular to the trail, and ten samples were collected along the transect at three-meter intervals. For each sample, 25 grams of soil were collected from four inches below the surface of the earth. Each hole was dug with a metal spoon, and after organic material was removed, the soil was weighed with a portable hanging scale to keep the samples as consistent as possible. To determine the soil texture of the different communities, the ribbon test was performed. Other procedures, such as sifting, require heavier equipment and drying the soil; thus, in the interest of time, the ribbon test was chosen for convenience and practicality.

Figure 2: Ribbon Test

Though subjective, the ribbon test is often used to estimate soil texture, as affirmed by Colorado State University, the US Department of Agriculture, and the University of Michigan [4, 5, 6].


After collecting 25 grams of soil, the sample was placed in the palm of the hand, and water was added until the soil reached a smooth, plastic consistency similar to dough. Then, the soil was rolled into a ribbon shape, placed between the thumb and forefinger, and pushed by the thumb over the forefinger. The soil broke under its own weight, and the length of the broken-off ribbon was measured; this length determined the generic soil type, with the ribbon test key indicating the category of soil texture collected. Finally, to further specify the texture of the sample, more water was added to a small pinch of the sample soil, which was placed back into the palm, and the coarseness of the soil was determined by the tester's touch. Together, the length of the ribbon and the feel of the soil in the palm determined the type of soil [4].

Figure 3: Soil Texture Spectrum

Results

A total of 30 samples were collected at the three conifer communities. The raw data can be seen in Tables 1, 2, and 3.

Analysis and Conclusion

Statistical Analysis


Initially, the p-values generated by a two-factor ANOVA indicated no statistically significant difference between the groups. However, a two-factor chi-square test run in RStudio produced a p-value of .0049. Because this value falls well below the standard significance threshold of .05, the null hypothesis was rejected: there is a statistically significant difference between the soil textures of the three tree stands. The two factors considered were vegetation type—either Whitebark, Subalpine Fir, or Lodgepole—and soil texture, which was simplified into three categories: fine, medium-coarse, or coarse. Whereas the ANOVA treated the three sites as three samples, the chi-square treated the data as 30 separate samples. Figure 4 visualizes the RStudio chi-square: each block—coarse, medium, or fine—indicates the percent composition within each community. For example, more than half of the Lodgepole soil consists of coarse soil.
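For readers who want to reproduce this kind of analysis, the snippet below runs the same style of two-factor chi-square on a 3 × 3 contingency table using Python's SciPy rather than RStudio. The counts here are hypothetical stand-ins; the study's actual counts live in Tables 1–3.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: tree stands; columns: texture categories (fine, medium-coarse,
# coarse). These counts are illustrative, not the study's data.
counts = np.array([
    [1, 3, 6],   # Lodgepole Pine  (10 samples)
    [3, 5, 2],   # Subalpine Fir   (10 samples)
    [6, 3, 1],   # Whitebark Pine  (10 samples)
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# If p < .05, reject the null hypothesis that soil texture is
# independent of tree stand.
```

One caveat worth noting: with only 30 samples, several expected cell counts fall below five, so the chi-square approximation should be read cautiously; this is another reason the study's own call for more data is well taken.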

Table 1: Lodgepole Pine

Table 2: Subalpine Fir

Table 3: Whitebark Pine


Figure 4: Soil texture composition of the three conifer stands (visualization of the RStudio chi-square)


Discussion and Further Studies

The data gathered disproved both the proposed alternate and null hypotheses. Instead of soil transitioning from coarse- to fine-textured from high elevations to low, the opposite was found: based on the collected data, soil texture actually changes from fine to coarse from high to low elevations, contradicting the previous research cited above. Since the Whitebark Pine grows at high elevations, it has the most exposure to wind erosion, which could significantly reduce the particle size of the soil and explain the results of this study. Although trees and soil are delicately connected, their relationship is affected by many additional factors that were not examined in this project. With time constraints and limited equipment, this study only briefly addresses the complex relationship between soil texture and tree communities. Additional factors such as soil moisture, pH, and slope would almost certainly affect the texture of soil found in different stands. For example, the transect drawn in the Subalpine community ran along a very steep slope: the beginning of the transect was 42 feet higher than the end, but this elevation change was not taken into consideration during data analysis.

In retrospect, several factors could be improved in further studies. Firstly, due to time constraints, only one transect could be sampled at each tree community; had more samples been collected, the data would have been more reliable. For example, the Lodgepole stand was close to a stream, which most likely affected the data because of the alluvial fan there. With regard to consistency of sampling, as mentioned before, the slopes of the sites varied. The samples themselves were collected only four inches from the surface, and topsoil is easily affected by weather, wildlife, and other factors; with more efficient equipment and more time, soil from deeper horizons could be collected for a more holistic study. Human error also played a role in the research: because of time limitations, the ribbon test was conducted by different students along the transect simultaneously, and the subjectivity of the ribbon test leaves room for error, possibly affecting the identification of soil textures.

Regardless, the null hypothesis was rejected: specific trees require specific kinds of soil. This information is crucial to human understanding of ecosystem fragility and stability, especially when discussing trees as important as Lodgepole Pines, Subalpine Firs, and Whitebark Pines. As mentioned before, these conifers play vital roles in their forest ecosystems, providing shelter and food for dozens of fauna and stabilizing the communities around them. For future studies, the forests' impact on other factors, such as the keystone species Clark's Nutcracker, grizzly bears, or the soil itself, would prove interesting for further research. As human influence spreads across the globe, even the smallest contributions can affect the interconnected systems already in place. This only reinforces the idea that every action has a consequence and emphasizes the need for careful attention to human actions and further investigation of this subject.

Acknowledgements

Many thanks to Teton Science Schools (TSS) for providing transportation, equipment, and references; to Roland Aranda, Anna Langlois, Victoria Lara, and Naomi Schulberg for help with sample collection and peer-editing; to the graduate students Clare Gunshenan and Peggie dePasquale for supervision and statistics guidance; to Grand Teton National Park; and to the Elementary Institute of Science of San Diego for a summer scholarship to attend TSS and make this research possible.

References

[1] Watts T, Watts B. Rocky Mountain Tree Finder: A Pocket Manual for Identifying Rocky Mountain Trees. Rochester, NY: Nature Study Guild Publishers; 2008.
[2] Arno SF, Hoff RJ. US Forest Service. https://www.fs.fed.us/rm/pubs_int/int_gtr253.pdf. Accessed July 13, 2017.
[3] Lodgepole Pine (Pinus contorta var. latifolia). Tree Book. https://www.for.gov.bc.ca/hfd/library/documents/treebook/lodgepolepine.htm. Accessed July 13, 2017.
[4] Whiting D, Card A, Wilson C, Reeder J. Estimating Soil Texture. Colorado State University Extension.
[5] Thien SJ. Natural Resources Conservation Service Soils. https://www.nrcs.usda.gov/wps/portal/nrcs/detail/soils/edu/?cid=nrcs142p2_054311. Accessed August 13, 2017.
[6] Zak DR. A Guide for Preparing Soil Profile Descriptions. http://www.umich.edu/~nre430/PDF/Soil_Profile_Descriptions.pdf. Published 2003. Accessed July 16, 2017.


ocean thermal energy conversion

introduction

Due to the rapid depletion of nonrenewable resources, scientists have turned their attention toward renewable sources of energy such as solar, wind, and tidal power. Developing working models of these systems beyond what is available today is essential if scientists and engineers are to make full use of them. One growing technology that uses the ocean as a source of energy is ocean thermal energy conversion (OTEC), which generates electricity by harnessing the temperature difference between warm surface seawater and cold deep seawater. OTEC was first conceptualized in 1870 by novelist Jules Verne in Twenty Thousand Leagues Under the Sea [1]. Decades later, physicist Jacques-Arsène d'Arsonval came to be called the father of OTEC after he developed Verne's idea by proposing the use of ocean temperature differences to generate power. The first operational OTEC plant was built in Cuba in 1930 by Georges Claude, a pupil of d'Arsonval. Despite initial challenges installing the system, Claude's first plant produced 22 kilowatts (kW) of power; in power company terms, this means the plant could produce up to 22 kilowatt-hours (kWh) of energy per hour if running at full generating capacity [1]. According to the U.S. Energy Information Administration, "the average annual electricity consumption for a U.S. residential utility customer was 10,766 kWh in 2016," or about 29.4 kWh per day. In more familiar terms, Claude's OTEC plant could potentially power nearly 18 modern homes. But as scientists focused on cheaper ways to produce electricity, OTEC lost its appeal. Later, when Arab petroleum-exporting countries declared the oil embargo that led to the 1970s energy crisis, a period when major industrial economies in the U.S., Japan, and Western Europe faced serious petroleum shortages as well as higher prices, OTEC began to attract attention once again [1]. Since then, scientists from around the world have been working on developing OTEC as nonrenewable resources become increasingly scarce. Today, several OTEC production sites operate globally. Some of these generate up to ten megawatts (MW) of electricity, demonstrating OTEC's potential as a powerful alternative energy source.
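A quick back-of-the-envelope check of the "nearly 18 homes" figure, written as a short Python calculation; the numbers come straight from the paragraph above:

```python
plant_kw = 22                    # Claude's plant at full generating capacity
daily_kwh = plant_kw * 24        # 22 kW for 24 hours = 528 kWh per day
home_kwh_per_day = 10766 / 366   # EIA's 2016 average (366 days: leap year),
                                 # about 29.4 kWh per home per day
print(daily_kwh / home_kwh_per_day)  # about 17.9, i.e., nearly 18 homes
```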

33 | JOURNYS | WINTER 2017

technology

There are four main types of OTEC designs: the open cycle, the closed cycle, the Kalina cycle, and the hybrid system. The first type, the open cycle, is unique because it produces not only electricity but also freshwater and air conditioning. In open-cycle OTEC, seawater itself is the working fluid. When warm seawater is pumped into the flash evaporator, it vaporizes because the low pressure in the system lowers its boiling temperature to 22 °C. Flash evaporation is "a distinguishing feature of open cycle OTEC… approximately 0.5% of the mass of warm seawater entering the evaporator is converted into steam" [2]. This steam generates electricity as it passes through a turbine. Once power is generated, the steam travels to the condenser, where cold seawater condenses it into desalinated water; this yields freshwater, and the cold seawater can also be used for air conditioning. The closed cycle uses a working fluid with a low boiling point, notably ammonia, to make the system more efficient. Here, warm seawater vaporizes the working fluid, which then flows through a condenser cooled by cold seawater. Like the open cycle, the closed cycle generates electricity when the vapor passes through a turbine. Once the vapor condenses, the working fluid is pumped back around the closed loop. This system is more efficient than the open cycle because it is more compact, using a smaller duct and turbine while producing the same amount of energy [3]. The Kalina cycle is a variation of the closed cycle in which a mixture of water and ammonia is used as the working fluid. This method is more efficient than both the open and closed cycles because the mixture boils over a range of temperatures rather than at a single boiling point, allowing more heat energy to be converted into electricity [3]. Finally, the hybrid system combines the open and closed cycles "to obtain maximum efficiency" [4]. This design consists of two stages: the first produces electricity with the closed cycle, and the second produces freshwater. After electricity is generated, the warm water is flash evaporated and then cooled with the cold water discharge to create freshwater; essentially, this second stage is an open cycle without a turbine [2]. Combining the two cycles doubles water production.


uses

OTEC is currently being used in desalination, air conditioning, aquaculture, and coldwater agriculture [4]. In desalination, OTEC produces freshwater with the open cycle or hybrid system: after electricity is generated in the turbine, the vapor condenses into freshwater in the condenser. According to Patrick Takahashi of the University of Hawaii at Manoa, a "1 megawatt plant could produce 55 kg of water per second…[which] could supply a small coastal community with approximately 4000 m³ per day of freshwater" [5]. Desalinated water from OTEC can benefit areas lacking easily accessible freshwater sources, especially island states that rely on less efficient desalination processes [3]. OTEC plants also provide air conditioning and refrigeration with their cold seawater: the deep seawater pumped to the surface to condense the working fluid can afterward be circulated to cool homes and hotels. Both the Natural Energy Laboratory of Hawaii Authority and a test facility in Okinawa have successfully employed OTEC's cold seawater for refrigeration [5]. Cold seawater has also made chilled-soil agriculture possible, a type of farming proposed by researchers from the University of Hawaii in which soil is chilled by underground pipes. This enables the growth of spring crops, like strawberries, that thrive in cooler climates, potentially reducing the imports needed for these foods. Furthermore, aquaculture is possible through OTEC because deep seawater is rich in nutrients such as phosphates and nitrogen and supports abundant phytoplankton, yet carries few pathogens. An added benefit is that cold deep-ocean water can be mixed with warmer seawater to create an optimal temperature for mariculture. Pumping cold water through OTEC pipes (creating artificial upwellings) allows the cultivation of coldwater marine species such as flounder, kelp, and micro- and macroalgae [5]. Non-native species such as salmon, clams, and trout can also be farmed with this method.

cost

Although OTEC systems offer many advantages, they are quite costly compared to traditional energy plants. Installing a small-scale OTEC plant (<10 MW) costs between $16,400 and $35,400 per kilowatt ($/kW), while installing a 520–1,300 MW coal plant costs only $2,934 to $6,599 per kilowatt [3]. In addition, the levelized cost of energy from conventional resources is much lower than that of energy from OTEC, an important reason for public disinterest in the technology.

Figure 1: Diagram of an OTEC plant [4]

challenges

Another challenge for OTEC is maintaining an ideal temperature difference for energy generation. OTEC plants produce the maximum amount of energy with seawater temperatures of 26 °C at the surface and 4 °C at depth. Exploiting this temperature difference is difficult, however, because large volumes of seawater are required to transfer a significant amount of heat [2]. OTEC also depends on specific weather and sea conditions, functioning best in calm ocean states. The durability of the power plants in extreme weather must also be considered, as OTEC plants will be exposed to unpredictable ocean storms and uneven wave patterns, which may cause damage. Finally, it is important to consider OTEC's potential harm to the environment. For instance, metal pipes in the ocean and ammonia discharge from the condenser could be detrimental to marine life and nearby coastal structures [3]. OTEC lacks consistent government funding because it is underdeveloped and has many unresolved issues. The existing 10 MW pilot plants are often underfunded because they are not yet reliable enough to earn the support of even private investors, and acquiring licenses and permits for installing power plants is a long and expensive process. However, Luis Vega, the manager of the Hawaii National Marine Renewable Energy Center, speculates that "based on the implementation of similar technologies, later generation designs might reach cost reductions of as much as 30%" [2].
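The article does not work this out, but an idealized Carnot-limit calculation for the 26 °C and 4 °C temperatures quoted above helps explain why such large seawater volumes are needed. This is a theoretical upper bound for any heat engine, not a figure for any particular plant:

```python
# Maximum (Carnot) efficiency for a heat engine running between
# 26 C surface water and 4 C deep water, converted to kelvin.
t_warm = 26 + 273.15
t_cold = 4 + 273.15
eta_max = 1 - t_cold / t_warm
print(f"Carnot limit: {eta_max:.1%}")  # roughly 7.4%
```

Because only a few percent of the heat flowing through the plant can become electricity even in this ideal case, enormous flows of warm and cold seawater are required for useful power output.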

by jennifer yi art by madison ronchetto and richard li


future direction

New methods of development are being considered in an effort to reduce the cost of OTEC plant installation. For instance, Straatman and van Sark from the University of Utrecht designed a hybrid version of OTEC paired with an offshore solar pond that could generate sufficient electricity; by using low-cost materials and natural substances, they were able to drastically lower the cost of constructing an OTEC plant. Their research, along with other studies on lowering the price of OTEC plants, demonstrates OTEC's potential to become a major energy generator. The financial feasibility of OTEC is still solidifying as researchers develop the technology, but companies such as Offshore Infrastructure Associates, Inc. (OIA) maintain that the price of electricity from OTEC will become more stable with further development [6]. In addition to addressing OTEC's financial drawbacks, developers should make an effort to educate others about the numerous benefits OTEC provides, and leaders in the field should use international summits to plan how to commercialize the technology. In fact, OTE Corporation, a major supporter of OTEC, was invited to the second World Ocean Power Summit in 2015; its participation was notable not only because it showed that global environmental leaders acknowledge OTEC's potential, but also because it raised awareness of this underdeveloped technology. Finally, many technological difficulties remain for OTEC developers to solve: they still need effective ways to prevent ammonia leakage from the pipes, adjust to varying ocean temperatures, and withstand natural disasters such as hurricanes and tsunamis.

conclusion

It is crucial to turn to renewable energy sources such as OTEC to devise long-term solutions to the world's environmental problems. As the human population continues to grow exponentially, the reality is that nonrenewable resources cannot sustain the world's energy needs. If OTEC becomes a widely adopted technology, it will also assist with desalination, air conditioning, aquaculture, and coldwater agriculture. Ultimately, increased interest and successful pilot plants will only garner greater attention and investment in the future.


references

[1] OTEC History. OTEC International, LLC. http://www.oteci.com/otec-at-work/test-page/. Accessed August 8, 2017.
[2] Vega LA. Ocean thermal energy conversion. Encyclopedia of Sustainability Science and Technology. 1st ed. Springer-Verlag; 2012:7296-7328.
[3] Kempener R, Neumann F. Ocean Thermal Energy Conversion Technology Brief. IRENA. 2014. http://www.irena.org/DocumentDownloads/Publications/Ocean_Thermal_Energy_V4_web.pdf. Accessed August 24, 2017.
[4] Finney KA. Ocean thermal energy conversion. Guelph Engineering Journal. 2008;(1):17-23.
[5] Masutani SM, Takahashi PK. Ocean thermal energy conversion (OTEC). Encyclopedia of Ocean Sciences. 2001:1993-1999. doi:10.1006/rwos.2001.0031.
[6] Isaka M, Mofor L, Wade H. Renewable energy opportunities and challenges in Nauru. IRENA. 2013:1-16. https://www.irena.org/DocumentDownloads/Publications/Nauru.pdf. Accessed August 20, 2017.


ACS – San Diego Local Section The San Diego Local Section of the American Chemical Society is proud to support JOURNYS. Any student in San Diego is welcome to get involved with the ACS – San Diego Local Section. Find us at www.sandiegoacs.org! Here are just a few of our activities and services:

Chemistry Olympiad The International Chemistry Olympiad competition brings together the world’s most talented high school students to test their knowledge and skills in chemistry. Check out our website to find out how you can participate!

ACS Project SEED This summer internship provides economically disadvantaged high school juniors and seniors with an opportunity to work with scientist-mentors on research projects in local academic, government, and industrial laboratories.

College Planning Are you thinking about studying chemistry in college? Don’t know where to start? Refer to our website to learn what it takes to earn a degree in chemistry, the benefits of finding a mentor, building a professional network, and much more!

www.sandiegoacs.org


GSDSEF Summaries

“Algorithm for Demonstrating an Achievable Folding Procedure for a Convex Polygon with n Sides (2 < n < 13)”
By: Kevin Ren & Jodie Hoh

Background

This project explores a Mathematica algorithm for relating geometric shapes, mathematical transformations, origami, and robotics. A geometric folding algorithm for estimating the number of steps needed to fold a given n-sided (2 < n < 13) convex polygon from a square was derived from prior work by Erik Demaine, a professor at MIT. The algorithm exhaustively determines sequences of polygons with decreasing numbers of sides that can be folded to obtain the desired polygon, then computes the number of folds each sequence requires. By taking the minimum number of folds over all sequences, the algorithm estimates the total number of steps needed to generate the final shape.

Hypothesis

Our algorithm will be able to successfully demonstrate complex folding of shapes: squares, rectangles, triangles, and more complex polygons.

Procedure

A set of points defining the polygon is read by the algorithm. The sequential order of the points defines the order of folding. Mathematica graphically shows the actual folding steps of the object.

Results & Conclusion

Our initial version of the algorithm can generate simple shapes from a set of data points. Further refinement of the code will allow it to generate more complex polygonal shapes.

“Solving Flint’s Lead Detection Problem: Rapid, Low-Cost Lead Test via a Chromophoric Reaction between Sodium Rhodizonate and Lead”

When lead enters water, it forms lead hydroxide, a clear, odorless substance that leaves no indication of its presence. If consumed, lead hydroxide can be dangerous to the human body, raising blood pressure and damaging the kidneys, among other harmful effects. Although many tests have been developed to detect lead contamination in water, they are usually extremely expensive or difficult to use. In this project, a test was developed to rapidly and inexpensively detect lead contamination in water. Sodium rhodizonate is a chemical commonly used in forensic science to detect lead in gunshot residue; when sodium rhodizonate is mixed with an acid, it creates a solution that has a chromophoric reaction with lead hydroxide. This method of lead detection was tested on three different samples: 0.05 M lead in water, 0.0001 M lead in water, and normal tap water. When the 0.05 M sample was tested, the test was 100% accurate, turning dark red once the reaction took place. When the tap water sample was tested, the test was 100% accurate, remaining orange or yellow. When the 0.0001 M sample was tested, it was 87% accurate, turning pink. The test was rapid, accurate, and inexpensive, costing only $1 per test.

By: Richa Singh
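For context on those molar concentrations, the conversion below (not part of the original summary) expresses them in mass terms using lead's molar mass of about 207.2 g/mol:

```python
PB_G_PER_MOL = 207.2  # molar mass of lead

for molarity in (0.05, 0.0001):
    mg_per_l = molarity * PB_G_PER_MOL * 1000  # mol/L -> mg/L
    print(f"{molarity} M -> {mg_per_l:,.1f} mg/L of lead")

# 0.05 M   -> 10,360.0 mg/L
# 0.0001 M -> 20.7 mg/L
# For scale, the EPA action level for lead in drinking water is
# 0.015 mg/L (15 parts per billion), far below even the dilute sample.
```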


“Statistical Prediction of Heart Disease Using Machine Learning”

Heart disease is a leading cause of death, but advances in machine learning can help prevent deaths due to heart disease. Using statistics, machine learning finds relationships between inputs and outputs that may not be directly connected. In this project, I used supervised machine learning, a type of machine learning that develops a classifier (i.e., an algorithm) that can find relationships between input and output data, given an established data set in which known inputs are correlated with known outputs. Specifically, twelve inputs were used, including chest pain type, fasting blood sugar, and others. I designed three experiments to evaluate my classifier's accuracy with a data set I had obtained from the University of California, Irvine. The data set was divided into a training set for the naive Bayes classifier I developed and a testing set for the trained classifier. In the experiments, I manipulated the amount and types of data the classifier processed to observe how the prediction accuracies would change. My classifier scored in my predicted accuracy range of 71%–89%. With further research, classifiers such as my own can be used to predict heart disease and other complex disorders.

By: Muhammad Shaheer Imran
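A minimal sketch of the train/test workflow this summary describes, using scikit-learn's Gaussian naive Bayes in place of the author's own classifier; the feature matrix is randomly generated stand-in data, since the actual UCI heart disease records are not reproduced in the article:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Stand-in for the UCI data: 12 numeric inputs per patient (chest
# pain type, fasting blood sugar, etc.) and a binary disease label.
# Replace with the real dataset in practice.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300)) > 0

# Divide into a training set and a held-out testing set, as described.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```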

“Game, Set, Match: An Electronic Shoe for Playing Tennis” My project sought to solve a common problem during the summer months—shoes that become uncomfortably hot to wear during exercise. I was introduced to this problem through personal experience in extreme conditions: as a competitive tennis player, I’m often required to compete on hot days on a tennis court that has been baking in the sun, which can cause significant discomfort. I strove to create a functional electronic shoe that has the capability of directly cooling itself using Peltier tiles. Several features of Peltier tiles make them potentially suitable as a cooling system for an athletic shoe: they only require an energy source—which can be direct current—and are also lightweight and inexpensive. My final electronic shoe has no moving parts, is ergonomic, lightweight, and robust, and needs no maintenance other than replacing the energy source. Solving the problem of heat within a tennis shoe could be applicable to other footwear subject to high temperatures such as desert army boots and firefighter boots.

By: Alexandra Kuo

Last summer, JOURNYS members judged projects at the Greater San Diego Science and Engineering Fair. Here are some of the projects that won our award! Art by: Saeyeon Ju & William La



by Emily Zhang

Dr. Barbara Sawrey is currently a director-at-large for the American Chemical Society (ACS), a non-profit organization that publishes numerous scientific journals related to chemistry and helps connect chemists from around the world. As part of the board of directors, she helps ensure that the ACS runs properly by managing financial decisions and making sure its actions follow the organization's core values. Dr. Sawrey is also Dean Emerita of Undergraduate Education at the University of California, San Diego (UCSD).

You have been involved in chemistry all your life. What made you first decide to pursue chemistry? Well, I come from a family of engineers, so I was thinking of something in science or engineering because I have strong math skills. When I went to college, I just tried a little bit of everything and liked chemistry the best. I really fell in love with it and just stayed with it. Chemistry is like problem solving to me—everything is a big puzzle. The more you know, the more pieces you have to fit together.

Has your interest in chemistry changed since you were an undergraduate? I fell in love with teaching during graduate school, which helped me decide to stay in academia and teach. In between college and graduate school, I took five years off and had a very interesting job in the chemical industry as a flavor and fragrance chemist. That was in the 1970s, when there were glass ceilings. There still are, but there was certainly a glass ceiling for women involved in industrial research and development. And there was, I think, an even bigger glass ceiling for bachelor's degree chemists: you needed a master's or PhD to move up in the ranks of the company. So, I decided to go back to graduate school and left industry. When I got to graduate school and started teaching, I decided that that was what I was going to do forever.

How did you pursue chemical education? Chemical education was sort of a natural outgrowth of loving teaching and trying to understand what goes on in students' minds. I wanted to teach in a way that took advantage of that, because knowledge is not just transferred. Everybody comes with their own understanding, so you have to help everybody develop their own knowledge. And so, I had to know more about the psychology of learning. Soon, I found out that I was teaching all wrong and had to change my methods. I loved teaching from the beginning, but when I first started I thought all you had to do was explain things very clearly, and the more time you took and the clearer your explanation was, the more likely it was that everybody understood you. But that's not the case, because you actually have to present students with hurdles that they have to jump over themselves. You can't just tell them how to solve a problem, because all they'll do is memorize it and won't really understand it. And for students to be able to use the knowledge they have, you have to challenge them to practice it themselves and give them new situations in which to use it.

So, how did you change your way of teaching? Asking many more questions. It became much more Socratic. I would present a little bit of material and then ask questions; students would work either individually or in groups. It turns out that some students think that they can solve very simple problems, but as soon as they aren’t sitting in front of me, they can’t solve them anymore. They go home and try to work on it, and it doesn’t make any sense. So, I make students work on problems in class. And then, what usually happens is that there is a distribution of answers; nobody gets them all right. Then, I make them explain their answers to each other without me telling them what the right one is, and then vote for the best answer. Usually after doing that one or two times, the answers begin to coalesce on the correct one. I also then know what the common wrong answers were, and I can ask students, “Why did you think that was an answer?” Because understanding why they chose a wrong answer is more important than anything else. If I know why they chose a wrong answer, then I know how to adjust what we discuss and what common misconceptions they had.

You first joined the ACS in 1975. How has being a part of the ACS shaped your dreams and aspirations? I actually joined earlier than 1975; 1975 is when I had my first chemistry job and became a professional member. But I was a student member of the ACS in college, so I’ve been in the ACS even longer. The ACS has been very helpful to me everywhere that I’ve lived. It’s a community of people. We get together, and we all do different chemistry in our different jobs, but what we have in common is a love for science. In the United States there are 184 local sections of the ACS, and we have one in San Diego. It’s just a very valuable professional activity. But, I think I’ve learned most from the various committees that I have served on in the last 20 years: I have learned how to be a community member, chair a committee, and run pretty big projects through the ACS. It has taught me a lot about leadership.

What sort of activities have you been involved in outside of the ACS and UCSD? Well, I also sit on the board of directors for the Gemological Institute of America (GIA), a non-profit organization in Carlsbad that protects the public by setting standards for the quality of diamonds and gemstones. Diamonds are graded by their cut, clarity, color, and carat weight; the 4Cs, which determine diamond grade, were developed by the GIA, so people from around the world send in diamonds for analysis. I have a strong interest in colored gemstones, and I really enjoy learning about things like sapphires, rubies, and emeralds. It applies my scientific knowledge in a very different way. And then something I do professionally that's very different from my science interests is serving on the governing board for the National Conflict Resolution Center. It's located in San Diego and teaches people to behave with civility. As for my hobbies, I love the theater and opera; I attend many local theaters and the opera every year. I also love reading; I'm in a book club.

What is a day in your life like? What are your daily responsibilities and activities? Well my days have changed because, as of ten weeks ago, I retired from UCSD after working for 33 years. When I was working as a dean, my days would start very early in the morning and end very late at night. It was really busy work, answering a lot of emails and going to a lot of meetings. I used to tell people that during the day I would go to meetings and collect work, at night I would organize the work, and then on weekends I would do the work. But now I’m retired, so days are very different. But I’m still involved in all of those organizations I mentioned earlier and am still affiliated with UCSD.

Who are some mentors that you have met along the way? What have they taught you and how have they helped? Certainly, my parents were mentors. But one of my early heroes was my aunt. She’s now deceased, but she was a nun with a PhD in chemistry as well as a chemistry professor. When I was growing up, I already knew that women could be chemistry professors because I had an aunt who was one; so it was good to have her as a role model. Then when I was going through college and graduate school, many men mentored me. There weren’t that many female professors along the way when I wanted to go into the field, so many of the male professors were my mentors. It wasn’t until I was actually teaching and became a professor that I developed female role models and mentors who were also professors at UCSD. And actually, one of the awards that I won from the ACS is the Camille and Henry Dreyfus Award for encouraging women in the chemical sciences— that’s an award for me being a mentor to other people.

Wow, that’s a great achievement, especially since women are underrepresented in STEM. Actually about half of undergraduate chemistry students are female. Close to 40% of the students in graduate school are female. But, we’re not there yet with professors; only about 15% - 20% of professors are female. So, we have work to do. But the fact that about half the students in college with chemistry majors are women is great. That’s a very high number, higher than physics. It approaches the gender split in the biological sciences, and mathematics is actually now attracting a lot of women because of computer science.

If you were to give any advice to women seeking to pursue a career in chemistry or the sciences, what would it be? Be open to asking questions and looking for mentors—don't think you have to do it all on your own. There are lots of people out there to help you, and you should ask for help when you need it.

Do you have any general advice for students interested in chemistry? Chemistry is called the central science because you need chemistry in order to do modern biology, geology, and even physics. It's really at the center of most sciences, so you can't go wrong with chemistry. Even if you end up in a different field of science, chemistry still is a good basis. It turns out to be a wonderful major for pre-meds or pre-pharmacy, and it's also great for materials engineers, for instance. It's just a really good generic undergraduate major, and you can develop specialties along the way. If people like to be problem solvers and puzzle solvers, it's a great thing to do.

Is there anything else that you would like to let high school students know? College is not like high school; you have a lot more freedom and a lot more responsibility. When I was a high school student, things came pretty easily to me. And my first year in college was pretty easy, because it seemed like it was just a review of high school. Then I got to my second year of college, and that was like hitting a brick wall. It was all new material that I hadn't ever seen before. And that's when you have to decide, "Alright, can I do this or not? How am I going to go about changing my work and study habits?" Not everybody has to go to college right away and not everybody has to go to a four-year school. There is a level out there for everybody. I just want people to be lifelong learners no matter how they go about doing that.


by Jonathan Lu

Mrs. Boardman-Davis is currently a registered nurse and legal nurse consultant at the Kaiser Permanente Fresno Medical Center in California. As a member of the medical field for more than 35 years, she has had experience working with a variety of medical professionals.

Could you explain your occupation and some of your daily responsibilities?

Well, I am a registered nurse (RN) and I currently work in the perioperative area, which includes pre-op teaching, prepping the patient for surgery, and completing their admission process. Then, right after surgery, I wake them up from their anaesthesia and transfer them to the floor or get them ready to go home, depending on the type of surgery. In the past, I've also worked in several areas of nursing, like oncology and urgent care, among others.

The other job I have is as a legal nurse consultant (LNC), which is a practicing nurse who gets contacted by attorneys pursuing medical malpractice cases. Typically, a medical malpractice case involves both the physician and the care the patient receives in the hospital, which is basically nursing care—that's the part I would review. So an attorney will call me and fill me in on the case; usually some family is suing because of a bad outcome or a deceased patient. If I agree to take the case, they send me all the case files, and I review all the charts for the hospitalization and request past charts, previous interviews, or anything the attorneys have taken from the different people involved in the case. Once an LNC takes a case, they can write a timeline of what happened based on the chart and give their written expert opinion, or they can be the actual expert witness and speak on the stand for whichever side they might be taking. These are my opinions; they can take them or leave them. But I don't always take a case for legal representation if I don't agree with it: you get a bad reputation if you take a case that's not really legit.

It seems like that would take a lot of time. Could you explain your typical work hours?

Well, in terms of my work at the hospital, I work regular eight-hour shifts. I've worked in hospitals with 12-hour shifts, but I've gotten old enough so that I don't have to work full time anymore. I haven't stopped working, because in order to be a legal nurse consultant you have to be up to date with current medical practices, or else they don't consider you an expert. At the hospital, I currently work two to three days a week, and as an LNC, I can take as many cases as I want. I generally don't take more than three cases simultaneously because they require a lot of time to go into charts and pull out information. This is done on my own time, but I can bill hourly for however long the review takes, which could be any number of hours. If I'm deep in a case, I probably work at least three to four hours a day on it, after which it's usually on hold until the deposition or until the court case comes up, which I may have to travel to in person. It is pretty flexible, but it can be very intensive. My time at the hospital is about 24 hours a week, or three shifts. I can also take calls from 11pm to 7am, which is another eight-hour shift, but I'm senior enough not to take calls anymore.

So for clarification, you have two separate jobs? Yes. They both require that I be a practicing RN in my field, and I have been for many years, which makes me an expert in my field. There's not a specific license that you get to become a legal nurse consultant; you can take certain courses that help you know what is expected of legal nurse consultants, but you don't really take a class on being an expert, because physicians and nurses are already experts in their respective fields. They'll ask you what sort of degree you have, what qualifications you have (such as ACLS or BLS, which are both certifications in the healthcare field), how long you've been working, and other similar questions. From there, they'll establish whether they consider you an expert or not and start asking you questions about the case, just as they would if you were on the stand. So the answer is yes, it's two jobs: legal nurse consultant is my own private enterprise, and I don't want to involve it with my employment at the hospital. Ethically, I also can't take any cases for or against the hospital I work in because that would be a conflict of interest. They both require what I do as a nurse, but they're two separate things.

How did you become interested in becoming a legal nurse consultant? When I had been a nurse for about eight or nine years, I took a continuing education course. During one class on changes in hospital policies, one of the presenters was an LNC. I had never heard of an LNC, so I asked her what it was, how I could become one, and if I had to go back to school. She said that since I was already an expert, I could take some classes on legal reporting at UC Irvine like she did, and once I felt comfortable with it I could advertise myself as an LNC. When you work in a hospital, you see things—good and bad—and sometimes you wonder how they are resolved. You learn about how the legal system works and what you need to be careful about. But anyway, it was interesting for me to go to the class and learn that type of information, because now at work when we have meetings about legal problems, I can present information about things such as what sort of legal obstacles we might encounter and what the attorney is going to be looking for—just general information, because as I said, I wouldn't take any cases about my work.

I just wanted to clarify a basic question: what's the difference between a doctor and a nurse? Well, a physician is someone who goes to college for their bachelor's degree, then goes on to four years of medical school for general medical education, and finally does four to five years of residency in the specialty they choose. Their training goes more in depth in their specialty areas—surgeons, of course, have to learn how to do surgery, but physicians know more about general medical practices for their areas. There are several levels of nursing. There aren't that many licensed practical nurses (LPNs) anymore—mostly what you have now are RNs. All RNs have some sort of degree: most take a four-year program and get a bachelor's degree; some take a two-and-a-half-year program to get an associate's degree. Beyond that, if you want to get a master's degree, you can become a nurse practitioner in a specialty area. You can also get a doctorate in nursing: most people who have doctorates in nursing practice at a higher level in their specialty, or they teach nursing school. Nurses are trained in all different areas, and over time they choose one to specialize in, whether it's labor and delivery or pediatrics—the thing about nursing is that it's really flexible; throughout your career you can change.

Why did you choose to become a nurse? Ever since I was four or five I wanted to be a nurse. It may have been because my grandmother was a nurse, and my grandfather was a doctor, and at that time, girls were nurses and boys were doctors. I was always interested in medicine and biology, and it was the right fit for me. When I became older, I had kind of settled on being a nurse, and I didn’t really think about it. Doctors and nurses now do overlap a bit in most cases and work pretty well together. It’s more of a collaborative situation, and the care is much more specialized. I really like what I do.

I know a lot of students at my school want to pursue a career in the medical area. Do you have any advice for them? Two things. First, if science interests you—if how bodies function is interesting to you—then pursue it in whatever direction you want. But one of the most important things to remember if you want to pursue the medical field is that while your education and what you know are important, an even more important part of the job for physicians, and even more so for nurses, is being able to relate to people. It's not a job where you're facing computers—it's a job where you're facing people, face-to-face. You're facing them when they're sick; you're facing their families when they're going through bad times. I remember when I was in nursing school, I met a lot of really smart people there. But when we were in the hospital, they didn't know how to talk to people, or how to touch people—and they left because they never made that connection. It doesn't matter how smart you are—if you can't make that connection, then you can't do the job. You can't be uncomfortable with people, with naked bodies, or with touching people; you have to have the mindset that all of that is secondary to what it is that you need to do. The most important thing is to focus on getting your patients better and making them feel comfortable. It takes a little time; I mean, nobody just walks in being totally comfortable with it—you have to be open-minded about it and know that it is a hands-on, human-contact profession.

Were there any harsh challenges on the path to becoming a nurse or as a nurse?

The main challenges of the job are the patients and operations that you always remember. For as long as I've been a nurse, there have been sad things that stick in my mind. I worked in oncology, and I've gotten to know patients who died, and I've seen miraculous things happen with patients who struggled but pulled through. And in pediatric care, it was challenging working with extremely sick kids on ventilators—some of them hanging onto life by a thread. But you can't let these challenges get you down; you have to move forward. It's always a learning process, and you learn along the way about people and human nature.

I know for a lot of students that a big deterring factor for being a doctor or nurse is that you have to work long hours and that it can be stressful. What are some of the methods you use to destress after work? I’m lucky because I’m married to a physician, so it’s nice to be able to talk to somebody who knows what the job is like. I think it’s very important to have someone to talk about it with and have other interests or outlets to pursue. Another important thing is taking vacations; a lot of doctors don’t want to leave their patients because they’re committed to them, which is commendable on one hand but still stressful. It’s another reason why you have partners you can trust, so you can step away once in a while and have time for yourself. And yes, there are long hours and odd hours, and it’s one of the jobs where there are no breaks—the hospitals don’t close. So if you want a regular working schedule, it’s not a good choice for you. But a nice thing about working in a hospital is that you kind of create a family. When you’re working together in high-stress, life and death situations, everyone becomes very much a family.

So would you say that this job has been a good investment? Yes. It's been a great experience, and I definitely wouldn't trade it for anything else!


LSSI (Life Sciences Summer Institute) LSSI is a training and internship program that prepares talented high schoolers for careers in San Diego's life sciences and health care fields. Students complete a 1-week Boot Camp where they receive training in fundamental lab skills and soft skills before completing a paid internship working alongside professional scientists. Students conduct research on a wide range of topics, from pancreatic cancer to biofuel production. This past summer, employers and internship hosts included: Salk Institute, Scripps Institute, UCSD Research Scholars, BioLabs, Lab Fellows, New Leaf Biofuels, Dorota Skowronska-Krawczyk's Lab at UCSD, Grossmont College, and the Scripps Institution of Oceanography. Interested students can apply on our website: http://workforce.org/connect2careers-steam. For questions, please contact Alex Becker at alexanderb@workforce.org.

Internship, Local

Application due 3/30/2018

EarthFair The annual EarthFair in Balboa Park is the largest free annual environmental fair in the world. EarthFair 2018 will be the 29th annual event! Each year, the EarthFair draws around 60,000 visitors. Produced by 300 volunteers, EarthFair 2018 will feature more than 300 exhibitors, special theme areas, a Food Pavilion, a special Children’s Activity Area, four entertainment venues, the Children’s Earth Parade, the eARTh Gallery arts and crafts show, and the Cleaner Car Concourse. For more information, visit: http://www.earthdayweb.org/EarthFair.html.

Event, Local

4/22/2018

DNA Day Essay Contest National DNA Day commemorates the completion of the Human Genome Project in April 2003 and the discovery of the double helix of DNA in 1953. This contest is open to students in grades 9-12 worldwide and asks students to examine, question, and reflect on important concepts in genetics. This year’s DNA Day will be on Wednesday, April 25, 2018 and the question asks participants to argue whether or not medical professionals should be required for all genetic testing, including direct-to-consumer genetic testing. For more information, visit: http://www.ashg.org/education/dnaday.shtml.

Competition, National

3/9/2018

AAPT High School Physics Photo Contest For many years, this contest has provided teachers and students an opportunity to learn about the physics behind natural and contrived situations by creating visual and written illustrations of various physical concepts. Students compete in an international arena with more than 1,000 of their peers for recognition and prizes. For more information, visit: http://www.aapt.org/Programs/contests/photocontest.cfm

Competition, National

Application due 5/1/2018


SSSiN (Summer School of Silicon Nanotechnology) Run by Prof. Michael Sailor since 2003, the Summer School for Silicon Nanotechnology (SSSiN) is an intensive, six-week workshop on the synthesis, properties, and applications of porous silicon-based nanomaterials. Based on the book Porous Silicon in Practice, this hands-on course begins with intensive training in the theory, techniques, and laboratory methods of silicon nanotechnology, and concludes with a capstone “Discovery Project”—an independent research project carried out by the student under the mentorship of a current research group member. For more information, visit: http://sailorgroup.ucsd.edu/courses/SummerSchool/.

Internship, Local

Application due 5/1/2018

Stockholm Junior Water Prize Teams of up to three students may enter. Projects should be aimed at enhancing the quality of life through improvement of water quality, water resources management, or water and wastewater treatment. Projects can explore water issues at the local, regional, national, or global level. It is essential that all projects use a research-oriented approach, which means they must use scientifically accepted methodologies for experimentation, monitoring, and reporting, including statistical analysis. For more information, visit: https://wef.org/resources/for-the-public/SJWP/.

Research, National

4/15/2018

Clean Tech Competition The Spellman High Voltage Electronics Clean Tech Competition is a unique, worldwide research and design challenge for pre-college youth. The program encourages scientific understanding of real-world issues and the integration of environmentally responsible energy sources. The 2018 Competition challenge is “Solving Climate Change.” More information can be found at www.cleantechcompetition.org.

Proposal Submission, National

3/16/2018

Chemistry Olympiad The U.S. National Chemistry Olympiad is a multi-tiered competition designed to stimulate and promote achievement in high school chemistry. It is sponsored by the American Chemical Society. Each year, four students are chosen to represent the USA in the International Chemistry Olympiad (IChO), which is hosted every year in July in one of the participating countries.

Competition, National

2/23/2018

Beamline for Schools Competition The Beamline for Schools Competition offers high school students from around the world the opportunity to be real scientists and use a fully equipped accelerator beam line at CERN. Everyone can participate in this life-changing experience. Think of a simple and creative experiment and submit your proposal before the deadline! For more information, visit: https://beamline-for-schools.web.cern.ch/.

Proposal Submission, National

3/31/2018

OPSPARC Be the Spark and Ignite your Creative Thinking with NASA OPSPARC NASA’s Goddard Space Flight Center invites you to take part in the NASA OPSPARC 2018! Test your innovative thinking to create your own Spinoff that will make your world a better place. Winners are invited to the NASA Goddard Space Flight Center for a special awards ceremony and two days of in-depth, behind the scenes, hands-on workshops with scientists and astronauts in June 2018. Learn more: https://nasaopsparc.com.

Competition, National

2/20/2018


Dear Reader,

In a purely biological sense, evolution describes the way organisms inherit changes over time. Darwin's finches develop different types of beaks, peppered moth populations turn black due to air pollution, and bacteria grow resistant to various categories of antibiotics. But if we step back from the objectivity of science for a moment, evolution takes on a different meaning: it describes the way we improve ourselves in response to our environments. And in the past year, we at JOURNYS have done some evolving ourselves, growing as a science organization.

Of course, we haven't stopped delivering our usual content for issue 9.2. Is your New Year's goal to keep up with current events? See how chemicals played a role in Kim Jong-nam's mysterious assassination. Want the most recent updates in science and technology? Learn about ocean thermal energy conversion and the future of robotic surgery.

In this issue, however, we've taken further strides toward providing content relevant to and helpful for our readers (that's you!). We've added information panels on science programs and competitions, focusing on those that are no-cost so anyone can participate regardless of financial situation. We've also interviewed members of the professional STEM community, hoping to provide our readers with information about potential science careers and the often complex paths people take through the world of science.

This year, we're also helping JOURNYS members become more engaged and collaborative in their communities. As part of our role as a professional society in the Greater San Diego Science and Engineering Fair, members last spring awarded several projects that demonstrated notable originality, and we've collaborated with our winners to publish summaries of their projects in this issue. Recently, JOURNYS also set up a booth at ChemExpo 2017, an event aimed at showing younger students how science can be involved in their everyday lives. And by updating our scientist review board with members from the Salk Institute, the All India Institute of Medical Sciences, the University of Pennsylvania, HP Inc., the Universidad de La Laguna, and many more, we're encouraging closer collaboration between participating high school students and the scientists who ensure our magazine maintains its high standards of scientific accuracy.

None of these steps would have been possible without the efforts of the incredible JOURNYS team: the writers and editors for our articles, the artists for our graphics, the designers for our layout, the coordinators for our outreach and fundraising, the San Diego American Chemical Society for their sponsorship, and our teacher advisor Mrs. Rall for her continuous support. And of course, a big thanks to our readers, whose dedication and engagement keep us committed to sharing our work!

Cheers,
Jonathan and Stacy



EDITOR-IN-CHIEF Stacy Hu

PRESIDENT Jonathan Kuo

ASSISTANT EDITORS-IN-CHIEF Sumin Hwang, Melba Nuzen, Colette Chiang

VICE PRESIDENT Rachel Lian

SECTION EDITORS Minha Kim, Kevin Ren COPY EDITORS Jonathan Kuo, Stacy Hu DESIGN MANAGER William La DESIGNERS Stacy Hu, Sumin Hwang, Anvitha Soordelu, Colette Chiang, Yechan Choi, Angela Liu, Nathaniel Chen, Derek Fu, Daniel Kim, Dennis Li, Jade Nam GRAPHICS MANAGER Richard Li GRAPHIC ARTISTS William La, Yerin You, Seyoung Lee, Saeyeon Ju, Colette Chiang, Madison Ronchetto, Angela Liu, Alina Luk, Tony Liao, Daniel Kim

COORDINATORS Jonathan Lu, Emily Zhang, William Zhang, Jacey Yang SCIENTIST REVIEW BOARD COORDINATOR Jonathan Kuo CONTRIBUTING WRITERS Jessie Gan, Leona Hariharan, Allison Jung, Su Kim, Deepika Kubsad, Jonathan Kuo, Daniel Liu, Alina Luk, Melba Nuzen, Kevin Ren, Kishan Shah, Ronin Sharma, Jennifer Yi CONTRIBUTING EDITORS Jonathan Kuo, Daniel Liu, Yechan Choi, Derek Fu, Sanil Gandhi, Aditya Guru, Farrah Kaiyom, Chloe Ko, Richard Li, Angela Liu, Chonling Liu, Jade Nam, Rohan Shinkre, Kevin Song, Claire Wang, Edward Xie STAFF ADVISOR Mrs. Mary Ann Rall WEB TEAM Sumin Hwang, Ryan Heo

SCIENTIST REVIEW BOARD Dr. John Allen (University of Arizona), Mrs. Amy Boardman-Davis (Kaiser Permanente Fresno Medical Center), Mr. Brian Bodas (Torrey Pines HS), Dr. Richard Borges (Universidad de La Laguna), Dr. Alexandra Bortnick (UCSD), Mrs. Abby Brown (Torrey Pines HS), Mr. Mark Brubaker (La Costa Canyon HS), Dr. Gang Chen (Sorrento Therapeutics), Mr. Daniel Garcia (UCSD), Ms. Christina Hoong (UCSD), Ms. Samantha Jones (UCSD), Dr. Kelly Jordan-Sciutto (University of Pennsylvania), Ms. Greta Kcomt (Cal State University San Marcos), Dr. Hari Khatuya (Vertex Pharmaceuticals), Dr. Caroline Kumsta (Sanford Burnham Prebys Medical Discovery Institute), Dr. Corinne Lee-Kubli (The Salk Institute), Dr. Tapas Nag (All India Institute of Medical Sciences), Dr. Arye Nehorai (Washington University in St. Louis), Dr. Julia Nussbacher (UCSD), Ms. Chelsea Painter (UCSD), Dr. Kanaga Rajan (UCSD, Sanford Consortium for Regenerative Medicine), Ms. Ariana Remmel (UCSD), Dr. Amy Rommel (The Salk Institute), Dr. Ceren Tumay (Hacettepe University), Dr. Shannon Woodruff (HP Inc.)


