Journal of Youths in Science
VOLUME 6 ISSUE 1
Memory: Our Memory Is Not So Memorable, After All
Smell of Rain | cover art by Jacki Li
ABOUT
The Journal of Youths in Science (JOURNYS), formerly known as Falconium, is a student-run publication. It is a burgeoning community of students worldwide, connected through the writing, editing, design, and distribution of a journal that demonstrates the passion and innovation within each one of us.
PARTICIPATING CHAPTER SCHOOLS
Torrey Pines High School, San Diego CA
Mt. Carmel High School, San Diego CA
Scripps Ranch High School, San Diego CA
Del Norte High School, San Diego CA
Cathedral Catholic High School, San Diego CA
Beverly Hills High School, Beverly Hills CA
Alhambra High School, Alhambra CA
Walnut High School, Walnut CA
Lynbrook High School, San Jose CA
Palo Alto High School, Palo Alto CA
Mills High School, Millbrae CA
Lakeside High School, Evans GA
Blue Valley Northwest, Overland Park KS
Olathe East High School, Olathe KS
Delhi Public School, New Delhi, India
Korean Youth Math Association
SUBMISSIONS
All submissions are accepted at submit@journys.org. Articles should satisfy one of the following categories:
Review: A review is a balanced, informative analysis of a current issue in science that also incorporates the author's insights. It discusses research, concepts, media, policy, or events of a scientific nature. Word count: 750-2000.
Original research: This is a documentation of an experiment or survey that you did yourself. You are encouraged to bring in relevant outside knowledge as long as you clearly state your sources. Word count: 1000-2500.
Op-Ed: An op-ed is a persuasive article or a statement of opinion. All op-ed articles make one or more claims and support them with evidence. Word count: 750-1500.
DIY: A DIY piece introduces a scientific project or procedure that readers can conduct themselves. It should contain clear, thorough instructions accompanied by diagrams and pictures if necessary. Word count: 500-1000.
For more information about our submission guidelines, please see http://journys.org/content/procedures
SPONSORS
CONTACT
Contact us if you are interested in becoming a new member or starting a chapter, or if you have any questions or comments.
Website: www.journys.org
Email: info@journys.org
Mailing: Torrey Pines High School, Journal of Youths in Science, Attn: Brinn Belyea, 3710 Del Mar Heights Road, San Diego, CA 92130
contents
FALL 2014 | Volume 6 Issue 1

CHEMISTRY
4 Cerebral Folate Deficiency and Leucovorin | MARIA GINZBURG
6 Smell of Rain | DANIELLA PARK
7 Stress: Your Archenemy | EMILY TRUONG
9 Fluoride Mitigation | RUCHI PANDYA
11 The Science of Gossiping | CHRISTINA BAEK

BIOLOGY
13 Telomerase: A Magic Molecule for Longevity? | ERIC TANG
14 Memory | HOPE CHEN
16 Our Memory Is Not So Memorable, After All | ERIC CHEN
17 Mind-Controlling Parasites | CHRIS LU
18 It's Only Skin-Deep | JESSICA YU
20 The Mechanisms of Thoracic Aortic Aneurysm | FRANCIS LIN

PHYSICS & ASTRONOMY
21 Eye Can See You | JONATHAN XIA
23 Power of the Stars | RICHARD XU

MATHEMATICS & COMPUTER SCIENCE
25 Traffic Light Detection and Tracking for the Prevention of Automobile Accidents | FRANK SU & WILLIAM HANG
28 Remote Sensing | MEERA KOTA
30 String Theory vs. Loop Quantum Gravity | TYLER JOHNSON
Cerebral Folate Deficiency and Leucovorin
by Maria Ginzburg, edited by Cindy Yang

Everyone has seen the beautiful, sugary cereals in the grocery store aisle, with colorful mascots and cheerful font,
but what some of you might not have noticed are the little boxes and circles at the top that tell you what supplements have been added in. Various grain products such as cereals and flours, specifically, are used as folic acid supplements. However, most people just grab the box and go, without ever wondering what those labels at the top mean, and what kind of significance fortified foods like these have had in the modern world. While "older" fortified foods such as iodized salts have been around for a long time, fortified grain products are a rather recent innovation. It was only in 1996 that the FDA published regulations requiring the addition of folic acid to all kinds of grain products. Because cereals and grains are widely consumed in the U.S., they have become a major contributor of folic acid to Americans. Since grains fortified with folic acid were introduced, there have been vast decreases in neural tube defects. In a time span as short as four years (1996-2000), the U.S., Canada, Chile, and Costa Rica all fortified flour, and all reported a drop in neural tube defects among live newborn babies of between 23 and 78 percent.

Let me backtrack and explain what folic acid is, or, more specifically, what folate and folic acid are, in more technical terms. Folate and folic acid are commonly mistaken for one another and used interchangeably. However, it is important to know the differences between the two in order to understand how they function. While folate is a water-soluble B vitamin that occurs naturally in green leafy vegetables and citrus fruits, folic acid is the synthetic (inactive and oxidized) form of folate that is found in supplements and fortified foods. While naturally occurring folate is unstable and can lose activity in foods over days or weeks, folic acid can be stable for months or even years, and is thus better suited for supplementation and fortification of foods.

As mentioned before, folate is important in preventing neural tube defects and other neurological problems. Folate deficiency can also slow overall growth rate, since folate helps maintain and grow new cells. Most importantly, folate is vital for a properly functioning methylation cycle. (The methylation cycle is known as the cycle that turns DNA "on" and "off".) The addition of methyl groups to proteins is an important process both for synthesizing new types of proteins and for determining their behavior. Methyl groups can also be added directly to DNA and can thus determine gene expression. This is explained by the fact that folate is important for the synthesis of purine and pyrimidine nucleic acids (the constituents of DNA and RNA). Thus folate is extremely important during cell replication before birth and during early life, when rapid cell growth is occurring.

There's more, though. Folate and folic acid are only two of the four main types of folate. Folate is actually known in the scientific world as DHF, or dihydrofolate, but folic acid is simply known as folic acid. There is also folinic acid, known as leucovorin on prescription, and 5-MTHF, the activated kind of folate that is preferred by the body for absorption into cells and use in the brain.
HAIWA WU / GRAPHIC

While an insufficient amount of folate appears to be correlated with psychological and neurological problems such as depression, neural tube defects, autism, developmental regression, mental retardation, and Down's Syndrome, more exact information can be found by studying Cerebral Folate Deficiency, or CFD. CFD is defined as any neurological syndrome associated with low cerebrospinal folate levels in the presence of normal folate metabolism outside the nervous system. More specifically, in CFD the folate level in the blood is normal but the level in the CSF (cerebrospinal fluid) is low. It is clear that something is blocking the transport of folate into the brain, and thus impeding its normal functions. Normally, in order to enter the brain, folate relies on a specialized transport system to carry it across the blood-brain barrier. The central nervous system is a rather protected region of the body, so folate must be transported across the BBB by one of two specialized carriers.
Black circles are the four main types of folate, triangles indicate vitamins that facilitate a reaction, and orange squares indicate enzymes that facilitate a reaction. Right side of the image shows the methylation pathway.
These two specialized carriers are:
1. Folate Receptor-alpha (FR-α/FR1)
2. Reduced Folate Carrier (RFC)

Folate Receptor-alpha is a high-affinity receptor, but it has a low capacity. It prefers folate (DHF) over 5-MTHF and folinic acid. It is essential for folate transport across the blood-brain barrier when extracellular folate concentrations are low. These FR-α receptors, however, can be blocked or bound by folate receptor antibodies. In a German clinical study of children with CFD, DNA sequencing of the FR1 gene was performed and found to be normal. However, folate antagonists were present with irreversible binding, or autoantibodies blocking the folate binding site of FR1. Thus folate could not get into the brain.

The Reduced Folate Carrier is a low-affinity receptor, but it has a high capacity. RFC has a higher affinity for folinic acid than it does for 5-MTHF and active folate (DHF). Not only does it transport folate across the blood-brain barrier, it also transports it into neurons once it has reached the central nervous system. Most importantly, however, the RFC receptors are not blocked by folate receptor antibodies.
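One standard way to make this affinity/capacity distinction concrete is with saturable carrier kinetics. The expression below is an illustrative, textbook-style model, not something taken from the cited studies:

\[ J = \frac{J_{\max}\,[F]}{K_m + [F]} \]

Here [F] is the extracellular folate concentration, K_m is the concentration giving half-maximal transport (a low K_m means high affinity, as for FR-α), and J_max is the maximum transport rate, or capacity (high for RFC). In this picture, FR-α dominates when extracellular folate is scarce, while RFC carries more of the flux when folate is plentiful.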
The amount of 5-MTHF in the cerebrospinal fluid and the amount of the blocking autoantibody are inversely related. Thus the low level of 5-MTHF in the CSF can result from decreased transport across the blood-brain barrier caused by folate receptor antibodies that bind to the folate receptors in the choroid plexus and block folate transport. Treatment of CFD with folinic acid for prolonged periods can result in significant improvement of clinical symptoms and a return of 5-MTHF levels in the CSF to normal.
The image above displays statistics from a clinical study of CFD and ASD (autism spectrum disorders). Folate receptor antibody concentrations were measured in 93 children with ASD, and a high prevalence (75.3%) of patients with FRAs was found. It is also possible that many more children than currently recognized may suffer from CFD, since it is such a little-known condition. What was also interesting was that even in children without blocking FRAs, the CSF 5-MTHF concentration was below normal. This would be consistent with many other studies that have reported folate-related abnormalities in children with ASD. It is thus possible that FRAs act synergistically with other folate abnormalities to reduce CSF 5-MTHF [2].

In the same study of 93 children, children with FRAs were treated with leucovorin (folinic acid) for 4 months at 2 mg/kg per day, with a maximum dosage of 50 mg per day. Approximately 33% of children treated with leucovorin showed moderate to much improvement in the following areas: attention, receptive/expressive language, stereotypical behavior, and verbal communication. This was measured by the receptive and expressive CELF language index. There was also a low incidence of side effects. Other clinical trials have reported that treatment with folinic acid has also led to full control of epilepsy and resolution of various neuroanatomical problems, including white matter demyelination.

Proper folate metabolism is important for the production of DNA monomers and for the creation of methyl groups needed in the methylation pathway. If the right kind of folate can't be produced in the brain or transported there, there will be neurological consequences. Studies with folinic acid have shown that it is safe, can raise the level of activated folate in the brain, and may help clinically. The presence of FRAs is not the final answer to many neurological problems, but CFD is actually one of the few progressive neurological disorders that is treatable and potentially reversible.

REFERENCES
1. Frye, Richard E., and Daniel A. Rossignol. "Cerebral Folate Deficiency in Autism Spectrum Disorders." Autism Science Digest 2 (2011): 9-15.
2. Rossignol, Daniel A., and Richard E. Frye. "Folate Receptor Alpha Autoimmunity and Cerebral Folate Deficiency in Autism Spectrum Disorders." Journal of Pediatric Biochemistry (2012): 263-71.
3. Edwards, Wendy, MD. Cerebral Folate Deficiency and ASDs. Proc. of Autism Canada Foundation Conference, Riverview, Canada.
4. Coppen, A., and C. Bolander-Gouaille. "Treatment of Depression: Time to Consider Folic Acid and Vitamin B12." National Center for Biotechnology Information, U.S. National Library of Medicine.
Smell of Rain
by Daniella Park, edited by Achi Mishra
HAIWA WU / GRAPHIC

Drip-drip. As water droplets lightly tap the streets, pedestrians scramble to find shelter from the precipitation. Rain is a component of the water cycle and, to some, a nuisance from Mother Nature. Raindrops are formed by the condensation of water vapor and are precipitated when the collected water is heavy enough to fall by gravity [1]. While the onset of rain is commonly associated with the sensations of touch and hearing, a subtle hint of rain can also be detected through olfactory receptors. Petrichor, the scent of rain, serves as an indicator of a storm and possibly directs mating behaviors in other animals.

Before rain is actually seen, heard, or felt, the coming of a thunderstorm can be detected by the scent of ozone. Ozone is formed when an electrical charge stimulates the splitting of nitrogen and oxygen molecules into separate atoms [2]. The nitrogen and oxygen atoms recombine into nitric oxide. Nitric oxide reacts with other atmospheric chemicals, and some of these reactions produce ozone, a molecule with three oxygen atoms (O3). A thunderstorm carries O3 from higher altitudes to more accessible levels by downdrafts, air movements that carry air toward the ground. Ozone's aroma is subjective to the individual and has the potential to trigger memories associated with its smell. However, ozone is not the only component that makes up the smell of rain.

Actinomycetes are bacteria that thrive in damp and warm soil [3], and are responsible for decomposing a wide variety
of substances, including recalcitrant compounds like chitin and cellulose [4]. These compounds are active at high pH levels. When the soil is dry, actinomycetes decompose prolifically. Decomposition is the process in which organic material is broken down into inorganic chemical elements; it is essential in the conversion of organic materials and the making of compounds suitable for primary producers. The airborne molecules that result from decomposition chemically recombine with other elements on rock surfaces [2]. Geosmin is a specific metabolic byproduct of decomposition that adds to the smell of "wet soil" that lingers after a storm has passed [2]. When rain hits the surface of the earth, the combination of fatty acids, alcohols, and hydrocarbons is released into the air, where it forms an odor that can be detected by an individual.

As the chemicals float in the air, the odor molecules travel into the nose and enter the nasal cavity, where olfactory receptor cells are located [5]. The molecules bind to specific receptor proteins known as odorant receptors, triggering a series of action potentials that relay sensory information to the olfactory bulb. The nerve fibers from the olfactory bulb are linked to the amygdala and hippocampus, the components responsible for emotional impulses and memory. This direct connection to the limbic system may explain why certain odors like the smell of rain can trigger vivid memories; while other sensory information must be filtered by the thalamus, the sense of smell has direct ties to the brain's intricate structures.

The scent that results from weather is believed to carry messages to different species. Some biologists suspect that petrichor serves as a signal of spawning time for freshwater fish, which release gametes into the water as their method of fertilization [2]. The signal of petrichor may thus serve as a cue that maximizes reproductive success among fish. To Australian Aboriginal people, the smell of rain may serve as a cultural synesthesia that links the climate to the color green; the odor represents protection and cleansing and is manufactured into perfumes. Scientists at the John Innes Centre claim geosmin can direct camels to desert oases, because the fragrance of geosmin occurs in areas of high humidity [6].

Petrichor serves as a predictor of oncoming storms, a powerful mechanism that triggers emotions and memories, and a tool for species to sustain life. Without it, we would not only lose powerful sensory messages, but also the sweet smell that indicates the continuity of the water cycle.

REFERENCES
1. Nicholson, J. "How is Rain Formed?" http://www.ehow.com/howdoes_4587413_how-rain-formed.html
2. Yuhas, D. "Storm Scents: It's True, You Can Smell Oncoming Summer Rain." http://www.scientificamerican.com/article.cfm?id=storm-scents-smell-rain
3. "The Sweet Smell of Rain." http://www.npr.org/templates/story/story.php?storyId=12716163 (2007).
4. Ingham, E.R. "Soil Bacteria." http://soils.usda.gov/sqi/concepts/soil_biology/bacteria.html
5. Campbell, N.A. & Reece, J.B. Biology (Pearson, San Francisco, 2005)
Stress: Your Archenemy
by Emily Truong
Stress, stress, stress: a word we constantly weave into our daily vocabulary. So what exactly is stress? Stress is the process by which we perceive and respond to certain events. It is a response to a stressor, an event that we evaluate as hard, challenging, or threatening. It is a common term for individuals who feel overwhelmed. Hans Selye, an Austrian-Canadian endocrinologist, was the first to demonstrate the existence of biological stress. He injected mice with extracts of various organs, and every irritating substance seemed to produce the same symptoms. Selye believed he had discovered a new hormone, but later realized that it was the body's general response to stress. Selye further established that the body deals with stress by adapting to stressors and mounts the same response regardless of the source of stress. GAS, or general adaptation syndrome, is the "general" response that the body releases when encountering stress. The first stage of GAS is alarm, in which the sympathetic nervous system readies our body for "fight or flight." The second stage of GAS is resistance, in which the body remains physiologically ready and releases hormones such as adrenaline. The last stage is exhaustion, in which the body has already dealt with the stress and the parasympathetic nervous system brings the body back to a calm state. Because the body has channeled so much energy into dealing with stress, it is now exhausted and thus vulnerable to viruses and diseases.
Some sources of stress include frustration, conflict and catastrophe, life changes, pressure, emotional problems, traumatic events, fear, unrealistic expectations, etc. Though stress can have positive effects, let's focus on the detrimental ones. Stress can result in physical detriment: increased blood pressure, a weakened immune system, a weakened digestive system, tensing of muscles, inability to sleep, back or chest pains, headaches, and heart disease. Not only can stress have negative physical effects, but it can also have negative emotional effects such as constant negative thoughts, anger management problems, compulsive or obsessive behaviors, mood swings, and loneliness.

Under the mentorship of Dr. Richard Hsu, I conducted an experiment to discover whether stress affects a person's reaction time. I hypothesized that if high school students encounter a stressful situation, their reaction times would increase, meaning they would be slower than usual. To conduct my experiment, I used Tetris (on a Nintendo DS or computer), forty participants, random addition questions, a stopwatch, pencil, and paper. First, I gathered my materials. I had my participants settle down to play Tetris on level 1. At some point during level 1, I would ask them a simple addition question such as 12 plus 34. I would record the time it took for them to answer the question, and I repeated the process for levels 5 and 10. As the levels increased, the game of Tetris became harder to handle, leading to frustration. The harder levels were used to induce higher levels of stress. By recording the reaction time, I hoped to discover an increase in time, which would mean stress did impair one's reaction time.
Raw data:
The average reaction time for level 1 was 3.489 seconds; level 5 was 3.786 seconds; level 10 was 5.054 seconds.
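For illustration, here is a minimal sketch of how the per-level averages above could be computed. This is not the author's original analysis code, and the empty lists are placeholders for the forty recorded response times at each level:

# Illustrative sketch: average reaction time per Tetris level.
# Each list would hold the measured response times, in seconds,
# for the forty participants at that level.
level_times = {
    1: [],   # reaction times recorded during level 1
    5: [],   # reaction times recorded during level 5
    10: [],  # reaction times recorded during level 10
}

def mean(values):
    # Returns NaN if no times have been entered yet.
    return sum(values) / len(values) if values else float("nan")

for level, times in sorted(level_times.items()):
    print(f"Level {level}: average reaction time = {mean(times):.3f} s")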
Most of the students had an increase in reaction time when playing a more difficult level. By playing a more difficult level and answering a simple addition question, the participant was multitasking. Multitasking, in this case, induced stress, resulting in an increase in reaction time. Returning to the question of whether stress increases your reaction time, I have concluded that when high school students encounter stressful situations, their reaction times increase, indicating a less efficient performance. If you were still wondering why you blanked out during the test even though you probably worked superbly during homework sessions, it was due to stress. Stress, your Achilles' heel, slowed your reaction time and made your performance less efficient. According to the Yerkes-Dodson Law, a moderate level of arousal results in the optimal performance that we all strive to achieve. So next time you have a test, big game, or debate competition, try to be more relaxed! Don't shoot yourself in the foot by stressing out too much! Take a deep breath, be confident, and tackle whatever obstacle you have to encounter.
REFERENCES
1. Bouchez, Colette. "Emotional Distress Signs." WebMD.
2. "Hans Selye: Birth of Stress." The American Institute of Stress.
3. "How Stress Affects Your Health." http://www.apa.org.
Kristina Rhim / Graphic
Fluoride Mitigation
The effects of Aluminum Sulfate, using the Nalgonda Technique, on mitigating the Fluoride content and improving water quality for potable water.
By Ruchi Pandya, Edited by Emily Sun

Abstract:
Two-thirds of the Earth's surface is covered by water, but drinkable water is not in abundance. Only 2.7% of Earth's water is fresh water, and only 0.7% of freshwater is surface water available for consumption. The ground water in many regions of the world is plagued with various unhealthy elements, leading to long-term health problems. The purpose of this project was to reduce the fluoride content in ground water. Once this solution was found, it could be implemented to prevent skeletal fluorosis and other skeletal diseases in areas with contaminated water. To conduct this experiment, full water analysis was run on the water sample to determine the impure contents. Then, macro dosing of Aluminum Sulfate was done in 100 mg, 200 mg, 300 mg, and 400 mg quantities. The experiment was repeated for micro dosing with 280 mg, 290 mg, 310 mg, and 320 mg. It was hypothesized that the 300 mg dose of Aluminum Sulfate would be the most effective because the Fluoride would precipitate out of the water, serving as a chemical method of mitigating the Fluoride content. It was determined that the 300 mg dose of Aluminum Sulfate added to the contaminated water was the ideal Fluoride mitigating agent, as it brought the Fluoride content within the permissible range without altering the other elements.
Introduction:
Nalgonda Technique
The Nalgonda Technique is a combination of several operations including rapid mixing, chemical interaction, flocculation, sedimentation, filtration, and sludge concentration [5]. The Nalgonda Technique considers the sulfate and chloride contents of the water sample to give an ideal alum dose for mitigating the fluoride.
Purpose:
The purpose of this project is to determine the ideal concentration of Aluminum Sulfate to lessen the Fluoride content of Fluoridated water in towns on the Fluoride belt to within a permissible range.
Methods [6]:
Run full water analysis before implementing a treatment plan.

Fluoride Mitigation
1. Prepare the alum solution by measuring 10 g of Aluminum Sulfate (98% AR Grade) into a beaker. Dilute the solution to 100 mL using distilled water and swirl the beaker until the salt is thoroughly mixed into the water.
2. Pour 1 L of the sample into four separate beakers.
3. Pipet doses of 100 mg, 200 mg, 300 mg, and 400 mg of Aluminum Sulfate into each of the beakers, respectively.
4. Place the beakers on the flocculator and set the speed to 278 rpm for 5 minutes, then slow the speed to 79 rpm for the next 15 minutes.
5. Filter each sample into a beaker using a sterile filter paper funnel backed by a glass funnel.
6. Run full water analysis on the treated sample.
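As a point of reference, here is a hedged worked example of the dose arithmetic, assuming each dose is delivered as an aliquot of the 10 g per 100 mL stock prepared in step 1 (this conversion is not stated explicitly in the methods):

\[
c_{\text{stock}} = \frac{10\ \text{g}}{100\ \text{mL}} = 100\ \text{mg/mL},
\qquad
V_{\text{dose}} = \frac{\text{dose}}{c_{\text{stock}}}
\]

Under that assumption, a 300 mg dose corresponds to 300 mg / (100 mg/mL) = 3 mL of stock solution, and the 100 mg, 200 mg, and 400 mg doses correspond to 1 mL, 2 mL, and 4 mL, respectively.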
Left: Narmada Canal, Gujarat. The state’s largest canal to supply drinking water and water for agricultural needs. Right: Sabarmati River, which used to provide water to a large portion of Gujarat.
HAIWA WU / GRAPHIC
ORIGINAL RESEARCH
Water is the basic substance needed for life, and it often defines our activities and behavior. The fight for such a basic necessity has eased as civilization has progressed, but now a new problem arises. Water supplied to cities all over the world is plagued with a plethora of unhealthy elements, and as clean water sources are not in abundance, a solution to purify water needs to be found. In spite of Earth's surface being two-thirds water, water is not a resource in abundance. Only 2.7% of Earth's water is freshwater, and less than 0.7% of it is surface water available for consumption [1]. Water is a scarcity, and in rural towns it is even more of a necessity.

Radhesan, the subject of this experiment, is a rural town in Gujarat, India. Radhesan has a population of 1,100 people, and its only water source is a single tubewell located opposite the town's central office. This experiment uses Aluminum Sulfate to purify drinking water for villages with excess fluoride and water conditions similar to those of Radhesan. In Gujarat, 8,252 out of 18,822 villages face problems of high salinity, fluoride, and nitrate content in drinking water. Thus, the people of Gujarat rely heavily on the water resources of the Narmada River, as groundwater pollution has become "a permanent feature" of their groundwater [2]. The Gujarat Water Supply and Sewerage Board (GWSSB) indicates that 4,341 villages suffer from high fluoride in water, and 66.62 million people in India are affected by fluoride each year [3]. Fluoride is essential to life: in quantities between 0.7 ppm and 1.2 ppm (BIS), it prevents tooth decay and bone degradation. However, excess amounts of fluoride lead to dental and skeletal fluorosis, thyroid problems, and more severe illnesses [4]. It is hypothesized that a 300 mg dose of Aluminum Sulfate will allow the Fluoride to precipitate out of the water, bringing the content within the permissible range.
Results:
The Effect of 100mg, 200mg, 300mg, and 400mg Aluminum Sulfate dose on 100mL of water for mitigation of the Fluoride content.
The Effects of Micro-Dosing 280mg, 290mg, 310mg, and 320 mg on Alkalinity, Sulfate, and Fluoride on 100mL of water for mitigation of the Fluoride content.
The results show that the micro doses produced outcomes similar to the macro doses; thus 300 mg is the optimum dose.
Discussion:
Aluminum Sulfate was used for the mitigation of Fluoride from the water sample. The Nalgonda technique uses an aluminum salt to reduce the Fluoride content and Alkalinity of raw water. Both salts, Aluminum Sulfate and Aluminum Chloride, are easily found commercially, making them suitable options for fluoride and alkalinity reduction. The choice between the Sulfate- and Chloride-based mitigation methods was made based on the respective contents in the sample. The Nalgonda Technique relies on selective precipitation, stimulated by the reaction between Fluoride and Aluminum Sulfate. Aluminum Sulfate (Al2(SO4)3) reacts with Fluoride ions: some Fluoride ions bond to the Aluminum ions, freeing the Sulfate to form Sulfuric Acid. This reaction creates two new products: Aluminum Fluoride and Sulfuric Acid. Aluminum Fluoride is less water-soluble than both Sulfuric Acid and Fluoride, and thus forms floccules, which can be filtered out after the treatment has been completed. The byproduct, Sulfuric Acid, increases the Sulfate content in proportion to the number of Fluoride ions that bonded to the Aluminum ions. The increase in Sulfuric Acid lowers the pH to a more acidic state. To apply the method to other areas, a judgment call must be made based on the composition of the local water. Overall, the Nalgonda technique is a very practical method for overall water purification and treatment for people in both rural and urban communities.
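A simplified stoichiometric sketch of the precipitation step described above, written as an illustrative ionic balance consistent with this description (it is not a complete model of the alum chemistry used in practice):

\[
\mathrm{Al_2(SO_4)_3 + 6\,F^- \longrightarrow 2\,AlF_3\!\downarrow + 3\,SO_4^{2-}}
\]

The sparingly soluble aluminum fluoride settles out as the floccules that are filtered off, while the released sulfate (described above as forming Sulfuric Acid) stays in solution, raising the Sulfate content and pushing the pH toward the acidic side.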
Conclusions:
It is certain that water is necessary for all life, as we know it. All over the world, it is essential for communities to acquire and consume clean
drinking water. Thus, given water quality with Alkalinity between 400 and 500 mg CaCO3 per Liter, the ideal Aluminum Sulfate dose is 300 mg. This dose of Aluminum Sulfate mitigates fluoride and brings it within the permissible standard. The dose also brings the pH to a more suitable value for human consumption. It is recommended that communities with conditions similar to those of the village of Radhesan treat their drinking water with 300 mg/Liter of Aluminum Sulfate before consumption, in order to bring the fluoride, and in effect the pH, to a consumable level.

Acknowledgements: I would like to thank Mrs. Amanda Alonzo for encouraging me and guiding me throughout the project. I would like to thank Mr. Umesh Pandya and Mr. Vijay Patel for giving me this excellent opportunity. I also thank Mr. Mayank Acharya and Mr. Manish Patel for guiding me and giving me advice throughout the project. I also give thanks to Nirali Patel, Ami Patel, and Nisha Bhojak for helping me develop proper lab technique and procedure, and for treating me like a younger sister.

REFERENCES
1. WASMO. "Fresh Water Resources." Presentation, Gujarat. (2012).
2. "Over 2,000 villages to go thirsty in state." http://www.rainwaterharvesting.org/conflicts/Go_thirsty.htm. (2012).
3. TNN. "Threat of Fluorosis in drought-hit Gujarat." http://www.fluoridealert.org/Alert/India/Threat-of-Fluorosis-in-drought-hit-Gujarat.aspx. (2012).
4. Helmenstine, A. PhD. "How Fluoride Works." http://chemistry.about.com/od/howthingswork/a/How-Fluoride-Works.htm. (2012).
5. Devotta, S. PhD et al. Integrated Fluorosis Mitigation: Guidance Manual (NEERI, Nagpur, 2007).
6. Clesceri, L. S.; Eaton, A. D.; Greenberg, A. E. Standard Methods for the Examination of Water and Wastewater, 20th Edition (Maryland: United Book Press, Inc, 1998).
The Science of Gossiping
by Christina Baek, edited by Gha Young Lee

Gossiping, a use of language in human interactions, is seen by the majority of modern society as a character flaw that merely wastes our time by talking shallowly about others for the sole purpose of entertaining our idle curiosity. Ironically, it continues to dominate our daily conversations, in what R. I. M. Dunbar, a professor of evolutionary psychology, calls a "uniquely human phenomenon" [1]. However, you can be relieved: studies show that our embarrassing attachment to gossip is not a character flaw but an innate part of being human, an evolutionary adaptation embedded in our genetics [2].

People tend to believe that gossiping is limited to maliciously speaking ill of another. It is important to understand that gossip can also be positive or even neutral. Originally, gossiping was merely an activity people engaged in with their "god-sibs," those especially close to them [1]. When used appropriately, it is an effective way to learn about the personalities of others, which can greatly improve our position in the intricate web of human relationships because we know beforehand whom to befriend. Although merely listening to gossip to learn about others is not encouraged, it is a very advantageous tool [3].

Humans are exposed to language their whole lives, producing their first real words when they are around 18 months old. By the time humans are 3, they know about 1,000 words [4]. Before going on, it is important to understand why language is so significant in human society and how language acts as a powerful mechanism to bond social groups [1]. Humans belong to the catarrhines, a subgroup of primates also known as the Old World monkeys and apes. Catarrhines, compared to other animals, have a strong sociality built on advanced forms of social cognition; the anthropoid primate societies of Old World monkeys and apes are extremely social [1]. Sociality is vital in order to sustain a big group and to minimize the impact of predators. However, sociality obligates individuals to give up behaving in ways that are ideal for themselves all the time, which leads to the
exposure to stresses, including being harassed by those who are more dominant. In order to avoid these stresses, primates depend on the formation of alliances built upon trust and commitment. Alliances allow primates to rely on help when under such stresses. Unlike humans in modern society, primates use social grooming to create these alliances [1]. It is hypothesized that grooming is able to create such a sense of obligation between individuals by releasing endorphins, which trigger a sense of relaxation by lowering heart rate and reducing anxiety [4]. Grooming can be compared to humans resorting to physical contact instead of words in order to convey emotions: it is easier for humans to communicate their inner feelings through physical contact than through words [1].

The greater the group size, the greater the stress imposed on each individual through factors including ecological competition. This calls for a greater need for alliances. Because humans have a naturally large group of people we know personally, we needed a more efficient way to bond, an alternative to grooming: language [1]. Language is a much more effective alternative because it not only enables us to bond with multiple people at the same time, but also allows us to multitask and exchange information [1]. Exchanging information plays a significant role in social bonding because it allows us to keep track of what is happening in our social network [1]. In a society where individuals had to cooperate with group members who were also their competitors for limited resources, the ability to predict the behavior of others was vital when managing friendships and alliances. They had to know who was trustworthy beforehand. This called
for the necessity of gossip [2]. Gossiping was essential to avoid free riders. Free riders are those who receive all the benefits of a society without paying its costs. They kept societies from being cohesive, promoting their dispersal and leaving individuals to face the danger of predators [1]. This necessity for gossiping has come down to us through genetics, and the use of gossiping to avoid free riders is apparent even in our modern society [2]. Kevin Kniffin, an anthropologist at the University of Wisconsin, found by tracking the social interactions of a crew team of 50 men and women that gossip levels were highest when the team included a person who showed up late and missed practices. When the slacker left the team, there was a steep decrease in negative gossip and the other members began to talk about other topics [5].

The fact that gossiping was used as a way to avoid free riders may explain why humans are more attracted to negative gossip than to positive or neutral gossip. In fact, a team of researchers found results suggesting that the human brain is hardwired to pay more attention to a person if they are connected to negative characteristics such as dishonesty and danger [6]. Their findings demonstrated that gossip not only influences our feelings toward others, but our vision as well [3]. This was shown through the use of binocular rivalry. Binocular rivalry occurs when a different image is shown to each eye. Since the brain can only handle one image at a time, the images compete for perceptual dominance, which causes people to see alternating images [6]. Perceivers were shown six structurally neutral faces. The faces had to be neutral because the brain tends to linger on pictures with overt affective value, such as startled faces and disgusting pictures, over neutral images. Each face was paired with negative, positive, or neutral gossip. Some novel faces were not paired with gossip at all, but with negative, positive, or neutral nonsocial information instead [6].

TERESA CHEN / GRAPHIC
If gossiping serves such a purpose, then why are we so interested in the lives of celebrities? A possible explanation is that the media bombards us with so much about these celebrities that they become familiar friends. For early humans, anybody we knew well was a socially important figure in the group. Another explanation is that the lives of celebrities allow humans to bond more easily because they provide a common interest and something to talk about [2].
REFERENCES
1. Dunbar, R. I. M. Gossip in Evolutionary Perspective. Rev. Gen. Psychol. 8, 100-110 (2004).
2. McAndrew, F. T. "The Science of Gossip: Why We Can't Stop Ourselves." http://www.scientificamerican.com/article.cfm?id=the-science-of-gossip (2008).
3. Anderson, E., Siegel, E. H., Bliss-Moreau, E., Barrett, L. F. The Visual Impact of Gossip. Science. 332, 1446-1448 (2011).
4. Dunbar, R. I. M. Grooming, Gossip and the Evolution of Language. (Harvard University Press, Cambridge, 1998).
5. Carey, B. "Have You Heard? Gossip Turns Out to Serve a Purpose." http://www.nytimes.com/2005/08/16/science/16goss.html?pagewanted=all&_r=0 (2005).
6. Hamilton, J. "Psst! The Human Brain is Wired For Gossip." http://www.npr.org/2011/05/20/136465083/psst-the-human-brain-is-wired-for-gossip (2011).
HAIWA WU / GRAPHIC
Telomerase: A Magic Molecule for Longevity?
By Eric Tang, Edited by Selena Chen

There are many theories on prolonging life, and JOURNYS has covered several interesting ones, including melatonin [1] and caloric restriction [2]. One of the most promising theories, however, has not been covered in previous issues. In this theory, aging and the diseases of aging can be prevented by safeguarding DNA, the blueprint of life. The work around this theory has transformed our understanding of how cells age. Not surprisingly, the 2009 Nobel Prize for Physiology or Medicine was awarded to Drs. Elizabeth Blackburn, Carol Greider, and Jack Szostak for their findings on "how chromosome ends are protected by telomeres and the enzyme telomerase."

It has been known for many years that DNA replicates repeatedly in cells to pass hereditary information from one generation to the next. As early as the 1930s, Hermann Muller and Barbara McClintock, two other Nobel laureates (for Physiology or Medicine in 1946 and 1983, respectively), observed that chromosome ends, named telomeres, had special protective properties that ensure the integrity of the DNA copy. The word telomere derives from the Greek terms telos (end) and meros (part). How telomeres function remained a puzzle for many years, until Dr. Elizabeth Blackburn identified the first telomere sequence in the 1970s. Since then, the field of telomere research has exploded.
It is now known that telomeres, the cap-like structures at chromosome ends, represent a very small fraction of our DNA and yet have a huge impact on the behavior of cells, acting as a biological clock. They protect chromosomes in much the same way as the plastic sheath on the end of a shoelace. But each time a cell divides, the telomeres become a little bit shorter and eventually end up too short to protect the chromosomes. However, nature has a magic wand to reset the aging process in sex cells and other stem cells: these cells have high activity of telomerase, an enzyme that can replenish telomeres.

In summary, telomerase has emerged as a promising target for clinical medicine. More detailed research is needed to investigate the mechanisms by which telomerase regulates cell functions. New drugs based on the results of these studies may be developed to prolong life and improve health.

REFERENCES
1. Shu S. Melatonin – a new fountain of youth? JOURNYS 2011.
2. Sun E. Eat less, live longer? Caloric restriction and longevity. JOURNYS 2012.
3. de Jesus BB, Vera E, Schneeberger K, et al. Telomerase gene therapy in adult and old mice delays aging and increases longevity without increasing cancer. EMBO Mol Med 4:691-704, 2012.
4. Bojesen SE, Pooley KA, Johnatty SE, et al. Multiple independent variants at the TERT locus are associated with telomere length and risks of breast and ovarian cancer. Nature Genetics 45:371-384, 2013.
5. http://www.geron.com/imtelstat. Accessed on April 29, 2013.
KRISTINA RHIM / GRAPHIC
Memory
by Hope Chen, edited by Ruochen Huang

Late at night, the day before a major exam, you are rereading the same page of text for the third time without processing any of the words. To cram or not to cram? That is not even a question. Knowing how the different types of memory work will help you make decisions like this one.

Unlike most body cells, neurons in the brain divide to make new cells only during fetal development and for a few months after birth. Afterwards, no new brain cells are formed, though existing ones may increase in size until the age of about eighteen years [1]. These hundred billion neurons are designed to last a lifetime and can store up to 1,000 terabytes of information [2]. So how do the connections among these neurons store information? The process of learning and creating memories generally follows four steps.

The first process of creating a memory is encoding, which begins with the perception of an event through the senses and attention, which is regulated by the thalamus and the frontal lobe of the brain. Attention causes neurons to fire more frequently, making
the experience more intense and increasing the likelihood that the event is encoded as a memory [3]. These perceived sensations are decoded in the various sensory areas of the cortex, then combined in the brain's hippocampus into a single experience. The hippocampus is then responsible for analyzing these experiences and deciding if they will be committed to long-term memory [2].

Next, consolidation occurs. This is the process of stabilizing a memory trace after it is initially acquired. It usually consists of two specific processes: synaptic consolidation, which occurs within the first few hours of learning, and system consolidation, during which hippocampus-dependent memories become independent of the hippocampus over a period of weeks to years [2]. During long-term potentiation, a synapse increases in strength as increasing numbers of signals are transmitted between the two neurons. As new experiences accumulate, the brain creates more and more connections and pathways, recreating the neural network. The brain organizes and reorganizes its synapses, a capacity known as neural plasticity, which is an important process for memory and learning [3].
Storage is the third process of learning. It refers to retaining information in sensory memory, short-term memory, or long-term memory [4]. Each stage of memory acts as a filter for the flood of information presented to us on a daily basis. The more the information is repeated or used, the more likely it is to be retained in long-term memory. Long-term memories are not stored in just one part of the brain. Rather, they are widely distributed throughout the cortex, so even if one engram, or memory trace, were destroyed, there are still duplicates, or alternative pathways, elsewhere that allow the memory to be retrieved [2].

Lastly, the recall stage of memory refers to the subsequent re-accessing of events or information from the past that has been encoded and stored in the brain; this is also called remembering. During recall, the brain replays a pattern of neural activity that was originally generated in response to a particular event. There is no real solid distinction between the act of remembering and the act of thinking. Forgetting, on the other hand, is the temporary or permanent inability to retrieve a piece of information or memory that had previously been recorded in the brain. Forgetting typically follows a characteristic curve: information loss is quite rapid at the start but gradually slows down [3] (a simple model of this curve is sketched below).

Human memory is often separated into three main groups: sensory memory, which lasts for a few milliseconds; short-term memory, which lasts for a few seconds; and long-term memory, which can last for a lifetime. Sensory memory is the automatic perception that disappears within a few seconds. Short-term memory allows people to retain roughly seven pieces of information for about a minute. Long-term memory can last for weeks, months, years, or even a whole lifetime. Oftentimes, turning something from a short-term memory into a long-term memory requires repetition [4].

For a high school student, sleep is a luxury that should not be ignored, especially if there is an impending test the next day. Consolidation of different memories has been associated with sleep, and in particular with the rapid-eye-movement phase of sleep. That, along with reviewing, plays a large role in consolidation of the memory and will make you much more likely to remember the information for the test.
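As a hedged illustration of that rapid-then-slow forgetting curve, retention is often approximated by an exponential decay (a textbook-style model, not taken from the sources cited here):

\[ R(t) = e^{-t/S} \]

where R(t) is the fraction of material still retrievable after time t and S is a stability constant that grows with repetition and review. A larger S means slower forgetting, which is one way to express why repeated study and sleep-aided consolidation make material easier to recall on test day.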
REFERENCES
1. Klemm, William R. "To Cram or Not to Cram? That Is the Question." Memory Medic, 14 Jan. 2012. http://www.psychologytoday.com/blog/memory-medic/201201/cram-or-not-cram-is-the-question
2. Mastin, Luke. "The Human Memory." http://www.human-memory.net/index.html (2010)
3. "Brain Facts: A Primer on the Brain and Nervous System." http://www.brainfacts.org/about-neuroscience/brain-facts-book/
Types of Human Memory: Diagram by Luke Mastin
GRACE CHEN / GRAPHIC
Our Memory’s not so Memorable, After All
by Eric Chen Edited by Daniella Park
GRACE CHEN / GRAPHIC
The human brain is one of the most complex marvels of nature. It has many amazing functions, including coordination of thought, analysis of information, and orchestration of nearly all bodily processes, both voluntary and involuntary [1]. One of the defining traits of the human brain lies within another of its major functions: memory. It is the structure of memory, and the unmatched ability to store, sort, and connect different types of memories, that sets the human brain apart from that of an animal. Although human memory is so distinct, there have recently been groundbreaking findings relating a rat's memory to a human's.

In February of 2013, principal investigator and professor Jonathan Crystal and a team of neuroscientists from Indiana University examined an interesting property of a rat's memory. "If you ask a rat whether it knows how it came to acquire a certain coveted piece of chocolate, the answer is a resounding, 'Yes.'" [3] The specific type of memory involved in remembering where something is learned is called source memory. Along with several other types of memory, source memory has been attributed uniquely to the human brain. When different pieces of source information are grouped together, they form episodic memory, allowing for the differentiation of one event from another [4].

In the elaborate study conducted by Crystal, rats were placed in an eight-arm radial maze, with chocolate as the bait for the experiment. Chocolate was the perfect bait for this experiment, because
"There's no amount of chocolate you can give to a rat which will stop it from eating more chocolate," according to Crystal [3]. The rats would try to find the piece of chocolate, and the researchers studied the memory patterns of the rats after the experiment. The experimenters tested the source memory of the rats in a series of five experiments. The first two required the rats to remember how they obtained the chocolate: whether they were placed near the chocolate or had to run and find it on their own. The rats were placed in different mazes throughout the experiment to rule out the possibility of learning from clues specific to one particular maze.
The third experiment tested the duration of the specific memory used by the rats in remembering the chocolate retrieval process. The experiment showed that the memory lasted for a week. Because source memory lasts much longer than the ordinary forms of memory, which usually last only one day, the results of the third experiment showed that the rats were in fact using source memory to remember the circumstances of each experiment. The fourth experiment tested whether or not the rats remembered additional rules in the experiment. In the fifth experiment, the researchers temporarily disabled the hippocampus region of the rat's brain, which controls source memory. Because inactivation of the hippocampus prevented the rats from recognizing patterns and completing the tasks, the scientists confirmed that the rats were relying on source memory [3].

So, besides the fact that rats really love chocolate and can remember things about it, what does all this research really mean? Research into many memory-related diseases has often been limited by the fact that human memory has been considered much more complex than that of most other animals. The patterns observed in human memory are not necessarily observed in animals, and are often frustratingly hard to experiment with [5]. Because this experiment provided evidence that source memory does exist in animals, it opens up a whole new realm of research opportunities that have the potential to help cure many diseases related to memory failure.

When the source memory studied in this experiment fails, one forgets where certain information comes from. A classic example of source memory failure is a person retelling a joke to the very person who first told it to them. Although this example of memory failure seems harmless or at most embarrassing, memory failure can lead to many more serious issues. Some diseases and disorders related to memory failure include Alzheimer's, Parkinson's, Huntington's, schizophrenia, PTSD, and depression. According to Crystal, "If you can export types of behaviors such as source memory failures to transgenic animal models, you have the ability to produce preclinical models for the treatment of diseases such as Alzheimer's." [3] Further research can help develop animal models of memory types that are impaired in human diseases and possibly open doors to new and innovative solutions. Although rats and humans may seem like completely different, distinct species, they do in fact have two things in common: source memory and an intrinsic desire for chocolate.
REFERENCES
1. "Livestrong." http://www.livestrong.com/article/162196-the-brains-main-functions/ (2010)
2. "The Human Memory." http://www.human-memory.net/ (2010)
3. "Science Daily." http://www.sciencedaily.com/releases/2013/02/130227085944.htm (2013)
4. "Current Biology." http://www.cell.com/current-biology/retrieve/pii/S0960982213000262 (2013)
5. Human Memory: Theory and Practice (Taylor & Francis Group, East Sussex, 1997).
Mind-Controlling Parasites
By Chris Lu, Edited by Eric Chen
If I were to talk about parasites that can lodge into a host's brain and affect the way it behaves, one might
ask me what science fiction novel I am discussing. However, this phenomenon, studied in the field known as neuroparasitology, is common among many organisms, including humans. In fact, there is one parasite that is said to affect about 40 percent of the human population [1].

These types of parasites can be extremely effective in animal hosts. One such example comes from an unnamed species of wasp that has the ability to infect spiders [2]. The spider, which is usually an orb spider, normally spins webs and eats whatever prey is caught. However, when a wasp successfully attacks it, the spider is momentarily paralyzed while the wasp deposits its egg on the spider. After the wasp leaves and the paralyzing effect wears off, the spider continues to behave as if nothing had occurred. While this is happening, the larvae on its abdomen feed on its nutrients through small punctures in its skin. One day before the larvae kill the host, they are believed to secrete mind-controlling chemicals. These chemicals command the spider to create a strange platform-like structure out of its webbing. The larvae then kill the spider and wrap themselves into a cocoon in this structure, where they will be safe from wind, rain, and predators on the ground. It is unknown exactly how this occurs; such a sophisticated and specific form of neuroparasitology is rarely seen.

Another parasite, Ophiocordyceps, is a fungus that can control Camponotus ants [2]. When an ant is infected with the spores of the fungus, it wanders off its normal route to find a tree above the ant trails. At noon, the ant will face northwest and clamp onto a leaf with its jaws. About six hours later, the ant will die, and a few days later the fruiting body of the fungus will emerge from the ant's head, ready to shoot spores that infect more ants.
TERESA CHEN / GRAPHIC

The most well-known neuroparasite, said to infect about forty percent of humans, is called Toxoplasma gondii. The intended targets of this parasite are actually cats and mice, because the parasite can only reproduce sexually within the intestines of cats. However, when the cat defecates, the parasite is often excreted as well. During this time, a wandering mouse may encounter the feces and the parasite, which then infects the mouse. When the parasite is not in a cat, it attempts to maximize its chance of being eaten. To do this, it is believed to switch the pathways in the brain between fear and pleasure. A mouse normally panics when it encounters cat urine, and releases the pleasure hormone dopamine when it encounters female rodent urine. When infected, the mouse will instead seek out cat urine and avoid female rodents.

Cat owners and others may encounter this parasite from the feces of cats as well. It has recently been linked with behaviors such as reckless driving and a greater risk of suicide. Furthermore, it has different effects based on gender and is a possible cause of the stereotypes of men being introverted and women being more sociable. Although this parasite may seem extremely dangerous, in reality it is almost harmless to humans. After all, humans have been living with this parasite for millennia [3], so there is no reason to worry about being infected.
REFERENCES
1. Burne, Jerome. "There Are Zombies among Us." The Telegraph, 26 Mar. 2013.
2. Ross, Phillip. "Parasite In Brain? Mind Control Bug Affects 40% Of Population." International Science Times, 28 Mar. 2013. Web. 1 Apr. 2013.
3. Vibes, JG. "Scientists Claim 40% of Population Infected with "Mind Control Parasite."" Infowars.com, 1 Apr. 2013. Web. 1 Apr. 2013.
It's Only Skin-Deep
by JESSICA YU, edited by IVAN DANG
KATHERINE LUO / GRAPHIC
Oftentimes, when the average person is asked what the largest human organ is, the most common answer is the brain or the stomach. Many neglect to see that the answer lies before their very eyes, and only "skin-deep," so to speak. Making up twenty percent of the human body weight, and with a total area of approximately two square meters, the skin is the largest organ of the body. This protective and vital membrane supports all other organs, maintains the immune system, regulates body heat, and manufactures vitamin D for our bodies [1]. Consequently, the many responsibilities and apparent significance of our skin lead us to consider the question, "What would we do without our skin?" Although not an everyday concern, this question primarily deals with the treatment of victims suffering injuries ranging from burns to frostbite and extreme skin trauma.

Skin is composed of two main layers, the dermis and the epidermis. The dermis is the lower layer; it does not grow back once the skin is severely injured, making it very hard for the skin to rebuild once damaged. For example, a severe electric shock leaves injuries similar to a third-degree burn, and a serious burn will leave the body vulnerable to infection and prone to dehydration. Keeping the patient in a sterile room and covering the burnt area with a skin graft from the patient's own skin or a donor's can help save his or her life. However, in more urgent and critical cases, the patient may still die because his or her body cannot produce enough skin quickly, or will reject the skin graft. This is where artificial, or synthetic, skin comes in.

Artificial skin is a lab-produced, temporary or permanent replacement for damaged skin [2]. It is made of non-biological molecules and polymers that are not present in normal skin. The materials used are biodegradable, provide an adequate environment for the regeneration of tissue, and are able to maintain a three-dimensional structure for at least 3
This will allow ingrowth of blood vessels, fibroblasts that produce collagen in connective tissues, and epithelial cells that line body cavities. The materials must also be immune-compatible in order to avoid reactions with the immune system and decrease the chance of an inflammatory response. Artificial skin has many advantages, as well as disadvantages. The properties of the skin can be controlled and adjusted to enhance its performance, and being synthetic, the skin avoids the potential for disease transmission. However, synthetic skin has no membrane, nor does it exactly resemble actual skin, so it is difficult to produce a biologically compatible material3. The first synthetic skin was developed by John Burke and Ioannis Yannas in 1981; Burke had been studying burn victims and recognized the need for a new type of skin replacement, while Yannas had been researching collagen, the protein that makes up much of our connective tissue. The artificial skin the team created was composed of two layers of polymers (compounds with repeating molecular structures), one synthetic and the other organic. The synthetic top layer is a thin silicone sheet designed like the human epidermis to prevent infection and dehydration. The bottom, organic layer is a scaffolding made of collagen from cow tendons and shark cartilage. The bottom half of the scaffolding is covered with a sugar molecule called glycosaminoglycan. The sugar mimics the texture of the dermis, coaxing the human skin cells, called fibroblasts, into generating human collagen. As more and more collagen is produced, the connective tissues eventually build a new dermis around the scaffolding, which dissolves away. The silicone layer is then peeled off, and an epidermis produced from the patient's skin and mouse-derived fibroblasts is placed over the newly grown dermis. Burke and Yannas's creation allowed larger areas to be covered than is possible with skin grafts, and it reduced the infections associated with the use of immune-suppressing drugs. Today, the artificial skin they created is known as Integra, which is widely used for extensive burns and chronic skin injuries4.
Integra is not the only artificial substitute on the market; as researchers continue to delve into the field of synthetic skin, more and more improvements are made to make it more durable and more biologically similar to actual skin. Take ICX-SKN, developed in 2007 by the British biotech company Intercytex. Some artificial skins can be rejected by the patient's body if they are not immune-compatible; ICX-SKN, however, was designed to weave into the wound much more effectively. The skin uses a matrix made of fibrin, the protein found in healing wounds. Fibroblasts are added to help synthesize new tissue, and the collagen they produce makes the matrix stronger, so that it can withstand the changes of the healing process. Allowing collagen to be produced directly by fibroblasts in the matrix lets ICX-SKN mirror the skin's own biological process of repairing itself; the healing process therefore leaves minimal scarring, which is especially helpful to elderly patients suffering from chronic wounds5. Recent studies in synthetic skin have led to new technologies for the treatment of patients, as well as new uses for synthetic skin. The Skin-Cell gun is an example of a new device used to treat victims in need of skin replacement. The process, developed in 2008 by Joerg Gerlach, uses a water-based solution containing healthy stem cells from the patient. The solution is then sprayed onto the burn. The process takes only an hour and a half to finish, whereas applying skin grafts or synthetic skin may take months. The recovery time with the Skin-Cell gun is also considerably faster than with traditional methods: the patient can recover in a matter of days, as opposed to several weeks6. The growing technology of artificial skin has also expanded to fields beyond basic wound treatment.
The Skin-Cell gun, developed in 2008. Image Courtesy: Joerg Gerlach
In 2010, engineers at UC Berkeley developed a pressure-sensitive electronic material made of nanowires, known as e-skin. This opens many opportunities in technology, especially in robotics. Imagine entire robots covered in e-skin; this would allow them to sense pressure and know how hard to grip objects, such as a wine glass as opposed to a pot, without breaking them. The skin can also be wired to tap into the human nervous system in order to restore touch to people with prosthetic limbs or skin diseases such as leprosy7. Leprosy causes nerve damage that leaves limbs numb and difficult to use; the patient not only loses the use of the limbs, but also cannot tell when they are being harmed8. E-skin could help restore a sense of touch in these patients. Similarly, a research team at Stanford University created a sensitive, conductive artificial skin that also has self-healing properties. The skin started with a plastic polymer held together by hydrogen bonds, which were used because they are weaker than covalent or ionic bonds and can be easily broken and reformed. When the plastic the scientists created was cut in half, it had completely rejoined within thirty minutes. For conductivity, the team mixed nickel into the polymer to allow electricity to flow across the skin. The end result could be used to provide touch for people with prosthetic limbs and even to coat wires in computer systems9. Artificial skin provides a valuable treatment for those in need of skin replacement, and its uses now extend beyond medicine to robotics, prosthetics, and even computer wiring. The expanding technology of synthetic skin shows the versatility, the various uses, and the importance of our largest organ, even if it is just "skin-deep."
A close-up of e-skin, developed by UC Berkeley in 2010. Image Courtesy: Ali Javey and Kuniharu Takei, UC Berkeley
REFERENCES 1. American Skin Association. "American Skin Association/Skin Resource Center/Healthy Skin." http://www.americanskin.org/resource/ (2012) 2. Medical Discoveries. "Artificial Skin." http://www.discoveriesinmedicine.com/Apg-Ban/Artificial-Skin.html (2012) 3. Halim, Ahmad Sukari, Teng Lye Khoo, and Shah Jumaat Mohd. Yussof. "Biologic and Synthetic Skin Substitutes: An Overview." http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3038402/ (2006) 4. Roos, David. "Skin Grafts." http://health.howstuffworks.com/skin-care/information/anatomy/skin-graft5.htm (2012) 5. Ghosh, Pallab. "Artificial Skin 'Cuts Scarring'." http://news.bbc.co.uk/2/hi/health/6236282.stm (2007) 6. Teeghman, David. "Spray On Skin Cells for Burn Victims." http://news.discovery.com/tech/spray-on-skin-cells-for-burn-victims.html (2011) 7. Yang, Sarah. "Engineers Make Artificial Skin Out of Nanowires." http://newscenter.berkeley.edu/2010/09/12/eskin/ (2010) 8. PubMed Health. "Leprosy." http://www.ncbi.nlm.nih.gov/pubmedhealth/PMH0002323/ (2011) 9. Anthony, Sebastian. "Stanford Creates Touch-Sensitive, Conductive, Infinitely-Self-Healing Synthetic Skin." http://www.extremetech.com/extreme/140115-stanford-creates-touch-sensitive-infinitely-self-healing-synthetic-skin (2012)
The Mechanisms of Thoracic Aortic Aneurysm and Dissection at the Molecular Level by Francis Lin Thoracic Aortic Aneurysm and Dissection (TAAD) occurs when the aorta dilates, thins, tears, and ruptures (Board, 2012). It is responsible for 0.5% to 1% of deaths in the United States. TAAD is usually asymptomatic, meaning it does not express symptoms or any unique characteristics before the aorta tears; when it does tear, blood enters the wrong layers of the aortic wall. Once the aorta tears, chest pain results, along with pale skin, a faint pulse, numbness, or even paralysis. Without immediate treatment, death will occur. Because TAAD is asymptomatic, people usually do not know they have it until the aorta ruptures, when it is too late; this accounts for the high mortality rate, with about 30,000 people in the United States dying of aortic rupture every year. There are numerous causes of TAAD, such as atherosclerosis, a condition in which arteries harden due to the accumulation of fat and cholesterol. TAAD can also be caused by hypertension, or high blood pressure. However, 20% of patients inherit the disease. The most common genetic cause of TAAD is mutation of ACTA2, the gene that codes for alpha smooth muscle actin (Milewicz, 2011). Before mutations of actin can be explored, the structure, function, and polymerization of actin must first be discussed. Actin is a protein monomer involved in many important cellular processes (Cooper, 2000). Actin makes up the cell's microfilaments, which in muscle are known as thin filaments. It works with another protein, myosin, to produce muscle contraction, and it is also partly responsible for cell motility, cell division, vesicle and organelle movement, cell signaling, and many other processes. There are three different forms of actin: alpha, beta, and gamma. It also comes in two structural forms, G-actin and F-actin: G-actin is globular actin while F-actin is filamentous actin. In other words, G-actin is the monomer while F-actin is the polymer. G-actin becomes F-actin in a process called actin polymerization, in which individual actin monomers attach to one another to form a filament. Actin polymerization starts with nucleation (Childs, 2012), which occurs when three actin monomers form a trimer. The barbed end is the base while the pointed end is the tail. After the trimer is created, elongation occurs: actin monomers are added at both the pointed and barbed ends, lengthening the filament. The addition of actin monomers is faster at the barbed end. After a period of elongation, actin starts to treadmill, and the length of the filament no longer changes. Instead, actin monomers are added to the barbed end at the same rate that monomers dissociate from the pointed end, and the filament reaches its equilibrium phase, or steady state. Actin is prevalent in smooth muscle cells (Bergeron, 2011). On average, the ratio of actin to myosin in muscle is about two to one; in smooth muscle cells, the ratio is ten to one. Smooth muscles form muscle tissues that contract without conscious thought; they contract involuntarily.
Smooth muscle alpha-actin is encoded by the gene ACTA2. Smooth muscle cells occur in layers and are found in the walls of organs, such as the aorta. The actin mutations that result in TAAD are missense mutations: point mutations in which one nucleotide is changed so that a different amino acid is produced. Missense mutations are named in the format letter-number-letter. The first letter designates the normal amino acid that was supposed to be at that position, the number is the codon number, and the second letter designates the amino acid created by the mutation. More than thirty missense mutations have been identified in ACTA2, including D24Y, R177H, R256C, R256H, N115T, and R116Q. These mutations are hypothesized to interfere with inter-monomer interactions, which means they interfere with proper actin polymerization. This would account for abnormal actin growth, resulting in the TAAD phenotype. Many of the mutations that cause TAAD are also responsible for other conditions, such as dissection in pregnancy, patent ductus arteriosus, childhood stroke, adult stroke, and coronary disease. It is important to note that each individual missense mutation does not result in all of the listed conditions, but rather accounts for only a couple; nevertheless, the correlation between TAAD and other abnormal heart conditions is interesting. In order to characterize each mutation, scientists use budding yeast, or Saccharomyces cerevisiae, as a model (Bergeron, 2011). This is a legitimate model because yeast actin shares 94% similarity with that of humans, and budding yeast is easy to manipulate. The mutations are introduced into the budding yeast through mutagenesis, and cultures are grown. The scientists then apply a variety of stress conditions to the mutated yeast cells to test their durability. These stress conditions include, but are not limited to, different types of media, different levels of salinity, and different temperatures. These help characterize a mutation. Currently, scientists are trying to characterize the different missense mutations. After a complete characterization is done, hypotheses are made on how that mutation's specific reactions to environmental factors contribute to the phenotypic result of TAAD.
REFERENCES 1. Bergeron, S. E., Wedemeyer, E. W., Lee, R., Wen, K. K., McKane, M., Pierick, A. R., . . . Bartlett, H. L. (2011). Allele-specific effects of thoracic aortic aneurysm and dissection alpha-smooth muscle actin mutations on actin function. J Biol Chem, 286(13), 11356-11369. doi: 10.1074/jbc.M110.203174 2. Board, A.D.A.M. Editorial. Thoracic Aortic Aneurysm. U.S. National Library of Medicine, 14 May 2012. Web. Dec. 2012. 3. Childs, Gwen V. "The Actin Cytoskeleton." University of Arkansas for Medical Sciences, 2001. Web. Dec. 2012. 4. Cooper, Geoffrey M. "Assembly and Disassembly of Actin Filaments." Structure and Organization of Actin Filaments. U.S. National Library of Medicine, 18 Dec. 2000. Web. Dec. 2012. <http://www.ncbi.nlm.nih.gov/books/NBK9908/>. 5. Milewicz, D. M. (2009). Mutations in smooth muscle alpha-actin (ACTA2) cause coronary artery disease, stroke, and Moyamoya disease, along with thoracic aortic disease. Am J Hum Genet, 84(5), 617-627. doi: 10.1016/j.ajhg.2009.04.007
Eye Can See You by Jonathan Xia
Sheng Zhang / Graphic
Vision is one of our most important methods of obtaining information about our surroundings. We know that our eyes are responsible for gathering this information, but when we look at objects in everyday life, we are usually unaware of the extremely fast processes that allow us to see the world around us. The eye first refracts incoming light, focusing it on the retina; the information is sent through the optic nerve to the brain; and the brain then evaluates that information. All of this happens almost instantaneously. Let's first start with the basics. Refraction occurs when light travels through a lens. There are two types of lenses: converging lenses and diverging lenses. Just as the names imply, converging lenses converge light through refraction, and diverging lenses diverge light through refraction. For any spherical lens, we can define an optical axis, or principal axis, by drawing a line through the extremes of the lens on the spherical sides.
Rays of light parallel to the principal axis all converge on a point on the principal axis called the focal point. For a converging lens, the focal point is on the opposite side from the side the light is coming from, and vice versa for a diverging lens. Therefore, you can find the image of an object by tracing the rays of light. When diagramming a system, by convention, we always have light traveling from left to right. Suppose a particular lens has a focal length f, an object sits a distance do from the lens, and the image forms a distance di from it. do is positive if the object is on the left of the lens; di is negative if the image is on the left of the lens. If the lens is converging, f is positive. We can relate these quantities as follows:
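In the sign convention just described, this is the standard thin-lens equation:

\[ \frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f} \]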
Then we can find the image height, because the heights of the object and image are directly proportional to these distances. In equation form:
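With hi and ho denoting the image and object heights, this proportionality is usually written as:

\[ M = \frac{h_i}{h_o} = -\frac{d_i}{d_o} \]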
Where M is the magnification. The power of a lens is measured in diopters (D). A diopter is the reciprocal of the focal length in meters, so 1 D = 1 m⁻¹. The human eye is analogous to a simple camera in several respects. A simple camera consists of a converging lens that refracts light so that it focuses on a light-sensitive film, creating a smaller, inverted image; it also has an aperture, which controls the amount of light entering the camera. The eye likewise has a converging lens that focuses images on a light-sensitive surface, the retina. The information detected is sent to the brain to be processed 20 to 30 times per second.1 The eye has an internal diameter of about 1.5 cm and is filled with a transparent, jellylike substance called the vitreous humor. The white outer covering is known as the sclera. Light enters the eye through a curved outer tissue called the cornea and passes into the aqueous humor. Behind the cornea is the iris, which has a central hole called the pupil (the iris actually contains the pigment that determines eye color). Through muscle action, the iris can change the size of the pupil (from a diameter of about 2 mm to 8 mm), thus controlling the amount of light entering the eye. Behind the iris is the crystalline lens, a converging lens composed of microscopic glassy fibers. The crystalline lens has two basic layers: an inner layer (the nucleus) with an index of refraction of n = 1.406 and an outer cortex with an index of refraction of n = 1.386. Muscles exert tension on the lens to change its focal length, which is how the eye focuses an image. The image produced by the eye is inverted, but the brain automatically flips it right side up when processing the information sent from the retina through the optic nerve.
The retina is composed of two types of light receptors, called rods and cones. Rods are more sensitive to light and distinguish between light and dark. The cones can distinguish the frequency ranges of light (color) with sufficiently intense light. Most of the cones are near the central area of the retina. There are three types of cones: red cones, blue cones, and green cones. We can see from the graph that the cones are more sensitive to certain wavelengths than others (Color blindness is believed to result when one or more types of cones are missing).
Graph: relative sensitivity of the three cone types across the visible wavelengths. Image Courtesy: The Physics Classroom
For example, light with a wavelength of 600 nm will trigger the green cones and the red cones by certain amounts, and the brain interprets this as yellow. However, if we shine a red light and a green light together, we also trigger the red cones and the green cones, so the brain interprets the combination of red and green light as yellow as well. Thus we can produce almost any color in the visible spectrum with just red, green, and blue; these three colors are known as the primary colors, and adding them together to create new colors is known as the additive method of color production.
Eyes have a near point and a far point. The near point is the closest distance at which an object can be placed and still be focused on the retina. The far point is the farthest distance the eye can focus, and for a normal eye it is taken to be infinity. The near point in children is around 10-15 cm[1] and grows larger with age. Sometimes, certain defects occur in vision. One is myopia, or nearsightedness, in which the far point is not at infinity; when an object beyond the far point is viewed, the image forms in front of the retina. Myopia occurs when either the eyeball is too long or the cornea's curvature is too great. To correct it, we place a diverging lens in front of the eye. The degree of myopia is measured in diopters and is equal to the power of the lens needed to correct the vision; because the corrective lens is diverging, the power is negative. Farsightedness, or hyperopia, is the opposite of nearsightedness: nearby objects cannot be seen clearly because the near point is not at the normal position, and the image of a near object forms behind the retina. Hyperopia arises when either the eyeball is too short or the cornea has insufficient curvature, and it is corrected with a converging lens. As with myopia, the degree of hyperopia is measured in diopters, and because the corrective lens is converging, the power is positive. Another common defect of vision is astigmatism, which occurs when a refractive surface (the cornea or crystalline lens) is out of shape, causing different areas to have different focal lengths. Points may then appear as lines, or a line may be distinct in one direction and blurred in another. This can be corrected with lenses that have greater power in one plane and less power in another. Also note that astigmatism is lessened in bright light, because the pupil is smaller, so light avoids the outer edges of the cornea or crystalline lens. You have probably heard of 20/20 vision, but what does that mean? Visual acuity is a measure of how your vision is affected by the distance between your eye and the object, and it is usually determined using a chart of letters placed at a given distance from the eyes. The numerator is the distance at which the tested eye can see a symbol, and the denominator is the distance at which a normal eye can see the same symbol. For example, someone with 20/30 vision can only see from 20 feet away a symbol that a person with normal eyes can see from 30 feet away. The human eye is a truly amazing organ. Much of the information we obtain from the physical world comes through our eyes, and the immense amount of information gathered by the eye has to be transferred to the brain and processed almost instantaneously, making vision one of the most amazing processes of the human body.
REFERENCES 1. Wilson, Jerry D., Bo Lou, and Anthony J. Buffa. College Physics. Upper Saddle River, NJ: Prentice Hall, 2003. Print.
The Power of Stars BY RICHARD XU EDITED BY ANNIE XU
Just think. For thousands of years, humans all over the globe have shared a common attribute: a fascination with and respect for the Sun. This shows itself in Sun-god representations from the dawn of civilization, regardless of location, in artifacts, sculptures, and scriptures alike. The Sun has come to represent a symbol of power, an entity of supremacy that watches from above and brings warmth to Earth. When one stops to think for a moment, one realizes just how much influence the Sun has over our lives. (In fact, it has so much influence that if it were to disappear, mankind would cease to exist.) Our bodies have adjusted their internal clocks to fit the rising and setting of the Sun. G.V. Hudson proposed the infamous Daylight Saving Time to fit the cycles between the Earth and the Sun. All of the fruits and vegetables we eat rely on the Sun to grow. Our everyday lifestyle revolves almost entirely around the Sun, and our dependence on it may continue to grow. With fossil fuels quickly depleting and the search on for clean, new sources of energy, people may eventually harness the excess power and energy of the Sun. No, I'm not talking about sapping a tiny bit of energy during the day through an array of solar panels. Why settle for the next best thing? Why settle for nabbing a couple of rays when we can create the Sun itself? The truth of the matter is that the Sun is a natural, self-sufficient fusion reactor. It relies on the fact that when smaller atoms fuse into a larger atom, some matter is converted into a substantial amount of energy. The Sun fuses atoms together self-sufficiently, continuing to fuse and fizz until all of its fuel is depleted. This process of converting mass to energy has nearly no harmful environmental impact, is reasonably safe, produces no atmospheric pollutants, and leaves virtually no long-term radioactive waste.1 This mix of potential benefits pulls fusion energy above all other alternative energies as the ultimate energy source for future power plants.
GRACE CHEN / GRAPHIC
FUSION FUEL
Fusion energy has a virtually infinite amount of fuel. The two major fuels for fusion power plants are deuterium and tritium, heavy isotopes of hydrogen. Tritium is bred from lithium, and lithium deposits on Earth are abundant, with more than one thousand years' worth of heavy consumption still left untouched. Deuterium occurs naturally in fresh water and seawater, in particular the oceans, meaning a virtually infinite supply of fuel for fusion power plants is produced naturally in the ocean.
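For reference, the reaction usually proposed for first-generation plants fuses these two isotopes, converting a small fraction of their mass into the kinetic energy of the products:

\[ \mathrm{^{2}_{1}D} + \mathrm{^{3}_{1}T} \rightarrow \mathrm{^{4}_{2}He}\ (3.5\ \mathrm{MeV}) + \mathrm{n}\ (14.1\ \mathrm{MeV}) \]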
FUSION SAFETY
The deuterium-tritium reaction itself yields a helium atom and a neutron, neither of which is especially dangerous. While the neutrons do induce some radioactivity in reactor materials, the amount can be up to 10,000 times less than that produced by a fission reactor. Chernobyl-type incidents are virtually impossible in a fusion power plant, because the amount of fuel fed to the reactor at any time is just barely enough to sustain the reaction for about 10 seconds. If an accident were to occur, the system would shut itself down within this time period, since the conditions needed to sustain the fusion reaction could not be maintained. Any environmental anomalies would quickly dissipate, and a safe shutdown would be assured. Fusion energy remains one of the cleanest and safest forms of energy, and it holds great potential to produce long-lasting energy for people across the globe.2
PLASMA
The study and production of fusion yields a second benefit besides safe, renewable energy. The most abundant form of ordinary matter in the known universe is plasma, an electrically conducting, fluid-like state of matter composed of freely moving ions and electrons. Human understanding of plasma is still limited, and its extremely complicated behavior left scientists baffled until the 1990s, the decade of significant advances in fusion energy. Fusion energy and the study of plasma go hand in hand, and joint research on the two continues to pave the way for advancements in both science and industry.3
Internal view of the JET (Joint European Torus), the world’s largest magnetic confinement plasma physics experiment Image Courtesy: EUROfusion
THE PROCESS
Fusion is achieved through a conceptually simple process, though it takes an enormous amount of equipment to perform. First, a vacuum chamber must be built and cleaned through "baking" (much as an oven self-cleans). Hydrogen gas is then puffed into a ring-shaped chamber, and using an electrical transformer, a ring of plasma is formed. Carefully heated, this plasma rises to about 10 million degrees Celsius and is sustained by constant radio-frequency and microwave power. The plasma is then carefully moved and centered in the vacuum chamber, where it is heated to over 100 million degrees Celsius with high-intensity particle beams, radio-frequency waves, and microwaves. Finally, self-sustaining fusion occurs, and the output passes through a series of "blanket" layers: the "Shield" and "Breeding" blankets for heat and radiation, and the "Power Generation Blanket" for extracting high-quality heat. Heat collected through the Power Generation Blanket would drive electrical generators and produce electricity.4 While first-generation fusion reactors would only supplement existing forms of energy production, the output would still be very high: one quart of fusion fuel is equivalent to burning 6,600 tons of coal!4
THE FUTURE
While we haven't yet achieved self-sustaining fusion, dramatic advancements continue to emerge from the research and development of fusion equipment. These improvements have come more rapidly than even the iconic growth of data storage on a computer chip, and they continue to expand human knowledge of the clockwork of the universe. Fusion energy is generally accepted as one of the few major sources of energy for future generations, and while we have some way to go, there will come a day when humans are able to hold the power of the stars.
REFERENCES
1. "Understanding Fusion." http://fusionforenergy.europa.eu/understandingfusion/ (2012) 2. Harnessing the Energy of the Stars (General Atomics, San Diego, 2008) 3. Gantz, James. The Pervasive Plasma State (The American Physical Society, 2005) 4. "Blanket Technology." http://www.naka.jaea.go.jp/english/kougakue/pfc&blk/BLK/BLK_P1.html (2012)
Traffic Light Detection and Tracking for the Prevention of Automobile Accidents by Frank Su and William Hang
Introduction
Control mechanisms in traffic are invaluable to public safety: they prevent automobile accidents and allow cars to flow smoothly and predictably. One prominent control mechanism is the traffic light, employed at nearly every multi-lane intersection to organize traffic. Unfortunately, traffic lights may be ignored because of distracted or inattentive driving, resulting in accidents that in turn lead to property damage, serious injury, and/or death. According to the National Highway Traffic Safety Administration, about 1.2 million traffic accidents in 2008 were traffic-signal related, with 2,635 of them being fatal1. One cause of such accidents is driver inattentiveness, when a driver is distracted or unaware of a traffic light; an inattentive driver may drive into an intersection where traffic is flowing perpendicular to the vehicle's velocity.
Unlike inattentiveness to traffic signs, the problem of running traffic lights due to driver unawareness or distraction is unique in that no commercial driving aids exist for it. This experiment attempts to create an accurate image-based traffic light detector, with backboard detection, using connected-component analysis.
Hypothesis
This algorithm will achieve or surpass a 92% precision rate and 94% recall rate for red traffic lights and a 94% precision rate and 96% recall rate for green traffic lights, and will achieve this goal without significant time delays.
Description of Algorithm
Video Querying and Display
1. A video is retrieved from a test bank of sample footage
2. Every sixth frame is sent, or queried, to the traffic light detection algorithm. This algorithm accepts the frame as an image and attempts to find traffic lights
3. The traffic light detection algorithm returns the coordinates of the detected traffic lights
4. Colored boxes corresponding to the traffic lights' color are drawn around the traffic lights' perimeter
5. The coordinates are then passed as arguments to the traffic light tracking algorithm. This algorithm attempts to find the same light in the next sixth frame
6. The tracking algorithm returns coordinates that are then used to draw a new series of boxes
7. Repeat processes 2-7 until a NULL pointer is returned when the video is further queried
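To make the querying loop concrete, the sketch below shows one plausible way to wire these steps together in Python with OpenCV. It is an illustrative reconstruction, not the authors' code: the video filename is hypothetical, and detect_candidate_lights and track_light stand for the detection and tracking routines sketched later in this article.

```python
import cv2

def run_on_video(path):
    """Query every sixth frame, draw boxes, and hand the boxes to the tracker."""
    cap = cv2.VideoCapture(path)
    boxes = []                     # (color, x, y, w, h) tuples from the last query
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:                 # end of video (the "NULL pointer" in step 7)
            break
        if frame_idx % 6 == 0:
            if not boxes:          # no lights yet: run the full detector
                boxes = detect_candidate_lights(frame)
            else:                  # otherwise follow each light from the last query
                boxes = [b for b in (track_light(frame, b) for b in boxes) if b]
            for color, x, y, w, h in boxes:
                bgr = (0, 0, 255) if color == "red" else (0, 255, 0)
                cv2.rectangle(frame, (x, y), (x + w, y + h), bgr, 2)
        frame_idx += 1
    cap.release()

# run_on_video("sample_intersection.mp4")   # hypothetical test clip
```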
Traffic Light Detection
1. The RGB image is split into its three constituent color-space images: R, G, and B
2. R is subtracted from G, and the result is stored in a new image, C
3. G is subtracted from R, and the result is stored in a new image, M
4. R and G are then morphologically closed to remove pepper noise
5. R and G are then thresholded to emphasize regions above a specific brightness. Note: B has been excluded from detection because blue traffic lights do not exist; this algorithm only detects red and green lights
6. Connected-component analysis (CCA), or blob extraction, is performed on R and G to isolate contiguous regions that are similar in color. Note: Red and green regions are stored in classes called bR and bG, respectively
7. The regions in bR and bG are then filtered based on size. Each region must be larger than 100 pixels and smaller than 3000 pixels, or else it is excluded from the class
8. bR and bG are then filtered based on aspect ratio, i.e., height divided by width and vice versa. If the aspect ratio is less than 1.7 in both directions, the region is not excluded
Haiwa Wu / Graphic
9. bR and bG are then drawn on buffer images called canvasR and canvasG, respectively. These images depict the remaining blobs in a visual format
Traffic Light Detection (continued)
10. CCA is performed on the buffer images again, and the regions are stored in bR and bG
11. Circularity detection is performed on the regions. Circularity is determined by whether or not a region's perimeter divided by its length/width falls in a certain range. The perimeter/length-width ratio must be less than 5 and greater than 1, or else the region is excluded. Note: This is called circularity detection because it is the same method used to detect circles when the range is closer to 3.14. We use it to exclude objects that egregiously deviate from the general shape of a square
12. The coordinates and dimensions of the regions are passed into a global structure array, which can be operated on by other routines and algorithms in the program
13. Return
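The detection stages above map naturally onto OpenCV primitives. The sketch below is a minimal illustration under some assumptions, not the authors' implementation: the brightness threshold is a placeholder value, the two CCA passes are collapsed into one, and the circularity test is approximated with the bounding-box perimeter rather than the true blob perimeter.

```python
import cv2

def detect_candidate_lights(frame_bgr):
    """Color subtraction, closing, thresholding, and blob filtering (steps 1-11, condensed)."""
    b, g, r = cv2.split(frame_bgr)        # B is ignored: there are no blue traffic lights
    maps = {"red": cv2.subtract(r, g), "green": cv2.subtract(g, r)}
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    found = []
    for color, chan in maps.items():
        closed = cv2.morphologyEx(chan, cv2.MORPH_CLOSE, kernel)     # remove pepper noise
        _, mask = cv2.threshold(closed, 70, 255, cv2.THRESH_BINARY)  # placeholder threshold
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)      # blob extraction
        for i in range(1, n):                                        # label 0 is the background
            x, y, w, h, area = stats[i]
            if not (100 < area < 3000):                              # size filter
                continue
            if w / h >= 1.7 or h / w >= 1.7:                         # aspect-ratio filter
                continue
            perim_ratio = 2 * (w + h) / max(w, h)                    # crude circularity proxy
            if not (1 < perim_ratio < 5):
                continue
            found.append((color, int(x), int(y), int(w), int(h)))
    return found
```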
Traffic Light Tracking
1. The coordinates for each detected traffic light are retrieved from the structure array, N. Elements of the array are represented by N[x] for any whole number x
2. A rectangular search area SA is opened around every set of coordinates in the next frame. Note: This means that the next frame of the video is being used now
3. The area enclosed by SA is stored in an image called search
4. search undergoes processes 1-11 in Traffic Light Detection
Examples of detection
5. The largest remaining region after Traffic Light Detection is the traffic light being tracked. Note: This assumption can be made because the search area is relatively small; thus, it is highly unlikely that it will encompass a different traffic light
6. The coordinates for that light are rewritten into the global structure array
7. Return
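A corresponding tracking sketch, again illustrative rather than the authors' code (the 40-pixel search-window margin is an assumed value; the article does not state the window size):

```python
def track_light(frame_bgr, prev_box, margin=40):
    """Re-run detection inside a window around the previous box (tracking steps 1-7)."""
    color, x, y, w, h = prev_box
    x0, y0 = max(0, x - margin), max(0, y - margin)
    search = frame_bgr[y0:y + h + margin, x0:x + w + margin]          # search area SA
    hits = [c for c in detect_candidate_lights(search) if c[0] == color]
    if not hits:
        return None                                                   # light lost
    # Assume the largest surviving blob in the small window is the same light
    _, cx, cy, cw, ch = max(hits, key=lambda c: c[3] * c[4])
    return (color, x0 + cx, y0 + cy, cw, ch)                          # back to frame coordinates
```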
Testing Procedures
1. Mount the camera on a small tripod and place it on the dashboard
2. When the car is near an intersection, begin to take video
3. Repeat Steps 1 and 2 for as many videos as are needed
4. Transfer the video to a computer
5. Run the algorithm on the video
6. The algorithm performs all steps in Video Querying and Display, Traffic Light Detection, and Traffic Light Tracking
7. Record the total number of green and red traffic lights
8. Record the total number of true positive green and red traffic lights. Important Note: In this project, a traffic light counts as a true positive or a false positive if, within a 30-frame duration (five detection cycles), it is either correctly or incorrectly classified
9. Record the total number of false positives
10. Note deficiencies and their reasons, if any
11. Repeat for all videos
12. Calculate precision and recall
Results
With a test bank of 44 videos and 76 traffic lights, the algorithm achieved a precision rate of 93.10% and a recall rate of 93.10% for red lights, and a precision rate of 97.78% and recall rate of 93.62% for green lights. The algorithm was able to recognize 27 out of 29 red traffic lights with two false positives, and 44 out of 47 green traffic lights with one false positive.
Discussion/Analysis
The detection rate of green traffic lights was very reasonable, and this can be attributed to the brightness and well-defined shape of the lights. Their brightness allows them to easily satisfy the brightness threshold, and their well-defined shape allows them to easily satisfy the aspect-ratio and circularity thresholds. Green objects exhibiting the brightness and roundness of green traffic lights also occur very rarely in traffic scenes, which contributed to the high precision rate. The relatively high precision rate can also be attributed to the strictness of the algorithm: because it employed thresholding and accepted or rejected candidates based on rigid criteria, it rejected many false positives that, if incorrectly detected, would have lowered the precision rate. The algorithm also rejected several actual traffic lights because of this strictness.
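As a check on the arithmetic, these rates follow directly from the counts above, with precision = TP / (TP + FP) and recall = TP / (TP + FN):

\[ \text{precision}_{\text{red}} = \frac{27}{27+2} = 93.10\%, \qquad \text{recall}_{\text{red}} = \frac{27}{29} = 93.10\% \]

\[ \text{precision}_{\text{green}} = \frac{44}{44+1} = 97.78\%, \qquad \text{recall}_{\text{green}} = \frac{44}{47} = 93.62\% \]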
Conclusion
This algorithm was not able to achieve the hoped-for 94% recall rate for red lights and 95% recall rate for green lights. However, the algorithm did achieve the 92% precision rate for red lights and the 94% precision rate for green lights. The reasons for this were discussed in the Discussion/Analysis above.
It can thus be said that this project achieved some, but not all, of its engineering goals. Despite this shortcoming, the project successfully developed a daytime, real-time traffic light recognition system with reasonable precision and recall that, if improved, could potentially be used commercially to prevent accidents caused by driver unawareness of traffic lights.
References
1. United States. United States Department of Transportation. National Highway Traffic Safety Administration. National Highway Traffic Safety Administration (NHTSA), 2008. Web. 15 Jan. 2012. <http://www-nrd.nhtsa.dot.gov/Pubs/811170.PDF>.
Remote Sensing by Meera Kota edited by Ahmad Abbasi
Have you ever wondered how weathermen can so easily tell you the 10-day forecast? Or how CNN can show you the extent of an oil spill? This is all possible because of remote sensing. Remote sensing is defined as the use of an instrument, such as a radar device or camera, to scan the earth or another planet from space in order to collect data about some aspect of it. Such examination can occur with devices (e.g. cameras) based on the ground and/or sensors carried on ships, aircraft, satellites, or other spacecraft. All of this allows for very detailed reports of a given area, almost an "x-ray" view of it. The origin of remote sensing dates back to World War II in the 1940s1. Remote sensors were mounted on balloons, which was useful in mapping enemy territory8. In the beginning, remote sensing was used for military purposes, but it then transformed into a general way of collecting data, and in the late 1970s the technology was extended to civilian activities in the form of satellite imagery and digital data2. It was revolutionary because having satellites in space recording data was unheard of at that time. Tools were later introduced to extract and map satellite sensor data for monitoring agricultural lands, forest fires, snow, ice, and water resources. Surprisingly, even though remote sensing images are all over the news and used in scientific analysis, the practice of remote sensing is still quite young. With climate change and global warming, the need to record data became more imperative, and the scientific community recognized that economical acquisition of environmental data over vast spatial and temporal scales was only possible using satellite sensors.
Figure 2. Monitoring natural phenomena such as the aurora borealis (northern lights), as seen by NASA's VIIRS DNB sensor. "Aurorae over Planet Earth" by NASA, NOAA, GSFC, Suomi NPP, Earth Observatory / processing by Jesse Allen and Robert Simmon. In this example, nocturnal light detection helps in the study of natural phenomena like the northern lights; the extent, occurrence, and scale of these events have provided insights to astronomers, scientists, and physicists around the world.
Figure 1: City lights of Los Angeles and San Diego from NASA's VIIRS DNB sensor. Image courtesy: SeaSpace Corporation, USA. Knowledge of city and suburban lights from human habitation provides new insight into activity at night and helps in monitoring cities during power outages due to natural and man-made causes. City populations, power consumption, and urban growth can be estimated indirectly, especially in poorly mapped areas around the world.
The science behind climate change has prompted various governments to use satellite-based technology for climate monitoring, weather forecasting, and predicting, detecting, and mitigating the effects of natural disasters. In the future, this will allow analysis of the extent of the United States' environmental footprint and provide ways to ameliorate the issue. Various satellites use different sources of light and energy to collect data. While the majority of remote sensing methods use reflected solar radiation to map natural resources during the daytime, nocturnal mapping of low light is a relatively new technology, at least for civilian applications. Low-light sensing makes it possible to track nighttime usage of lights, emissions, and other variables. NASA launched its newest earth-observing satellite, the National Polar-orbiting Operational Environmental Satellite System Preparatory Project (NPP), on October 28, 2011. One of its instruments, the Visible Infrared Imaging Radiometer Suite (VIIRS), measures visible and infrared (IR) signals from the earth during day and night. In addition to the visible and IR bands, VIIRS carries a Day/Night Band (DNB), which has the unique capability of detecting low light at night7. The DNB can map city lights, forest fires, gas flares from oil drilling, lights from fishing boats, and natural phenomena such as the aurora australis (southern lights) and aurora borealis (northern lights)3. This matters because, in a natural disaster, having correct and accurate information at all times is very important. With its night view, VIIRS can build a more complete picture of storms and other weather conditions, such as fog, that are difficult to discern with infrared (thermal) sensors. Night is also when many types of clouds begin to form. The day-night band of VIIRS is sensitive enough to detect the nocturnal glow produced by Earth's atmosphere and the light from a single ship at sea.
The Science behind light detection
DNB data are obtained from a sensitive, very-wide-dynamic-range charge-coupled device (CCD) detector in the main VIIRS sensor4. The instrument has a high signal-to-noise ratio (SNR), which allows detection of very dim areas as well as bright targets at night. The detector elements of the DNB CCD are 15.4 micrometer x 24.2 micrometer photosites. Each of these photosites images an angle that corresponds to approximately 17 x 11 meters on the ground at nadir5. The term "pixel" is actually short for "picture element"9; these small dots are what make up a satellite image or any computer display. In the DNB sensor, the CCD design allows a number of sub-pixel elements to be aggregated into single pixels with near-rectangular sample spacing on the ground. Unlike a camera that captures a picture in one exposure, the day-night band produces an image by repeatedly scanning a scene and resolving it into millions of individual pixels. The day-night band then reviews the amount of light in each pixel: if the pixel is very bright, a low-gain mode prevents it from oversaturating; if the pixel is very dark, the signal is amplified.
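A toy model of that per-pixel gain selection, purely for illustration (the gain stages and thresholds below are arbitrary placeholders, not the actual VIIRS gain ratios):

```python
def dnb_pixel_signal(radiance, full_scale=1.0):
    """Pick a gain for one pixel: bright pixels use low gain, dark pixels are amplified."""
    if radiance > 0.1 * full_scale:      # bright scene: low gain avoids saturating the pixel
        gain = 1.0
    elif radiance > 1e-4 * full_scale:   # intermediate scene
        gain = 100.0
    else:                                # very dark scene: amplify the weak signal
        gain = 10000.0
    return min(radiance * gain, full_scale)
```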
Figure 4 (Right). Human inhabitation along the Nile River Valley delta as seen by the DNB nighttime data. Also seen are major cities of Cairo and Tel Aviv in Israel. Image Courtesy: SeaSpace Corporation, USA.
Conclusion:
Figure 3. Oil drilling activities in the State of North Dakota. Lights from drilling equipment and activity are visible at night. Image Courtesy: SeaSpace Corporation, USA. North Dakota has seen new economic growth at the Bakken geological formation due to the new oil field6.
Low-light detection capability has only recently been introduced to the scientific community. This exciting new capability allows scientists to use remote sensing to detect areas where nighttime light is present. Such areas may not be accessible by road or other means of transportation, so low-light detection at night, along with infrared technology, can identify forest and urban fires well before they start to spread and destroy communities. Low-light monitoring at night can also help identify legal and illegal oil drilling and fishing around the world4, and DNB data can be used for surveillance of areas with illegal activity near international borders without putting defense personnel in harm's way. The applications presented in this article are just a few examples; many more interesting applications can be developed for monitoring the earth's resources at night.
REFERENCES 1. Aerospace. 2013. A Brief History of Space Exploration. 2. Jensen, J.R., 2000. Remote Sensing of the Environment: An Earth Resource Perspective, Upper Saddle River, New Jersey: Prentice Hall, 656pp. 3. Seaman C. 2012. Aurora Australis from the Day-Night Band. Cooperative Institute for the Research in the Atmosphere (CIRA). Blog Archives 4. NASA 2012. http://www.nasa.gov/mission_pages/NPP/news/earth-at-night.html 5. Tsugawa R. et. al. 2009. NATIONAL POLAR-ORBITING OPERATIONAL ENVIRONMENTAL SATELLITE SYSTEM (NPOESS) VIIRS IMAGERY PRODUCTS ALGORITHM THEORETICAL BASIS DOCUMENT (ATBD), (REF Y2466) (D43767 Rev B) CDRL No. A032 Northrop Grumman Space & Mission Systems Corporation California 90278 6. National Geographic 2013. The New Oil Landscape The fracking frenzy in North Dakota has boosted the U.S. fuel supply—but at what cost? 7. “Joint Polar Satellite System (JPSS) VIIRS Imagery Products Algorithm Theoretical Basis Document (ATBD).” NASA. Joint Polar Satellite System (JPSS) Ground Project, 24 Sept. 2011. Web. 28 Apr. 2013. 8. Pidwirny, M. “Introduction to Geographic Information Systems.” Fundamentals of Physical Geography, 2nd Edition, 2006. Web. 28 Apr. 2013. 9. ”Pixel.” Daily Definition RSS., n.d. Web. 29 Apr. 2013.
String Theory VS. Loop Quantum Gravity by Tyler Johnson edited by Achinthya Soordelu
In the hit TV show The Big Bang Theory, co-star Sheldon Cooper participated in a heated debate with fellow physicist Leslie Winkle over whether String Theory or Loop Quantum Gravity better explains the laws of physics. Immediately after this episode aired, multitudinous tee shirts bore "I prefer my space stringy not loopy" to signify public agreement with String Theory. Due to frequent appearances in popular culture like The Big Bang Theory, String Theory (known as M-theory when it includes supergravity) has almost become an accepted norm, rather than one theory among several, whereas Loop Quantum Gravity has gone virtually unnoticed. The primary goal of both theories is to unify the five fundamental interactions (electromagnetism, the strong interaction, the weak interaction, the Higgs interaction, and gravity)1. Currently, gravity operates under a completely different mechanism than the other four interactions: while the other four are described using the Standard Model of particle physics, which asserts that the interactions are mediated through the exchange of gauge bosons, gravity is explained by Einstein's theory of general relativity. General relativity states that rather than being a fixed entity, space-time is a four-dimensional continuum that oscillates, bends, and obeys field equations2. Because relativity claims that gravitation is directly proportional to the mass, energy, or momentum of an object, incredibly minuscule particles like atoms shouldn't be able to distort space-time appreciably, yet singularities in black holes clearly break this rule. This discrepancy between the theories indicates the need for a quantum theory of gravity; thus, String Theory and Loop Quantum Gravity become necessary. One key difference between the two theories is their treatment of relativity. String Theory maintains the background space-time theorized in relativity but couples it with the quantum behavior predicted by the mathematics, i.e., the exchange of massless, closed superstrings, or gravitons3. String Theory's attempt to overlay quantum effects on general relativity is called quantum foam: space itself has an inherent vacuum energy which, at the Planck length, creates particle-antiparticle pairs without violating conservation laws, thanks to the Heisenberg Uncertainty Principle4. Loop Quantum Gravity accepts the field-like space-time of general relativity but stipulates that space-time is a quantum object and is thus granular at the Planck length. The granular structure is represented by the canonical quantization of the volume and area of a region of space, creating spin networks5. The actual physical region of space is then a quantum superposition of spin networks. Perhaps the most daunting aspect of String Theory is the fact that, in order to simplify the equations to a manageable difficulty, ten space-time dimensions, or degrees of freedom, are necessary. The explanation of these extra dimensions is quite cunning and simple: the dimensions are "compactified" into Calabi-Yau manifolds6 and are just too small to see. A garden hose viewed from a distance appears to be a simple line with only one direction of travel, but up close it is evident that, beyond the direction of the hose itself, there is an additional degree of freedom due to its cylindrical shape, which allows for circular movement (the cross section of a hose is a circle).
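For scale, the Planck length that both theories treat as the fundamental graininess of space is

\[ \ell_P = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m}, \]

roughly twenty orders of magnitude smaller than a proton.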
Both of these theories have yet to be tested experimentally, but their individual theoretical implications have the potential to advance physics in
many areas beyond quantum gravity. For example, the thermodynamic properties of extremal black holes can be modeled by an extremal configuration of branes wrapped around extra dimensions7. This correspondence between black holes and branes was by no means intentional; it was a tremendous accolade for String Theory nonetheless. On the other hand, Loop Quantum Gravity has made strides in its ability to calculate the entropy of black holes by building off of the Bekenstein-Hawking formula8. Loop Quantum Gravity applied to cosmology can also produce remarkable, paradigm-shifting approaches to today's most vexing questions, such as the Big Bang. In loop quantum cosmology, the Big Bang is altered to incorporate a contraction of the universe followed by rapid expansion, also known as the Big Bounce9. Thanks to the elimination of general-relativistic singularities, it is conceivable that the newly found repulsive quantum gravity is the cause of inflation in the early moments of the universe. It is what happens next in the process that sets the two theories apart with regard to cosmology: loop quantum cosmology predicts that the attractive force of gravity then pulls all matter back into a single point to create another Big Bang, while String Theory predicts the existence of dark energy10, implying that the universe, rather than retracting to a single point, will continue to expand with a repulsive force linearly proportional to the distance between objects. The years to come hold grueling tests for both theories. As the Large Hadron Collider increases its energy to look for supersymmetric spartners11, the fate of String Theory hangs in the balance. To Loop Quantum Gravity's advantage, it has no requisite phenomena, such as proton decay, supersymmetry, or magnetic monopoles, whose detection would determine its future. It is important to remember that both of these theories have yet to be experimentally tested, and both potentially yield tremendous implications in cosmology, quantum mechanics, and gravitation. Even if the two theories are eventually disproven, the knowledge accrued will benefit the advancement of physics and the acquisition of a working theory of quantum gravity.
REFERENCES
1. Wolfram Research. "Fundamental Forces." http://scienceworld.wolfram.com/physics/FundamentalForces.html (2007) 2. Florida State University Physics Department. "Introduction to General Relativity." http://www.physics.fsu.edu/courses/spring98/ast3033/Relativity/GeneralRelativity.htm (Retrieved 6/18/13) 3. Princeton Physics Department. "Graviton." http://www.princeton.edu/~achaney/tmve/wiki100k/docs/Graviton.html (Retrieved 6/17/13) 4. American Institute of Physics. "Quantum Mechanics: The Uncertainty Principle." http://www.aip.org/history/heisenberg/p08.htm (2013) 5. Rovelli C., Smolin L. Spin Networks and Quantum Gravity (Cornell Univ. Lib., Cornell) 6. Greene B. String Theory on Calabi-Yau Manifolds (Cornell Univ. Lib., Cornell) 7. Greene B. Brane Physics in M-Theory (Cornell Univ. Lib., Cornell) 8. Stack Exchange. "Why isn't the Bekenstein-Hawking entropy considered the quantum gravitational unification?" http://physics.stackexchange.com/questions/57799/why-isnt-the-bekenstein-hawking-entropy-considered-the-quantum-gravitational-un (2013) 9. Hesston College Physics Department. "The Big Bounce Theory." http://www2.hesston.edu/Physics/bigbounceraditya/WEBPAGE/BigBounce.htm (2008) 10. NASA Astrophysics. "Dark Energy, Dark Matter." http://science.nasa.gov/astrophysics/focus-areas/what-is-dark-energy/ (2012) 11. Murayama H. "Introduction to Supersymmetry." http://hitoshi.berkeley.edu/public_html/susy/susy.html (2011)
STAFF POSITIONS PRESIDENT Gha Young Lee EDITORS IN CHIEF Emily Sun, Joy Li ASSISTANT EDITORS IN CHIEF Carolyn Chu, Alice Qu VICE PRESIDENTS Abishek Chozhan (Torrey Pines), William Hang (Scripps Ranch) COORDINATORS Frank Lee, Kalyani Ramadurgam, Caroline Zhang, Stephanie Yuan
CONTRIBUTING AUTHORS Gha Young Lee (Torrey Pines), Chris Lu (Torrey Pines), Daniella Park (Torrey Pines), Emily Truong (Alhambra), Eric Chen (Torrey Pines), Eric Tang (Torrey Pines), Frank Su (Scripps Ranch), Hope Chen (Torrey Pines), Jessica Yu (Scripps Ranch), Jonathan Xia (Scripps Ranch), Maria Ginzburg (Torrey Pines), Meera Kota (Torrey Pines), Richard Xu (Scripps Ranch), Robbie Smith (Cathedral Catholic), Ruchi Pandya (Lynbrook), Tyler Johnson (Blue Valley Northwest), William Hang (Scripps Ranch) SCRIPT EDITOR Nilay Shah (Scripps Ranch)
SCIENTIST REVIEW BOARD COORDINATOR Abhishek Chakraborty (Torrey Pines)
SECTION EDITORS MinJean Cho, Stephanie Hu, Eric Chen, Hope Chen, Peter Manohar, Victoria Ouyang
SCIENTIST REVIEW BOARD Dr. Aaron Beeler, Dr. Akiva Cohen, Dr. Amiya Sinha-Hikim, Mr. Andrew Corman, Dr. Aneesh Manohar, Dr. Arye Nehorai, Dr. Benjamin Grinstein, Mr. Brooks Park, Dr. Bruno Tota, Mr. Craig Williams, Mr. Dave Ash, Mr. Dave Main, Mr. David Emmerson, Dr. Dhananjay Pal, Dr. Erika Holzbaur, Dr. Gang Chen, Dr. Gautam Narayan Sarkar, Dr. Greg J. Bashaw, Dr. Haim Weizman, Dr. Hari Khatuya, Dr. Indrani Sinha-Hikim, Ms. Janet Davis, Dr. Jelle Atema, Dr. Jim Kadonaga, Dr. Jim Saunders, Dr. Jody Jensen, Dr. John Allen, Dr. John Lindstrom, Professor Joseph O'Connor, Ms. Julia Van Cleave, Dr. Kathleen Boesze-Battaglia, Dr. Kathleen Matthews, Ms. Kathryn Freeman, Ms. Katie Stapko, Dr. Kelly Jordan-Sciutto, Dr. Kendra Bence, Dr. Larry Sneddon, Ms. Lisa Ann Byrnes, Dr. Maple Fung, Mr. Mark Brubaker, Dr. Michael Plewe, Dr. Michael Sailor, Mr. Michael Santos, Dr. Reiner Fischer-Colbrie, Dr. Ricardo Borges, Dr. Rudolph Kirchmair, Dr. Sagartirtha Sarkar, Ms. Sally Nguyen, Ms. Samantha Greenstein, Dr. Saswati Hazra, Dr. Simpson Joseph, Dr. Sunder Mudaliar, Dr. Sushil Mahata, Ms. Tania Kim, Dr. Tanya Das, Dr. Tapas Chakravarty, Dr. Tapas Nag, Dr. Thomas Tullius, Ms. Tita Martin, Dr. Todd Lamitina, Dr. Toshinori Hoshi, Ms. Tracy McCabe, Dr. Trilochan Sahoo, Ms. Trish Hovey, Professor Xin Chen, Dr. Yifeng Xiong
MEDIA MANAGERS MinJean Cho, Aisiri Murulidhar GRAPHICS MANAGER Haiwa Wu (Torrey Pines) ASSISTANT GRAPHICS MANAGER Lauren Oh (Torrey Pines) CONTRIBUTING GRAPHIC DESIGNERS Aisiri Muralidhar, Amy Chen, Angela Wu, Cindy Yang, Crystal Li, Grace Chen, Jennifer Fineman, Katherine Luo, Kristine Paik, Lucy An, Mahima Avanti, Michelle Oberman, Tenaya Kothari DESIGN MANAGER Grace Chen (Torrey Pines) ASSISTANT DESIGN MANAGER Kelsey Chen (Torrey Pines) DESIGN EDITORS Grace Chen, Stephanie Hu, Alexander Hong, Alice Jin, Connie Chen, Daniela Sherwin, Maggie Fang, Patricia Ouyang STAFF ADVISOR Mr. Brinn Belyea