QUANTA STUDENT MAGAZINE
2015 ISSUE
The Theory of Everything
exploring string theory and hidden dimensions in the universe
Pranav Mistry
the inventions of the most overlooked inventor of our time
Food Allergies and the Immune System
understanding allergic reactions and the search for the cure
ARTICLE INDEX
Qualia: Differences in Human Perception BY BETTINA KING-SMITH, 1
Water's Footprint: Mars Curiosity Rover BY SARAH BURNETT, 2-3
Just Do It: Science of Procrastination BY HANNAH LANGE, 4-5
Is This Baby Racist? Innate Morality BY CHARLIE LEMASTERS, 6
Emotional Education: Mirror Neurons BY AVERY ROGERS, 7
Puffing Up: Food Allergies and the Immune System BY AVERY GULINO, 8-9
Man-made Diamonds, for real? Synthetic Diamonds BY AARON LIU, 10-11
Fangirling: Explaining Erotomania BY GABI BURKHOLZ, 12
Cow Farts BY LINETTE PAN, 13
Making Stem Cells BY BIANCA YANG, 14
Using Stem Cells BY VENUS SUN, 15
The Most Overlooked Inventor of Our Time BY RACHEL HONG, 16-19
Venera: The Soviet Explorers of Venus BY ZAFAR RUSTAMKULOV, 20-21
Airplanes Become Flexible? BY TANUJA ADIGOPULA, 21
Exploring Mutualism BY JOSHUA KAHN, 22
Faking Life BY SAMUEL FU, 23
Go With the Flow: Morphometric Analysis BY ZOE SIDDALL, 24-25
Nanoparticles and the Cure for HIV BY TYLER CHEN, 25
Explaining Neural Networks BY PETER GRIGGS, 26
What are Semiconductors? BY DANIEL KIM, 26-27
The Foundation of Reality BY NATHAN CHENG, 28-29
Fountain of Youth: Blood Transfusions and Mortality BY STEPHANIE DAVIS, 30
How to Live Forever BY BAILEY BJORNSTAD, 31
Explaining String Theory BY EVERS PUND, 32-33
Learning to Walk (Again) BY CHRISTIAN YUN, 34
2001: A Space Odyssey BY JAKE NEWMAN, BRANDON BERSON, AND JONAH GERKE, 35
The Neuroscience of Chess BY DANTE NARDO, 36-37
Freud: On Life and Death BY ELAINE CHEN, 38
Is Coffee Harmful or Helpful? BY JEROD SUN, 38-39
The Death of the Honeybee BY SKYLAR GERING, 40
Bubbles of Life: Protocells and Evolution BY ANDREW LI, 41
Solar Panels: Reimagined BY DAVID DAI, 42
featured
CONTENTS QUANTA STUDENT MAGAZINE // 2015
14 DEALING WITH STEM CELLS Bianca Yang and Venus Sun explain how stem cells are made and how they can be used.
4 Procrastination Looking at the neuroscience and recent scientific studies of procrastination. BY HANNAH LANGE
8 COVER Puffing Up A play-by-play of what happens during an allergic reaction and recent discoveries that might lead to a food allergy cure. BY AVERY GULINO
16 An Introduction to Pranav Mistry A look at three of the little-known Samsung executive's twenty-one personal inventions. BY RACHEL HONG
32 Theory of Everything The search for a unifying theory explaining our universe has unveiled string theory. But what exactly is string theory? BY EVERS PUND
38 Coffee: Harmful or Helpful? A health and productivity analysis of America's favorite addiction. BY JEROD SUN
From the Editor (next page→) Editor-In-Chief Angela Li discusses tradition and Quanta Student Magazine’s recent expansion.
Columns:
21 Airplanes become Flexible?
25 Nanoparticles and HIV
26 Explaining Neural Networks
38 Freud, on Life and Death
For a complete index of writers, editors and advisors, please see the inside of the back cover, on page 43.
FROM THE EDITOR
A letter from Editor-In-Chief Angela Li
I first approached Quanta Student Magazine's former Editor-In-Chief, Tina Huang '14, about writing for Quanta in 2011. I was in eighth grade, she was in tenth grade, and I was shaking. Frankly, I expected rejection: Quanta was my school's high school science magazine, and in addition to the fact that I was still in middle school, I had no experience with scientific writing. However, to my great surprise, Tina replied simply: "Do you know what you want to write about?" I muttered that I didn't. "That's perfect," Tina smiled, and handed me a list of draft deadlines. "I'm sure you'll find something that excites you."
EMAIL: angelali@quanta-magazine.com
WEBSITE: www.quanta-magazine.com
About the Cover Out of many great photos, this colored scanning electron micrograph of a T-Lymphocyte (T-Cell), taken by David Scharf, was eventually chosen to represent our 2015 Issue. To learn more about T-Cells and their role in the immune system, read Puffing Up: Food Allergies and the Immune System on page 8.
The idea that science should be exciting and accessible has always been Quanta Student Magazine's purpose. Quanta provides the unique opportunity for student writers to research and share their thoughts on science-related topics that excite them, an opportunity from which I have learned so much over the past four years. So as this year's Editor-In-Chief and Interschool Leadership Chair, making Quanta Student Magazine as inclusive to student scientists as possible has been my primary objective. As a result, I am delighted to report that you are reading the largest issue in Quanta's six-year history. The 2015 issue is the result of the hard work of 32 student writers and 14 student staff members from 3 different schools. It has been, by far, the most time-intensive and logistically challenging workload the Quanta team has ever encountered, but we're proud of our results. We hope that you enjoy reading the 2015 issue as much as we enjoyed making it, and we're sure that you, too, will find something that excites you.
ANGELA LI Editor-In-Chief and Interschool Leadership Chair Quanta Student Magazine
QUALIA
differences in human perception BY BETTINA KING-SMITH
Imagine a ripe banana. Describe its color, without using figurative language, to a friend. Picture describing the color of the banana to a blind person. Everyone would agree that the banana was yellow, because society acknowledges that bananas are yellow. But what does yellow look like? If person A looks at a ripe banana, do they see the same yellow that person B sees? Do the colors people identify as yellow, or any other color, actually appear the same to all humans? Scientists know for certain that some animal species see the world in a different spectrum than humans. Bees, for example, can see in ultraviolet, which allows them to find flowers to pollinate. Many snakes can detect infrared heat rays. But what about humans? After playing around for long enough with old-school illusions and high-tech brain scans, it soon becomes evident that everybody's brains are just a bit different, and so is everyone's perception of the world. Qualia represents this idea of everyone perceiving things differently and being unable to communicate exactly how they see the world through human language, just as people can identify a banana as yellow but be unable to say what yellow looks like. Philosophers and scientists have struggled significantly to create a specific definition of the word 'qualia' (imagine defining the word freedom: what are the right words to define such a concept?). According to philosopher C.I. Lewis, qualia are the "recognizable qualitative characters of the given." In other words, qualia is like that little kid bouncing up and down in their seat asking a parent (or annoyed older sibling) "What's it like to do this? What's it like to be that?" A key idea behind qualia is that it is intrinsic yet impossible to explain. Picture that ripe banana from earlier. All capable persons would agree that they are looking at a yellow banana. Perhaps professionals in the fields of optics, optometry, and neurology could explain the idea in scientific terms: the color yellow is an interpretation of certain wavelengths of light inside the brain. A well-educated individual could explain how light hits the eye, travels down the optic nerve, and how the different wavelengths of light are interpreted by the brain as different colors. A particularly eloquent fellow could make a comparison between the yellow of the banana and the yellow of the petals of a bright sunflower. But how does one express that inner yellow-ness of a banana? Imagine meeting an alien who could not feel pain. How could humans explain to the alien the unpleasant feeling that accompanies skin cracking and peeling because of a sunburn, or the raw scratchiness in the throat after hacking coughs because of a bad cold? The aliens could understand the science behind pain, but how could humanity make them understand the raw emotions behind the pain? Everyone is just a little bit different and sees the world in their own distinct way. Qualia comes out of the unexplainable, unique view of every individual person. Remember the saying that everyone is special and unique in their own way? Well, it's true.
Qualia Case Study: #thedress The idea of qualia is that we all perceive things differently but are unable to communicate these differences through human language. Although this could be partly attributed to differences in ability or sensory cell makeup, the vast majority of the difference is attributed to different brains' interpretations of sensory signals and concepts. #thedress, pictured above, caused an internet uproar over what the "true colors" of the pictured dress are. A Buzzfeed poll found that roughly 70% of respondents see white and gold, while roughly 30% see blue and black, yet the wavelengths of light bouncing off this picture and entering our eyes are ostensibly the same. #thedress is an exaggerated example of individual differences in perceived color, a rare instance when differences in perception are so great that they can be communicated through human language.
#selfie (From bottom, clockwise) Curiosity takes a self-portrait, the first sampling hole on Mt. Sharp, and layers at the base of Mt. Sharp
WATER’S FOOTPRINT
THE MARS CURIOSITY ROVER SEARCHES FOR REMNANTS OF WATER AND PAST LIFE BY SARAH BURNETT
FROM FLY-BYS TO ROVERS,
there have been over forty missions to Mars since 1960, of which only one third have been successful. The first successful mission to Mars was Mariner 4, which launched in 1964 and completed a fly-by in 1965 that sent back 21 grainy pictures of the craters on Mars's surface. Thankfully, today's technology has improved significantly since then, and has allowed us to land directly on Mars to analyze soil and reveal the history of the red planet. We have launched a total of thirteen landers for Mars, six of which lost contact or malfunctioned and seven of which successfully landed and collected data. Curiosity is the most recent rover to land on Mars, having launched on November 26, 2011, and landed on August 6, 2012. Equipped with 17 cameras and a variety of tools ranging from a dust-removing wire brush to a vaporizing infrared laser, Curiosity is a far cry from the Mariner 4. Its design and operation require not only precision and detail, but also patience. Because of the distance between Earth and Mars, radio signals take an average of 14 minutes and 6 seconds to travel between Earth and the rover. Curiosity has also taken a lot of wear and tear since its launch in 2011—the most problematic being damage and even holes in the rover's six aluminum wheels. Additionally, in 2013, Curiosity was forced to revert to its backup computer after a memory glitch corrupted the main computer. Despite these obstacles, the rover was able to travel across unknown areas autonomously for the first time in August 2013—a hugely beneficial achievement
considering the communication delay. The most recent excitement surrounding Curiosity arose from its arrival at its long-term destination, Mt. Sharp, on September 11, 2014, after travelling over 6.9 kilometers from the landing site. Mt. Sharp is significant because it lies in Gale Crater, a place believed to be filled with sedimentary layers deposited by a series of massive floods. Analysis of these layers will give us insight far greater than what we have gained from previous samples, and will allow us to understand the conditions of Mars for the past two billion years. The rover is equipped with CheMin (Chemistry and Mineralogy), an instrument that uses X-rays to determine the composition of powdered rock samples. To obtain these samples, Curiosity drilled 2.6 inches down into Mt. Sharp on September 24, 2014, but the results have yet to be fully analyzed. The atoms in the sample will absorb some of the X-rays, then re-emit them at specific energy levels associated with the particular elements present. The re-emission data will reveal the composition of the rock, the temperature of the rock during its formation, and the acidity of the water that altered it—all of which will give considerable clues to the conditions on ancient Mars and whether or not it supported microbial life. Curiosity is just one more contributor to our increasing research on Mars. NASA has already announced a lander mission for 2016 called InSight to study Mars's deep interior. In addition, NASA plans to launch another Mars rover similar to Curiosity in 2020. As we gain more and more knowledge of the Red Planet, plans for human missions to Mars are just on the horizon.
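The communication delay quoted above can be sanity-checked from the Earth–Mars distance and the speed of light. The short Python sketch below is illustrative only; the distances are approximate orbital values assumed for the calculation, not figures taken from the article.

```python
# One-way radio delay between Earth and Mars for a few assumed distances.
# The distances are approximate orbital values (assumptions, not article figures).

SPEED_OF_LIGHT_KM_S = 299_792.458  # km per second

distances_km = {
    "closest approach (~55 million km)": 55e6,
    "intermediate (~250 million km)": 250e6,
    "near maximum (~400 million km)": 400e6,
}

for label, distance in distances_km.items():
    delay_minutes = distance / SPEED_OF_LIGHT_KM_S / 60  # one-way travel time
    print(f"{label}: {delay_minutes:.1f} minutes one way")
```

At an intermediate separation of roughly 250 million kilometers, the one-way delay works out to about 14 minutes, consistent with the average figure quoted above.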
THE PREFRONTAL CORTEX is used in thinking, decision-making, and focus.
THE LIMBIC SYSTEM deals with emotions and memory.
Just Do It
a glimpse into the science of procrastination BY HANNAH LANGE
"Just do it." That's what Nike says, that's what your parents say, that's what your teachers say, but it is so much easier said than done, especially for those who procrastinate. Procrastination, simply put, is what happens when two parts of your brain, one that wants to get tasks done and one that does not, fight, and the latter wins. The limbic system, one of the oldest parts of the brain evolutionarily speaking, is automatic and looks for immediate mood improvement, while the biologically newer and weaker prefrontal cortex takes information in and processes it to make decisions.
It is this part of the brain that "really separates humans from animals, who are just controlled by stimulus," says Timothy A. Pychyl, Ph.D. However, the prefrontal cortex is not automatic, and one must make a conscious decision in order to complete a task. Procrastination occurs when one gives in to what feels good at the moment, with the limbic system winning out over the prefrontal cortex. When this happens, the brain is rewarded with a small bit of dopamine, a chemical that makes you feel good by modifying the neurons in your brain, making you more likely to repeat the action. For example, if one was
to begin studying for a final assessment a few weeks in advance, it would be much less stressful and more beneficial than cramming the night before. But in the immediate moment, choosing to put studying off is more appealing to the brain. Although fully aware of the cramming or rushing they will have to deal with later, chronic procrastinators enjoy the short-term benefits and ignore the long-term detriments. Another way this can be explained is with two examples.
In one, a person is offered $100 today, or $110 in a month. Many would take the $100. On the other hand, if a person is offered $100 in a year, or $110 in one year and one month, many people will think, "If I can wait the year, I can wait the extra month." In both of these examples, it is an additional $10 for one month, but because humans are influenced by something called "present bias" or "hyperbolic discounting," they find that the more immediate the reward, the higher its perceived value, while the further away it is, the smaller its value. That is why procrastinating by doing something seemingly more enticing in the moment is perceived to be more rewarding than completing a task whose rewards come much later. Before research was done on the subject, it was believed that procrastination could be beneficial; the level of stress in procrastinators was lower than in their peers, probably because they were putting off work to do activities that they felt rewarded them more. However, in a study done by psychologists Dianne Tice and Roy Baumeister, it was found that the negatives of procrastinating greatly outweigh the momentary positives. For chronic procrastinators, work was completed later, and the quality suffered, as did their well-being; these students reported higher cumulative levels of stress, more illnesses, and had lower grades. And procrastination is not only detrimental to students: H&R Block reported that, by rushing income tax returns before the deadline, people cost themselves hundreds of dollars. Furthermore, it has been found that "undesired delay" closely correlates with insufficient retirement savings and missed medical checkups. In a study conducted by Tice and Joseph Ferrari, it was concluded that students who are chronic procrastinators try to undermine their own best efforts, as they would prefer people to think they lack effort rather than ability. They brought in two groups of students, telling everyone that they would be trying to solve a puzzle. They told one group that the puzzle was a test of their cognitive abilities, while they told the other group that the puzzle was meaningless and only meant for fun. Then both groups were given time before the puzzle to either mess around or prepare for the puzzle. The procrastinators who believed the puzzle was for fun behaved no differently than the non-procrastinators and made efforts to prepare, while the procrastinators who believed they were
being tested delayed practice. In The Journal of Social Behavior and Personality, Pychyl published his finding that chronic procrastinators feel guilty when procrastinating, and are aware of the harm they are doing, but are unable to divert themselves back to their tasks. Laura Rabin of Brooklyn College wanted to compare procrastination with the brain's self-regulating abilities: impulsivity, self-monitoring, planning and orderliness, activity shifting, task initiation and monitoring, emotional control, short-term memory, and overall organization. These all fall under the umbrella of executive functions, and it was found that chronic procrastinators' behaviors closely correlate with all nine listed. Rabin believes there is much more research to be done in this area, but says procrastination may be an "expression of subtle executive dysfunction." So how would one go about fixing a "neurological expression of executive dysfunction"? It is not easy. As Ferrari says, "To tell the chronic procrastinator to just do it would be like saying to the clinically depressed person, cheer up." Some of these researchers suggest counseling to change a person's mindset—by making the completion of tasks seem like something to achieve rather than a burden, one is much more likely to get it done. Finding worthwhile personal meaning in the task itself also makes the person more likely to want to invest time. Piers Steel, Ph.D., who
says "we have a limited, depletable supply of willpower and resources," suggests doing the "worst" assignments first. It allows for the strongest chance of success, and leaves only the other easier and more bearable tasks. By doing otherwise, other chores suffer as one will be dreading the large task, making one "not completely present with anything else," as Eva Wisnik, a time-management trainer, says. A way to make yourself more accountable for getting things done is to bring a friend or other person into the situation. It is easy to blow off going to the gym when you plan on going alone, but when planning on meeting a friend, it would be rude to not show up. Rather than risking the embarrassment, people who go with a friend are much more likely to show up and do the workout. Similarly, one may be more willing to do other tasks, such as fill out paperwork or write an essay, when held accountable by someone else. Or, if competitiveness strikes your fancy, set a timer to 10 minutes and "race" it by doing focused work for the full amount of time. Get rid of distractions; don't turn to your phone or reorganize your binder. Just do the work and see how far you can get in that short amount of time. You may become engrossed in the work and continue for much longer. Sometimes, you may find it easier to "just start it." As a consequence, you may find that you will "just do it."
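The present-bias money examples earlier in this article can be made concrete with the hyperbolic discounting formula that behavioral researchers commonly use, V = A / (1 + kD), where A is the reward, D is the delay, and k is a discount rate. The sketch below is purely illustrative; the value k = 0.2 per month is an assumption chosen for demonstration, not a figure from the studies cited.

```python
# Illustrative hyperbolic discounting: V = A / (1 + k * D).
# The discount rate k is an assumed value for demonstration only.

def discounted_value(amount, delay_months, k=0.2):
    """Perceived present value of `amount` received after `delay_months` months."""
    return amount / (1 + k * delay_months)

# Choice 1: $100 today vs. $110 in one month
print(discounted_value(100, 0))   # 100.0 -> taking the $100 now feels better
print(discounted_value(110, 1))   # ~91.7

# Choice 2: $100 in 12 months vs. $110 in 13 months
print(discounted_value(100, 12))  # ~29.4
print(discounted_value(110, 13))  # ~30.6 -> now waiting the extra month feels worth it
```

The same $10 difference flips the preferred choice depending on how far away both rewards are, which is exactly the reversal the article describes.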
IS THIS BABY RACIST? A startling look at our innate morality BY CHARLIE LEMASTERS
Morality has been a question at every stage of life except for infancy. For a long time, the accepted theory on babies was that they were little blobs of flesh that could not think for themselves and did not have a sense of right and wrong. But in 2007, Karen Wynn disproved this with her famous Yale Baby Lab. The results were shocking. To test whether five-month-old babies actually had a sense of right and wrong, Wynn designed a puppet show such that the babies could relate moral activities to puppets. In these shows, one puppet would be trying to open a box and a puppet in a yellow shirt would help the puppet open the box. In another show, the puppet would try to open the box and a puppet in a blue shirt would slam the box shut. After the baby saw both shows, the researcher would show the baby both puppets and ask which one the baby would like. More than three-fourths of the babies tested reached for the helpful puppet in the yellow shirt. But this wasn't enough; the researchers wanted to see just how old a baby must be before he or she perceives morality. It turns out that children as young as three months made the same choices as the five-month-olds. Wynn was able to show that babies do have an innate sense of right and wrong. But what about things like justice? To
test this, they showed the babies the same scenario as above but with a scene added beforehand. In this scene, the puppet that is trying to open the box steals a ball from another puppet. When shown this version of the show first, the babies more often than not chose the puppet that slammed the box shut, which shows that babies do have some sense of justice. But when a strong bias is introduced, such as puppets pretending to prefer either Cheerios™ or graham crackers, the babies would pick the one that liked the same thing they liked. Additionally, they wanted to see the puppet that liked the snack
they did not choose punished, as when it tried to open the box. This was the case 87% of the time. They concluded that the babies liked to see those who were different from them treated badly, and those who were similar to them rewarded. This suggests that things such as racism and discrimination are hardwired into our brains and prevalent in the decisions that we make as young children. As we grow older, society and our parents mold us into kind, considerate, generous, and helpful people. And most important of all, we can now glimpse into the minds of babies, which hopefully sets us on the track to learning more.
MAKING MOTHER PROUD: A 5-month-old boy chooses a puppet after having seen a short puppet show. Over 75% of babies tested chose the same puppet.
EMOTIONAL EDUCATION
Why engaging mirror neurons might help kids really learn
BY AVERY ROGERS
Empathy is a fundamental human skill that, for decades, could not be biologically explained. The trait was examined only in psychology, and was considered a complex subconscious process without a specific neurological basis. However, in recent years, neurologists have discovered the brain components that create feelings of empathy: mirror neurons. Mirror neurons were first discovered in the 1990s by a group of Italian scientists studying macaque monkeys. The monkeys were wired to a machine that processed brain activity, and were prompted to perform an activity that would spark neural activity. Then, the monkeys were shown a live demonstration of another monkey performing the same action. The watching monkeys had the same neural activity when watching the action as they did when performing the action, as if their brains could not distinguish their own actions from the other monkey's actions. In humans, mirror neurons are found in both the superior parietal lobe (which processes language and sensory input) and the inferior frontal lobe (associated with reasoning and critical thinking skills). Scientists believe that mirror neurons are responsible for all feelings of empathy associated with touch, pain sensations, actions, and even emotions. Mirror neurons fire automatically and involuntarily, and cannot be controlled. For example, when viewing a video of someone being hit with a baseball bat, one cringes reflexively. This is not a conscious decision, but rather an unconscious response to the video by the brain's mirror neuron system. As it turns out, humans are inherently empathetic creatures. It is not a learned skill, but an innate one. Additionally, humans are significantly more likely to remember emotionally charged events than emotionally detached ones, for seemingly obvious reasons. When humans are emotionally engaged, neurons are firing not only in the frontal and occipital cortices, which deal with critical thinking and eyesight, but also in the temporal and parietal lobes, which contain mirror neurons. This activation of multiple brain centers creates a more powerful neurological response, and the brain recognizes this compound firing as a more important event to remember. Thus, mirror neurons not only play a part in empathy, but also in memory. With this in mind, the possible applications of the psychological effects of mirror neurons are vast. In the realm of education, the use of mirror neuron research could significantly benefit teaching methods, specifically in the disciplines of history and current events. The typical teaching method for history is based on textbook readings and factual knowledge. While the frontal cortex can absorb a great amount of information for short-term retention, textbook knowledge is unlikely to be remembered in the long term without any sort of emotion attached to it. Even if reading about a depressing event, the brain is unlikely to respond emotionally without visual and spatial cues. It is also improbable that the student will realize and appreciate the emotional magnitude of a historical event from a textbook reading, because the mirror neurons are not engaged. The question is: how can mirror neurons be incorporated into learning? Mirror neurons rely on visual, auditory, and human cues to fire. Simply put, mirror neurons only react when a person views something happening to someone else. In terms of teaching, mirror neuron research points to videos and movies as the most effective way to teach history and current events. Students are significantly more likely to remember events they can see and empathetically relate to, and are more capable of comprehending the emotion of these events. While our education system traditionally relies on the book as a teaching tool far superior to movies, science would suggest otherwise. In order to teach more effectively, teachers should shift their focus from readings to videos as a way to engage mirror neurons in the learning environment.
THE GROUP YAWN: Scientists believe that mirror neurons are responsible for empathy associated with touch, pain, actions, and emotions.
PUFFING UP: Food Allergies and the Immune System
FIVE YEARS AGO, I WENT
to my best friend's birthday party at a fondue restaurant. My friends and I shared a booth together without any of the parents, and after we had finished giggling about being by ourselves, our waiter greeted us and tossed some children's menus in our direction. I double-checked with him, "So you are aware that I have a severe allergy to nuts?" "Of course, we have our kitchen informed, and it is written on your order," he said with a grimace that might have passed for a smile. "I am sure that you will have a pleasant evening," he added. Reassured, I dug in when the salads came. Yet, despite my precautions and the waiter's assurances, the kitchen had put ground walnuts in the dressing. The salad tasted amazing, so I hardly noticed when my cheeks started to stiffen and hurt. But then I licked the inside of my cheeks, and they felt like mountain ranges. I felt like something was growing inside of me— a monster was trying to make its way up and out of my throat. I felt my tracheal and esophageal walls
closing in. My best friend's mother rushed me to the hospital; I was in anaphylactic shock. Much earlier, perhaps even before I was born, my antigen-presenting cells instructed my naïve T-cells that walnut proteins were intruders, or allergens. Only, instead of a normal bacterial response, my antigen-presenting cells chose to treat walnut proteins like parasites and mount an allergic response. Next, my naïve T-cells became type 2 helper T-cells, which generated numerous clones of themselves. These clones were sent to recruit B-cells to produce antibodies, specifically IgE antibodies, which attached themselves to the surface of mast cells. The antibodies were then chemically capable of grabbing hold of the walnut proteins so they could be removed by the "clean up cells" of my immune system, called phagocytes. While eating my salad, the walnut proteins bound to the IgE-covered mast cells, causing the mast cells to release massive quantities of histamine. The histamine caused swelling, itching, redness and, on that night, a severe, life-threatening allergic reaction: anaphylaxis. After receiving epinephrine, oxygen, and Benadryl in the emergency room, my regulatory T-cells eventually signaled my immune system to stop the reaction. Most of the type 2 helper T-cells died in the process, but some continued to survive, programmed to attack the protein again if it was encountered. The new walnut memory in my surviving T-cells then enabled the immune system to react faster and more efficiently, thus worsening my walnut allergy. As a vertebrate, I have an immune system that contains not only a basic innate immune system, but also an adaptive system that starts learning from the earliest stages of my prenatal development. Approximately 15 million Americans have food allergies, and someone like me is treated in an emergency room for a food allergy reaction once every three minutes. In fact, food reactions in pre-schoolers have increased fivefold from 2001 to 2011. The most accepted theory on the increase in food allergies is the
hygiene hypothesis, which proposes that improved hygiene and a corresponding decrease in bacterial infections predisposes us to an increase in the number of things our body interprets as allergens that produce an allergic immune response (IgE and associated histamine release). But why, for some people, does the decrease in bacterial infections lead to an increase in the number of things our body sees as allergens? Why has the immune system changed over time? From firsthand experience, I can attest that the immune response has changed and can change. When I was a toddler, for instance, I was allergic to milk and eggs. But by my eighth birthday, I was no longer allergic to either, and subsequently I enjoyed my first non-Jell-O birthday cake. Currently, the only "cure" for food allergies is oral immunotherapy, where the allergen is introduced in progressively larger quantities until desensitization is achieved. In this way, the immune system is taught to respond differently to the allergen. The desensitization to the allergen is thought to be due to a change in the type of antibody produced by the immune system and its associated response— from allergic to normal. In some patients, a tolerance to the allergen is permanently achieved. However, the initial risk with oral immunotherapy is that it is possible to undergo an anaphylactic allergic response during the treatment. Epinephrine is always at hand during this therapy, but I personally would nonetheless still be anxious. For instance, the death of a 13-year-old girl, Natalie Giorgi, last year in California due to anaphylaxis from ingesting a peanut butter-infused rice crispy treat, despite ingestion of Benadryl and 3 doses of epinephrine, serves as a sobering and frightening warning of the severity and reality of food allergies. Recent events, however, give hope to finding a cure for food allergies. In 2007, an elderly woman received a blood transfusion from a young girl with severe peanut allergies. A few weeks later, the woman was successfully treated for anaphylaxis after eating a peanut butter sandwich. If a person who previously was not allergic to peanuts can be made to have an allergic response from a blood transfusion, perhaps the reverse is also possible. Could antibodies be taken from a person who does not have an allergy and then used in conjunction with oral immunotherapy to decrease or remove the risk of anaphylaxis? Only further research will tell. Although our understanding of the immune system has
greatly advanced, we must continue the research into food allergy prevention and food allergy cures. Food allergies are a significant medical and psychological burden for an increasing number of children in our society, and we can hope that with time, the research that we are doing will yield results and perhaps some relief to this burden.
BY AVERY GULINO
Man-made Diamonds, for real?
The Art of the Synthetic Diamond. BY AARON LIU
A diamond is truly a great gift for your loved one; it resists breakage and sparkles, and it is indeed highly valued. The reason for its high value is not only its beauty, but also its rarity. It takes almost an eternity for a single diamond to form in the Earth—a few billion years, to be more precise. A diamond starts out as something quite unexpected, a piece of coal or an animal's corpse. According to the Smithsonian Institute, anything that contains carbon can become a diamond, but it takes a pressure equivalent to 725,000 pounds per square inch (51,000 kilograms per square centimeter). First, the object containing carbon, like a piece of wood, becomes buried and is dragged about a hundred miles (161 km) underground by tectonic plates. The deeper the object goes, the higher the pressure and the greater the heat it will experience. With enough heat, the carbon melts and impurities become vaporized. Under extreme pressure, the carbon atoms slowly bond into a tetrahedral structure—a three-dimensional shape that is made up of four triangular faces. This structure is very different from that of graphite (the metallic, black allotrope or form of carbon used in pencils), which is made up of hexagonal sheets of carbon atoms (each one atom thick) that are able to slip apart easily (making it the ideal material for pencils). The diamond's tetrahedral three-dimensional structure can withstand high pressure coming from all directions. Also, in a diamond crystal, each carbon atom is covalently bonded to four other atoms. This means a lot of energy is required to separate these atoms. According to Scientific American, diamonds' unique structure and covalent bonds make them the hardest naturally occurring substance on Earth, for they cannot be scratched by anything other than another diamond. The diamond's toughness and hardness not only make it an ever-lasting gem, but also make it an ideal material for cutting and polishing metals, rock countertops, and other gemstones. As mentioned previously, diamonds are very expensive, and cost around one
thousand to three thousand dollars per carat, or per two hundred milligrams. However, there is a way to make them affordable.
Producing a Synthetic Diamond
From 1954 to 1958, an American researcher at General Electric named Tracy Hall, along with a dozen other researchers, spent time researching and finding ways to create a synthetic diamond. Hall and his colleagues designed a press made of tungsten carbide (a material almost as hard as diamond itself). After thirty-six minutes under a pressure of 100,000 atmospheres and at a temperature of 1,600 degrees Celsius, tiny diamond crystals encrusted the inside of the capsule of the press. It was a success, but the diamonds Hall created were only suitable for industrial purposes, such as grinding wheels and sandpaper. The diamonds still were not of gem quality. In the 1970s and 1980s, newer presses did make some progress. They were able to create larger diamonds, instead of tiny crystals that coated the inside of the capsule. Only a few of the diamonds produced were of gem quality, and the rest were only suitable for industrial purposes. Still, there was hope… After the collapse of the USSR, Russian scientists designed the "BARS" press, which has become the most effective process to date, producing relatively large, gem-quality diamonds. It uses large pieces of high-temperature alloy called "anvils," which press together around a growth capsule containing a tiny diamond seed crystal surrounded by graphite, along with a metal solution. Under an HPHT (high pressure and high temperature) environment, the graphite melts, and the carbon atoms of the graphite start to bond with the seed crystal, forming a larger diamond. A process that took billions of years now only takes about four days. Usually, undesirable, pale yellow diamonds come out of the capsule, because of nitrogen atoms that sometimes replace carbon atoms in the crystal lattice. Where do the nitrogen atoms come from? Air is about seventy-eight percent nitrogen gas and a pure vacuum is not easy to form, so some nitrogen can enter the formation process. This means a colorless diamond is hard to form, but it is possible. It also means that, with the right amount of nitrogen, a bright, more desirable yellow diamond can be formed. Moreover, with different impurities other colors can be created. For example, boron can give a diamond a blue color. The process of diamond making has drastically improved, and the synthetic diamonds of today are so superb that it is hard to tell the difference between a natural diamond and a synthetic one. This causes a problem for the natural diamond business. The well-known distributor of natural diamonds, De Beers, has introduced a technology that is able to identify the difference. So now, the synthetic diamond has entered the market as a more affordable alternative to the natural diamond. A synthetic diamond is chemically, optically and structurally similar to a mined natural diamond. There is also a more interesting side to the diamond manufacturing industry…
A ball and stick model of the atomic structure of graphite, composed of carbon. The flat sheets have more room to slide along each other, with very little holding them in place. These sheets make a softer, weaker material.
A ball and stick model of the atomic structure of a diamond, composed of carbon. The triangular, tetrahedral structures interlock to create a strong, durable material. This shape occurs after coal undergoes enormous heat and pressure.
The BARS press is currently the best way we have found to produce synthetic diamonds. The BARS press consists of an inner synthesis capsule, inner and outer anvils, a rubber diaphragm, and a disc-type barrel.
This very real-looking diamond is actually synthetic. It was one of the diamonds formed in a tight seal, so no nitrogen atoms were able to sneak in. It was also created in about four days rather than a few billion years.
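The pressures quoted in this article appear in several different units. As a quick sanity check, the sketch below converts the natural-formation figure (725,000 pounds per square inch) and the HPHT press figure (100,000 atmospheres) into gigapascals; the conversion factors are standard physical constants, not values from the article.

```python
# Convert the article's pressure figures to a common unit (gigapascals).
# Conversion factors are standard constants; the psi and atm values come from the article.

PSI_TO_PA = 6894.757      # pascals per pound per square inch
ATM_TO_PA = 101_325.0     # pascals per standard atmosphere
KGF_CM2_TO_PA = 98_066.5  # pascals per kilogram-force per square centimeter

natural_formation_pa = 725_000 * PSI_TO_PA  # deep-Earth formation pressure
hpht_press_pa = 100_000 * ATM_TO_PA         # Hall's press / HPHT conditions

print(f"Natural formation: {natural_formation_pa / 1e9:.1f} GPa "
      f"(~{natural_formation_pa / KGF_CM2_TO_PA:,.0f} kg/cm^2)")
print(f"HPHT press: {hpht_press_pa / 1e9:.1f} GPa")
```

Running this confirms that 725,000 psi is about 5 GPa, matching the roughly 51,000 kilograms per square centimeter given earlier, and that the 100,000-atmosphere press corresponds to roughly 10 GPa.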
Ashes to Diamonds
There is a very new way of preserving a deceased loved one. Instead of a jar of boring-looking ash, strangely enough, a diamond can be made from a person's remains. As a child, a man named Rusty Van Den Bisen had thought about what would happen to him after he died. He also wanted to stay as a part of his family forever. One day, he was watching a TV special about synthetic diamonds, which mentioned that anything of high carbon content could be made into a diamond. Rusty had an idea. Since human ash also contains a high content of carbon, it too could be made into a diamond. As he grew up, he asked a few inventors to create a method to produce a gem-quality diamond out of ashes. After years of research, along with trial and error, Rusty and his researchers were able to extract carbon from ashes to produce diamonds. This process begins with the cremation ashes, which are actually mostly made of calcium compounds (because bones are made of calcium phosphate). The ashes are heated to a high temperature to allow them to vaporize, which is done to extract the carbon. After that, there still might be some salts left to be separated from the carbon; the salts can be dissolved, with a mixture of cobalt and iron serving as a catalyst to further purify it. The carbon must be at least 99% pure for it to become a diamond. The carbon is then placed into a chamber where it is gradually heated and pressurized, which transforms the amorphous carbon into graphite. Lastly, the same process as previously mentioned is used to turn the graphite into diamond. Now, this has become a real business. Rusty's company, along with many other companies, provides ashes-to-diamonds services.
Diamonds in the Future
Scientists have come a long way in achieving the goal of making ordinary materials into diamond. Synthetic diamonds might serve not only as affordable gems, abrasives, or parts of a family, but also as weapons or spacecraft windows. There might be an age in the future when diamonds are used as a construction material or as ordinary windows in the average household.
Fangirling
explaining erotomania BY GABI BURKHOLZ
Erotomania, also called de Clérambault's Syndrome, is the unshaken belief that someone, usually a stranger or celebrity, is secretly in love with you. The syndrome often occurs during psychosis in conditions such as schizophrenia or bipolar mania. The syndrome has a long history and is often confused with obsessive love, unrequited love, and hypersexuality. In psychiatry, Jacques Ferrand first wrote about the condition in a disquisition in 1623, where he described it as "erotic paranoia" and "erotic self-referent delusions." Later, the definition changed to the practice of excessive physical love, similar to nymphomania, and in the nineteenth century the illness was considered to be unrequited love as a form of mental disease. Now, erotomania is presented as the delusional belief of "being loved by someone else." An erotomaniac usually believes that another person is secretly in love with him or her, and could even have multiple secret admirers. The person also experiences other delusions concurrent with erotomania, such as the delusion that the person in love with them communicates with them through subtle cues such as body posture, or through the media if that person is famous. Famous examples of these delusions are John Hinckley Jr.'s attempted assassination of Ronald Reagan, which was reported to have been driven by an erotomanic fixation on Jodie Foster, and Margaret Mary Ray's erotomanic obsession with David Letterman and Story Musgrave. These delusions are typically found as the main symptom of a delusional disorder or in the context of schizophrenia and can be treated with antipsychotics. Erotomania is a disorder in its own right. However, it is often confused with obsessive love, unrequited love, and hypersexuality. Obsessive love is a theoretical state where a person feels an intensely strong urge to love another person that they are attracted to and is unable to accept failure or rejection. But whereas obsessive love is just about one person that the sufferer knows they like, those who are diagnosed with erotomania believe that a multitude of people can like them. In other words, someone who is diagnosed with obsessive love creates the feeling of attraction towards someone, but erotomaniacs create the feeling of someone being attracted to them. Erotomania is also misinterpreted as unrequited love. Unrequited love is love that is not reciprocated, where the beloved may not be aware of the "admirer's" affections for them. This differs from erotomania because with unrequited love, the "admirer" fully knows that they love the other person, whereas with erotomania the sufferer is delusional. Finally, erotomania is confused with hypersexuality. Hypersexuality is commonly defined as a frequent or increased amount of sexual urges or activity. This disorder differs from erotomania because an erotomaniac maintains a
psychological admiration for the person they're attracted to, while someone who is hypersexual carries their attraction out to the physical. Also, with hypersexuality there is the possibility of consent and understanding between both individuals, which cannot occur with erotomaniacs because the other person does not know of the affections of their "admirer". If society can find a way to correct or help ease the minds of those who have this syndrome, then the world can be one step closer to clarity and freedom from delusions.
NOT A FANGIRL: This man at a One Direction concert, contrary to popular belief, is probably not suffering from a neurological disorder, much less erotomania.
COW FARTS
and why they're dangerous BY LINETTE PAN
Americans have heard about "reducing gas emissions" day in and day out. Since preindustrial times, atmospheric methane has increased by roughly 150%, and much of that has nothing to do with our cars. According to the United Nations' Food and Agriculture Organization, the methane gas produced by cows has surpassed that of the transportation industry. Methane is also a far more potent greenhouse gas than carbon dioxide, trapping roughly 20 times more heat. But how exactly do cows produce all of this methane? Cows chew, digest, and re-digest in a process called rumination, and many of the 400 different microbes that help break down their food produce methane gas, which is released through burps and farts. The large size of cows, combined with the practice on large cattle farms of force-feeding cows unnatural foods such as cheap corn to maximize profit, multiplies methane emissions. Cows account for the production of more methane gas than all other ruminants, such as deer, antelope, and giraffes, combined. The cause of this problem lies in the stomach of ruminants, animals with four stomach chambers. Due to the nature of their stomachs, cows have digestion problems that cause increased amounts of methane gas to be released. The environmental effects of the methane gas from cows are enormous. Livestock contribute 28% of all emissions, and these greenhouse gases trap heat in the atmosphere. Climate scientists estimate that the average cow releases 30-50 gallons of methane gas per day. With so many cow ranches filled with future steaks, the amount of gas adds up fast, contributing to global warming and climate change. As inhabitants of this Earth, it is our duty to leave this planet as a livable space for our posterity to live happy and healthy lives. Unless we want to suffer the consequences of extreme air pollution, we must reconsider our meat consumption to reduce the climate impacts caused by livestock. Of course the solution isn't to just eliminate cows or stop eating meat, but simply to consider your impact when you buy that pack of hamburger patties. Next time you complain it's hot, or skip past a study on global warming, remember the cow farts, because the Earth needs you.
BY THE NUMBERS
90,000,000 ESTIMATED COWS IN THE UNITED STATES
25.5 BILLION TOTAL POUNDS OF BEEF CONSUMED BY AMERICANS IN 2014
52.2 ESTIMATED POUNDS OF BEEF EACH AMERICAN WILL CONSUME IN 2015
990 LITERS OF WATER IT TAKES TO PRODUCE ONE LITER OF MILK
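To get a feel for the scale of these numbers, the rough estimate below multiplies the herd size and the per-cow emission range quoted above. The gallons-to-liters factor and the approximate density of methane are standard values, not figures from the article, and the 20-times factor is the heat-trapping comparison used earlier, so treat the result as an order-of-magnitude sketch rather than an official inventory number.

```python
# Back-of-the-envelope methane estimate built from the figures quoted in the article.
# The density of methane (~0.66 g/L at room conditions) and liters-per-gallon
# are approximate standard constants, not article figures.

COWS_IN_US = 90_000_000          # estimated cows in the United States (article figure)
GALLONS_PER_COW_PER_DAY = 40     # midpoint of the 30-50 gallon range cited
LITERS_PER_GALLON = 3.785
METHANE_DENSITY_G_PER_L = 0.66   # approximate, at room temperature and pressure
CO2_EQUIVALENCE = 20             # heat-trapping factor used in the article

kg_per_cow_per_year = (GALLONS_PER_COW_PER_DAY * LITERS_PER_GALLON
                       * METHANE_DENSITY_G_PER_L / 1000) * 365
total_methane_tonnes = COWS_IN_US * kg_per_cow_per_year / 1000
co2_equivalent_tonnes = total_methane_tonnes * CO2_EQUIVALENCE

print(f"~{kg_per_cow_per_year:.0f} kg of methane per cow per year")
print(f"~{total_methane_tonnes / 1e6:.1f} million tonnes of methane per year")
print(f"~{co2_equivalent_tonnes / 1e6:.0f} million tonnes of CO2-equivalent per year")
```

Even with these rough inputs, the herd adds up to a few million tonnes of methane a year, on the order of tens of millions of tonnes of CO2-equivalent.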
MAKING STEM CELLS
BY BIANCA YANG
There is much hope among the scientific community that stem cells will be able to solve long-standing medical issues like cancer, diabetes, and heart disease. But stem cells are hard to come by, with the majority of these cells coming from embryos and adult bone marrow. The original issue with embryonic stem cell harvesting was that, for a time, generating embryonic stem cells required destroying the embryo, which destroys a potential human life. Polarized debates over the status of the embryo have lasted for over two decades, with the two sides being "the duty to prevent or alleviate suffering" and "the duty to respect the value of human life". Both sides offer strong arguments, but neither side has a strong conclusion to the moral dilemma. The problem with adult bone marrow stem cells is that there is an extremely small number of cells available for harvest and that the harvested cells' ability to divide is limited. Thus, their usefulness for scientific research is insufficient for prolonged studies. These two technological limitations have created great research interest in developing more efficient methods of generating a stem cell line with unlimited dividing potential (much like currently used cancer-cell-derived cell lines) for use in regenerative medicine.
In 2006, however, Shinya Yamanaka, a Japanese stem cell researcher, solved these problems by identifying a cocktail of four genes that could induce adult somatic (non-reproductive) cells to become stem cells. Dr. Yamanaka's lab created a line of induced pluripotent (able to become many different cell types) mouse skin cells and proved their pluripotency by regenerating all cell types. In 2012, Dr. Yamanaka, together with John Gurdon, was awarded the Nobel Prize in Physiology or Medicine for discovering the method for transforming mature cells into stem cells. The technology has since been named induced pluripotent stem cells, or iPS cells. How are these cells made? Normal cells are "reprogrammed" by transfection, in which four different genes (often delivered by a virus) integrate themselves into the DNA and are thus expressed in subsequent generations of cells. Cells which have successfully integrated the desired genes have also integrated an antibiotic-resistance gene, so they can be selected for with antibiotics. Eventually, a small subset of this initial set of transfected cells successfully forms a colony and can be classified as iPS cells and used for regenerative medicine research. The great benefits of this technology are that it bypasses the need for embryos and bone marrow, and that it also furthers the concept of patient-specific medical treatment. Since iPS cells can be created from adult cells, each person can have his own stem cell line. Sound wonderful? Sure, but the research has not yet proven this technology to be safe for therapeutic use. Still, these cells help us to better understand the patient-specific nature of illnesses and are being used in personalized drug efforts, so in the next decade, doctors may be prescribing sets of patient-specific pluripotent stem cell lines to fight disease.
THE REPROGRAMMERS: Shinya Yamanaka and John Gurdon receive crystal bowls from the Gladstone Institute in honor of their Nobel Prize.
...AND USING STEM CELLS
BY VENUS SUN
The modern miracle of organ transplantation has, without a doubt, saved many lives. However, limited by the number of donors, organ transplants often only come to the few "lucky" ones. In order to meet the growing demand, an increasingly large number of studies on artificial organs have been conducted. One proposed way of regenerating organs is utilizing stem cells. Stem cells have the ability to reproduce or transform into an astonishing variety of different cells. In the past, stem cells have been used for regenerative cell therapy; essentially, they protect the body from damage. Using stem cells to manufacture artificial organs has many advantages over using non-living materials, and the most important is the avoidance of rejection by the immune system. The California Institute of Regenerative Medicine has suggested four different methods of generating organs with stem cells:
BLASTOCYST COMPLEMENTATION
This method involves injecting embryonic stem cells or pluripotent stem cells, which possess the capacity to become any type of cell in the body, from one species into a blastocyst, a structure formed in the early development of mammals. Caltech researchers have succeeded in regenerating pancreases and kidneys using this system created from two species of mice. This marked a totally new field of organ regeneration. However, as it involves inter-species practices, it might not be used, out of consideration for ethics and safety.
DECELLULARIZATION AND REVASCULARIZATION
The first step of this method is decellularization, which is accomplished by taking the cells out of a whole complex organ while leaving its complete scaffold. The second step is revascularization, which involves putting various stem cells inside and letting the organ regrow itself. Researchers have succeeded in applying this method to the heart, liver and lung. However, more research still needs to be done on what kind of stem cells should be implanted before this method can enter clinical use.
USING A SINGLE STEM CELL
The process starts with the purification of stem cells, continues with the addition of stromal cells (which form connective tissue), and ends with the transplantation of the grown tissue into the recipient. Two groups of Caltech researchers have demonstrated the potential for generating new organs from an original organ’s stem cells. The first group showed that mouse mammary stem cells could grow into a whole mammary gland, while the second showed that mouse prostate stem cells could generate a whole prostate structure. The breakthroughs from both groups illustrate the feasibility of bringing such a method into clinical practice.
TISSUE ENGINEERING
This method is similar to decellularization and revascularization, except that here the scaffold is made entirely of polymers or other biocompatible materials. Creating a fully functioning organ this way is obviously challenging, because it is hard to build vascular structures out of non-living materials, though scientists have made progress on that front. Even so, the approach can still be useful and important in the field of plastic surgery.
THE MOST
OVERLOOKED INVENTOR OF OUR TIME BY RACHEL HONG
AS THE MASTERMIND BEHIND COUNTLESS creative inventions and advances in technology, thirty-three-year-old Pranav Mistry has more than earned the title of innovator. After growing up in northern India and graduating from Gujarat University, Mistry earned his Master’s degree from MIT in Computer Science, and later worked as a researcher at MIT and at Microsoft. Currently, Mistry serves as the Vice President of Research at Samsung. For many years, he has worked on combining the virtual world of technology with the physical realm. So far, he has created countless inventions; in fact, every American has most likely come into contact with one of his projects. Along with his multiple patents and dozens
of awards, he received the Invention Award in 2009 and was also named to the Creativity 50 as one of the “50 Most Creative People of the Year.” Chris Anderson calls Mistry “one of the ten best inventors in the world right now,” yet the name “Pranav Mistry” is not universally known. Today, we are amidst a technological revolution. Especially with the development of the Apple Watch and Google Glass, electronic devices are now merging with the physical realm. New technology, created by the brightest inventors such as Mistry, has seeped into our everyday lives. Although not yet implemented, Mistry’s devices in their prototype form show what is possible when gestures and technology are combined. Here are some of Quanta’s favorites →
virtual reality 33-year-old Pranav Mistry demonstrates his invention SixthSense, which uses a camera worn around the neck and color markers on the fingers to project interactive images and videos
SixthSense “technology and human interaction should be blended together in a creative way.”
In 2009, Mistry developed his most innovative project, SixthSense, a small portable computer with many applications. The device consists of a portable projector, a small computer chip, and a camera, all attached to a lanyard worn around the neck. SixthSense uses computer vision to detect movement of the hand, tracking the colored markers worn on the fingertips. The projector shines a display onto any nearby surface, while the camera takes pictures every few milliseconds to detect what the user places in front of the device. The small processor acts like an invisible computer: it can access the Internet and other software, which can then be projected onto the screen. The capabilities of this tool are limitless. For example, by shining the projector onto a nearby wall, the user can digitally “paint”
the wall, as well as play games. When the camera detects a book, SixthSense can look the book up online and project book reviews or summaries. Holding a map up to the device causes the projector to display the weather temperatures for that day. The camera can also detect people, the user can make phone calls on the palm of a hand, and the projector can show videos when given a picture. With this tool, the magical concept of moving pictures on paper is finally possible. So far, SixthSense has not been put into mass production. Instead, Mistry has released the technology “as an open-source code so that anyone can improvise and make it their own.” Anyone in possession of a projector and camera can make their own SixthSense device.
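For the curious, the marker-tracking step can be sketched in a few lines of Python with OpenCV. This is not Mistry’s released SixthSense code; the webcam, the “red marker” color range, and the on-screen circle are stand-ins chosen purely for illustration.

```python
# A rough sketch of the color-marker tracking idea behind SixthSense, using OpenCV.
# The HSV range below is an arbitrary guess for a red fingertip marker and would need tuning.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # a webcam stands in for the pendant camera
lower, upper = np.array([0, 120, 120]), np.array([10, 255, 255])   # assumed "red marker" range

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)      # keep only marker-colored pixels
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if contours:
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            # (x, y) is the fingertip position; a real system would map its motion
            # to gestures and drive the projector from here.
            cv2.circle(frame, (x, y), 8, (0, 255, 0), 2)
    cv2.imshow("marker", frame)
    if cv2.waitKey(1) == 27:                   # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```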
Mouseless In July of 2010, Pranav Mistry presented his design for a device called Mouseless. As the name suggests, the invention tracks the user’s hand to move the cursor on the computer, with no physical mouse at all. In his presentation, he criticized the fact that “the computer mouse has remained largely unchanged over the last decades,” and proposed instead the idea of using gestures to interact with the computer. A twenty-dollar project consisting of an infrared laser beam, an infrared camera, and a line cap, Mouseless creates an invisible layer of light that reveals where the user’s hand is located. A change in the position of the fingertips alerts the camera to an intended mouse click. Mouseless is meant to be used like a mouse: the user cups a hand over the “invisible mouse” just as they would over a physical one. Although Mistry is not the first to apply computer vision algorithms to everyday use, this innovative way of using laser beams to detect movement or touch lets us rely on simple gestures instead of a dedicated hardware device.
ThirdEye One of Mistry’s lesser-known research projects is a device named ThirdEye. While not much information has been released about this product, the device sounds almost impossible to make. ThirdEye is a form of computerized glasses, similar to Google Glass, that enables multiple viewers to see different things on the same display screen at the same time. Applied to everyday life, there could be, for instance, “a public sign board where a Japanese tourist sees all the instructions in Japanese and an American in English.” Mistry expands, “We don’t need to have the split screen in games now. Each player can see his/her personal view of the game on the TV screen. Two people watching TV can watch their favorite channels on a single screen.” As with ThirdEye, Mistry’s creative uses of technology make his projects appear like magic. So what do all of Mistry’s inventions have in common? What does the talented engineer wish to accomplish? As Mistry himself puts it, “We don’t need new technology to change the future. We just need to change our thinking about [technology].” This man does not invent for much-deserved fame. He creates and innovates because he hopes for a better world. Mistry argues that technology and human interaction should be blended together in a creative way. Many of Mistry’s inventions were created using well-understood, decades-old technologies. Yet they are considered revolutionary because he uses his devices to detect human movement. The idea of controlling things from afar using only everyday gestures is astounding and offers a glimpse into the complex and beautiful world of technology.
VENERA
The Soviet Explorers of Venus BY ZAFAR RUSTAMKULOV
The 1975 Venera probes transmitted data for less than an hour– the unforgiving environment of Venus quickly melted the probes’ electronic components.
Venera 1 was one of the smallest Venus-bound probes, carrying a small bus with fewer than a dozen instruments.
Venera 4 was a pioneer in the exploration of Venus.
In the same year that construction began on the Berlin Wall and Yuri Gagarin first took mankind into space, the Soviet Union inconspicuously launched Sputnik 7, the first space probe targeted at the planet Venus. The intrepid spacecraft rode atop a column of fire and blazed a path towards the wholly unexplored planet. The mission, however, was a failure. The upper stage of Sputnik 7’s carrier rocket, meant to hurl the probe across 100 million miles of space, failed to ignite due to a faulty pump. That same year, Venera 1, the Soviets’ second attempt at reaching Venus, made it out of Earth’s orbit and even flew past the planet. But alas, after 7 days of its 100-day voyage, all communications with the probe were lost. The first Soviet missions to Venus–the Sputnik, Zond, Kosmos, and Venera probes–were plagued by engine malfunctions, communication failures, and occasional catastrophic explosions; for the Soviets, the path to Venus proved to be a perilous one. In fact, it was not until their sixteenth attempt that a probe reached the planet and transmitted radio signals back to Earth. Venera 4, launched in 1967, was the first successful Soviet Venus explorer. The probe took the first direct measurements of Venus’ upper atmosphere and magnetic field. As it hurtled past the planet, the main probe deployed a small capsule carrying a suite of scientific instruments to determine the composition and density of the planet’s uninvestigated atmosphere. The capsule contained 2 thermometers, a barometer, a hygrometer (used to detect water vapor), a radio altimeter,
A Soviet stamp celebrates Venera 8’s successful landing on Venus.
Venera 14 had a suite of instruments used to further understand Venus’s soil and atmospheric compositions.
and 11 gas analyzers. Entering the thick Venusian clouds, the acorn-shaped capsule deployed a parachute and drifted to an altitude of 25 kilometers above the surface before its batteries died. The capsule took several measurements of Venus’ atmospheric pressure, temperature, and composition on its way down. Its gas analyzers determined that the main constituent of the planet’s atmosphere was carbon dioxide, a powerful greenhouse gas. After the capsule’s instruments detected temperatures of up to 500 ºF and pressures 22 times that of Earth’s atmosphere, the pioneering spacecraft succumbed to the environment. The data gathered by Venera 4 deepened our understanding of the mysterious atmosphere of Venus. It was the first spacecraft to enter the skies of another world and transmit data back to Earth. Venera 4 marked a new era of Soviet space exploration and opened the door to what became the most successful series of planetary explorations of the Space Race. In the wake of Venera 4’s triumph, several more Venera missions were launched. Sister probes Venera 5 and 6 embarked on their journey to Venus two years after Venera 4. Equipped with more durable descent probes and improved gas analysis instruments, both missions were highly successful and transmitted data for a longer period. Venera 7, an improved version of the previous probes, was the first to endure the searing, acidic atmosphere and transmit from the surface of Venus. Venera 8 was the first to directly analyze the planet’s surface minerals.
Venus’s carbon dioxide atmosphere is featureless in visible light
Venera 7’s descent capsule was reinforced to survive the crushing pressures and searing temperatures of Venus’ atmosphere.
After gaining expertise in launching atmospheric missions to Venus, the Soviets decided to send two spacecraft with the purpose of landing and taking measurements from the surface of the planet. In 1975, Venera 9 and Venera 10 became the first spacecraft to image the surface of Venus. The panoramic images taken by the probes showed that Venus’s surface geology was highly volcanic; the cracked, jagged stones lying around the probes resembled crushed basaltic rock. Veneras 9 and 10 carried anemometers that discovered a light, 1-meter-per-second breeze rolling over the basaltic plains of Venus. Veneras 11 and 12 carried even more sophisticated instruments to the surface of the volcanic planet, including spectrometers, soil analysis experiments, and gas chromatographs. Venera 11 discovered the presence of lightning, and both Veneras detected carbon monoxide at low altitudes, indicating a loss of water over the history of Venus’s atmosphere. The next iterations, Veneras 13 and 14, were launched in 1981, two decades after the failure of the first Venera mission. The probes returned the first color images of Venus’s surface and further analyzed the soil composition, determining it to be similar to Earth’s oceanic basalts. The pioneering Soviet space probes to Venus marked the beginning of a new era in planetary exploration. The most technologically advanced space probes of their time, the Venera probes demystified the once mysterious planet Venus and bettered humankind’s knowledge of the terrestrial worlds of the solar system.
AIRPLANES BECOME FLEXIBLE? BY TANUJA ADIGOPULA
I’m sure you’ve had that experience where you’re in an airplane and you wonder: how on earth is this staying up in the sky? Well, here is some news for you: airplanes have changed tremendously over the past 50 years, in everything from the type of fuel they burn to their structure. And here’s some exciting news: airplanes are going flexible. Because of their streamlined shape, similar to the wings of a bird, airplanes are able to fly for miles and miles through the air. In the past, however, there have been malfunctions, and the rigidity of that streamlined shape does not guarantee protection in an emergency. In one recent incident, Asiana Airlines Flight 214 crashed at San Francisco International Airport; the investigation concluded that the crew could not configure the wings and control surfaces for a safe landing and, essentially, lost control of the plane. As of May 2014, however, Sridhar Kota, a professor of engineering at the University of Michigan and founder and president of the company FlexSys, had published an article about possible technologies for redesigning airplane structure: in other words, he wanted to take “complex, multipart machines” and transform them into “flexible, one-piece devices.” Although the project did not initially seem likely to succeed, Kota and his colleagues worked diligently on perfecting the design of a flexible wing. Their hard work paid off when the flexible control surface had its first successful test flights. A colleague of Kota’s described the campaign: “The first test flight was on Nov 6. Since then, NASA is conducting flight tests every week and they will continue for another 3 months. The flight tests already conducted to-date included speeds at 0.75 Mach at 20,000 and 40,000 ft altitude and various banking maneuvers up to 1.7G (continuous load) and high dynamic pressures subjecting the FlexFoil control surface to various load conditions. All the tests so far were completely successful – no issues what so ever. Most importantly, yesterday’s flight test subjected the wing to one of the most severe dynamic pressures (384 psf) and no issues at all!” The idea behind the flexible flap is to create a wing that can be quickly modified without much hassle. The result? A wing that is simpler to control, more reliable, and more efficient. In 2006, FlexSys attached a prototype of the flexible wing to the Scaled Composites White Knight aircraft and performed many test runs over the Mojave Desert. The tests were successful, and a new finding made the design even more attractive: the technology can cut fuel consumption by as much as 12%. Although the Wright Brothers couldn’t perfect the technology of a “flexible aircraft,” Kota is making new strides in the field of aviation technology. Perhaps one day we’ll even see these fuel-efficient, reliable aircraft in our airports.
EXPLORING MUTUALISM BY JOSHUA KAHN ONE OF THE MOST PECULIAR BIOLOGICAL PHENOMENA is symbiosis, a close relationship between two organisms. There are three types of symbiosis: parasitism, commensalism, and mutualism. Parasitism is a relationship that benefits one organism and harms the other, colloquially phrased: good for me, bad for you. Commensalism occurs when two organisms interact in a way that benefits one and neither helps nor harms the other: good for me, neutral for you. Mutualism is a relationship that is beneficial for both organisms: good for me, good for you. This article will highlight a few particularly intriguing examples of mutualism.
ACACIA TREES AND ANTS The ants serve as protectors of the tree, defending it from harmful species. These harmful species include other insects, such as grasshoppers, that would attempt to consume the leaves of the Acacia tree, and other plants that would try to outcompete the tree for resources such as sunlight. In return for their role as protectors, the Acacia tree provides the ants with everything they need to survive: the tree secretes nectar from its nectaries that serves as the ants’ primary food source, and the Acacia’s hollow spikes serve as the ants’ shelter.
HERMIT CRABS AND ANEMONES Despite its shell, a hermit crab is in constant danger from predators such as small octopuses. As a defense, the crab will decorate its shell with stinging anemones that deter larger predators. The relationship benefits the anemones as well; since the crab is a messy eater, the anemones receive nourishment from the food it does not ingest. A hermit crab will choose an anemone when it is very young, and often the two will remain together for the duration of their lives.
GOBY FISH AND PISTOL SHRIMP Pistol shrimp are nearly blind. To compensate, they have developed a mutualistic relationship with goby fish. A pistol shrimp builds a large burrow within the reef that both the shrimp and the goby then inhabit. The shrimp constantly improves its home; while it does, the goby waits at the mouth of the burrow for passing prey, as illustrated. The shrimp uses a long antenna to keep contact with the goby while it works. The goby remains at the mouth of the burrow until it senses a predator, at which point it retreats. When the shrimp loses contact with the fish, it recognizes that it is in danger and retreats into the burrow with the fish. The nearly blind shrimp receives an early-warning system in exchange for giving the goby a place to live and hunt from.
the OpenWorm project A 3D wiring diagram of a small roundworm’s entire nervous system. Dots represent neurons; long lines represent axons and dendrites.
Faking Life BY SAMUEL FU Have you imagined what life would be like if organisms could be simulated in a computer? It might just be possible, with the OpenWorm project. The project is an informal collaboration of biologists and computer scientists from the United States, Great Britain, Russia, and other countries, and began by raising over $121,000 on the crowd-funding website Kickstarter. It is attempting to create the world’s most detailed virtual and simulated life form: an accurate, open-source, digital clone of the organism Caenorhabditis elegans. Caenorhabditis elegans is a 1-mm-long nematode (a phylum of worms with slender, unsegmented, cylindrical bodies, including roundworms, threadworms, and eelworms) that lives in temperate soil and feeds on decomposing bacteria. The project’s researchers chose Caenorhabditis elegans for many reasons. It is simple, transparent, easy to feed, and easy to breed. It is also one of the best understood organisms in biology, and easier than any other to simulate.
The hermaphrodites of this organism, which possess both male and female reproductive organs, have exactly 959 cells, making them even easier to simulate, as there are fewer cells to program (the males of Caenorhabditis elegans have more than one thousand cells). Equally important is the fact that a map of the organism’s neurons and their connections already exists. This saves research time for the programmers and biologists and allows them to dive into creating the simulation program. The project could have important effects on our own lives. Through the creation of a virtual organism, it would be possible to simulate situations that are impossible to set up in real life, simulations required to understand biological processes. The study and simulation of neurons could help researchers better understand many diseases. For instance, scientists still do not truly understand the cause of Alzheimer’s; they have only determined that it is a mix of genetic, environmental, and lifestyle factors. To fully comprehend the origin of the disease and to find a possible cure, scientists need to understand
how the human brain functions. Brain imaging shows how the brain of an Alzheimer’s patient changes, but not why. Brain simulation, which is already supported on the hardware side by the IBM TrueNorth processor (a chip built with brain-simulating capabilities), could bring a solution to problems that have vexed us for a long time. By using a simulated brain that functions like a human brain, researchers could run tests that were not possible before, leading to more knowledge of the neurodegenerative disease. The same could be done for many other diseases, bringing about new treatments, and perhaps even cures. The OpenWorm project’s creators believe that only by recreating a living organism can humans truly understand how that organism works. This would truly be the first artificial life form ever created, and not only would it prove that life simulation is possible, it would also eventually assist immensely in understanding human diseases.
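To give a flavor of what “simulating neurons” means in practice, here is a minimal Python sketch, not OpenWorm’s actual software: three neurons wired by a made-up connectome matrix, updated step by step with a simple leak-and-fire rule. Every number below is invented for illustration.

```python
# A toy "connectome" simulation: neurons accumulate input, leak, and spike.
import numpy as np

W = np.array([[0.0, 1.2, 0.0],    # synaptic weights: W[i, j] = strength of j -> i
              [0.0, 0.0, 0.9],
              [0.5, 0.0, 0.0]])

v = np.zeros(3)                          # membrane potentials (arbitrary units)
threshold = 1.0                          # a neuron "fires" when it crosses this
leak = 0.9                               # potentials decay toward zero each step
external = np.array([0.3, 0.0, 0.0])     # constant input driving neuron 0

for step in range(20):
    fired = (v >= threshold).astype(float)   # which neurons spike this step
    v = leak * v * (1 - fired)               # reset spiking neurons, leak the rest
    v += W @ fired + external                # propagate spikes along the wiring
    print(f"step {step:2d}  potentials={np.round(v, 2)}  spikes={fired.astype(int)}")
```

OpenWorm’s models are vastly more detailed (real connectome data, muscle and body physics), but the loop structure, state plus wiring updated over time, is the same basic idea.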
GO WITH THE FLOW
Advances in Morphometric Analysis
BY ZOË SIDDALL
There are five liters of blood in the human body, three meters of DNA in every human cell, 206 bones in the human skeleton, and about three pounds of brain. For the most part, we understand how each system functions by itself and in the larger context of the body. We know how the heart beats, how the intestines digest, and how the muscles move. However, one organ still remains relatively mysterious: the brain. Hoping to solve some of these puzzles, neuroscientists all over the world are delving into the many complexities of the brain. Neurohydrodynamics, a division of neurophysics that applies fluid mechanics to neuroscience, is among the least researched fields of neuroscience, despite the fact that it has potent applications for disease management. Among those diseases is Chiari malformation: a brain disorder in which the cerebellar tonsils herniate out of the foramen magnum, the opening at the base of the skull. Very few people have heard of Chiari even though approximately 300,000 individuals in the United States have the condition, roughly the same number of patients as are affected by the better-known multiple sclerosis. A University of Michigan study predicted that, in the general population, 1% of adult men, 2% of adult women, and 3% of children have a cerebellar tonsil herniation of 5.0 mm or more. However, there is a “gray zone,” because individuals with a 5.0 mm tonsillar herniation may or may not exhibit Chiari symptoms. In other words, measuring the distance that the cerebellar tonsils have herniated out of the skull is an inaccurate predictor of which patients will develop symptoms. For physicians, this creates uncertainty as to which patients need to undergo continued treatment. In order to treat patients, it is necessary to develop a different way to morphometrically analyze a patient’s tonsillar herniation. Morphometrics is the quantitative analysis of form, including both size and shape. As it turns out, Chiari symptoms are caused when the shape of the tonsillar herniation restricts the flow of cerebrospinal fluid (CSF) around the spinal cord and brain. When CSF flow is hindered by the tonsillar herniation, patients develop a related condition called syringomyelia, where pockets of CSF collect
in the spinal cord. The syrinxes that form as a result of Chiari malformation cause pain, weakness, and even paralysis. Thus, if the shape of a patient’s tonsillar herniation is likely to cause syrinxes to form, the patient will require decompression surgery to unobstruct the CSF flow in hopes of restoring neurological function. At the Conquer Chiari Research Center in Akron, Ohio, researchers are attempting to create a model of CSF flow in order to predict the formation of syrinxes, which, in turn, will allow neurosurgeons to predict which patients need decompression surgery. Currently, high-resolution four-dimensional MRIs provide images that let neurosurgeons see a patient’s CSF flow over time. However, these images capture CSF flow only for the short time the patient is in the scanner. Instead, neuroscientists are attempting to create a 4D computational fluid dynamics simulation of CSF flow to study and understand what is happening in Chiari patients’ brains. By taking three-dimensional images of the unique anatomy of a patient’s tonsillar herniation, neuroscientists hope to model CSF flow over time in their own simulation and determine the likelihood that syrinxes will form. While this research tool has the potential to tell neurosurgeons when and where to cut, researchers have found that the CSF flow predicted by their computational simulations does not match the actual CSF flow seen on 4D MRIs of the same patient. Computers are not powerful enough to model every anatomical complexity of the human body, and thus it has proven difficult to make the 4D digital models match the 4D MRIs. Among these intricacies are tiny, hair-like structures that line the spinal column. Assuming their effect on the flow of CSF would be minimal, researchers excluded the hairs from the original simulations. As a result of this microscopic omission, the simulated CSF flow was drastically different from the MRI scans that scientists were attempting to model. Even seemingly insignificant changes can radically alter the accuracy of morphometric analysis of Chiari malformation and syringomyelia. But while these 4D simulations are far from perfect, the technology is rapidly advancing and providing hope for the future of patient care. In the more immediate future, the Conquer Chiari Research Center is studying more accurate ways of describing Chiari malformations, including an alternate morphometric analysis of Chiari, potentially based on the position of the tonsillar herniation along the spine or the size of the intracranial cavity. The brain still holds many problems yet to be solved, but with the continued study of neuroscience, unlocking its mysteries will prove fruitful in the lives of many patients battling neurological diseases.
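A rough sense of why tiny geometric details matter so much can come from a back-of-the-envelope calculation: for steady flow through a simple tube, the Hagen-Poiseuille law says resistance grows as the fourth power of the inverse radius. Real CSF flow is pulsatile and fully three-dimensional, and the channel dimensions below are made up, so this is only an intuition pump, not the Center’s model.

```python
# How much does a modest narrowing of a fluid channel raise its resistance?
import math

def flow_resistance(radius_mm, length_mm=100.0, viscosity_mpa_s=0.7):
    """Hagen-Poiseuille resistance of a straight tube (CSF is roughly water-like)."""
    mu = viscosity_mpa_s * 1e-3            # Pa*s
    r = radius_mm * 1e-3                   # meters
    L = length_mm * 1e-3
    return 8 * mu * L / (math.pi * r**4)

open_channel = flow_resistance(radius_mm=2.0)    # hypothetical unobstructed channel
narrowed = flow_resistance(radius_mm=1.6)        # the same channel narrowed by 20%
print(f"resistance increase: {narrowed / open_channel:.1f}x")   # ~2.4x for a 20% narrowing
```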
NANOPARTICLES AND THE CURE FOR HIV BY TYLER CHEN
According to the World Health Organization, there are currently 35 million people living with HIV, and 1.5 million were killed by AIDS in 2013 alone. These are huge numbers; it’s no wonder there are many research groups seeking novel treatments for HIV and AIDS. Some of the more recent developments in HIV and AIDS treatment have relied on nanoparticles. Nanoparticles, or particles on the nanoscale (1-100 nm), have many different properties compared to larger-scale solids of the same composition. For example, gold nanoparticles appear red in solution, allowing us to differentiate them even on a microscopic scale. These particles have a surface-area-to-volume ratio that is many orders of magnitude larger than that of macro-scale particles, giving nanoparticles properties that are surface-dependent rather than volume-dependent. Nanoparticles have the potential to create huge breakthroughs in medicine because they can be specifically engineered to contain other molecules. This allows for cell-specific as well as size-dependent drug targeting, a technique that was used in a novel treatment for HIV. In 2013, a researcher named Jallouk and his colleagues designed an extremely innovative treatment for HIV using nanoparticles loaded with melittin, a cytolytic peptide found in bee venom. It has long been known that melittin has the capacity to lyse the membranes of retroviruses, and lysis of HIV’s viral envelope renders the virus ineffective. Although melittin sounds like an obvious treatment for HIV, it has also been shown to lyse the membranes of red blood cells, so a high-concentration application of melittin to an HIV patient would most likely kill the patient before the virus was destroyed. Jallouk’s proposed treatment involved a perfluorocarbon nanoparticle with melittin incorporated into the lipid structure of each particle. Due to molecular “bumpers” surrounding the majority of the nanoparticle, it is nearly impossible for red blood cells to come into contact with the melittin. Thus, the body is protected from the toxicity of free melittin. However, due to the small size of HIV, the virus easily passes between the bumpers and is effectively destroyed by contact with melittin. In their experiments, Jallouk and colleagues found that “nanoparticle formulation of melittin reduced melittin cytotoxicity fivefold and prevented melittin toxicity at concentrations previously shown to inhibit HIV infectivity.” Thus, this treatment could theoretically be used to clear HIV from the bloodstream entirely, with minimal harm to body cells. With the advent of nanotechnology, an entirely new field of medicine has opened up, allowing for the integration of biological and engineered molecules. Through a combination of natural and synthetic particles, perhaps we can finally create a definitive cure for viral infections.
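The surface-area-to-volume claim above is easy to check with a quick calculation; the particle sizes below are illustrative, not taken from the study.

```python
# For a sphere, surface area / volume = 3 / radius, so shrinking the radius by
# a factor of 10,000 raises the ratio by the same factor.
import math

def surface_to_volume(radius_m):
    surface = 4 * math.pi * radius_m**2
    volume = (4 / 3) * math.pi * radius_m**3
    return surface / volume              # equals 3 / radius_m

nanoparticle = surface_to_volume(50e-9)  # 50 nm radius, within the 1-100 nm range
bead = surface_to_volume(0.5e-3)         # a 0.5 mm macro-scale bead for comparison
print(f"nanoparticle SA/V is {nanoparticle / bead:,.0f} times larger")   # 10,000x
```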
Explaining Neural Networks BY PETER GRIGGS Simply put, neural networks are an attempt to represent neurons mathematically in order to perform statistical computation. The model includes values (inputs, weights, and outputs) and, for each value, a gradient: a measure of how strongly the final output changes when that value changes. A gradient can be seen as the force with which the network is pulling on an input value. When there is only one circuit, we don’t have to worry about the interaction of several circuits and the outputs they give. However, once we start building multi-layered neural networks, we can calculate an output, but what next? We need to update the values of the neurons in order to achieve our goal, whatever that may be. To solve this problem, the backpropagation technique was invented. Backpropagation uses the gradient between the final circuit output and an individual circuit in the network to pull on that individual circuit. In other words, the force that the final output is creating tugs on all of the other circuits. Now, not only do we have changing real values, but we also have changing gradients! These concepts of circuits and gradients, which make up the basis of neural networks, may not initially sound exciting. However, they can be applied to many problems in machine learning. We can train a neural network with data and teach it to analyze that data to solve both classification and regression problems. One popular training scheme for neural networks is called stochastic gradient descent. In stochastic gradient descent, we pick a random datapoint and feed it through the circuit. Then, we analyze the output of this forward pass and check whether it matches the prediction we wanted. In a linear classification case, if the desired label is -1 and our output is also -1, then the output is fairly accurate. If the score is too low relative to the desired label, we will want a gradient that tugs the low value in the positive direction. We then take this tug on the output and backpropagate it to the parameter values in what is referred to as a parameter update. In other words, we nudge each of our parameters in the direction of its “tug.” Get enough data points and enough predictions, and after a while there is a higher likelihood that the computer will give an answer that matches the prediction. However, the learning rate depends on how good the data is. A computer is still not inherently intelligent; whatever training protocol is used will not work without solid data. This is how Google made their data centers more efficient. The idea was to use neural networks to keep track of all of the data center environmental variables measured by servers (temperature, load, output) and to calculate PUE (Power Usage Effectiveness) based on all of these variables, which are called “features” in machine learning algorithms. Google’s application of neural networks to calculate PUE for different variables in their data centers shows how robust neural networks are for complex problems that involve tricky data analysis. Since our modern era is creating data at blazingly fast speeds, modern problems typically involve huge data sets. Applying neural networks is a strategy that can help people solve some of the toughest computing problems of our time.
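The tug-of-war picture above can be made concrete with a tiny sketch of stochastic gradient descent on a single linear “circuit.” The toy data points and the squared-error loss are illustrative choices, not the setup of any particular system.

```python
# Stochastic gradient descent on one linear unit: pick a random point, do a
# forward pass, compute the "tug" (gradient), and nudge the parameters.
import random

# Toy 2D points labeled +1 or -1 (hypothetical data).
data = [((1.2, 0.7), +1), ((-0.3, -0.5), -1),
        ((3.0, 0.1), +1), ((-0.1, -1.0), -1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for step in range(1000):
    (x1, x2), label = random.choice(data)      # pick a random datapoint
    score = w[0] * x1 + w[1] * x2 + b          # forward pass
    # Squared-error loss L = 0.5 * (score - label)^2; its gradient with respect
    # to the score is (score - label): the tug pulling the score toward the label.
    pull = score - label
    # Backpropagate the tug to each parameter and update (the parameter update).
    w[0] -= lr * pull * x1
    w[1] -= lr * pull * x2
    b    -= lr * pull

print("learned weights:", w, "bias:", b)
for (x1, x2), label in data:
    print(label, "->", round(w[0] * x1 + w[1] * x2 + b, 2))
```

A real multi-layer network repeats exactly this pattern, just with more parameters and with the tug passed backward through each layer in turn.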
WHAT ARE SEMICONDUCTORS? BY DANIEL KIM
Our daily lives would be unimaginable without devices such as automobiles, smartphones, gaming consoles, and televisions. All of these incorporate integrated circuits (ICs), which are collections of electronic circuits placed on a small plate made out of materials called semiconductors. Any material can be classified as a conductor, a semiconductor, or an insulator based on its electrical conductivity. The difference between these classifications comes from each material’s supply of free electrons. In a conductor, the electrons are bound such that only the electrons in the outermost electron shells (the outermost orbits of electrons) can move freely. Since the movement of electrons is what constitutes an electric current, the more free electrons there are, the more easily current can flow. In contrast, an insulator has little to no free electrons; thus, it is hard for electric current to flow through it. In the middle of this scale is a semiconductor, which exhibits qualities of both a conductor and an insulator: it acts as an insulator under normal conditions, but it exhibits the characteristics of a conductor under certain other conditions. This is because its electrons are loosely bound, which allows the electrons in the outer shells to be released relatively easily. Silicon and germanium are well-known semiconducting materials; silicon and germanium atoms both have 4 electrons in their outer shells and can bond with 4 other atoms by sharing those electrons. The electrical properties of semiconductors can be enhanced through a doping process that adds impurities to pure semiconducting material. For example, phosphorus and arsenic have 5 electrons in their outermost shells. Of these 5 electrons, 4 are used to bond with neighboring silicon or germanium atoms, and the remaining electron is a free electron. This type of semiconductor is known as an “n-type” semiconductor; the “n” stands for negative, after the negative charge of the electron. There is another type of semiconductor known as a “p-type” semiconductor, where “p” stands for positive. This type
of semiconductor relies on doping with elements that have 3 electrons in their outer shell, such as boron or gallium. Those 3 electrons bond with the 4 electrons of neighboring silicon or germanium atoms, leaving one silicon or germanium electron with no electron to pair with; it can be imagined that instead of an electron to pair with, there is a “hole.” This “hole” can be treated as a carrier of positive charge. One of the simplest components in ICs, the diode, which is used in many electronic devices, can be made by joining p-type and n-type semiconducting materials. Depending on the voltage applied across it, a diode either allows electric current to flow or blocks it (see figure below). When the diode is forward-biased
(the positive side of a voltage source is connected to the p-type semiconductor and the negative side to the n-type semiconductor), electric current flows from the p-type to the n-type. When the diode is reverse-biased (the negative side of a voltage source is connected to the p-type semiconductor and the positive side to the n-type semiconductor), no electric current flows. Thus, the diode regulates the direction of electric current flow. Today, we often use our electronic devices without thinking of the complexities behind their creation and their components. Semiconductors are complex but integral parts of our lives, and without them, we would not have the inventions we see in our world daily.
Figuring Out Diodes Forward-biased (left) and reverse-biased (right) operations of a diode. Diodes regulate electric flow.
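The forward/reverse asymmetry can also be illustrated numerically with the ideal (Shockley) diode equation. The saturation current and ideality factor below are generic textbook-style values, not measurements of any real device.

```python
# Ideal diode equation: I = I_S * (exp(V / (n * V_T)) - 1)
import math

I_S = 1e-12      # saturation (leakage) current, in amperes (assumed)
N = 1.0          # ideality factor (assumed)
V_T = 0.02585    # thermal voltage at room temperature, in volts

def diode_current(v):
    """Current through an ideal diode at applied voltage v (volts)."""
    return I_S * (math.exp(v / (N * V_T)) - 1.0)

for v in (-1.0, -0.5, 0.0, 0.3, 0.6, 0.7):
    print(f"V = {v:+.1f} V  ->  I = {diode_current(v):.3e} A")
# Reverse bias: the current saturates near -I_S, so almost nothing flows.
# Forward bias: the current grows exponentially once V nears ~0.6-0.7 V.
```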
THE FOUNDATION OF REALITY BY NATHAN CHENG
A question reverberates in high school math classes all across the United States: “When will I ever use this?” An honest answer: “Depending on what you decide to do with your life, you may never see these equations again.” But the same holds true for reading literature, listening to music, appreciating art, dancing, singing, writing, or musing about the philosophies of life. One may very well live without immersing oneself in any of these. But to do so would be a deprivation of the mind. Similarly, mathematics is an intellectual immersion in logic. In mathematics, we find the rules that constitute the fabric of our realities. In understanding the universe, mathematics sits at the heart. It is a discipline for those of us who seek the absolute truth.
There are infinitely many prime numbers. Consider the scope of that statement. In no discipline other than mathematics can we make such a vast generalization and be completely accurate. We are generalizing to infinity, an idea well beyond the capacity of human conceptualization. How can we make bold claims such as the infinitude of the primes when we don’t even have a clear picture of what infinity is? We might imagine that a proof that the primes continue on forever must be unimaginably complex. In reality, however, the standard proof is one of the simplest and most elegant ever conceived: Take any finite set of prime numbers. Let’s say 2, 11, and 23. Take their product and add one. (2 × 11 × 23) + 1 = 507
The result can’t be divisible by 2, 11, or 23, since we’ve “shifted” the product over by one. Thus, it must either be a new prime itself or be divisible only by primes outside our original set. In our case, it happens to be the latter: 507 = 3 × 13 × 13. Add these new primes to our set and repeat the process. Essentially, from any finite set, we can keep generating new primes; thus, there are infinitely many. This proof was formulated by Euclid in his Elements, circa 300 BC. The original text was no more than 200 words. Before modern technology, before the age of computation, before the scientific revolution, sitting at the infancy of logical reasoning, Euclid made a completely true statement concerning the nature of primes and infinity in less than 200 words.
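Euclid’s idea also translates directly into a few lines of code. The sketch below starts from the same set {2, 11, 23} used in the example and keeps harvesting new primes; the trial-division factoring is just the simplest way to find them, not part of Euclid’s argument.

```python
# Repeatedly multiply the known primes, add one, and factor the result.
def prime_factors(n):
    """Return the prime factors of n (with repetition) by trial division."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

primes = {2, 11, 23}
for _ in range(3):                      # repeat the process a few times
    product = 1
    for p in primes:
        product *= p
    candidate = product + 1             # not divisible by anything already in `primes`
    new_primes = set(prime_factors(candidate)) - primes
    print(f"{product} + 1 = {candidate} brings in new primes: {sorted(new_primes)}")
    primes |= new_primes

print("primes found so far:", sorted(primes))
```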
Some numbers are more interesting to us than others, perhaps because they have more interesting properties, or because they are harder to understand. Listed below are five important mathematical constants that have survived centuries of weathering.
0 is infinity’s evil twin. It may seem harmless at first, as it is representative of nothing. Yet, it accomplishes many things. It is an even number, and neither positive nor negative. If you add 0 to a number, nothing happens. If you multiply a number by 0, you get 0. But if you divide by 0, your calculator breaks. If the universe divides by 0, it explodes. 0 must be approached with caution.
π (3.14159265...) is an interesting number. Formally, it is the ratio of a circle’s circumference to its diameter. There are many circles, but only one π. Its ubiquity, however, extends beyond circles and even the realm of geometry. The area underneath the Gaussian function e^(-x²) (the quintessential “bell curve”) is √π. The probability that two randomly selected integers do not share prime factors is 6/π². π is random, but deterministic. It is irrational. π is written in the physics of the universe.
1 is the building block of arithmetic. It is the only positive integer that is neither prime nor composite. From 1, you can derive all the integers via addition or subtraction. If you multiply a number by 1, nothing happens. If you divide a number by 1, nothing happens. If you raise a number to the power of 1, nothing happens. 1 is its own reciprocal. 1 is sensible. 1 is our friend.
i is an imaginary number. Formally, it is defined by the property i² = -1. At first, it may seem useless. Indeed, we’ve found a way of representing something previously insensible, and the more anxious of us can sleep soundly. Practicality? It is used frequently in investigating electrical currents, wavelengths, liquid flows, resonance, and motor design. i is the basis for the complex plane. Like ∞, i is a concept more so than it is an object. Distinguished from the real numbers, i is truly imaginary.
e (2.718281828...) is Euler’s number. It has countless useful properties. There are dozens of ways of representing e using infinite summations, continued fractions, products, and limits. The function e^x is the only exponential function that is its own derivative and antiderivative (the rate at which e^x changes is e^x itself). In that sense, it is perfect. Then again, e is irrational. In that sense, there is a touch of randomness to its perfection. It appears in the study of compound interest. It appears in Fourier series. e occurs naturally.
Described above are 5 fundamental mathematical constants, each with its own distinct properties, applications, and derivations. Euler’s identity links all five of them together in a simple, elegant equation. Philosopher and mathematician Benjamin Peirce stated that “it is absolutely paradoxical; we cannot understand it, and we don’t know what it means, but we have proved it, and therefore we know it must be the truth.” It is: e^(iπ) + 1 = 0
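For readers who want to see where the identity comes from, one standard route (a sketch, not the only proof) is to evaluate Euler’s formula e^(iθ) = cos θ + i sin θ at θ = π:

```latex
% Euler's formula evaluated at theta = pi gives the identity above.
\begin{aligned}
  e^{i\theta} &= \cos\theta + i\sin\theta            \\
  e^{i\pi}    &= \cos\pi + i\sin\pi = -1 + i\cdot 0  \\
  \therefore\; e^{i\pi} + 1 &= 0
\end{aligned}
```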
From infinity to circles, primes to imaginary numbers, nothing to unity, we have married five important mathematical constants in a single, unintuitive relationship. The constants weren’t contrived for the purpose of the equation; they were discovered or created independently. It was after they had been established that mathematicians stumbled upon the striking result. It may be just a marvelous coincidence. Or it may speak volumes about the fundamental nature of our reality.
FOUNTAIN OF YOUTH
the implications of mouse blood transfusion studies on human mortality BY STEPHANIE DAVIS PEOPLE HAVE BEEN SEARCHING for the key to eternal life for centuries. However, from Nicolas Flamel and the Philosopher’s Stone to Spanish conquistadors scouring Florida for the fountain of youth, these searches have been deeply rooted in mythical legends, and all have been fruitless. But today, in the 21st century, is a scientifically valid answer finally within reach? Perhaps. Recent scientific research suggests that certain types of blood transfusions may slow or even reverse aspects of aging. This research began in the 1950s, when Clive McCay of Cornell University tried surgically linking differently-aged mice to each other so that they shared a circulatory system, to observe the potential effects. The results of the study were shocking: the older mice seemed and acted dramatically younger. With the younger mouse’s blood infused into its circulatory system, the older mouse demonstrated revitalized muscle tissue and increased strength, the ability to run faster and longer on a treadmill, boosted brain function and memory, improved muscle recovery, restored blood flow to the brain, and improved senses, especially its sense of smell. It seemed that infusions of young blood improved cognition and the health of several organs in the tested older mice. Understandably, both the magnitude and multitude of these improvements led researchers at Stanford, Harvard, the University of California at San Francisco, the Harvard Stem Cell Institute, and the Berkeley Stem Cell Center to try to identify what was causing these changes and why. In 2012, Amy Wagers of Harvard University identified growth differentiation factor 11 (GDF11), a protein found in blood plasma, as a potential source, because concentrations of GDF11 in the blood have been observed to decline with age. This protein is known to be involved in several mechanisms that control growth, and is also thought to mediate some age-related effects on the brain by activating another protein that is involved in neuronal growth and long-term memory. But a full explanation of the results continues to evade researchers. In 2014, Tony Wyss-Coray at Stanford University took the next step and injected older mice with young blood from humans, and noted the same changes in the older mice: improved cognition and health of several organs. All of this raises the question: could a similar transfusion experiment work for humans as well? Researchers looking to begin such trials with Alzheimer’s patients were cautioned by the Berkeley Stem Cell Center against doing so. The Center noted that unusually high levels of the protein
GDF11 have been observed in many cancer patients, and while receiving infusions of young blood could increase an individual’s muscle and neuron function, doing so could also prove fatal. Despite the risk, in early October of last year, clinical researchers at the Stanford School of Medicine started giving transfusions of blood plasma donated by people under 30 to older volunteers with mild to moderate Alzheimer’s. No definitive results have been released yet, but scientists around the world await the results of this monumental trial with bated breath. Although the success of mouse blood transfusions may never translate into safe, similarly designed human therapies, the potential for success is still very real. Such a breakthrough could make a world of difference for those with Alzheimer’s disease and other degenerative muscle and nerve disorders. Additionally, it would raise important ethical questions and provide more insight into how the aging process functions. Regardless, these studies and clinical trials have made blood transfusions the most promising development in the modern search for immortality, a search that continues not on the word of local legends, but at the forefront of modern scientific research.
How to Live Forever *only applicable to massless creatures BY BAILEY BJORNSTAD The first step in achieving eternal life is not to find the fountain of youth or to drink from the Holy Grail as Indiana Jones did. Instead, it’s simple: move at the speed of light. Okay, it’s not that simple. Imagine a photon (the actual particle, not the light that we see; the difference is subtle). Now that you’ve done that, which is pretty impressive, think about the properties of that photon: it is massless and it travels at the speed of light (for now, we will neglect spin and other such properties). The two are deeply related; in fact, the connection hides inside a phrase we hear touted by curious eight-year-olds who think they’ve come upon something amazing after reading it on the internet or in a book. This is, of course, Einstein’s famous mass-energy equivalence: E = mc². In its full form, E² = (pc)² + (mc²)², the equivalence tells us that a particle with zero rest mass, like the photon you’ve done so well to imagine, carries its energy entirely in its momentum, and such a particle always travels at the speed of light. This seems obvious. The photon, which is the particle responsible for visible light, travels at the speed of light. That’s not really interesting on its own, except that it means a photon does not experience time. The key principle, as theorized by Einstein in his groundbreaking theories of relativity, is that moving clocks run slow. Relatively speaking, all things in motion move more slowly through the dimension of time than stationary objects. To aid in comprehension, consider Einstein’s gedankenexperiment (thought experiment) of the light clock. But first, we must understand a basic feature of special relativity: all objects in motion are moving relative to everything else. For example, Joe is driving a spaceship at 50 m/s towards Emily, who is driving a different spaceship at 75 m/s towards Joe. Assuming both are moving at constant velocity, Joe and Emily can both, without being wrong, measure the other’s ship moving at a speed of 125 m/s and their own at 0 m/s. What is not intuitive, however, is that the maximum speed of everything, without exception, is the speed of light. We shall draw on these basic principles of relativity to describe the light clock. Imagine now a beam of light bouncing between two mirrors that are stationed one meter apart in the vertical direction. This is our light clock. Now imagine that our two observers, Joe and Emily, have returned. Joe moves with one light clock (i.e. the clock is stationary relative to Joe but in motion relative to Emily), and Emily is a stationary observer who also has her own light clock, which is stationary relative to her. What’s important about this situation is that it demonstrates that a clock in motion runs slower than a stationary clock. From Emily’s point of view, as the stationary observer, the beam of light in Joe’s clock must travel a greater distance at the constant, unchanging speed of light; the distance here is the hypotenuse of a right triangle, which is longer than either of the two legs. Therefore the time between each “tick” of Joe’s clock is greater than that of Emily’s own light clock. The moving clock must be ticking slower than the stationary one! Except when we view Emily’s clock from Joe’s point of view: Joe sees the exact same situation as Emily did when she viewed Joe’s clock. So both clocks must be running slow! Well, not really. It depends on your frame of reference. This effect has been experimentally tested, not just theorized. When an atomic clock on a plane and an atomic clock on the ground are synchronized (using a complex synchronization method) and the plane is then flown around the world, the amount of time ticked off by the atomic clock on the plane differs slightly from the amount ticked off by the clock on the ground (very slightly, though not negligibly, thanks to the accuracy of atomic clocks). So, the real question becomes: how can we use the fact that a moving clock runs slower than a stationary one to live forever? As the speed of the clock increases in the light clock thought experiment, the light must cover more distance per tick, and therefore the clock must slow down. So, if we could get our light clock up to the speed of light, the distance per tick would keep increasing until time stopped. This, in a simple way, explains why a photon does not experience time... However, the trouble arises when we reconsider the fact that no object with mass can really move at the speed of light. The force it takes to keep accelerating an object grows without bound as the object approaches the speed of light, making it a physical impossibility for any object with mass, which, unfortunately, includes all human beings. So while, poetically, time stops at the speed of light, eternal life is not a physical possibility for those of us who have mass.
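The light-clock argument can be written out in a few lines. This is the standard textbook derivation of the geometry described above, with t the tick length in Joe’s frame and t′ what Emily measures:

```latex
% Light-clock geometry (Emily watches Joe's clock move past at speed v).
% One tick in Joe's frame lasts t (the light crosses the mirror gap L = ct).
% In Emily's frame the same tick lasts t': the light travels the hypotenuse
% ct' while the clock slides sideways a distance vt'.
\begin{aligned}
  (ct')^2 &= (ct)^2 + (vt')^2 \\
  t'^2\,(c^2 - v^2) &= c^2 t^2 \\
  t' &= \frac{t}{\sqrt{1 - v^2/c^2}} \;>\; t
\end{aligned}
% As v approaches c, the denominator approaches 0 and the ticks stretch
% without limit: at the speed of light, the clock effectively stops.
```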
EXPLAINING STRING THEORY
THE ORACLE OF MODERN SCIENCE BY EVERS PUND
By 460 BCE, Greece is unquestioned as the source of brilliance in the ancient world. In its city of Abdera lives Democritus, a controversial philosopher whose curiosity about science leads him to the idea of the atoma: the smallest, indivisible particle that everything is made of. He is the first person to connect the quantum world and the physical world, a feat that is still being attempted by physicists almost 2,500 years later. Democritus’ idea ignited curiosity about the quantum world and provided the foundation for new theories to sprout, along with more questions. Questioning is integral to scientific discovery. The urge to answer the questions we have has advanced our understanding of why things are the way they are. Each question, each experiment, and each observation beckons for more ways to explain the space and time in which we are submersed. How does the universe work? How did it come about? Thousands of years after Democritus, these questions still do not have proven answers. There are theories about parts and traits of the universe, but nothing that can explain what, fundamentally, ties everything together. What connects gravity, emotion, music, DNA, black holes, supernovas, and everything else across the 93 billion light years of our observable universe into one construct? Why are there four fundamental forces? Why not 2, or 1? Why does the electron have a mass that bears no apparent relation to the seemingly random mass of the proton? Why is the speed of light 299,792,458 m/s? Underlying all questions of science is the mission to achieve one fundamental theory: one hypothesis that can explain all others. A master equation, if you will, that finds a perfect harmony for all of nature’s laws, forces, and energy. One such theory, known as string theory, may be the “Theory of Everything” that will investigate and resolve many new and old questions of science, and could connect the quantum and physical layers of the universe in one equation. String theory is just as revolutionary as Democritus’ atoma epiphany: it suggests that electrons, quarks (the particles that protons and neutrons are made of), and other subatomic particles are actually extremely small, one-dimensional strings of energy. Each string vibrates at a specific harmonic, and that vibration produces a particle with a particular mass. Particles of energy are also included in string theory: the photon for electromagnetism, the gluons and the W and Z bosons for the strong and weak nuclear forces, and the graviton for gravity. The basic math of string theory can be understood with the visual of a vibrating string on a violin. Each string on the violin has a different musical pitch and can produce different notes when pressed. Each “note” in string theory is a particle, determined by the shape and vibration of the string. Strings loop, twist, and bend to produce all the known particles, and maybe many more that are still unknown. One surprising thing about string theory is that its equations call for a universe that has seven more spatial dimensions than the three we can detect, curled up in the fabric of spacetime in loops around each other. Instead of three dimensions, the math for this theory predicts that there are actually ten; eleven, including time. These dimensions are so small that they have no noticeable effect on us. For instance, from far away, a radio antenna looks one-dimensional, almost like a line.
Of course, when more closely examined, it is three-dimensional. Similarly, the other seven dimensions are hidden from us by the scale at which we perceive space. That does not mean, however, that these dimensions have no effect on the world: the curled-up dimensions we cannot see have a huge impact on the way the universe functions. The shape of these extra dimensions directly affects and constrains the way the strings vibrate, which in turn sets the strengths of the fundamental forces and the masses of the particles in our universe. That means that the seemingly random numbers for the speed of light and the proton’s mass are due to the shape of the extra dimensions. An eleven-dimensional universe is only the beginning. According to the mathematics behind string theory, there are infinite universes, each with unique “scientific numbers” for the forces of nature and particle masses, and each existing in an infinitely large bubble bath known as the multiverse. Our universe has a specific shape for its extra dimensions, and
UNSEEN DIMENSIONS A true Klein Bottle, pictured here, is only possible in the fourth dimension. Klein Bottles were discovered by Felix Klein in 1882 as the intersection of two Mobius Loops that created a self-containing bottle with no boundaries. The inside of a Klein Bottle is also its outside. Although it exists in the fourth dimension, the surface of a Klein Bottle is 2-dimensional.
the way the strings vibrate upon this shape gives our universe its specific properties: everything from the strength of gravity to the mass of an electron. All the other possible shapes exist in separate universes, making the infinite number of other universes in the multiverse much different from our own. String theory is only beginning to be uncovered, yet it has already been cast as a kind of modern scientific oracle, one that could give a very different perspective on the way the universe functions. It could change the future of science and, if it is proven, would be capable of answering all the fundamental questions of physics. Like any theory, this may just be the basis for something else that is still unknown. Humans have the irrepressible traits of curiosity and eagerness to explore; our exploration of science will keep leading us to more questions, and these questions will keep leading to new discoveries. While it may be true that string theory is not the final contender for the theory of everything, it brings us one step up the ladder to a greater understanding of our universe, to new realms and traits of the physical world that remain undiscovered, and to many, many more questions.
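To put one number on the violin analogy used earlier: a classical string fixed at both ends can only vibrate at certain discrete frequencies, set by its length, tension, and mass. This is the ordinary standing-wave formula, not string theory’s actual mathematics, but it captures the idea that a string’s allowed vibrations come in a discrete menu:

```latex
% Allowed vibration frequencies of a classical string of length L, tension T,
% and mass per unit length mu, fixed at both ends (n = 1, 2, 3, ...):
f_n \;=\; \frac{n}{2L}\sqrt{\frac{T}{\mu}}
% Each whole number n is a distinct "note"; in the analogy, each distinct
% vibration pattern of a fundamental string corresponds to a distinct particle.
```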
LEARNING TO WALK (AGAIN)
suiting up Exosuits could help the paralyzed and elderly walk again.
BY CHRISTIAN YUN YOU’VE PROBABLY SEEN SOME form of an exosuit before. They’re those wearable robotic suits that enhance the user’s physical capabilities. From Gundam to the half-century-old Iron Man franchise to the more recent Halo video games, powered exosuits have been extremely popular in media all around the world. However, what was once mere science fiction is now on its way to becoming a reality. In 2012, the Hybrid Assistive Limb 5 (HAL) exosuit was developed by the Japanese robotics company Cyberdyne. Both the product and the company also happen to be unintentional references to the villains of the science fiction films 2001: A Space Odyssey and
Terminator, respectively. Creepy. The HAL is a powered exosuit designed to help the paralyzed and the elderly walk again. When you move your body, you think about the motion before actually executing it. Your brain then transmits the necessary signals through nerves to your muscles, allowing you to move. Paralysis is the result of nerve damage that limits the transmission of these signals to the affected muscles. The HAL can restore motion in paralyzed patients by picking up these weakened nerve signals. After detecting them, the motors in the device move the paralyzed limb according to the motion intended by the
patient. The machine also “re-teaches” the patient how to move: because the HAL links the patient’s own nerve signals to the motion of the affected limbs, the patient can eventually regain motion of his or her limb even without the HAL. The exosuit relies on electromyography, the recording of bioelectrical signals in the skeletal muscles. The HAL will revolutionize physical therapy; it has already created a field for itself in the world of medicine, Neurobotic Movement Training. In another application, a specialized HAL module has been used in disaster recovery operations. The unit doubles the strength of a healthy user and enhances his or her physical capabilities. In the Fukushima nuclear disaster, this unit assisted workers in the cleanup of radioactive waste. In addition, Cyberdyne has received several requests from the U.S. Department of Defense and the South Korean government to mass-produce these units for military use. However, Cyberdyne has spurned all of them in accordance with its mission statement: to provide its technology for the wellbeing of people, not for their destruction. While it is highly doubtful that Cyberdyne will ever produce a Terminator, it is possible that, in the wrong hands, this technology could change the way wars are fought. It won’t be on the level of the atomic bomb, but it has the potential to become a game-changing weapon. On a more positive note, the development of the HAL is proof of how quickly science and technology are advancing.

LEARNING TO BECOME IRON MAN The HAL exosuit helps Japanese hospital patients practice walking to recover from their injuries.
The paralyzed can walk again. Those who were confined to a wheelchair can move again. The suit can give those who wear it the strength to carry things twice their weight and to move with ease. With this device, rescue operations that were once impossible will become feasible. What were once mere childhood fantasies are slowly becoming reality. Things will only become more advanced as time goes on, and there is a lot to look forward to.
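The control idea described above, reading a weak muscle signal and turning it into motor assistance, can be illustrated with a deliberately simplified sketch. This is not Cyberdyne’s actual algorithm: the signal below is simulated, and the threshold, gain, and smoothing window are made-up values chosen only to show the general shape of an electromyography-driven assist loop.

import random

# Simulated surface-EMG samples (volts): mostly noise, with a burst of
# "intent" in the middle where the wearer tries to move the limb.
def simulated_emg(n_samples=300):
    samples = []
    for i in range(n_samples):
        noise = random.gauss(0.0, 0.01)
        intent = 0.08 if 100 <= i < 200 else 0.0   # weakened voluntary signal
        samples.append(noise + intent * random.random())
    return samples

def envelope(values, window=20):
    """Rectify (absolute value) and smooth the raw signal."""
    rectified = [abs(v) for v in values]
    smoothed = []
    for i in range(len(rectified)):
        start = max(0, i - window + 1)
        smoothed.append(sum(rectified[start:i + 1]) / (i + 1 - start))
    return smoothed

THRESHOLD = 0.02   # assumed envelope level meaning "the wearer intends to move"
GAIN = 50.0        # assumed Nm of assist torque per volt of envelope

def assist_torques(emg):
    """Map the smoothed EMG envelope to a motor torque command."""
    return [GAIN * (e - THRESHOLD) if e > THRESHOLD else 0.0 for e in envelope(emg)]

if __name__ == "__main__":
    torques = assist_torques(simulated_emg())
    active = [t for t in torques if t > 0]
    print(f"assisting on {len(active)} of {len(torques)} samples, "
          f"peak torque about {max(torques):.2f} Nm")

A real exosuit would add per-patient calibration, joint-sensor feedback, and safety limits, but this threshold-and-gain structure captures the core idea of detecting intent and amplifying it.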
2001: A SPACE ODYSSEY
Fermi’s paradox and artificial intelligence, in the context of a great sci-fi film BY JAKE NEWMAN, BRANDON BERSON, AND JONAH GERCKE The movie 2001: A Space Odyssey is the story of the evolution of humankind and its achievements, marked by large black stone monoliths that spur on human advancement and evolution. In the movie, three monoliths are found: one on Earth, one on the Moon, and one in orbit around Jupiter. The monoliths were all hidden long before anyone found them, adding to their mystery and suggesting the existence of intelligent life forms somewhere in space. It is, of course, a movie, but as we watch we come to a question: is there intelligent life out there? And if there is, does it know about us? How you answer depends on which theories you subscribe to. There is the obvious attitude that the fictionalized events of the movie are not possible. But we can also consider the chance that some other intelligent life form, predating humanity and our society, has visited our solar system and left hints of its existence. Some believe that we are alone; others wonder what life lives beyond what we know.
The Fermi paradox is an apparent paradox regarding technologically advanced civilizations in the universe. Enrico Fermi, the twentieth-century Italian physicist, suggested on the basis of probability that if technologically advanced civilizations were common and long-lived, galaxies would be thoroughly inhabited by extraterrestrial beings. Essentially, he argued that some intelligent civilizations would pursue interstellar travel, and if enough species did so, a galaxy could be completely colonized within a few tens of millions of years. Nor would the sheer size of space prevent us from finding another technologically advanced civilization, because the number of candidate worlds grows enormously with the volume considered. There should therefore be some evidence of extraterrestrial beings here on Earth, yet no convincing evidence exists. The paradox arises from the tension between the apparently high probability of extraterrestrial life and the complete lack of evidence for it.
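Fermi’s “few tens of millions of years” is easy to sanity-check with a back-of-the-envelope calculation. The numbers below (galaxy diameter aside, the probe speed, hop distance, and pause between hops are illustrative assumptions, not figures from the article) show how quickly a colonization wave could sweep a galaxy compared with the galaxy’s age.

# Rough timescale for a colonization wave to cross the Milky Way.
GALAXY_DIAMETER_LY = 100_000      # light-years, approximate
PROBE_SPEED_FRACTION_C = 0.05     # assumed: probes travel at 5% of light speed
HOP_DISTANCE_LY = 10              # assumed: typical distance between colonized stars
PAUSE_PER_HOP_YEARS = 500         # assumed: time to build the next wave of probes

travel_time_per_hop = HOP_DISTANCE_LY / PROBE_SPEED_FRACTION_C   # years per hop
hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY
total_years = hops * (travel_time_per_hop + PAUSE_PER_HOP_YEARS)

print(f"hops needed: {hops:,.0f}")
print(f"years per hop: {travel_time_per_hop + PAUSE_PER_HOP_YEARS:,.0f}")
print(f"total crossing time: {total_years / 1e6:.1f} million years")

With these assumptions the answer comes out around seven million years; even with much slower probes or far longer pauses, it stays tiny compared with the roughly ten-billion-year age of the galaxy, which is exactly why the silence is puzzling.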
Additionally, a large part of the movie has to do with technology and the advent of artificial intelligence. In the movie, the HAL 9000 (Heuristically programmed ALgorithmic computer) is a computer with programmed artificial intelligence, referred to as Hal. Hal is wired into the Discovery One spaceship and is able to talk to the astronauts on board. Hal is remembered as a remarkably close anticipation of what artificial intelligence has become today, all the more admirable for having been imagined in 1968. At the time of the movie, the study of artificial intelligence was a relatively new and rare field; now artificial intelligence is critical to the functionality of most modern technology. It can be found, for example, in Apple’s Siri, in Hasbro’s Furby, and in the internet sensation Cleverbot, all of which use artificial intelligence to adapt to and interact with their users. We are still far from the point where machines can effectively simulate human emotion. But in its own way, artificial intelligence has already reached beyond the expectations set by 2001: A Space Odyssey in 1968. Technology has continued to develop at a fast rate, and we can now be certain that it will advance far beyond what we could have ever imagined. 2001: A Space Odyssey is a great example of how humans have always dreamed of the future and will continue to set higher and higher expectations.
The Neuroscience of Chess BY DANTE NARDO
An elephant’s power comes from its size. A cheetah’s power comes from its speed. A snake’s power comes from its venom. As humans, our power comes from our decisions. While other animals evolved strong armor and weapons on their bodies, we evolved to forge them in our brains. The greatest weapon in our arsenal is the frontal lobe, the heart of human decision making. This part of the brain provides higher functions, going beyond the typical “sex is good, pain is bad, angry means fight” and analyzing problems with multiple variables and conclusions. How do we weigh these decisions, though? How does someone choose between the cheap, comfortable car and the fast, safe one? Questions like these, and more importantly their answers, are helping us understand the complex process of human decision making.

Chess is the perfect device for scientists to measure this process. It mirrors many of the variables of life, but on a more quantifiable scale. Chess is a finite, fully defined terrain: its 64 squares and 32 pieces still allow for an enormous number of positions and decisions. In standard matches, players are allotted a fixed amount of time for their decisions, allowing scientists to measure the correlation between decisions and time. Every move can be calculated and evaluated, so the result of each decision is quantifiable, and scientists can measure the success of a player’s decisions. A chess rating is a quantification of a person’s decision-making ability: the higher the rating, the better the player is at making decisions. Players can even recognize their own inner thinking through stages like planning and calculation, which lets scientists study our ability to reflect on our own thoughts. Chess is a social event as well, allowing scientists to measure the impact of social interaction on decisions.

Every chess player has moved a piece, punched the clock, and felt a surge of adrenaline while taking in the now-changed board. This feeling is due in part to an increased heart rate. A chess player’s heart rate rises significantly after a powerful move, but it also rises when the player unconsciously realizes a mistake has been made. Scientists wanted to study heart rate in relation to moves: would the quality of a move, as judged by a computer evaluation, predict the change in heart rate? The experiment measured heart rate immediately before and directly after each move. Among subjects who believed they had made a good move, heart rate increased after the move, but bad moves were correlated with a significantly larger increase. The jump in heart rate after a bad move is especially pronounced when the subject realizes it was a mistake, but even when the subject is fully confident in a bad move, the heart rate still rises.
This has led scientists to conclude that, when we make a mistake, our hearts reflect the outcome even before we consciously perceive it.

Everyone has competed against someone better than them. In the modern age of media, we worship actors, sports stars, and revolutionary innovators. All of these people are experts in their fields, but what really distinguishes a novice from an expert? A study compared expert and novice brain activity in response to chess board positions and live games. It found that experts are better at chess mainly because of deliberate practice, but also because a novice’s brain works heavily in the limbic system, specifically the hippocampus, as it tries to learn and understand the game. An expert’s brain works heavily in the frontal lobe, having already committed the game and common game states to memory. Anyone who has already committed a stimulus to memory can devote more brain power to decision making; because the expert no longer has to learn, his or her decision making improves.

To understand how humans spend time on decisions, scientists analyzed chess moves, measuring their accuracy and the time taken to make them. They examined how response time changed with the phase of the game. The results showed clearly that openings and endgames are played much faster than the middlegame, since they are common knowledge for most chess players. This trend held for all players, but a revealing result emerged when the scientists compared the time usage of low-rated and high-rated players: strong players spend even less time on the opening and endgame, and invest significantly more time on middlegame moves. The wider range of response times among higher-rated players suggests that they perceive the opening and endgame as simpler, and the middlegame as far more complex, than lower-rated players do. From this the scientists hypothesized that board complexity determines response time. They measured the complexity of board positions by counting the number of viable moves, based on how often those positions had been played in a 650,000-game database. They found that even though board complexity remained high in the endgame, players still sped up their moves, meaning board complexity was not the sole factor in response time.

Using their database, the scientists calculated how score and remaining time combine to determine a player’s likelihood of winning. The main factor determining the outcome shifts from material advantage to time advantage as remaining time decreases. In 3-minute games, when remaining time is between 20 and 30 seconds, an 8-second advantage on the clock is worth about as much as a whole extra knight on the board. These results show that as remaining time shrinks, players spend less and less time on each move: decisions shift from deliberate and accurate to quick and simplistic. This saves time, but it reduces the quality of the decisions. Our brains constantly and automatically process this change in time pressure and account for it, switching rapidly between difficult and simple decisions. Chess is the perfect controlled setting for studying decision making, with its many quantifiable positions and its controlled time.
Scientists have used this setting to discover phenomena relating to heart rate, learning, and timed decisions, giving us a deeper insight into the human triumph that is the brain.
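The trade-off between material and time described in this article can be illustrated with a toy model. The logistic form and every coefficient below are invented for illustration; they are not the researchers’ model or numbers, only a sketch of how a clock advantage can be weighted against a material advantage.

import math

def win_probability(material_advantage, time_advantage_seconds, seconds_remaining):
    """Toy logistic model: P(win) from material (in pawns) and clock lead.

    The weight on the clock grows as seconds_remaining shrinks, so a small
    time edge can matter as much as an extra piece. All coefficients are
    hypothetical; they are tuned so that, with 25 seconds left, an 8-second
    lead weighs about the same as a knight, matching the finding quoted above.
    """
    material_weight = 0.4                           # per pawn of material
    time_weight = 3.75 / max(seconds_remaining, 1)  # grows as the clock runs down
    score = material_weight * material_advantage + time_weight * time_advantage_seconds
    return 1.0 / (1.0 + math.exp(-score))

if __name__ == "__main__":
    # A knight is worth roughly three pawns of material.
    knight_up = win_probability(3, 0, seconds_remaining=25)
    clock_up = win_probability(0, 8, seconds_remaining=25)
    print(f"up a knight, even clocks : P(win) about {knight_up:.2f}")
    print(f"even material, +8 seconds: P(win) about {clock_up:.2f}")

With these made-up weights both situations give the same winning probability, which is the qualitative point of the finding; the researchers fit their actual model to the 650,000-game database.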
FREUD
on life and death BY ELAINE CHEN THINK of the memory of being somewhere high above the ground: on the viewing platform of a skyscraper, a tower, or the top of a mountain. You stood at the edge, where you could stare out over the city and look into the sky as far as you could see. For a split second, were you ever caught by an impulsive thought of jumping off? Or have you ever imagined accidentally hurting yourself, cutting your fingers while chopping apples, say, even though it never actually happened? The instinct toward death and chaos is known as the “death drive,” or Thanatos, after a figure from Greek mythology. Thanatos is the demon of death associated with fraudulence and pain, who in some tellings guided the dead to Hades. Freud reached for this mythology when describing the human death instinct: he observed people suffering from trauma and discovered that they tend to repeat their traumatic experiences, which violates the pleasure drive he named Eros. In Beyond the Pleasure Principle, Freud concluded that all human beings are born with two opposing natural instincts, Thanatos and Eros. Thanatos leads people into risky situations and compels self-destructive actions, suicide being the extreme case. Eros expresses an innate desire to avoid danger and survive disaster, which could be interpreted as the confidence to live. Thanatos is engendered at a subconscious level, mostly displayed as anger, envy, and aggression that destroys balance; it is also reflected in social interactions, where it escalates conflicts into fights and even brutal violence such as murder. The ultimate goal of Thanatos, however, is tied to our biology: terminating vitality and returning to the state of unconscious organic matter, and thus reaching an eternal tranquility, as if returning to the womb, where there is no light, no conception, no awareness, only warmth and peace. Although Thanatos was regarded as merciless and indiscriminate, his legacy lives on: humans actively seek thrills to elevate their lives, and although this behavior may be classified as self-destructive, it allows us to find fulfilment. Thanatology, the study of death and of the experiences and events leading up to it, is a primarily interdisciplinary field taken up by many medical professionals. Similarly, the study of Eros, of why we live, is another interdisciplinary inquiry into our existence. Eros reminds us of the importance of reproduction in sustaining the survival of our species, along with the fact that we are social animals. Freud argues that this is why we want to be productive, accomplish new things daily, and come up with novel ideas. In any case, Thanatos and Eros work together as one unit; we cannot have one without the other. We must live while understanding that we must make the most of our lives.
IS COFFEE HARMFUL OR HELPFUL? BY JEROD SUN For hundreds of years, people have been drinking coffee. Coffee traces its origins to the Ethiopian highlands, where coffee trees still grow today as they have for centuries. It is said that a goatherd by the name of Kaldi discovered the miracles of coffee after noticing that his goats, upon eating berries from a certain tree, became so spirited that they couldn’t sleep at night. Slowly, knowledge of the energizing effects of the berries began to spread, and by the end of the 18th century coffee had become one of the world’s most profitable export crops. Today, coffee is grown and consumed in a multitude of countries around the world. But for all its success, is it beneficial to us at all? Consider why the average businessperson, hardly a connoisseur, drinks coffee regularly. In the same way that people drink alcohol to boost their spirits, people drink coffee to put off the need for rest and stay awake. This relies on the crucial ingredient in coffee: caffeine. The well-known French novelist and playwright Honoré de Balzac is said to have consumed the equivalent of fifty cups of coffee a day at his peak. He did not drink it, though; he pulverized coffee beans into a fine dust and ingested the dry powder on an empty stomach. In his 1839 essay “Traité des Excitants Modernes” (“Treatise on Modern Stimulants”), he described the approach as “horrible, rather brutal,” to be tried only by men of “excessive vigor”: “Sparks shoot all the way up to the brain” while “ideas quick-march into motion like battalions of a grand army to its legendary fighting ground, and the battle rages.” Creativity is notoriously difficult to measure; however, carefully designed tests, with a wide range of scenarios and controlled variables, have produced some results. They show that with a “healthy” amount of coffee, participants displayed greater creativity and an increased ability to focus. There are hundreds, maybe thousands, of research articles supporting the usefulness of coffee in various respects, scientific data
aggregated over centuries of research. My findings follow. The beneficial effects of human caffeine consumption deserve clarification. Dr. Michael J. Glade conducted an analysis that aggregated thousands of scientific studies testing the effects of coffee on drinkers over time, measuring various aspects of health.
Moderate amounts of caffeine in adults increase energy availability and daily energy expenditure, and enhance motor performance and cognitive ability.

What they discovered was that moderate amounts of caffeine in adults increase energy availability and daily energy expenditure, and enhance motor performance and cognitive ability. In other words, the analysis described the generally observed effects of caffeine and also showed significant positive neurological and physical effects of human caffeine consumption on a number of physiological systems. What, then, is the ideal caffeine-to-productivity ratio? Many researchers have tried to answer this question, and most conclude that a few cups every week do no harm, especially in the late morning. According to the Harvard School of Public Health (HSPH), if you’re drinking so much coffee that you get tremors, have sleeping problems, or feel stressed and uncomfortable, then you’re obviously drinking too much coffee. But in terms of effects on mortality or other health factors, there are no reported negative effects of consuming up
to six cups of coffee a day. HSPH conducted an 18-to-24-year study with 130,000 volunteers. At the start of the study, these healthy men and women were in their 40s and 50s. “We followed them for 18 to 24 years, to see who died during that period, and to track their diet and lifestyle habits, including coffee consumption. We did not find any relationship between coffee consumption and increased risk of death from any cause, death from cancer, or death from cardiovascular disease. Even people who drank up to six cups of coffee per day were at no higher risk of death. This finding fits into the research picture that has been emerging over the past few years. For the general population, the evidence suggests that coffee drinking doesn’t have any serious detrimental health effects.” So, to function at your best every day, you should drink caffeinated beverages all the time, right? Well, no. First off, high levels of caffeine usually reduce the motivation, or even the ability, to rest. In the long term, coffee is simply not a substitute for sleeping or taking breaks. Without enough rest, your reflexes are slower, you pay less attention, and you could become one of the more than 100,000 Americans who fall asleep at the wheel and crash each year; the National Highway Traffic Safety Administration says that is a conservative estimate, by the way. A well-timed dose of caffeine may help prevent that, if you know your limits and rest as soon as possible. In moderation, then, the effects of coffee appear to be safe; in studies, moderate consumption goes along with healthier lives for the average person. Based on past and current scientific research, the general conclusion professionals draw is that drinking coffee, provided you take good care of your overall health, is not detrimental. And even if you aren’t drinking it for health purposes, it is always acceptable to indulge every now and then.
THE DEATH OF THE HONEYBEE
Honeybee populations have plummeted over the last decade. ... But what exactly is happening? BY SKYLAR GERING People have been predicting the end of the world for millennia. From asteroids to nuclear explosions, the speculations go on and on, each more unlikely than the last. Today, however, Earth faces a very real threat: the extermination of its honeybees. It all began in October of 2006, when beekeepers noticed huge portions of their honeybees abandoning their hives, leaving behind the queen, the larvae, and large stores of food, never to return. Without its workers, the abandoned colony cannot survive and quickly dies. While it isn’t unusual for hive populations to dip slightly after the cold of winter, beekeepers reported startling decreases of 30 to 90 percent in their hive populations, according to the United States Department of Agriculture. Scientists named this phenomenon Colony Collapse Disorder (CCD). The Natural Resources Defense Council reports that CCD has killed an estimated one-third of the honeybee population in the United States. Although researchers suspected causes such as parasites or starvation, they had no proven explanation for what was actually killing off such massive numbers of honeybees. Recently, however, researchers have begun to consider the possibility that a single type of pesticide could be the cause of CCD. In 2014, a study from the Harvard School of Public Health strengthened earlier findings that neonicotinoids, a class of pesticide, are the prime cause of CCD. Neonicotinoids are a common class of systemic pesticide, registered in over 120 countries and
making up close to a quarter of the global insecticide market. Systemic pesticides are sprayed onto a plant’s leaves or placed into the soil, where the plant’s roots and tissues absorb the pesticide and transfer it throughout the entire plant, including the pollen and nectar that honeybees feed on. When transferred to the bees, either by direct contact or by ingestion, neonicotinoids impair the bees’ neurological functions. Specifically, they attack the insects’ central nervous system by attaching to receptors in their nerve cells. This overexcites the bees’ nervous systems, leading to paralysis and death. In both 2012 and 2014, researchers from the Harvard School of Public Health studied eighteen bee colonies at three separate locations in Massachusetts. One group, consisting of six colonies, acted as the control, while the other two groups were treated with neonicotinoids. At first, all of the groups experienced population loss, which was typical given the onset of winter. However, as temperatures warmed back up, the control colonies’ populations began to increase again, while the treated colonies’ populations continued to plummet. Almost all treated colonies experienced CCD symptoms. According to the Harvard School of Public Health, the treated hives in the 2012 study experienced a 94 percent CCD mortality rate, while those in the 2014 study had a 50 percent mortality rate. Researchers hypothesize that the difference was due to varying winter temperatures, as the winter before the 2012 study was unusually
cold and prolonged. The death of more honeybees in the United States would be catastrophic, not only for the economy but also for human health. Chensheng Lu, a lead researcher in the Harvard study, stated that “the significance of bees to agriculture cannot be underestimated.” According to the United States Department of Agriculture, honeybees are responsible for pollinating much of the food humans consume on a daily basis, one in three mouthfuls of the average human diet. With the decrease in honeybee populations, many common fruits, vegetables, and nuts will become rare and expensive commodities. Roughly one-third of the United States’ crops, including apples, almonds, blueberries, and avocados, cannot grow without honeybee pollination, and losing them would cost the United States an estimated fifteen billion dollars in crops. Today, the European Commission (the executive body of the EU) has banned the use of three neonicotinoids for a two-year period because of the high risk to honeybee populations. The United States, however, has yet to follow in its footsteps and implement a ban. All around the country, environmental organizations such as the Sierra Club and the Center for Environmental Health have joined together to fight for a ban on the neonicotinoids that have now been linked to CCD in multiple studies. For the sake of the bees and our country’s future, people can only hope that their protests make an impact on lawmakers, or the world might just be facing the next apocalypse.
Bubbles of Life
Protocells provide key insight on evolution BY ANDREW LI Let’s play a game. I will list a series of things; you tell me whether or not each one is living. Bacteria. Viruses. Protocells. If you were indoctrinated to believe that cells are the basic units of life, then only the bacteria would count as living, but I’m here to inform you otherwise. Rewind a few billion years and imagine a primordial Earth, devoid of flora and fauna. Given life’s current biodiversity, what back then gave rise to the first life forms? Unless you said God, the last universal common ancestor (LUCA) would be the usual answer, but I would take this a step further: protocells, organized, spherical, naturally occurring units of life. Though protocells are only a model for how self-replicating molecules arose through abiogenesis (the natural formation of life from simple organic compounds, which commonly contain carbon, oxygen, nitrogen, and hydrogen), they give us an idea of how inheritance, the engine of evolution, came to be. Organic compounds could easily have accumulated near volcanoes and ocean thermal vents, giving rise to quite a few fatty acids, simple carbohydrates, and amino acids (as demonstrated in the 1953 Miller-Urey experiment, in which upwards of 20 amino acids were detected after simulated lightning strikes in an early-Earth atmosphere). The hot, mineral-rich environment, thick with electron-donating, or reducing, gases, could even have synthesized RNA catalysts known as ribozymes (ribonucleic acid + enzyme). In other words, volcanic vents were hot tubs offering free services, exactly the kind of place protocells love. As shown in the picture on the right, protocells are built from about ten different kinds of molecules. They are encapsulated by a fatty acid layer, forming vesicles (bubbles) that come together spontaneously in an aqueous environment once those basic organic molecules are present. This selectively permeable packaging allows protocells to maintain a chemical environment different from their surroundings, an important characteristic of life. Life also requires reproduction and the harnessing of energy, both of which protocells can manage. With the addition of montmorillonite, a mineral clay that comes from volcanic ash and is believed to have been abundant, the environment becomes even more conducive to vesicle self-assembly: the clay encourages vesicles to grow in size and divide. In addition, protocells can take up RNA and other organic molecules, perform simple metabolic reactions, and react to their surroundings (they can be coaxed into quivering, as if dancing, in response to stimulants). More importantly, ribozymes, as investigated by Cech and Altman, can evolve and replicate inside protocells. The protocell with the most
advantageous capabilities for replicating and catalyzing reactions would then dominate, an early form of natural selection resulting in the inheritance of those traits in the next generation of protocells. This is the start of “life” at its most basic definition: the capacity for reproduction, chemical activity, and evolution. Though we are unlikely to definitively prove how life began, protocells are quite the interesting model for building the animate out of the inanimate.

#throwback A model of a protocell, created by a team of scientists at Harvard University
Black Hole for Sunlight A new nanomaterial enables Concentrated Solar Power plants to absorb and convert more than 90 percent of captured sunlight.
SOLAR PANELS: REIMAGINED BY DAVID DAI
Nowadays, natural resources such as copper, iron, and oil are easy to find; however, they will eventually be exhausted. People have begun to research and invent new, renewable, green sources of energy to support society’s basic requirements. Energy exploration at sea dates back to the 16th century, when British companies started to collect energy resources from beneath the sea. One of the more recent examples of an energy source that comes from the sea is methane hydrate, or combustible ice; large amounts of this new fuel have been discovered in the China Sea. However, the most important energy resources are still oil and coal, and over 700 offshore oil drilling platforms have been built. In addition to energy resources, the sea is rich in minerals, which are mined using robots that descend to the seafloor. Last year, the Papua New Guinean government granted a Canadian company a 20-year license to mine a site 30 kilometers off the coast of Papua New Guinea in the Bismarck Sea. The company plans to extract minerals from the site, called Solwara 1, using underwater robotic technologies together with the technologies of offshore oil and gas, and estimates that it will recover 1.3 million tons of minerals per year. However, many problems come with exploiting energy at sea: the possibility of an oil spill, damage to the environment, contamination of freshwater, danger to sea animals, and heavy pollution if the resources are not handled
correctly. On the other hand, the benefits of sea energy exploration are huge. People are therefore actively searching for ways to collect energy without damaging the environment. A multidisciplinary engineering team at the University of California, San Diego has developed a new nanoparticle-based material for capturing sunlight and converting it to energy. The material, intended for solar power plants, absorbs and converts more than 90 percent of the sunlight it captures. It can also withstand temperatures greater than 700 degrees Celsius and survive many years outdoors despite exposure to air and humidity. The work, funded by the U.S. Department of Energy’s SunShot program, was published recently in two separate articles in the journal Nano Energy. By contrast, current solar absorber materials function at lower temperatures and need to be overhauled almost every year when operated at high temperatures. “We wanted to create a material that absorbs sunlight that doesn’t let any of it escape. We want the black hole of sunlight,” said Sungho Jin, a professor in the department of Mechanical and Aerospace Engineering at the UC San Diego Jacobs School of Engineering. Jin, along with professor Zhaowei Liu of the department of Electrical and Computer Engineering and Mechanical Engineering professor Renkun Chen, developed the silicon boride-coated nanoshell material. All three are experts in functional materials engineering. The novel material features a “multiscale” surface created by using
particles of many sizes, ranging from 10 nanometers to 10 micrometers. The multiscale structures trap and absorb light, which contributes to the material’s high efficiency when operated at higher temperatures. Concentrating solar power (CSP) is an emerging alternative clean energy market that produces approximately 3.5 gigawatts of power at plants around the globe, enough to power more than 2 million homes, with additional construction in progress to provide as much as 20 gigawatts in upcoming years. One of the technology’s attractions is that it can be used to retrofit existing power plants that burn coal or other fossil fuels, because it uses the same process to generate electricity from steam, except with solar heat. Traditional power plants burn coal or other fossil fuels to create heat that evaporates water into steam; the steam turns a giant turbine that generates electricity from spinning magnets and coils of conducting wire. CSP plants create the steam needed to turn the turbine by using sunlight to heat molten salt. The molten salt can also be kept in thermal storage tanks overnight, where it can continue to generate steam and electricity, 24 hours a day if desired, a significant advantage over traditional photovoltaic systems, which stop producing energy after sunset. There are, then, many new alternative sources of energy capable of achieving great things. It’s just a matter of dreaming big.
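The CSP figures above invite a quick back-of-the-envelope check. The sketch below estimates the electrical output of a hypothetical CSP plant and what the quoted 3.5 gigawatts for 2 million homes implies per home; the mirror area, sunlight intensity, power-cycle efficiency, and household demand are assumed values for illustration, with only the 90 percent absorber figure taken from the article.

# Back-of-the-envelope estimate for a concentrating solar power (CSP) plant.
MIRROR_AREA_M2 = 1_000_000        # assumed: 1 square kilometer of collectors
DIRECT_SUNLIGHT_W_PER_M2 = 850    # assumed: strong direct sunlight
ABSORBER_EFFICIENCY = 0.90        # from the article: about 90% of captured sunlight
POWER_CYCLE_EFFICIENCY = 0.40     # assumed: steam-turbine heat-to-electricity
AVERAGE_HOME_DEMAND_W = 1_750     # assumed: average continuous household load

captured_heat_w = MIRROR_AREA_M2 * DIRECT_SUNLIGHT_W_PER_M2 * ABSORBER_EFFICIENCY
electric_output_w = captured_heat_w * POWER_CYCLE_EFFICIENCY
homes_powered = electric_output_w / AVERAGE_HOME_DEMAND_W

print(f"electrical output : {electric_output_w / 1e6:.0f} MW")
print(f"homes powered     : {homes_powered:,.0f}")

# Cross-check of the article's global figure: 3.5 GW spread across 2 million homes
print(f"implied demand per home: {3.5e9 / 2e6 / 1e3:.2f} kW")

This is only an order-of-magnitude sketch; real plants lose additional energy to optics, mirror spacing, and nighttime operation, but the arithmetic shows the article’s 3.5 gigawatts and 2 million homes are consistent with a household drawing roughly 1.75 kilowatts on average.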
Quanta Student Magazine
2015
Editor-In-Chief.............................................................Angela Li
Interschool Leadership Chair.....................................Angela Li
Interschool Finance Manager...........................Charlie Michael
Interschool Design Manager..........................Linlin Yamaguchi
Co-President, The Bishop’s School Chapter.......Sarah Burnett
Co-President, The Bishop’s School Chapter.............Angela Li
President, Pacific Ridge Chapter........................Nathan Cheng
President, La Jolla Country Day Chapter......Tanuja Adigopula
Editors
The Bishop’s School.................................Angela Li
The Bishop’s School..........................Sarah Burnett
Pacific Ridge School.........................Nathan Cheng
Pacific Ridge School......................Bailey Bjornstad
Pacific Ridge School...............................Tyler Chen
Pacific Ridge School.................Lindsey Sanderson
Pacific Ridge School...........................Joshua Kahn
La Jolla Country Day...............................Jerod Sun
La Jolla Country Day....................Tanuja Adigopula
Layout Editors
The Bishop’s School.................................Angela Li
The Bishop’s School.....................Linlin Yamaguchi
The Bishop’s School............................Avery Gulino
The Bishop’s School..................Bettina King-Smith
The Bishop’s School.............................Vivian Fond
Faculty Advisors
The Bishop’s School.........................Marcus Milling
La Jolla Country Day..........................Heidi Bruning
Faculty Editors
The Bishop’s School.....................Anthony Pelletier
The Bishop’s School.........................Pam Reynolds
The Bishop’s School............................John Rankin
The Bishop’s School.........................Marcus Milling
The Bishop’s School.............................Adam Davis
The Bishop’s School............................Mark Radley
The Bishop’s School.........................David Herman
The Bishop’s School................................Amy Allen
The Bishop’s School......................Julianne Zedalis
Pacific Ridge School.............................Martha Belo
Pacific Ridge School........................Justin McCabe
Pacific Ridge School.............................Todd Burkin
Pacific Ridge School..................Christopher Wright
Student Writers
The Bishop’s School.............................Aaron Liu ’19
The Bishop’s School.............................Andrew Li ’16
The Bishop’s School........................Avery Gulino ’16
The Bishop’s School...............Bettina King-Smith ’17
The Bishop’s School...........................Daniel Kim ’16
The Bishop’s School...........................Evers Pund ’16
The Bishop’s School...........................Linette Pan ’17
The Bishop’s School.........................Peter Griggs ’15
The Bishop’s School........................Rachel Hong ’17
The Bishop’s School............................Samuel Fu ’17
The Bishop’s School.......................Sarah Burnett ’15
The Bishop’s School.......................Skylar Gering ’17
The Bishop’s School...................Stephanie Davis ’16
Pacific Ridge School........................Avery Rogers ’17
Pacific Ridge School....................Bailey Bjornstad ’15
Pacific Ridge School..........................Bianca Yang ’15
Pacific Ridge School...............Charlie LeMasters ’15
Pacific Ridge School........................Christian Yun ’17
Pacific Ridge School.........................Dante Nardo ’15
Pacific Ridge School.......................Gabi Burkholz ’17
Pacific Ridge School......................Hannah Lange ’17
Pacific Ridge School.........................Joshua Kahn ’16
Pacific Ridge School...............Lindsey Sanderson ’15
Pacific Ridge School.......................Nathan Cheng ’15
Pacific Ridge School.............................Tyler Chen ’15
Pacific Ridge School...............Zafar Rustamkulov ’15
Pacific Ridge School.............................Zoe Siddall ’15
La Jolla Country Day.............................David Dai ’16
La Jolla Country Day..........................Elaine Chen ’16
La Jolla Country Day.............................Jerod Sun ’17
La Jolla Country Day..................Tanuja Adigopula ’15
La Jolla Country Day............................Venus Sun ’15
To learn more about Quanta Student Magazine, please visit our website at www.quanta-magazine.com or contact Editor-In-Chief Angela Li at angelali@quanta-magazine.com. Quanta Student Magazine is an annual non-profit student-run science magazine aimed at informing and inspiring student scientists. The opinions expressed within are those of the authors only. All images are used for educational purposes only.